X-Git-Url: https://git.saurik.com/redis.git/blobdiff_plain/f1907057221cc1601a914c7626d896483999fb39..59132e42125c43299c428e86f9107493e173883e:/TODO?ds=sidebyside

diff --git a/TODO b/TODO
index f24cec23..145ec524 100644
--- a/TODO
+++ b/TODO
@@ -2,7 +2,7 @@ Redis TODO
 ----------
 
 WARNING: are you a possible Redis contributor?
-         Before implementing what is listed what is listed in this file
+         Before implementing what is listed in this file
          please drop a message in the Redis google group or chat with
          antirez or pietern on irc.freenode.org #redis to check if the work
          is already in progress and if the feature is still interesting for
@@ -10,21 +10,6 @@ WARNING: are you a possible Redis contributor?
          of a merge. Otherwise it is probably wasted work! Thank you
 
-API CHANGES
-===========
-
-* Turn commands into variadic versions when it makes sense, that is, when
-  the variable number of arguments represent values, and there is no conflict
-  with the return value of the command.
-
-2.6
-===
-
-* Everything under the "SCRIPTING" section.
-* Float increments (INCRBYFLOAT).
-* Fix BRPOPLPUSH + vararg LPUSH semantics.
-* AOF everysec fsync in background (either the aof-bg branch or something else).
-
 CLUSTER
 =======
@@ -39,40 +24,22 @@ CLUSTER
 SCRIPTING
 =========
 
-* MULTI/EXEC/...: should we do more than simply ignoring it?
-* Prevent Lua from calling itself with redis("eval",...)
 * SCRIPT FLUSH or alike to start a fresh interpreter?
-* Check better the replication handling.
-* Prevent execution of writes if random commands are used.
-
-APPEND ONLY FILE
-================
-
-* in AOF rewirte use HMSET to rewrite small hashes instead of multiple calls
-  to HSET.
 
 OPTIMIZATIONS
 =============
 
-* Avoid COW due to incrementing the dict iterators counter.
 * SORT: Don't copy the list into a vector when BY argument is constant.
 * Write the hash table size of every db in the dump, so that Redis can
   resize the hash table just one time when loading a big DB.
 * Read-only mode for slaves.
 * Redis big lists as linked lists of small ziplists?
   Possibly a simple heuristic that join near nodes when some node gets
   smaller than the low_level, and split it into two if gets bigger than
   high_level.
-
-RANDOM
-======
-
-* Server should abort when getcwd() fails if there is some kind of
-  persistence configured. Check this in the cron loop.
-* Clients should be closed as far as the output buffer list is bigger than
-  a given number of elements (configurable in redis.conf)
-* Should the redis default configuration, and the default redis.conf, just
-  bind 127.0.0.1?
-
 KNOWN BUGS
 ==========
 
 * #519: Slave may have expired keys that were never read in the master
   (so a DEL is not sent in the replication channel) but are already
   expired since
-  a lot of time. Maybe after a given delay that is undoubltly greater than
+  a lot of time. Maybe after a given delay that is undoubtably greater than
   the replication link latency we should expire this key on the slave on
   access?
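The surviving SORT item ("Don't copy the list into a vector when BY argument is constant") rests on an existing SORT behavior: when the BY pattern contains no `*`, every element is weighted by the same key, so the result order is unspecified and the sort can be skipped entirely. A minimal Python sketch of that fast path (function names are hypothetical, not Redis internals):

```python
def fetch_by(element, pattern):
    # Hypothetical weight lookup: substitute the element into the BY pattern.
    return pattern.replace('*', element)

def sort_elements(elements, by_pattern=None):
    """Sketch of SORT's dispatch on the BY pattern."""
    if by_pattern is not None and '*' not in by_pattern:
        # Constant BY pattern: every element gets the same weight, so the
        # output order is unspecified and no sorting is needed. The TODO
        # proposes also skipping the copy into a vector in this case; the
        # copy is kept here only to return a fresh list.
        return list(elements)
    if by_pattern is None:
        # No BY: sort by the elements themselves.
        return sorted(elements)
    # Regular BY: sort by the looked-up weights.
    return sorted(elements, key=lambda e: fetch_by(e, by_pattern))
```

The point of the optimization is that the no-sort branch needs no auxiliary vector at all; the input can be streamed to the client as-is.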
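The "linked lists of small ziplists" item describes a split/merge heuristic: split a node that grows past `high_level`, and join adjacent nodes when one shrinks below `low_level`. A minimal Python sketch of that heuristic, using plain Python lists to stand in for ziplists (the threshold values and class names are illustrative assumptions, not Redis code):

```python
LOW_LEVEL = 4    # assumed merge threshold, for illustration only
HIGH_LEVEL = 8   # assumed split threshold, for illustration only

class Node:
    def __init__(self, items=None):
        self.items = items or []  # stands in for a small ziplist
        self.next = None

class BigList:
    def __init__(self):
        self.head = None

    def push_back(self, value):
        if self.head is None:
            self.head = Node()
        node = self.head
        while node.next:
            node = node.next
        node.items.append(value)
        if len(node.items) > HIGH_LEVEL:
            # Node got bigger than high_level: split it into two.
            mid = len(node.items) // 2
            new = Node(node.items[mid:])
            node.items = node.items[:mid]
            new.next = node.next
            node.next = new

    def compact(self):
        # Join near nodes when a node gets smaller than low_level,
        # provided the merged node stays within high_level.
        node = self.head
        while node and node.next:
            if (len(node.items) < LOW_LEVEL and
                    len(node.items) + len(node.next.items) <= HIGH_LEVEL):
                node.items += node.next.items
                node.next = node.next.next
            else:
                node = node.next

    def to_list(self):
        out, node = [], self.head
        while node:
            out.extend(node.items)
            node = node.next
        return out
```

The memory win in the real proposal comes from each node being a compact ziplist rather than a chain of individually allocated list nodes; the sketch only shows the split/merge bookkeeping.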
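Bug #519 suggests letting a slave expire a key on access, but only once the key has been expired for longer than some delay that safely exceeds the replication link latency (otherwise the master's DEL may still be in flight). A minimal Python sketch of that rule, with all names and the margin value being illustrative assumptions:

```python
import time

REPL_SAFETY_MARGIN = 5.0  # seconds; assumed to exceed replication latency

class SlaveStore:
    """Toy keyspace for a replica, illustrating the proposed rule."""

    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expire time (seconds)

    def set(self, key, value, ttl=None, now=None):
        # Writes arrive only from the replication stream in the real system.
        now = time.time() if now is None else now
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = now + ttl

    def get(self, key, now=None):
        now = time.time() if now is None else now
        exp = self.expires.get(key)
        if exp is not None and now > exp + REPL_SAFETY_MARGIN:
            # Expired long enough ago that the master's DEL cannot still
            # be in flight: evict locally on access.
            self.data.pop(key, None)
            self.expires.pop(key, None)
            return None
        # Within the margin the slave keeps serving the value and waits
        # for the master's DEL, preserving replication consistency.
        return self.data.get(key)
```

Keys expired for less than the margin are still served, so the slave never diverges from what the master could legitimately have replicated.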