X-Git-Url: https://git.saurik.com/redis.git/blobdiff_plain/70ecddc9f42750d77b1c6c003488b380dba1dca1..9542d9d8d7e5035e5f884eec3e4f922d51145bd5:/TODO

diff --git a/TODO b/TODO
index d45b3504..145ec524 100644
--- a/TODO
+++ b/TODO
@@ -2,7 +2,7 @@ Redis TODO
 ----------
 
 WARNING: are you a possible Redis contributor?
-         Before implementing what is listed what is listed in this file
+         Before implementing what is listed in this file
          please drop a message in the Redis google group or chat with
          antirez or pietern on irc.freenode.org #redis to check if the work
          is already in progress and if the feature is still interesting for
@@ -10,13 +10,6 @@ WARNING: are you a possible Redis contributor?
          of a merge. Otherwise it is probably wasted work! Thank you
 
 
-API CHANGES
-===========
-
-* Turn commands into variadic versions when it makes sense, that is, when
-  the variable number of arguments represent values, and there is no conflict
-  with the return value of the command.
-
 CLUSTER
 =======
 
@@ -31,63 +24,22 @@ CLUSTER
 SCRIPTING
 =========
 
-* MULTI/EXEC/...: should we do more than simply ignoring it?
-* Prevent Lua from calling itself with redis("eval",...)
 * SCRIPT FLUSH or alike to start a fresh interpreter?
-* http://redis.io/topics/sponsors
-
-APPEND ONLY FILE
-================
-
-* in AOF rewirte use HMSET to rewrite small hashes instead of multiple calls
-  to HSET.
 
 OPTIMIZATIONS
 =============
 
-* Avoid COW due to incrementing the dict iterators counter.
 * SORT: Don't copy the list into a vector when BY argument is constant.
 * Write the hash table size of every db in the dump, so that Redis can
   resize the hash table just one time when loading a big DB.
 * Read-only mode for slaves.
 * Redis big lists as linked lists of small ziplists? Possibly a simple
   heuristic that joins nearby nodes when some node gets smaller than
   low_level, and splits a node in two if it gets bigger than high_level.
 
-REPORTING
-=========
-
-* Better INFO output with sections.
-
-RANDOM
-======
-
-* Clients should be closed as far as the output buffer list is bigger than a given number of elements (configurable in redis.conf)
-* Should the redis default configuration, and the default redis.conf, just bind 127.0.0.1?
-
 KNOWN BUGS
 ==========
 
-* What happens in the following scenario:
-  1) We are reading an AOF file.
-  2) SETEX FOO 5 BAR
-  3) APPEND FOO ZAP
-  What happens if between 1 and 2 for some reason (system under huge load
-  or alike) too many time passes? We should prevent expires while the
-  AOF is loading.
 * #519: Slave may have expired keys that were never read in the master (so a DEL
   is not sent in the replication channel) but are already expired since
-  a lot of time. Maybe after a given delay that is undoubltly greater than
+  a lot of time. Maybe after a given delay that is undoubtedly greater than
   the replication link latency we should expire this key on the slave on
   access?
-
-DISKSTORE TODO
-==============
-
-* Fix FLUSHALL/FLUSHDB: the queue of pending reads/writes should be handled.
-* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
-* Implement sync flush option, where data is written synchronously on disk when a command is executed.
-* Implement MULTI/EXEC as transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover.
-* Stop BGSAVE thread on shutdown and any other condition where the child is killed during normal bgsave.
-* Fix RANDOMKEY to really do something interesting
-* Fix DBSIZE to really do something interesting
-* Add a DEBUG command to check if an entry is or not in memory currently
-* dscache.c near 236, kobj = createStringObject... we could use static obj.
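
The OPTIMIZATIONS item about writing the hash table size of every db into the
dump amounts to: record the key count at save time, then grow the table once
at load time instead of rehashing it repeatedly while the keys are inserted.
A toy sketch of the load side; toy_dict and presize are made-up names and
this has no relation to the actual RDB format:

    #include <stdio.h>

    typedef struct { size_t buckets; } toy_dict;

    /* One up-front resize to the final size, as a dump header would allow. */
    void presize(toy_dict *d, size_t numkeys) {
        size_t b = 1;
        while (b < numkeys) b *= 2;   /* next power of two bucket count */
        d->buckets = b;
    }

    int main(void) {
        size_t numkeys_from_dump = 1000000;  /* value read from the header  */
        toy_dict d = { 1 };
        presize(&d, numkeys_from_dump);      /* grow once, then insert keys */
        printf("buckets: %zu\n", d.buckets); /* 1048576, no repeated rehash */
        return 0;
    }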
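
The "big lists as linked lists of small ziplists" item under OPTIMIZATIONS
boils down to one rebalancing rule: join a node with its neighbour when its
ziplist shrinks below low_level, split it when it grows past high_level. A
minimal sketch of that rule; the names and thresholds (node, rebalance_action,
LOW_LEVEL, HIGH_LEVEL, the byte counts) are hypothetical stand-ins, not the
real list implementation:

    /* Toy model: each node's ziplist is represented only by its size in
     * bytes, which is all the heuristic looks at. */
    #include <stdio.h>

    #define LOW_LEVEL   64    /* join a node with a neighbour below this */
    #define HIGH_LEVEL  512   /* split a node in two above this          */

    typedef struct node {
        struct node *next;
        int bytes;            /* stand-in for the node's ziplist size    */
    } node;

    /* Decide what to do with 'n' after an insert or delete changed it. */
    const char *rebalance_action(const node *n) {
        if (n->bytes > HIGH_LEVEL)
            return "split in two";
        if (n->bytes < LOW_LEVEL && n->next &&
            n->bytes + n->next->bytes <= HIGH_LEVEL)
            return "join with next node";
        return "leave as is";
    }

    int main(void) {
        node b = { NULL, 700 };                   /* oversized node        */
        node a = { &b, 20 };                      /* undersized, before b  */
        printf("a: %s\n", rebalance_action(&a));  /* a+b too big: leave    */
        printf("b: %s\n", rebalance_action(&b));  /* b alone: split in two */
        return 0;
    }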
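
For the #519 entry under KNOWN BUGS, the proposed behaviour is that a slave
only expires a key on access once it has been logically expired for longer
than a safety delay known to exceed the replication link latency, so in the
common case the DEL from the master still arrives first. A small sketch of
that check; the names (rkey, slave_key_is_logically_expired) and the 5-second
delay are assumptions, not Redis internals:

    #include <stdio.h>

    #define REPL_SAFETY_DELAY_MS 5000LL   /* assumed >> replication latency */

    typedef struct {
        const char *name;
        long long expire_ms;      /* absolute expire time, -1 = no expire   */
    } rkey;

    /* Read path on a slave: report the key as expired only when it has been
     * stale for longer than the safety delay; otherwise keep waiting for the
     * DEL propagated by the master. */
    int slave_key_is_logically_expired(const rkey *k, long long now_ms) {
        if (k->expire_ms < 0) return 0;
        return now_ms - k->expire_ms > REPL_SAFETY_DELAY_MS;
    }

    int main(void) {
        rkey k = { "foo", 1000 };   /* expired at t=1000ms */
        printf("%d\n", slave_key_is_logically_expired(&k, 2000));   /* 0 */
        printf("%d\n", slave_key_is_logically_expired(&k, 20000));  /* 1 */
        return 0;
    }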