X-Git-Url: https://git.saurik.com/redis.git/blobdiff_plain/36c17a53b6aece050b79b667fd32064f6eb116c2..2ebd2720b37dcca3b6e0c18377bd69e9eaf541fc:/TODO

diff --git a/TODO b/TODO
index 2402a9d4..145ec524 100644
--- a/TODO
+++ b/TODO
@@ -2,28 +2,29 @@ Redis TODO
 ----------
 
 WARNING: are you a possible Redis contributor?
-         Before implementing what is listed what is listed in this file
+         Before implementing what is listed in this file
          please drop a message in the Redis google group or chat with
          antirez or pietern on irc.freenode.org #redis to check if the work
          is already in progress and if the feature is still interesting for
          us, and *how* exactly this can be implemented to have good chances
          of a merge. Otherwise it is probably wasted work! Thank you
 
-DISKSTORE TODO
-==============
-* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
-* Implement sync flush option, where data is written synchronously on disk when a command is executed.
-* Implement MULTI/EXEC as transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover.
-* Stop BGSAVE thread on shutdown and any other condition where the child is killed during normal bgsave.
-* Use a mutex to log on the file, so that we don't get overlapping messages, or even better make sure to use a single write against it.
+CLUSTER
+=======
 
-REPLICATION
-===========
+* Implement rehashing and cluster check in redis-trib.
+* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
+  all). This will require touching a lot of the RDB code, but we may end
+  up with faster persistence for RDB as well.
+* Implement the slave node semantics and election.
+* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
+* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).
 
-* PING between master and slave from time to time, so we can subject the
-master-slave link to timeout, and detect when the connection is gone even
-if the socket is still up.
+SCRIPTING
+=========
+
+* SCRIPT FLUSH or similar to start a fresh interpreter?
 
 OPTIMIZATIONS
 =============
 
@@ -31,15 +32,14 @@ OPTIMIZATIONS
 * SORT: Don't copy the list into a vector when the BY argument is constant.
 * Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
 * Read-only mode for slaves.
+* Redis big lists as linked lists of small ziplists?
+  Possibly a simple heuristic that joins adjacent nodes when a node gets smaller than low_level, and splits a node in two when it gets bigger than high_level.
 
 KNOWN BUGS
 ==========
 
-* What happens in the following scenario:
-    1) We are reading an AOF file.
-    2) SETEX FOO 5 BAR
-    3) APPEND FOO ZAP
-    What happens if between 1 and 2 for some reason (system under huge load
-    or alike) too many time passes? We should prevent expires while the
-    AOF is loading.
-
+* #519: Slave may have expired keys that were never read in the master (so a DEL
+  is not sent in the replication channel) but that expired a long time
+  ago. Maybe after a given delay that is undoubtedly greater than the
+  replication link latency we should expire such keys on the slave on
+  access?
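
Below is a minimal sketch of the join/split heuristic mentioned in the OPTIMIZATIONS item above ("big lists as linked lists of small ziplists"). It is illustration only: the node layout, the LOW_LEVEL/HIGH_LEVEL thresholds, and every identifier are assumptions rather than names from the Redis source, and each node just tracks an entry count as a stand-in for a real ziplist.

    /* Illustrative only: each node stands in for one small ziplist and just
     * tracks how many entries it holds. LOW_LEVEL/HIGH_LEVEL and all names
     * below are assumptions, not identifiers from the Redis code base. */
    #include <stdlib.h>

    #define LOW_LEVEL   16    /* join a node with its neighbor below this count */
    #define HIGH_LEVEL  128   /* split a node in two above this count */

    typedef struct zlNode {
        unsigned count;               /* entries in this node's ziplist */
        struct zlNode *prev, *next;
    } zlNode;

    /* Re-balance a node after an insert or delete touched it: split it when
     * it grows past HIGH_LEVEL, merge it into its successor when it shrinks
     * below LOW_LEVEL and the merged node would not itself need a split. */
    void zlNodeBalance(zlNode *n) {
        if (n->count > HIGH_LEVEL) {
            zlNode *half = malloc(sizeof(*half));
            if (half == NULL) return;         /* out of memory: leave node as is */
            half->count = n->count / 2;       /* move half of the entries over */
            n->count -= half->count;
            half->prev = n;
            half->next = n->next;
            if (n->next) n->next->prev = half;
            n->next = half;
        } else if (n->count < LOW_LEVEL && n->next &&
                   n->count + n->next->count <= HIGH_LEVEL) {
            zlNode *victim = n->next;         /* absorb the next node */
            n->count += victim->count;
            n->next = victim->next;
            if (victim->next) victim->next->prev = n;
            free(victim);
        }
    }

In a real implementation the counts would be replaced by actual ziplist merge/split operations, and the thresholds would more likely be expressed in bytes than in entries.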
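For the #519 item under KNOWN BUGS, the "expire on the slave on access after a safe delay" idea could reduce to a check like the one sketched below. This is a sketch under assumptions: EXPIRE_GRACE_MS and the function name are made up for illustration, and real code would have to plug into the slave's key-lookup path.

    /* Sketch only: decide on a slave whether a key whose TTL is long past
     * should be treated as expired on access, even though the master's DEL
     * has not arrived yet. EXPIRE_GRACE_MS is an assumed margin chosen to
     * be well above any plausible replication link latency. */
    #include <stdint.h>

    #define EXPIRE_GRACE_MS 2000            /* assumed >> replication latency */

    typedef int64_t mstime;                 /* milliseconds, UNIX-time style */

    int slaveShouldHideExpiredKey(int is_slave, mstime expire_at, mstime now) {
        if (!is_slave) return 0;            /* masters expire and send DEL as usual */
        if (expire_at < 0) return 0;        /* no TTL set on this key */
        return (now - expire_at) > EXPIRE_GRACE_MS;
    }

Whether the slave should actually delete the key or merely hide it until the master's DEL arrives is left open here, exactly as it is in the TODO entry.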