X-Git-Url: https://git.saurik.com/redis.git/blobdiff_plain/4c22fd3409d7d84e682fbc45f3014bbdd4a1c397..e0ba14557e2b24d21e92be01afd2307f1cc57aac:/TODO

diff --git a/TODO b/TODO
index 9e6b0561..aeb58229 100644
--- a/TODO
+++ b/TODO
@@ -9,36 +9,61 @@ WARNING: are you a possible Redis contributor?
          us, and *how* exactly this can be implemented to have good chances of
          a merge. Otherwise it is probably wasted work! Thank you
 
-VM TODO
+
+API CHANGES
+===========
+
+* Turn commands into variadic versions when it makes sense, that is, when
+  the variable number of arguments represent values, and there is no conflict
+  with the return value of the command.
+
+CLUSTER
 =======
 
-* Use multiple open FDs against the VM file, one per thread.
-* Check what happens performance-wise if instead of creating threads again and again the same threads are reused forever. Note: this requires a way to disable these clients in the child, but waiting for an empty new jobs queue can be enough.
-* mmap the swap file.
-* Use just a single IO Job to swap out a key, and add a mutex so that pages in the page table can be marked as used and scanned from the thread itself.
+* Implement rehashing and cluster check in redis-trib.
+* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
+  all). This will require touching a lot of the RDB stuff around, but we may
+  end up with faster persistence for RDB.
+* Implement the slave nodes semantics and election.
+* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
+* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).
 
-REPLICATION
-===========
+SCRIPTING
+=========
 
-* PING between master and slave from time to time, so we can subject the
-master-slave link to a timeout, and detect when the connection is gone even
-if the socket is still up.
+* MULTI/EXEC/...: should we do more than simply ignore it?
+* Prevent Lua from calling itself with redis("eval",...)
+* SCRIPT FLUSH or similar to start a fresh interpreter?
+* http://redis.io/topics/sponsors
+
+APPEND ONLY FILE
+================
+
+* In the AOF rewrite use HMSET to rewrite small hashes instead of multiple
+  calls to HSET (a protocol sketch follows the diff).
 
 OPTIMIZATIONS
 =============
 
+* Avoid COW due to incrementing the dict iterators counter.
 * SORT: Don't copy the list into a vector when BY argument is constant.
 * Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
 * Read-only mode for slaves.
+* Redis big lists as linked lists of small ziplists?
+  Possibly a simple heuristic that joins nearby nodes when some node gets
+  smaller than low_level, and splits a node in two if it gets bigger than
+  high_level.
+
+RANDOM
+======
+
+* Server should abort when getcwd() fails if there is some kind of
+  persistence configured. Check this in the cron loop.
+* Clients should be closed as soon as the output buffer list grows bigger
+  than a given number of elements (configurable in redis.conf).
+* Should the default Redis configuration, and the shipped redis.conf, just
+  bind 127.0.0.1?
 
 KNOWN BUGS
 ==========
 
-* What happens in the following scenario:
-  1) We are reading an AOF file.
-  2) SETEX FOO 5 BAR
-  3) APPEND FOO ZAP
-  What happens if between 1 and 2 for some reason (system under huge load
-  or alike) too much time passes? We should prevent expires while the
-  AOF is loading.
-
+* #519: Slave may have expired keys that were never read in the master (so a
+  DEL is not sent in the replication channel) but have already been expired
+  for a long time. Maybe after a given delay that is undoubtedly greater than
+  the replication link latency we should expire this key on the slave on
+  access? (See the sketch after the diff.)
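
Note on the #519 item: the idea above is to expire a key on slave access only
once it has been logically expired for longer than any plausible replication
delay, so that in the common case the DEL from the master arrives first. Below
is a minimal standalone C sketch of that rule; the function name
slaveShouldExpireOnAccess and the 5 second margin are illustrative assumptions,
not the actual Redis implementation.

    #include <stdio.h>

    /* Assumed safety margin, chosen to be well above any realistic
     * replication link latency (the value is an assumption). */
    #define ASSUMED_REPL_EXPIRE_MARGIN_MS 5000LL

    /* Return nonzero when a read on the slave should treat the key as
     * expired: only once the key has been logically expired for longer
     * than the margin, so a DEL still in flight from the master wins in
     * the common case. */
    static int slaveShouldExpireOnAccess(long long expire_at_ms,
                                         long long now_ms) {
        if (expire_at_ms < 0) return 0;        /* key has no TTL */
        return (now_ms - expire_at_ms) > ASSUMED_REPL_EXPIRE_MARGIN_MS;
    }

    int main(void) {
        long long now = 1000000;
        printf("%d\n", slaveShouldExpireOnAccess(now - 100, now));   /* 0: just expired */
        printf("%d\n", slaveShouldExpireOnAccess(now - 60000, now)); /* 1: long expired */
        printf("%d\n", slaveShouldExpireOnAccess(-1, now));          /* 0: no TTL */
        return 0;
    }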
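
Note on the APPEND ONLY FILE item: since the AOF stores commands in the plain
Redis protocol (*<argc> followed by $<len>-prefixed bulk arguments), a single
HMSET carrying all field/value pairs of a small hash is easy to produce during
the rewrite. The sketch below is illustrative only: the helper names and the
FILE*-based output are assumptions, while the real rewrite code works against
Redis' internal hash representation and output buffers.

    #include <stdio.h>
    #include <string.h>

    /* Append one argument in the AOF (Redis protocol) bulk format:
     * $<length>\r\n<payload>\r\n */
    static void emitBulk(FILE *aof, const char *arg) {
        fprintf(aof, "$%zu\r\n%s\r\n", strlen(arg), arg);
    }

    /* Emit a single HMSET command covering every field of a small hash,
     * instead of one HSET command per field. */
    static void rewriteHashAsHmset(FILE *aof, const char *key,
                                   const char **fields, const char **values,
                                   size_t numfields) {
        /* Argument count: HMSET + key + one (field, value) pair per field. */
        fprintf(aof, "*%zu\r\n", 2 + numfields * 2);
        emitBulk(aof, "HMSET");
        emitBulk(aof, key);
        for (size_t j = 0; j < numfields; j++) {
            emitBulk(aof, fields[j]);
            emitBulk(aof, values[j]);
        }
    }

    int main(void) {
        const char *fields[] = { "name", "lang" };
        const char *values[] = { "redis", "C" };
        /* Writes: *6 $5 HMSET $7 project $4 name $5 redis $4 lang $1 C */
        rewriteHashAsHmset(stdout, "project", fields, values, 2);
        return 0;
    }

Compared with one HSET per field, this saves repeating the command name and key
for every field, which adds up when rewriting databases with many small hashes.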