Redis TODO
----------

WARNING: are you a possible Redis contributor?
         Before implementing what is listed in this file
         please drop a message in the Redis google group or chat with
         antirez or pietern on irc.freenode.org #redis to check if the work
         is already in progress and if the feature is still interesting for
         us, and *how* exactly this can be implemented to have good chances
         of a merge. Otherwise it is probably wasted work! Thank you.

API CHANGES
===========
* Turn commands into variadic versions when it makes sense, that is, when
  the variable arguments represent values, and there is no conflict
  with the return value of the command.

2.6
===
* Everything under the "SCRIPTING" section.
* Float increments (INCRBYFLOAT).
* Fix BRPOPLPUSH + vararg LPUSH semantics.
* AOF everysec fsync in background (either the aof-bg branch or something else).

CLUSTER
=======
* Implement rehashing and cluster check in redis-trib.
* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
  all). This will require touching a lot of the RDB code, but we may end
  up with faster persistence for RDB.
* Implement the slave nodes semantics and election.
* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).

SCRIPTING
=========
* MULTI/EXEC/...: should we do more than simply ignore it?
* Prevent Lua from calling itself with redis("eval",...)
* SCRIPT FLUSH or the like to start a fresh interpreter?
* Check the replication handling more carefully.
* Prevent execution of writes if random commands are used.

APPEND ONLY FILE
================
* In the AOF rewrite use HMSET to rewrite small hashes instead of multiple
  calls to HSET.
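  As a rough illustration of why this helps: the AOF stores plain commands in
  the Redis protocol (RESP) format, so one HMSET carries much less framing
  than several separate HSET calls. The standalone toy program below (not the
  actual rewrite code; the key, field and value names are invented) prints the
  encoded size of a three-field hash written both ways:

    /* hmset_vs_hset.c: compare the AOF/RESP size of rewriting a small hash
     * as three HSET commands versus a single HMSET. */
    #include <stdio.h>
    #include <string.h>

    /* Append one argument in RESP bulk format: $<len>\r\n<data>\r\n */
    static int emitBulk(char *buf, int pos, const char *s) {
        return pos + sprintf(buf+pos, "$%zu\r\n%s\r\n", strlen(s), s);
    }

    int main(void) {
        const char *key = "myhash";
        const char *fields[] = {"name", "surname", "age"};
        const char *values[] = {"Kevin", "Smith", "38"};
        char buf[1024];
        int pos, j;

        /* Current rewrite style: one HSET per field (three commands). */
        pos = 0;
        for (j = 0; j < 3; j++) {
            pos += sprintf(buf+pos, "*4\r\n");       /* HSET key field value */
            pos = emitBulk(buf, pos, "HSET");
            pos = emitBulk(buf, pos, key);
            pos = emitBulk(buf, pos, fields[j]);
            pos = emitBulk(buf, pos, values[j]);
        }
        printf("3 x HSET : %d bytes\n", pos);

        /* Proposed rewrite style: a single HMSET carrying all the fields. */
        pos = 0;
        pos += sprintf(buf+pos, "*8\r\n");           /* HMSET key f v f v f v */
        pos = emitBulk(buf, pos, "HMSET");
        pos = emitBulk(buf, pos, key);
        for (j = 0; j < 3; j++) {
            pos = emitBulk(buf, pos, fields[j]);
            pos = emitBulk(buf, pos, values[j]);
        }
        printf("1 x HMSET: %d bytes\n", pos);
        return 0;
    }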

OPTIMIZATIONS
=============
* Avoid COW due to incrementing the dict iterators counter.
* SORT: Don't copy the list into a vector when BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB.
* Read-only mode for slaves.
* Redis big lists as linked lists of small ziplists?
  Possibly a simple heuristic, sketched below, that joins adjacent nodes when
  a node gets smaller than low_level, and splits a node in two when it gets
  bigger than high_level.
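  One possible shape for that heuristic, as a standalone toy model (not Redis
  code: each node only tracks how many entries its ziplist would hold, and the
  low_level/high_level thresholds are arbitrary):

    /* Big list as a doubly linked list of small nodes, split/merged by size. */
    #include <stdio.h>
    #include <stdlib.h>

    #define LOW_LEVEL  32     /* try to merge nodes below this many entries */
    #define HIGH_LEVEL 128    /* split nodes above this many entries        */

    typedef struct node {
        int entries;                 /* entries the node's ziplist would hold */
        struct node *prev, *next;
    } node;

    /* Split a node that grew past HIGH_LEVEL into two half-sized nodes. */
    static void maybe_split(node *n) {
        if (n->entries <= HIGH_LEVEL) return;
        node *half = malloc(sizeof(*half));
        half->entries = n->entries / 2;
        n->entries -= half->entries;
        half->prev = n;
        half->next = n->next;
        if (n->next) n->next->prev = half;
        n->next = half;
    }

    /* Absorb the next node into one that shrank below LOW_LEVEL, as long as
     * the merged node would not immediately need to be split again. */
    static void maybe_merge(node *n) {
        node *next = n->next;
        if (n->entries >= LOW_LEVEL || next == NULL) return;
        if (n->entries + next->entries > HIGH_LEVEL) return;
        n->entries += next->entries;
        n->next = next->next;
        if (next->next) next->next->prev = n;
        free(next);
    }

    int main(void) {
        node head = {150, NULL, NULL};          /* one oversized node */
        maybe_split(&head);
        printf("after split: %d + %d entries\n", head.entries, head.next->entries);

        head.entries = 10;                      /* simulate many deletions */
        maybe_merge(&head);
        printf("after merge: %d entries\n", head.entries);
        return 0;
    }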

RANDOM
======
* The server should abort when getcwd() fails if some kind of persistence is
  configured. Check this in the cron loop (see the sketch after this list).
* Clients should be closed as soon as the output buffer list grows bigger
  than a given number of elements (configurable in redis.conf).
* Should the default Redis configuration, and the default redis.conf, just
  bind 127.0.0.1?
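
  A minimal sketch of the getcwd() check mentioned in the first item above
  (a standalone program, not the actual server cron code; the persistence
  flag is a stand-in for the real configuration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        int persistence_configured = 1;  /* pretend RDB and/or AOF is enabled */
        char cwd[4096];

        /* Run periodically from the cron loop: if the working directory can
         * no longer be resolved (e.g. it was removed), RDB saves and AOF
         * rewrites would fail, so abort loudly instead of failing later. */
        if (persistence_configured && getcwd(cwd, sizeof(cwd)) == NULL) {
            perror("getcwd");
            fprintf(stderr, "Working directory unreachable and persistence is "
                            "configured: aborting.\n");
            exit(1);
        }
        printf("cwd ok: %s\n", cwd);
        return 0;
    }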

KNOWN BUGS
==========
* #519: A slave may have expired keys that were never read in the master (so
  a DEL is not sent on the replication channel) but that expired a long time
  ago. Maybe after a given delay that is undoubtedly greater than the
  replication link latency we should expire such a key on the slave on
  access?
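  One possible shape for that rule, as a standalone sketch (not Redis code;
  the grace period and the names used are invented):

    #include <stdio.h>

    /* Grace period assumed to be much larger than any plausible replication
     * link latency. */
    #define SLAVE_EXPIRE_GRACE_MS 60000

    /* Return 1 if a read on the slave should behave as if the key is gone,
     * even though no DEL has arrived from the master yet. */
    static int slave_should_expire(long long expire_at_ms, long long now_ms) {
        if (expire_at_ms < 0) return 0;            /* key has no TTL */
        return now_ms - expire_at_ms > SLAVE_EXPIRE_GRACE_MS;
    }

    int main(void) {
        long long now = 1700000000000LL;
        printf("expired 1s ago  -> %d\n", slave_should_expire(now - 1000, now));
        printf("expired 10m ago -> %d\n", slave_should_expire(now - 600000, now));
        return 0;
    }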