-* Redis Virtual Memory for datasets bigger than RAM (http://groups.google.com/group/redis-db/msg/752997c7b38553cd)
-
-VERSION 1.8 TODO (Fault-tolerant sharding)
-==========================================
-
-* Redis-cluster, a fast intermediate layer (proxy) that implements consistent hashing and fault-tolerant node handling (a rough sketch of the hashing idea follows the reading link below).
-
-Interesting readings about this:
-
- - http://ayende.com/Blog/archive/2009/04/06/designing-rhino-dht-a-fault-tolerant-dynamically-distributed-hash.aspx
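-
-As a rough illustration (not Redis code), the consistent hashing part could
-look like the sketch below; the node addresses, the FNV-1a hash and the use of
-a single point per node are assumptions, and a real proxy would place many
-virtual points per node so that losing a node only remaps that node's share of
-the keys:
-
-    #include <stdint.h>
-    #include <stdio.h>
-
-    /* Toy FNV-1a hash; the real layer would likely use something stronger. */
-    static uint32_t hash32(const char *s) {
-        uint32_t h = 2166136261u;
-        while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
-        return h;
-    }
-
-    #define NODES 3
-    static const char *nodes[NODES] =
-        {"10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"};
-
-    /* Place every node on a 32-bit ring by hashing its address, then route
-     * a key to the first node clockwise from the key's own hash. */
-    static const char *node_for_key(const char *key) {
-        uint32_t k = hash32(key), best = 0, min = 0;
-        const char *bestnode = NULL, *minnode = NULL;
-        for (int i = 0; i < NODES; i++) {
-            uint32_t p = hash32(nodes[i]);
-            if (minnode == NULL || p < min) { min = p; minnode = nodes[i]; }
-            if (p >= k && (bestnode == NULL || p < best)) {
-                best = p; bestnode = nodes[i];
-            }
-        }
-        return bestnode ? bestnode : minnode; /* wrap around the ring */
-    }
-
-    int main(void) {
-        printf("foobar -> %s\n", node_for_key("foobar"));
-        return 0;
-    }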
-
-VERSION 2.0 TODO (Optimizations and latency)
-============================================
-
-* Lower the CPU usage.
-* Lower the RAM usage everywhere possible.
-* Use epoll and the like to rewrite ae.c for Linux and other platforms supporting faster-than-select() multiplexing APIs (see the sketch after this list).
-* Implement a UDP interface for low-latency GET/SET operations.
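-
-Not a patch to ae.c, just a sketch of the kind of loop an epoll-based backend
-would run (assuming Linux headers; the function name, the 64-event buffer and
-the 100 ms timeout are made up for illustration):
-
-    #include <sys/epoll.h>
-    #include <unistd.h>
-
-    /* Wait for readiness with epoll instead of select(): register the fds we
-     * care about once, then ask the kernel only for the ones that fired. */
-    int event_loop(int listen_fd) {
-        int epfd = epoll_create(1024);
-        if (epfd == -1) return -1;
-
-        struct epoll_event ev, fired[64];
-        ev.events = EPOLLIN;
-        ev.data.fd = listen_fd;
-        if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) == -1) {
-            close(epfd);
-            return -1;
-        }
-
-        for (;;) {
-            int n = epoll_wait(epfd, fired, 64, 100 /* ms */);
-            for (int i = 0; i < n; i++) {
-                if (fired[i].data.fd == listen_fd) {
-                    /* accept() the new client and EPOLL_CTL_ADD its fd */
-                } else if (fired[i].events & EPOLLIN) {
-                    /* invoke the read handler registered for this fd */
-                }
-            }
-        }
-    }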
-
-VERSION 2.2 TODO (Optimizations and latency)
-============================================
-
-* JSON command able to access data serialized in JSON format. For instance, if I have a key foobar holding a JSON object, I can alter the "name" field using something like: "JSON SET foobar name Kevin". We should have GET and INCRBY as well.
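-
-Purely hypothetical, since the command is not designed yet: restating the
-syntax quoted above, a session could look like
-
-    JSON SET foobar name Kevin
-    JSON GET foobar name        (would return "Kevin")
-    JSON INCRBY foobar age 1    (would increment a numeric field in place)
-
-with the exact command names, arguments and reply formats still to be decided.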
-
-OTHER IMPORTANT THINGS THAT WILL BE ADDED BUT I'M NOT SURE WHEN
-===============================================================
-
-BIG ONES:
-
-* Specially encoded memory-saving integer sets (see the sketch after this list).
-* A command to export a JSON dump (there should be a mostly working patch that needs major reworking).
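-
-For the integer set item above, one plausible shape for the encoding (the
-struct layout and the fixed 16-bit width are assumptions, not the actual
-design) is a sorted packed array instead of a hash table of objects:
-
-    #include <stdint.h>
-
-    /* A set holding only small integers stored as a sorted array: no
-     * per-entry pointers or objects, so far less memory per element. A real
-     * version would upgrade the width when a larger value is added. */
-    typedef struct {
-        uint32_t len;      /* number of elements */
-        int16_t values[];  /* sorted ascending */
-    } intset16;
-
-    /* Membership is a binary search over the packed array. */
-    static int intset16_find(const intset16 *is, int16_t v) {
-        uint32_t lo = 0, hi = is->len;
-        while (lo < hi) {
-            uint32_t mid = lo + (hi - lo) / 2;
-            if (is->values[mid] == v) return 1;
-            if (is->values[mid] < v) lo = mid + 1; else hi = mid;
-        }
-        return 0;
-    }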
-
-SMALL ONES:
+* SORT: Don't copy the list into a vector when the BY argument is constant.
+* Write the hash table size of every DB in the dump, so that Redis can resize the hash table just one time when loading a big DB (see the sketch after this list).
+* Read-only mode for slaves.
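+
+A toy model (the numbers and the doubling policy are illustrative, this is not
+the Redis dict code) of why recording the key count in the dump helps: a table
+that has to grow while keys are loaded rehashes several times, while a table
+sized once from the recorded count never does.
+
+    #include <stdio.h>
+
+    /* Count how many grow-and-rehash steps a doubling table performs while
+     * nkeys elements are inserted one by one. */
+    static unsigned long rehashes_when_growing(unsigned long nkeys) {
+        unsigned long size = 4, used = 0, rehashes = 0;
+        for (unsigned long i = 0; i < nkeys; i++) {
+            if (++used > size) { size *= 2; rehashes++; }
+        }
+        return rehashes;
+    }
+
+    int main(void) {
+        unsigned long nkeys = 10000000;
+        printf("growing while loading: %lu rehashes\n",
+               rehashes_when_growing(nkeys));
+        printf("pre-sized from the dump: 0 rehashes\n");
+        return 0;
+    }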