Redis TODO
----------

WARNING: are you a possible Redis contributor?
Before implementing what is listed in this file please drop a message
in the Redis google group or chat with antirez or pietern on
irc.freenode.org #redis to check if the work is already in progress,
if the feature is still interesting for us, and *how* exactly it can
be implemented to have good chances of a merge. Otherwise it is
probably wasted work! Thank you


API CHANGES
===========

* Turn commands into variadic versions when it makes sense, that is, when
  the variable number of arguments represents values, and there is no conflict
  with the return value of the command.

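As an illustration of the condition above, here is a toy in-memory sketch (plain Python structures, not Redis internals) of a variadic SADD-style command: the arguments are all values, and the return value, the count of newly added members, stays meaningful no matter how many values are passed:

```python
# Toy sketch of variadic command semantics (not Redis internals):
# a SADD-style command that accepts many values in a single call.
def sadd(db, key, *values):
    s = db.setdefault(key, set())
    added = 0
    for v in values:
        if v not in s:
            s.add(v)
            added += 1
    # The reply (number of newly added members) does not conflict with
    # the variadic argument list, which is the condition stated above.
    return added

db = {}
print(sadd(db, "tags", "a", "b", "a"))  # prints 2: "a" counted once
```
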
2.6
===

* Everything under the "SCRIPTING" section.
* Float increments (INCRBYFLOAT).
* Fix BRPOPLPUSH + vararg LPUSH semantics.

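Since Redis stores values as strings, a float increment has to parse the current string, add the increment, and write back a cleanly formatted string. A minimal sketch of the intended INCRBYFLOAT semantics (a plain dict stands in for the keyspace; the exact formatting rules of the real command may differ):

```python
# Toy sketch of INCRBYFLOAT: the stored value is a string, so parse it,
# add the increment, and store a trimmed string back. "%.17g" is just
# one way to avoid trailing float noise; the real command's formatting
# may differ.
def incrbyfloat(db, key, increment):
    result = float(db.get(key, "0")) + float(increment)
    value = "%.17g" % result
    db[key] = value
    return value

db = {"counter": "10.5"}
print(incrbyfloat(db, "counter", "0.1"))  # prints 10.6
```
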
CLUSTER
=======

* Implement rehashing and cluster check in redis-trib.
* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
  all). This will require touching a lot of the RDB code around, but we
  may end up with faster RDB persistence as well.
* Implement the slave nodes semantics and election.
* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).

SCRIPTING
=========

* MULTI/EXEC/...: should we do more than simply ignore it?
* Prevent Lua from calling itself with redis("eval",...)
* SCRIPT FLUSH or something similar to start a fresh interpreter?
* Check the replication handling more carefully.
* Prevent execution of writes if random commands are used.

APPEND ONLY FILE
================

* In AOF rewrite use HMSET to rewrite small hashes instead of multiple calls
  to HSET.

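For illustration, a sketch of what the rewrite would emit for a small hash: one HMSET command instead of one HSET per field. The helper names here are made up, but the `*N`/`$len` framing is the actual Redis multi-bulk protocol that the AOF uses:

```python
# Sketch of the AOF rewrite idea: emit a single HMSET for a whole small
# hash instead of one HSET per field. emit_command() is a made-up helper,
# but the *N/$len framing is the real Redis multi-bulk wire format.
def emit_command(*args):
    parts = ["*%d\r\n" % len(args)]
    for arg in args:
        arg = str(arg)
        parts.append("$%d\r\n%s\r\n" % (len(arg), arg))
    return "".join(parts)

def rewrite_hash(key, fields):
    args = ["HMSET", key]
    for field, value in fields.items():
        args += [field, value]
    return emit_command(*args)  # one command covers the whole hash

print(rewrite_hash("user:1", {"name": "antirez", "lang": "C"}))
```
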
OPTIMIZATIONS
=============

* Avoid COW due to incrementing the dict iterators counter.
* SORT: Don't copy the list into a vector when BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can
  resize the hash table just one time when loading a big DB.
* Read-only mode for slaves.
* Represent big lists as linked lists of small ziplists?
  Possibly with a simple heuristic that joins nearby nodes when one gets
  smaller than low_level, and splits a node in two when it gets bigger
  than high_level.

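The join/split heuristic for the last item could be sketched as follows. Plain Python lists stand in for ziplists, and the threshold names mirror the low_level/high_level wording above but are otherwise hypothetical:

```python
# Sketch of the join/split heuristic: each node is a small list
# (standing in for a ziplist). Merge a node into its successor when it
# shrinks below LOW_LEVEL; split it in half when it grows past
# HIGH_LEVEL. Thresholds are hypothetical, not Redis config values.
LOW_LEVEL, HIGH_LEVEL = 4, 8

def rebalance(nodes):
    i = 0
    while i < len(nodes):
        node = nodes[i]
        if len(node) < LOW_LEVEL and i + 1 < len(nodes):
            nodes[i] = node + nodes.pop(i + 1)        # join with next node
        elif len(node) > HIGH_LEVEL:
            mid = len(node) // 2
            nodes[i:i + 1] = [node[:mid], node[mid:]]  # split in two
            i += 2
        else:
            i += 1
    return nodes

print(rebalance([[1, 2], [3, 4, 5, 6], list(range(10))]))
```

Note that after a join the same index is re-examined, so a merge that overshoots high_level gets split again on the next pass.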
RANDOM
======

* Server should abort when getcwd() fails if there is some kind of
  persistence configured. Check this in the cron loop.
* Clients should be closed as soon as the output buffer list is bigger
  than a given number of elements (configurable in redis.conf).
* Should the redis default configuration, and the default redis.conf,
  just bind 127.0.0.1?

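The output-buffer safeguard above could look roughly like this. Everything here is a hypothetical sketch: the limit name is not an actual redis.conf option, and the client structure is a stand-in:

```python
# Sketch of the output-buffer safeguard: once a client's pending reply
# list grows past a configurable limit, flag the client to be closed
# instead of buffering without bound. Names are hypothetical.
MAX_REPLY_LIST_LEN = 128  # stand-in for a redis.conf-style limit

class Client:
    def __init__(self):
        self.reply_list = []
        self.close_asap = False

def add_reply(client, chunk):
    if client.close_asap:
        return  # already scheduled for closing, drop further output
    client.reply_list.append(chunk)
    if len(client.reply_list) > MAX_REPLY_LIST_LEN:
        client.close_asap = True

c = Client()
for _ in range(200):
    add_reply(c, "+OK\r\n")
print(c.close_asap)  # prints True
```
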
KNOWN BUGS
==========

* #519: Slave may have expired keys that were never read in the master (so a DEL
  is not sent in the replication channel) but that expired a long time
  ago. Maybe after a given delay that is undoubtedly greater than the
  replication link latency we should expire this key on the slave on
  access?