Redis TODO
----------

WARNING: are you a possible Redis contributor?
Before implementing what is listed in this file
please drop a message in the Redis google group or chat with
antirez or pietern on irc.freenode.org #redis to check if the work
is already in progress and if the feature is still interesting for
us, and *how* exactly this can be implemented to have good chances
of a merge. Otherwise it is probably wasted work! Thank you

API CHANGES
===========

* Turn commands into variadic versions when it makes sense, that is, when
  the variable number of arguments represent values, and there is no conflict
  with the return value of the command.
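
A minimal sketch of the variadic pattern above; the struct and the callback
are illustrative stand-ins, not the real Redis command table. The point: a
variadic form is safe when every extra argument is just another value and
the reply is an aggregate (here, a count), so it cannot conflict with a
per-argument return value.

    /* Hypothetical stand-ins, for illustration only. */
    struct fake_client { int argc; char **argv; };

    /* SADD-style handling: argv[0] is the command name, argv[1] the key,
     * argv[2..argc-1] are all values to add. */
    static long variadic_add(struct fake_client *c,
                             int (*add_member)(const char *member)) {
        long added = 0;
        for (int j = 2; j < c->argc; j++)
            added += add_member(c->argv[j]);
        return added;   /* one aggregate reply: how many values were added */
    }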

CLUSTER
=======

* Implement rehashing and cluster check in redis-trib.
* Reimplement MIGRATE / RESTORE to use just in-memory buffers (no disk at
  all). This will require touching a lot of the RDB stuff around, but we may
  end up with faster persistence for RDB.
* Implement the slave nodes semantics and election.
* Allow redis-trib to create a cluster-wide snapshot (using SYNC).
* Allow redis-trib to restore a cluster-wide snapshot (implement UPLOAD?).

SCRIPTING
=========

* MULTI/EXEC/...: should we do more than simply ignore it?
* Prevent Lua from calling itself with redis("eval",...) (see the sketch
  after this list).
* SCRIPT FLUSH or alike to start a fresh interpreter?
* http://redis.io/topics/sponsors
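
A minimal sketch of the "prevent Lua from calling itself" item; the
check_lua_command() name is an assumption for illustration, not the real
scripting binding.

    #include <strings.h>

    /* Hypothetical guard the redis() Lua binding could call before
     * dispatching; returns 0 if the command is allowed from Lua. */
    static int check_lua_command(const char *cmdname) {
        if (strcasecmp(cmdname, "eval") == 0)
            return -1;   /* would re-enter the Lua interpreter recursively */
        return 0;        /* any other command is fine */
    }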

APPEND ONLY FILE
================

* In AOF rewrite use HMSET to rewrite small hashes instead of multiple calls
  to HSET.
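
A minimal sketch of the idea above; emit_bulk() and the FILE-based output
are illustrative stand-ins, not the real AOF rewrite helpers. A small hash
becomes a single HMSET multibulk command instead of one HSET per field.

    #include <stdio.h>
    #include <string.h>

    /* Write one RESP bulk string: $<len>\r\n<payload>\r\n */
    static void emit_bulk(FILE *fp, const char *s) {
        fprintf(fp, "$%zu\r\n%s\r\n", strlen(s), s);
    }

    /* HMSET key f1 v1 ... fn vn == 2 + 2*n arguments in one command. */
    static void rewrite_small_hash(FILE *fp, const char *key,
                                   const char **fields, const char **vals,
                                   int n) {
        fprintf(fp, "*%d\r\n", 2 + 2*n);
        emit_bulk(fp, "HMSET");
        emit_bulk(fp, key);
        for (int i = 0; i < n; i++) {
            emit_bulk(fp, fields[i]);
            emit_bulk(fp, vals[i]);
        }
    }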

OPTIMIZATIONS
=============

* Avoid COW due to incrementing the dict iterators counter.
* SORT: Don't copy the list into a vector when BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can
  resize the hash table just one time when loading a big DB.
* Read-only mode for slaves.
* Redis big lists as linked lists of small ziplists?
  Possibly a simple heuristic that joins nearby nodes when some node gets
  smaller than low_level, and splits a node into two when it gets bigger
  than high_level (see the sketch below).
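
A minimal sketch of the join/split heuristic for lists of small ziplists;
the node layout, LOW_LEVEL/HIGH_LEVEL values and rebalance() are assumptions
for illustration (the ziplist encoding itself is not modeled).

    #include <stdlib.h>

    typedef struct lnode {
        struct lnode *prev, *next;
        size_t bytes;              /* size of the small ziplist in this node */
    } lnode;

    #define LOW_LEVEL   64         /* assumed threshold: join below this */
    #define HIGH_LEVEL 1024        /* assumed threshold: split above this */

    /* After an insert/delete touches 'n', keep node sizes between the two
     * thresholds by merging with the next node or splitting in half. */
    static void rebalance(lnode *n) {
        if (n->bytes < LOW_LEVEL && n->next != NULL) {
            lnode *victim = n->next;             /* join: absorb the next node */
            n->bytes += victim->bytes;
            n->next = victim->next;
            if (victim->next) victim->next->prev = n;
            free(victim);
        } else if (n->bytes > HIGH_LEVEL) {
            lnode *half = malloc(sizeof(*half)); /* split: push half out */
            if (half == NULL) return;
            half->bytes = n->bytes / 2;
            n->bytes -= half->bytes;
            half->prev = n;
            half->next = n->next;
            if (n->next) n->next->prev = half;
            n->next = half;
        }
    }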

REPORTING
=========

* Better INFO output with sections.

RANDOM
======

* Clients should be closed as soon as the output buffer list is bigger than
  a given number of elements (configurable in redis.conf); see the sketch
  after this list.
* Should the Redis default configuration, and the default redis.conf, just
  bind 127.0.0.1?
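
A minimal sketch of the client-closing rule in the first item; the 'client'
struct and the max_reply_list_len parameter are illustrative stand-ins, and
the redis.conf option that would feed the limit is left unnamed as in the
item above.

    /* Illustrative stand-ins, not the real networking structures. */
    struct client {
        long reply_list_len;   /* nodes currently queued in the output list */
        int  should_close;
    };

    /* Called after queueing a reply node; a limit of 0 means "no limit". */
    static void check_output_buffer(struct client *c, long max_reply_list_len) {
        if (max_reply_list_len > 0 && c->reply_list_len > max_reply_list_len)
            c->should_close = 1;   /* close instead of buffering forever */
    }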

KNOWN BUGS
==========

* What happens in the following scenario:
  1) We are reading an AOF file.
  2) SETEX FOO 5 BAR
  3) APPEND FOO ZAP
  What happens if between 2 and 3, for some reason (system under huge load
  or alike), too much time passes? We should prevent expires while the
  AOF is loading (see the sketch after this list).
* #519: Slave may have expired keys that were never read in the master (so a DEL
  is not sent in the replication channel) but have already been expired for
  a long time. Maybe after a given delay that is undoubtedly greater than
  the replication link latency we should expire this key on the slave on
  access?
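
A minimal sketch of the fix suggested in the first item; the 'loading' flag
and expire_if_needed() are stand-in names for illustration, not the real
db.c code. While the AOF is being replayed, lookups skip active expiration,
so a slow load cannot make later commands in the file see a missing key.

    /* Illustrative stand-ins. */
    struct server_state { int loading; };             /* 1 while replaying the AOF */
    struct key_entry    { long long expire_at_ms; };  /* -1 = no TTL */

    /* Returns 1 if the key must be treated as expired, 0 otherwise. */
    static int expire_if_needed(struct server_state *srv,
                                struct key_entry *k, long long now_ms) {
        if (k->expire_at_ms < 0) return 0;      /* no TTL set */
        if (srv->loading) return 0;             /* never expire while loading */
        return now_ms >= k->expire_at_ms;
    }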

DISKSTORE TODO
==============

* Fix FLUSHALL/FLUSHDB: the queue of pending reads/writes should be handled.
* Check that 00/00 and ff/ff exist at startup, otherwise exit with error.
* Implement sync flush option, where data is written synchronously on disk
  when a command is executed.
* Implement MULTI/EXEC as a transaction abstract API to diskstore.c, with
  transaction_start, transaction_end, and a journal to recover (see the
  sketch after this list).
* Stop BGSAVE thread on shutdown and any other condition where the child is
  killed during normal bgsave.
* Fix RANDOMKEY to really do something interesting.
* Fix DBSIZE to really do something interesting.
* Add a DEBUG command to check if an entry is currently in memory or not.
* dscache.c near 236, kobj = createStringObject... we could use static obj.
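
A rough sketch of what the transaction abstract API mentioned above could
look like; the struct, the journal format and the function bodies are all
assumptions for illustration, not existing diskstore.c code. Writes are
recorded in a journal first, so a crash in the middle of a MULTI/EXEC can
be recovered by replaying or discarding the journal.

    #include <stdio.h>

    typedef struct dstrans {
        FILE *journal;      /* journal file open for this transaction */
        int   pending_ops;  /* writes recorded so far */
    } dstrans;

    static int transaction_start(dstrans *t, const char *journal_path) {
        t->journal = fopen(journal_path, "w");
        t->pending_ops = 0;
        return t->journal ? 0 : -1;
    }

    /* Record one write in the journal before touching the on-disk store
     * (the helper name and the SET line format are assumptions). */
    static int transaction_log_write(dstrans *t, const char *key,
                                     const char *serialized_value) {
        if (fprintf(t->journal, "SET %s %s\n", key, serialized_value) < 0)
            return -1;
        t->pending_ops++;
        return 0;
    }

    static int transaction_end(dstrans *t) {
        /* A real implementation would also fsync() the journal before
         * applying the writes, then remove it once the store is updated. */
        if (fflush(t->journal) != 0) { fclose(t->journal); return -1; }
        return fclose(t->journal) == 0 ? 0 : -1;
    }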