Redis TODO
----------

WARNING: are you a possible Redis contributor?
Before implementing what is listed in this file
please drop a message in the Redis google group or chat with
antirez or pietern on irc.freenode.org #redis to check if the work
is already in progress and if the feature is still interesting for
us, and *how* exactly this can be implemented to have good chances
of a merge. Otherwise it is probably wasted work! Thank you.

API CHANGES
===========

* Turn commands into variadic versions when it makes sense, that is, when
  the variable number of arguments represent values, and there is no conflict
  with the return value of the command (see the example below).
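
For instance SADD is a natural candidate: every extra argument is just
another member, and the integer reply (the number of elements actually
added) extends cleanly to the multi-member case:

    SADD myset a b c      (one call instead of three)
    (integer) 3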

APPEND ONLY FILE
================

* In AOF rewrite use HMSET to rewrite small hashes instead of multiple calls
  to HSET (see the example below).
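
That is, instead of emitting one command per field into the rewritten AOF:

    HSET myhash f1 v1
    HSET myhash f2 v2
    HSET myhash f3 v3

the rewrite would emit a single command carrying all the fields:

    HMSET myhash f1 v1 f2 v2 f3 v3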

OPTIMIZATIONS
=============

* Avoid COW due to incrementing the dict iterators counter.
* SORT: Don't copy the list into a vector when the BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can
  resize the hash table just one time when loading a big DB.
* Read-only mode for slaves.
* Redis big lists as linked lists of small ziplists?
  Possibly a simple heuristic that joins nearby nodes when some node gets
  smaller than low_level, and splits a node in two if it gets bigger than
  high_level (see the sketch after this list).
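
A minimal sketch of the join/split heuristic. Everything here is
hypothetical: list_node, ziplist_len(), ziplist_split() and
ziplist_merge() do not exist in the code base, the names are only
illustrative:

    /* Called after every push/pop that touched 'node'. */
    void maybe_rebalance(list_node *node, size_t low, size_t high) {
        size_t len = ziplist_len(node->zl);
        if (len > high) {
            /* Node grew too big: split its ziplist into two nodes. */
            ziplist_split(node);
        } else if (len < low && node->next &&
                   ziplist_len(node->next->zl) < low) {
            /* Two adjacent small nodes: join them into a single ziplist. */
            ziplist_merge(node, node->next);
        }
    }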

REPORTING
=========

* Better INFO output with sections.

RANDOM
======

* Clients should be closed as soon as the output buffer list is bigger than
  a given number of elements (configurable in redis.conf, see the sketch
  below).
* Should the redis default configuration, and the default redis.conf, just
  bind 127.0.0.1?
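
A possible redis.conf syntax for the output buffer limit. The directive
name is hypothetical, nothing like it exists yet:

    # Close a client once its reply list holds more than N objects.
    # Use 0 to disable the limit.
    client-max-output-buffer-entries 10000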

KNOWN BUGS
==========

* What happens in the following scenario:
  1) We are reading an AOF file.
  2) SETEX FOO 5 BAR
  3) APPEND FOO ZAP
  What happens if between 2 and 3 for some reason (system under huge load
  or alike) too much time passes? We should prevent expires while the
  AOF is loading (see the first sketch after this list).
* #519: Slave may have expired keys that were never read in the master (so a
  DEL is not sent in the replication channel) but have already been expired
  for a long time. Maybe after a given delay that is undoubtedly greater than
  the replication link latency we should expire this key on the slave on
  access? (See the second sketch after this list.)
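
For the AOF loading bug, a sketch of the guard, assuming a flag on the
server structure that is set while the AOF is being replayed (if no such
loading flag exists yet it would have to be added); the shape follows
the existing expireIfNeeded()/propagateExpire() helpers in db.c:

    int expireIfNeeded(redisDb *db, robj *key) {
        time_t when = getExpire(db,key);

        if (when < 0) return 0;             /* No expire set. */
        if (server.loading) return 0;       /* Never expire while loading:
                                             * later AOF commands must see
                                             * the value they saw when the
                                             * log was written. */
        if (time(NULL) <= when) return 0;   /* Not yet expired. */
        propagateExpire(db,key);
        return dbDelete(db,key);
    }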
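
For #519, a sketch of the delayed expire-on-access on the slave side.
REPL_EXPIRE_GRACE is a hypothetical constant meant to be far larger than
any sane replication link latency:

    #define REPL_EXPIRE_GRACE 60 /* Seconds; hypothetical value. */

    /* On a slave only the master normally expires keys (we wait for its
     * explicit DEL), but a key logically expired for much longer than
     * the replication latency can safely be expired locally on access. */
    int slaveExpireIfNeeded(redisDb *db, robj *key) {
        time_t when = getExpire(db,key);

        if (when < 0) return 0;
        if (time(NULL) > when + REPL_EXPIRE_GRACE)
            return dbDelete(db,key);
        return 0;
    }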

DISKSTORE TODO
==============

* Fix FLUSHALL/FLUSHDB: the queue of pending reads/writes should be handled.
* Check that 00/00 and ff/ff exist at startup, otherwise exit with an error.
* Implement a sync flush option, where data is written synchronously on disk
  when a command is executed.
* Implement MULTI/EXEC as a transaction abstract API to diskstore.c, with
  transaction_start, transaction_end, and a journal to recover (see the
  first sketch after this list).
* Stop the BGSAVE thread on shutdown and on any other condition where the
  child is killed during a normal bgsave.
* Fix RANDOMKEY to really do something interesting.
* Fix DBSIZE to really do something interesting.
* Add a DEBUG command to check if an entry is currently in memory or not.
* dscache.c near 236, kobj = createStringObject...: we could use a static
  object instead (see the second sketch after this list).
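
A sketch of the MULTI/EXEC transaction API for diskstore.c. All the names
and signatures are hypothetical, nothing implements this yet:

    /* Journal-backed transactions: transaction_start() opens a journal
     * file, every write inside the transaction is appended to the journal
     * before the data files are touched, and transaction_end() fsyncs the
     * journal, applies it, then unlinks it. A leftover journal found at
     * startup is replayed, so a crash mid-transaction never leaves the
     * store half-updated. */
    typedef struct dsTransaction dsTransaction;

    dsTransaction *transaction_start(void);
    int transaction_set(dsTransaction *t, robj *key, robj *val);
    int transaction_del(dsTransaction *t, robj *key);
    int transaction_end(dsTransaction *t); /* Commit: fsync + apply. */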
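
For the dscache.c item, a minimal sketch of the static object idea: a
stack-allocated robj avoids the malloc/free pair of createStringObject()
for a key object that only lives for the duration of one lookup. This
assumes the initStaticStringObject() macro from redis.h; if it is not
usable there, a stack robj can be initialized by hand the same way:

    void lookupByName(redisDb *db, sds keyname) {
        robj kobj;

        /* Build the key object on the stack: no allocation, and no
         * decrRefCount() needed afterwards. */
        initStaticStringObject(kobj,keyname);
        lookupKeyRead(db,&kobj);
    }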