WARNING: are you a possible Redis contributor?
Before implementing what is listed in this file
please drop a message in the Redis google group or chat with
antirez or pietern on irc.freenode.org #redis to check if the work
is already in progress and if the feature is still interesting for
us, and *how* exactly this can be implemented to have good chances
of a merge. Otherwise it is probably wasted work! Thank you
* Check that 00/00 and ff/ff exist at startup, otherwise exit with an error.
* Implement a sync flush option, where data is written synchronously to disk when a command is executed.
* Implement MULTI/EXEC as a transaction abstract API to diskstore.c, with transaction_start, transaction_end, and a journal to recover (a sketch follows after this group).
* Stop the BGSAVE thread on shutdown and on any other condition where the child is killed during a normal bgsave.
* Fix RANDOMKEY to really do something interesting.
* Fix DBSIZE to really do something interesting.
* Add a DEBUG command to check whether an entry is currently in memory or not.
* dscache.c near line 236, kobj = createStringObject... we could use a static object instead.
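
A rough sketch of how the transaction API from the MULTI/EXEC item above could
be shaped. transaction_start and transaction_end are the names given in the
item; the journal layout, transaction_log_key() and transaction_journal_sync()
are assumptions made only for illustration, not the existing diskstore.c
interface:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct transaction {
        FILE *journal;      /* one journaled key per line */
        char path[1024];    /* journal file inside the diskstore root */
    } transaction;

    /* Create an empty journal for a new MULTI/EXEC block. */
    transaction *transaction_start(const char *dsroot) {
        transaction *t = malloc(sizeof(*t));
        if (t == NULL) return NULL;
        snprintf(t->path, sizeof(t->path), "%s/journal", dsroot);
        t->journal = fopen(t->path, "w");
        if (t->journal == NULL) { free(t); return NULL; }
        return t;
    }

    /* Record a key the transaction is going to modify on disk. */
    int transaction_log_key(transaction *t, const char *key) {
        return fprintf(t->journal, "%s\n", key) >= 0 ? 0 : -1;
    }

    /* Flush the journal to disk; to be called before the actual
     * diskstore writes are performed. */
    int transaction_journal_sync(transaction *t) {
        if (fflush(t->journal) != 0) return -1;
        return fsync(fileno(t->journal));
    }

    /* Called after all the disk writes completed: the journal is no
     * longer needed, so close and remove it. */
    int transaction_end(transaction *t) {
        int ret = fclose(t->journal) == 0 ? 0 : -1;
        if (ret == 0) ret = remove(t->path);
        free(t);
        return ret;
    }

Recovery would then amount to checking for a leftover journal at startup: if
one is found, the keys listed in it may be half-written and must be reloaded
or discarded.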
* In the AOF rewrite, use HMSET to rewrite small hashes instead of multiple calls (see the sketch below).
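
A hedged illustration of the HMSET idea above: instead of one command per
field, a small hash is flushed as a single HMSET. Plain C strings and a FILE*
stand in for the robj/rio machinery the real rewrite would use:

    #include <stdio.h>
    #include <string.h>

    /* Emit "HMSET key field1 value1 ... fieldN valueN" as one command
     * in the AOF (RESP) format. */
    static void rewrite_small_hash(FILE *aof, const char *key,
                                   const char **fields, const char **values,
                                   int numfields) {
        int argc = 2 + numfields * 2;   /* HMSET + key + field/value pairs */
        fprintf(aof, "*%d\r\n$5\r\nHMSET\r\n", argc);
        fprintf(aof, "$%zu\r\n%s\r\n", strlen(key), key);
        for (int j = 0; j < numfields; j++) {
            fprintf(aof, "$%zu\r\n%s\r\n", strlen(fields[j]), fields[j]);
            fprintf(aof, "$%zu\r\n%s\r\n", strlen(values[j]), values[j]);
        }
    }

For a hash with N fields this replaces N commands with a single one, so the
rewritten AOF is both smaller and faster to load.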
* Avoid COW due to incrementing the dict iterators counter.
* SORT: Don't copy the list into a vector when the BY argument is constant.
* Write the hash table size of every db in the dump, so that Redis can resize the hash table just one time when loading a big DB (see the sketch after this group).
* Read-only mode for slaves.
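
A minimal sketch of the hash table size idea above. The helper names and the
raw fwrite()/fread() encoding are assumptions (a real dump would use its own
length encoding); the point is only that the key count is saved with the DB so
the loader can size the table once instead of rehashing repeatedly:

    #include <stdio.h>
    #include <stdint.h>

    /* On save: write the number of keys of the DB being dumped. */
    static int save_db_size_hint(FILE *rdb, uint64_t numkeys) {
        return fwrite(&numkeys, sizeof(numkeys), 1, rdb) == 1 ? 0 : -1;
    }

    /* On load: read the hint back, so the caller can pre-size the
     * hash table in one step before inserting the keys, e.g.:
     *     if (load_db_size_hint(fp, &n) == 0) dictExpand(db->dict, n);
     */
    static int load_db_size_hint(FILE *rdb, uint64_t *numkeys) {
        return fread(numkeys, sizeof(*numkeys), 1, rdb) == 1 ? 0 : -1;
    }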
* Better INFO output with sections.
* Clients should be closed as soon as the output buffer list is bigger than a given number of elements (configurable in redis.conf); a sketch follows after this group.
* Should the Redis default configuration, and the default redis.conf, just bind 127.0.0.1?
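
For the output buffer item above, a minimal sketch assuming a new redis.conf
directive; the directive name, struct fields and function below are made up
for illustration only:

    /* Hypothetical limit loaded from redis.conf, for example:
     *     max-client-output-list-length 10000
     * (both the directive and the fields are illustrative). */
    static unsigned long max_output_list_length = 10000;

    typedef struct client {
        int fd;
        unsigned long reply_len;   /* nodes currently in the reply list */
        int should_close;          /* set when the client must be dropped */
    } client;

    /* To be called every time an object is appended to the reply list. */
    static void check_output_buffer_limit(client *c) {
        if (max_output_list_length != 0 &&
            c->reply_len > max_output_list_length)
        {
            /* The real server would free the client (possibly in an
             * asynchronous way); here we only mark it. */
            c->should_close = 1;
        }
    }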
* What happens in the following scenario:
    1) We are reading an AOF file.
    What happens if, between one command and the next, for some reason
    (system under huge load or the like) too much time passes? We should
    prevent expires while the AOF is loading.
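
A minimal sketch of one way to prevent that, assuming a global loading flag
set while the AOF is replayed; the flag and the expire check below are
illustrative, not a description of the current code:

    #include <time.h>

    static int server_loading = 0;      /* set to 1 while the AOF is replayed */

    typedef struct key_entry {
        time_t expire_at;               /* 0 = no expire set on this key */
    } key_entry;

    /* Return 1 if the key should be treated as expired and removed. */
    static int key_logically_expired(key_entry *k) {
        if (k->expire_at == 0) return 0;
        if (server_loading) return 0;   /* never expire while loading the AOF */
        return time(NULL) > k->expire_at;
    }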