+28 Nov 2010: Ver 1.0 - initial version
+22 Apr 2010: Ver 1.1 - more details and rationales
+
+Overview
+========
+
+Redis is a fast key-value store supporting complex aggregate data types as
+values. For instance, keys can be bound to lists with many elements, sets,
+sub-dictionaries (hashes), and so forth.
+
+While Redis is very fast, it currently lacks scalability in the form of the
+ability to transparently run across different nodes. This is desirable mainly
+for the following three reasons:
+
+A) Fault tolerance. Some nodes may go offline without affecting operations.
+B) Holding bigger datasets without using a single box with a lot of RAM.
+C) Scaling writes.
+
+Since a single Redis instance supports 140,000 operations per second on a good
+Linux box costing less than $1000, the need for Redis Cluster arises more
+from "A" and "B". Scaling writes can also be useful in very high load
+environments. Scaling reads is already easily accomplished using Redis built-in
+replication.
+
+Design goals
+============
+
+Designing a DHT in 2010 is hard, as there is a strong bias towards good designs
+that are already well tested in practice, like the Amazon Dynamo design.
+Still, a Dynamo-like DHT may not be the best fit for Redis.
+
+Redis is very simple and fast at its core, so Redis Cluster should try to
+follow the same guidelines. The first problem with a Dynamo-like DHT is that
+Redis supports complex data types. Merging complex values like lists, which
+in the case of a netsplit may diverge in very complex ways, is not going to
+be easy. The "most recent data wins" rule is not applicable, and all the
+conflict resolution logic would end up in the application.
+
+Even a simple application can end up with a complex schema of keys and complex
+values. Writing code in order to resolve conflicts is not going to be
+programmer friendly.
+
+So the author of this document claims that Redis does not need to resist
+netsplits: it is enough to resist M-1 nodes going offline, where
+M is the number of nodes storing every key-value pair.
+
+For instance, in a three-node cluster I may configure the cluster to
+store every key on two instances (M=2). Such a cluster can survive a single
+node going offline without interruption of the service.
+
+When more than M-1 nodes are offline, the cluster should detect this condition
+and refuse any further query. The system administrator should check why
+those nodes are offline and bring them back online if possible.
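+
The availability rule above boils down to a one-line check; a sketch, assuming the offline-node count and M are known:

```python
def cluster_available(offline_nodes, m):
    """The cluster can serve queries only while at most M-1 nodes are offline."""
    return offline_nodes <= m - 1

# With M=2, one offline node is tolerated; two are not.
print(cluster_available(1, m=2))   # True
print(cluster_available(2, m=2))   # False
```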
+
+Since resisting big netsplits is no longer a requirement (there is no
+conflict resolution stage, and at least one node responsible for
+holding every possible key must be online for the cluster to work), there is
+also no need for a design where every node can act as an independent entity,
+receiving queries and forwarding them to other nodes as needed.
+
+Instead, a more decoupled approach can be used: a Redis Proxy
+node (or multiple Proxy nodes) is contacted by clients, and
+is responsible for forwarding queries and replies back and forth between
+clients and data nodes.
+
+Data nodes can be just vanilla redis-server instances.
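+
As an illustration only, the proxy architecture can be modeled in a few lines of Python, with in-memory dictionaries standing in for vanilla redis-server instances. The class names and the hash-based ownership scheme are hypothetical:

```python
import hashlib

class DataNode:
    """Stands in for a vanilla redis-server instance."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

class RedisProxy:
    """Toy proxy: forwards each command to the M nodes owning the key."""
    def __init__(self, nodes, m=2):
        self.nodes = nodes
        self.m = m

    def _owners(self, key):
        # Same illustrative placement scheme: hash the key onto the node
        # list and take m consecutive nodes.
        start = int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)] for i in range(self.m)]

    def set(self, key, value):
        for node in self._owners(key):        # replicate the write to every owner
            node.set(key, value)

    def get(self, key):
        return self._owners(key)[0].get(key)  # read from the first owner

nodes = [DataNode(n) for n in ("node-a", "node-b", "node-c")]
proxy = RedisProxy(nodes, m=2)
proxy.set("user:1000", "antirez")
print(proxy.get("user:1000"))  # → antirez
```

The data nodes need no cluster logic at all; everything about placement and replication lives in the proxy, which is the point of this design.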
+