While Redis is very fast, it currently lacks scalability in the form of the ability
to transparently run across different nodes. This is desirable mainly for the
following reasons:
A) Fault tolerance. Some node may go offline without affecting operations.
B) Holding bigger datasets without using a single box with a lot of RAM.
Redis is very simple and fast at its core, so Redis cluster should try to
follow the same guidelines. The first problem with a Dynamo-alike DHT is that
handling values which, in the case of a netsplit, may diverge in very complex
ways is not going to be easy. The "most recent data wins" rule is not
applicable, and all the resolution business should be in the application.
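To make the merge problem concrete, here is a tiny plain-Python illustration (not Redis code, and not part of the original design): two replicas of the same list value keep accepting writes on both sides of a netsplit, and "most recent data wins" necessarily throws one side away, while a correct merge depends on what the value means to the application.

    # Two replicas of the same Redis list that kept accepting writes on both
    # sides of a netsplit (hypothetical values, plain Python for illustration).
    before_split = ["job:1", "job:2"]
    replica_a = before_split + ["job:3"]            # pushed on node A during the split
    replica_b = before_split + ["job:4", "job:5"]   # pushed on node B during the split

    # "Most recent data wins": one side's writes are silently discarded.
    last_write_wins = replica_b                     # job:3 is lost

    # A correct merge depends on the application's semantics for this value:
    # a job queue might concatenate, a log might sort by timestamp, and so on.
    merged_as_queue = before_split + ["job:3", "job:4", "job:5"]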
to time if there is no traffic. This way Proxy Nodes can detect ASAP if
there is a problem in some Data Node or in the Configuration Node.
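A minimal sketch of such a health check, assuming Python with the redis-py client; the node addresses, the timeout and the way failures are reported are assumptions, not part of the design note.

    import redis

    # Hypothetical node table; in the real design this would come from the
    # Configuration Node rather than being hard coded.
    data_nodes = {
        "data-1": redis.Redis(host="10.0.0.1", port=6379, socket_timeout=1),
        "data-2": redis.Redis(host="10.0.0.2", port=6379, socket_timeout=1),
    }
    config_node = redis.Redis(host="10.0.0.10", port=6379, socket_timeout=1)

    def check_nodes():
        """PING every Data Node and the Configuration Node; return the unresponsive ones."""
        failed = []
        for name, conn in {**data_nodes, "config": config_node}.items():
            try:
                conn.ping()            # plain PING/PONG round trip
            except redis.RedisError:
                failed.append(name)    # timeout or error: flag this node
        return failed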
On startup a Proxy Node will also register itself in the Configuration Node, and will make sure to refresh its registration every N seconds (via an EXPIREing key) so that it is possible to detect when a Proxy Node fails.
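A possible shape for that registration loop, again assuming Python and redis-py; the proxy:<id> key name, the value of N and the TTL multiplier are invented for the sketch, the note only requires an EXPIREing key refreshed every N seconds.

    import time
    import redis

    config_node = redis.Redis(host="10.0.0.10", port=6379)

    PROXY_ID = "proxy-1"            # hypothetical identifier for this Proxy Node
    PROXY_ADDR = "10.0.0.20:7000"   # address other components would use to reach it
    REFRESH_EVERY = 5               # the "N seconds" of the note (value assumed)
    TTL = REFRESH_EVERY * 3         # key survives a couple of missed refreshes

    def register_and_refresh():
        while True:
            # SET with EX: if this proxy dies, the key simply expires, which is
            # how the rest of the cluster can detect the failure.
            config_node.set(f"proxy:{PROXY_ID}", PROXY_ADDR, ex=TTL)
            time.sleep(REFRESH_EVERY)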
When a Data Node joins or leaves the cluster, and in general when the cluster configuration changes, all the Proxy Nodes will receive a notification and will reload the configuration from the Configuration Node.
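The notification channel is not specified here; one plausible reading, sketched with redis-py Pub/Sub, is that the Configuration Node publishes on a channel and each Proxy Node reloads its node table when a message arrives. The channel and hash names below are assumptions.

    import redis

    config_node = redis.Redis(host="10.0.0.10", port=6379)

    def load_cluster_config():
        """Re-read the Data Node table from the Configuration Node (assumed layout)."""
        return {k.decode(): v.decode()
                for k, v in config_node.hgetall("cluster:data-nodes").items()}

    def watch_config_changes():
        pubsub = config_node.pubsub()
        pubsub.subscribe("cluster:config-changed")   # hypothetical channel name
        nodes = load_cluster_config()
        for message in pubsub.listen():              # blocks until a notification arrives
            if message["type"] == "message":
                nodes = load_cluster_config()        # reload the whole configuration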
3a) The Proxy Node forwards the query to M Data Nodes at the same time, waiting for replies.
3b) Once all the replies are received, the Proxy Node checks that the replies are consistent. For instance, all the M nodes need to reply with OK, and so forth. If the query fails on a subset of nodes but succeeds on others, the failing nodes are considered unreliable and are put offline, notifying the Configuration Node (a sketch of 3a/3b follows below).
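A rough sketch of steps 3a/3b, assuming Python with redis-py; the value of M, how the M target nodes are picked, and the way unreliable nodes are flagged on the Configuration Node are all assumptions layered on top of the text.

    import redis

    M = 2  # replication factor: how many Data Nodes receive each write (assumed)

    def forward_write(data_nodes, config_node, key, value):
        """Step 3a/3b: send the same write to M Data Nodes and compare the replies."""
        # Node selection by hashing the key is omitted; just take the first M.
        # Sent sequentially here for brevity; per 3a the real Proxy Node would
        # send the query to the M nodes at the same time.
        targets = list(data_nodes.items())[:M]

        replies = {}
        for name, conn in targets:
            try:
                replies[name] = conn.set(key, value)   # True when the node replies +OK
            except redis.RedisError:
                replies[name] = None                   # no (valid) reply from this node

        ok = [n for n, r in replies.items() if r is True]
        failed = [n for n, r in replies.items() if r is not True]

        # 3b) If a subset failed while others succeeded, the failing nodes are
        # considered unreliable: flag them on the Configuration Node (assumed layout).
        if ok and failed:
            for name in failed:
                config_node.hset("cluster:offline-nodes", name, "1")

        return not failed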