CLUSTER README
==============

Redis Cluster is currently a work in progress, however there are a few things
that you can do already with it to see how it works.

The following guide shows you how to set up a three-node cluster and issue
some basic commands against it.

... WORK IN PROGRESS ...

1) Show MIGRATE
2) Show CLUSTER MEET
3) Show link status detection with CLUSTER NODES
4) Show how to add slots with CLUSTER ADDSLOTS
5) Show redirection
6) Show cluster down

... WORK IN PROGRESS ...

TODO
====

*** WARNING: all the following probably has some meaning only for
*** me (antirez), most of the info is not updated, so please consider this
*** file as a private TODO list / brainstorming.

- Disconnect FAIL clients after some pong idle time.

---------------------------------

* Majority rule: the cluster can continue when all the hash slots are covered AND the majority of the masters are up (see the sketch below).
* Shutdown on request rule: when a node sees many connections closed, or even a timeout longer than usual on almost all the other nodes, it will usually wait for the normal timeout before changing its state. However, if it receives a query from a client in the meantime, it will put itself into error status immediately.
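
Below is a minimal sketch, in C, of what the majority rule check could look
like: the cluster keeps serving requests only if every hash slot is assigned
to a reachable node and the majority of the masters are reachable. The
structures, names and slot count are assumptions made for the example, not
the actual Redis Cluster code.

    /* Sketch of the "majority rule" check. NUM_SLOTS and all identifiers
     * here are illustrative assumptions. */
    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_SLOTS 4096              /* assumed number of hash slots */

    struct node { bool is_master; bool reachable; };

    bool cluster_can_continue(struct node *slot_owner[NUM_SLOTS],
                              struct node *nodes, size_t num_nodes)
    {
        size_t masters = 0, reachable_masters = 0;

        /* Every hash slot must be assigned to a reachable node. */
        for (size_t s = 0; s < NUM_SLOTS; s++)
            if (slot_owner[s] == NULL || !slot_owner[s]->reachable)
                return false;

        /* A majority of the master nodes must be reachable. */
        for (size_t i = 0; i < num_nodes; i++) {
            if (!nodes[i].is_master) continue;
            masters++;
            if (nodes[i].reachable) reachable_masters++;
        }
        return reachable_masters > masters / 2;
    }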

--------------------------------

* When asked for a key that is not among the hash slots it serves, a node will reply with one of:

    -ASK 1.2.3.4:6379 (in case we want the client to ask just one time)
    -MOVED <slotid> 1.2.3.4:6379 (in case the hash slot is permanently moved)

So with -ASK a client should just retry the query against the new node, a single time.

With -MOVED the client should update its hash slots table to reflect the fact that now the specified node is the one to contact for the specified hash slot.
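
A minimal sketch, in C, of how a client could react to these two replies.
The reply strings follow the description above; the function and variable
names are just illustrative, not part of any Redis client API.

    /* Sketch of client-side handling of -ASK and -MOVED replies. */
    #include <stdio.h>

    static void handle_redirect(const char *err)
    {
        char host[64];
        int port, slot;

        if (sscanf(err, "-ASK %63[^:]:%d", host, &port) == 2) {
            /* -ASK: retry this single query against host:port, without
             * updating the hash slot table (the move is not permanent). */
            printf("retry once against %s:%d\n", host, port);
        } else if (sscanf(err, "-MOVED %d %63[^:]:%d", &slot, host, &port) == 3) {
            /* -MOVED: the slot moved for good, so remember the new mapping
             * and send all future queries for this slot to host:port. */
            printf("slot %d is now served by %s:%d\n", slot, host, port);
        }
    }

    int main(void)
    {
        handle_redirect("-ASK 1.2.3.4:6379");
        handle_redirect("-MOVED 3999 1.2.3.4:6379");
        return 0;
    }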

* Nodes communicate using a binary protocol.

* Node failure detection.

 1) Every node contains information about all the other nodes:
    - If the node is believed to work ok or not.
    - The hash slots for which the node is responsible.
    - If the node is a master or a slave.
    - If it is a slave, which node it is a slave of.
    - If it is a master, the list of its slave nodes.
    - The slaves are ordered by their "<ip>:<port>" string, lexicographically
      from lower to higher. When a master is down, the cluster will try to
      elect the first slave in the list.

 2) Every node also contains the unix time at which every other node was last
    reported to work properly (that is, it replied to a ping or any other
    protocol request correctly). For every node we also store the timestamp
    at which we sent the latest ping, so we can easily compute the current
    lag.

 3) From time to time a node pings a random node, selected among the nodes
    with the least recent "alive" time stamp. Three random nodes are sampled
    and the one with the lowest alive time stamp is pinged (see the sketch
    after this list).

 4) The ping packet also contains the alive time stamps of a few random
    nodes, so that the receiver of the ping can update its alive table if a
    received alive timestamp is more recent than the one present in its
    local table.

    In the ping packet the "gossip" information for every node looks like
    this:

    <ip>:<port>:<status>:<pingsent_timestamp>:<pongreceived_timestamp>

    The status is OK, POSSIBLE_FAILURE or FAILURE.

 5) The node replies to a ping with a pong packet that also contains a
    random selection of node timestamps.

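The following is a rough sketch, in C, of the per-node state and of the ping
target selection described in points 2 and 3: three random nodes are sampled
and the one heard from least recently is pinged. All structure and function
names are assumptions made for the example, not the real Redis structures.

    /* Sketch of per-peer state and of ping target selection. */
    #include <stdlib.h>
    #include <time.h>

    struct peer {
        char ip[16];
        int port;
        int status;               /* OK, POSSIBLE_FAILURE or FAILURE */
        time_t ping_sent;         /* when we sent the latest ping */
        time_t pong_received;     /* last time the peer looked alive */
    };

    /* Sample three random peers and return the one with the oldest
     * "alive" (pong received) timestamp, to be pinged next. */
    struct peer *select_ping_target(struct peer *peers, size_t num_peers)
    {
        struct peer *target = NULL;

        if (num_peers == 0) return NULL;
        for (int i = 0; i < 3; i++) {
            struct peer *candidate = &peers[rand() % num_peers];
            if (target == NULL ||
                candidate->pong_received < target->pong_received)
                target = candidate;
        }
        return target;
    }
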
A given node thinks another node may be in a failure state once there is a
ping timeout longer than 30 seconds (configurable).

When a possible failure is detected the node performs the following actions:

 1) Is the average lag towards all the other nodes big? For instance bigger
    than 30 seconds / 2 = 15 seconds? If so, probably *we* are disconnected.
    In such a case we don't trust our lag data, and reset all the timestamps
    of sent pings to zero. This way, when we reconnect, there is no risk
    that we'll claim many nodes are down, taking inappropriate actions.

 2) Messages from nodes marked as failed are *always* ignored by the other
    nodes. A new node needs to be "introduced" by a good online node.

 3) If we are well connected (that is, condition "1" is not true) and a
    node timeout is > 30 seconds, we mark the node as POSSIBLE_FAILURE
    (a flag in the cluster node structure). Every time we send a ping
    to another node we inform it that we detected this condition, as
    already stated.

 4) Once a node receives a POSSIBLE_FAILURE status for a node that is
    already marked as POSSIBLE_FAILURE locally, it sends a message of type
    NODE_FAILURE_DETECTED to all the other nodes, communicating the ip/port
    of the specified node (see the sketch after this list).

    All the nodes need to update the status of this node, setting it to
    FAILURE.

 5) If the node in FAILURE state is a master node, a Slave Election must be
    performed.
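
A minimal sketch, in C, of the escalation in points 3 and 4: a suspicion
received from another node about a peer we already suspect ourselves
promotes the peer to FAILURE and triggers the NODE_FAILURE_DETECTED
broadcast (only logged here). Everything in the snippet is an illustrative
assumption, not the actual Redis code.

    /* Sketch of the POSSIBLE_FAILURE -> FAILURE escalation. */
    #include <stdio.h>

    enum node_status { NODE_OK, NODE_POSSIBLE_FAILURE, NODE_FAILURE };

    struct peer_state {
        char addr[32];            /* "<ip>:<port>" */
        enum node_status status;
    };

    /* Called when gossip from another node reports 'suspect' as possibly
     * failing. If we already suspect it too, declare the failure. */
    void on_possible_failure_report(struct peer_state *suspect)
    {
        if (suspect->status == NODE_POSSIBLE_FAILURE) {
            suspect->status = NODE_FAILURE;
            /* Here a real node would broadcast NODE_FAILURE_DETECTED with
             * the ip/port of the failing node to every other known node. */
            printf("NODE_FAILURE_DETECTED %s\n", suspect->addr);
        }
    }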

SLAVE ELECTION

 1) The slave election is performed by the first slave (with slaves ordered
    lexicographically). Actually it is the first functioning slave, so if
    the first slave is marked as failing the next slave will perform the
    election, and so forth. Such a slave is called the "Successor" (see the
    sketch after this list).

 2) The Successor starts by checking that all the nodes in the cluster have
    already marked the master as being in FAILURE state. If at least one
    node does not agree, no action is performed.

 3) If all the nodes agree that the master is failing, the Successor does
    the following:

    a) It will send a SUCCESSION message to all the other nodes, which will
       update their hash slot tables accordingly. It will make sure that all
       the nodes are updated, and if some node did not receive the message
       it will keep trying.
    b) Once all the nodes not marked as FAILURE have accepted the SUCCESSION
       message, it will update its own table and start acting as a master,
       accepting write queries.
    c) Every node receiving the SUCCESSION message, if not already informed
       of the change, will broadcast the same message to three other random
       nodes. No action is performed if the specified node was already marked
       as the master node.
    d) A node that was a slave of the original master that failed will
       switch master to the new one once the SUCCESSION message is received.

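A small sketch, in C, of the Successor selection from point 1: slaves are
sorted lexicographically by their "<ip>:<port>" string and the first one not
marked as failing is picked. The structures and names are illustrative
assumptions, not the actual Redis implementation.

    /* Sketch of Successor selection for the slave election. */
    #include <stdlib.h>
    #include <string.h>

    struct slave {
        char addr[32];      /* "<ip>:<port>" */
        int failing;        /* non-zero if marked POSSIBLE_FAILURE/FAILURE */
    };

    static int by_addr(const void *a, const void *b)
    {
        return strcmp(((const struct slave *)a)->addr,
                      ((const struct slave *)b)->addr);
    }

    /* Return the slave that should act as Successor, or NULL if none. */
    struct slave *elect_successor(struct slave *slaves, size_t num_slaves)
    {
        qsort(slaves, num_slaves, sizeof(*slaves), by_addr);
        for (size_t i = 0; i < num_slaves; i++)
            if (!slaves[i].failing) return &slaves[i];
        return NULL;
    }
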
RANDOM

 1) When selecting a slave, the system will try to pick one with an IP
    different from that of the master and of the other slaves, if possible.

 2) The PING packet also contains information about the local configuration
    checksum. This is the SHA1 of the current configuration, without the
    bits that normally change from one node to another (like the latest ping
    reply, the failure status of nodes, and so forth). From time to time the
    local config SHA1 is checked against that of the other nodes, and if
    there is a mismatch between our configuration and the most common one
    that lasts for more than N seconds, the most common configuration is
    requested and retrieved from another node. The event is logged (see the
    sketch after this list).

 3) Every time a node updates its internal cluster configuration, it dumps
    this configuration into the cluster.conf file. On startup the
    configuration is reloaded. Nodes can share the cluster configuration
    when needed (for instance if the SHA1 does not match) using this exact
    same format.
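
A sketch, in C, of the configuration checksum from point 2: only the stable
part of the configuration (for example, the same node lines written to
cluster.conf) is hashed, so two nodes with the same cluster view produce the
same digest. The node line format shown is an assumption; OpenSSL's SHA1()
is used purely for the example (compile with -lcrypto), while Redis ships
its own SHA1 implementation.

    /* Sketch of the cluster configuration checksum. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* 'stable_config' is a textual dump of the non-volatile configuration,
     * e.g. one "<ip>:<port> <role> <slots>" line per node, sorted, with
     * volatile fields (ping times, failure flags) left out. */
    void config_checksum(const char *stable_config,
                         unsigned char digest[SHA_DIGEST_LENGTH])
    {
        SHA1((const unsigned char *)stable_config,
             strlen(stable_config), digest);
    }

    int main(void)
    {
        unsigned char digest[SHA_DIGEST_LENGTH];

        config_checksum("1.2.3.4:6379 master 0-4095\n"
                        "5.6.7.8:6379 slave 1.2.3.4:6379\n", digest);
        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }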

CLIENTS

 - Clients may be configured to use slaves to perform reads, when
   read-after-write consistency is not required.
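
A tiny sketch, in C, of the read/write split this implies for a client:
writes and consistency-sensitive reads go to the master of the slot, other
reads may be spread over its slaves. Purely illustrative; none of this is an
existing Redis client API.

    /* Sketch of routing reads to slaves when stale reads are acceptable. */
    #include <stddef.h>

    struct endpoint { const char *host; int port; };

    struct endpoint route_query(int is_write, int needs_fresh_read,
                                struct endpoint master,
                                const struct endpoint *slaves,
                                size_t num_slaves, size_t counter)
    {
        if (is_write || needs_fresh_read || num_slaves == 0)
            return master;                       /* must hit the master */
        return slaves[counter % num_slaves];     /* round-robin over slaves */
    }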