Redis Cluster - Alternative 1

28 Apr 2010: Ver 1.0 - initial version

Overview
========

The motivations and design goals of Redis Cluster are already outlined in the
first design document of Redis Cluster. This document is just an attempt to
provide a completely alternative approach in order to explore more ideas.

In this document the alternative explored is a cluster where communication is
performed directly from the client to the target node, without an intermediate
layer.

The intermediate layer can be used, in the form of a proxy, in order to provide
the same functionality to clients not able to directly use the cluster protocol.
So in a first stage clients can use a proxy to implement the hash ring, but
later these clients can switch to a native implementation, following a
specification that the Redis project will provide.

In this new design fault tolerance is achieved by replicating every data node
M-1 times instead of storing the same key M times across nodes.

From the point of view of CAP our biggest sacrifice is about "P", that is
resistance to partitioning: only M-1 nodes can go down while keeping the
cluster functional. Also, when possible, "A" is somewhat sacrificed for "L",
that is, latency: not really part of the CAP equation, but a very important
parameter.

Network layout
==============

In this alternative design the network layout is simple, as there are only
clients talking directly to N data nodes. So we can imagine having:

- K Redis clients, directly talking to the data nodes.
- N Redis data nodes, that is, normal Redis instances.

Data nodes are replicated M-1 times (so there are a total of M copies of
every node). If M is 1, the system is not fault tolerant. If M is 2, one
data node can go offline without affecting operations. And so forth.

Hash slots
==========

The key space is divided into 1024 slots.

Given a key, the SHA1 function is applied to it.
The first 10 bits of the SHA1 digest are interpreted as an unsigned integer
from 0 to 1023. This is the hash slot of the key.
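
As an illustration, a minimal Python sketch of this mapping could look like the
following (the exact way the "first 10 bits" are extracted is not specified by
this document, so that detail is an assumption of the example):

    import hashlib

    def hash_slot(key):
        # SHA1 of the key: the digest is a 20 byte binary string.
        digest = hashlib.sha1(key.encode()).digest()
        # Interpret the first 10 bits of the digest as an unsigned integer,
        # obtaining a slot number between 0 and 1023.
        return ((digest[0] << 8) | digest[1]) >> 6

    print(hash_slot("foo"))  # the same key always maps to the same slot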

Data nodes
==========

Data nodes are normal Redis instances, but a few additional commands are
provided.

HASHRING ADD ... list of hash slots ...
HASHRING DEL ... list of hash slots ...
HASHRING SETREHASHING slot host port
HASHRING SLOTS => returns the list of configured slots
HASHRING KEYS ... list of hash slots ...

By default Redis instances are configured to accept operations about all
the hash slots. With these commands it is possible to configure a Redis
instance to accept only a subset of the key space.

If an operation is performed against a key hashing to a slot that is not
configured to be accepted, the Redis instance will reply with:

    "-ERR wrong hash slot"

More details on the HASHRING command and its subcommands will be shown later
in this document.

Additionally three other commands are added:

DUMP key
RESTORE key <dump data>
MIGRATE key host port

DUMP is used to output a very compact binary representation of the data stored at key.

RESTORE re-creates a value (storing it at key) starting from the output produced by DUMP.

MIGRATE is like a server-side DUMP+RESTORE command. This atomic command moves one key from the connected instance to another instance, returning the status code of the operation (+OK or an error).

The protocol described in this draft only uses the MIGRATE command, but this in turn will use RESTORE internally when connecting to another server, and DUMP is provided for symmetry.
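
As an illustration only, here is how a tool might drive the migration of a
single key from Python, using the argument order proposed above (redis-py is
assumed as the client library; since MIGRATE is only proposed in this draft it
is issued through the generic execute_command call):

    import redis

    # Source node currently holding the key (addresses are just examples).
    src = redis.Redis(host="192.168.1.100", port=6379)

    # MIGRATE key host port, as proposed by this document.
    reply = src.execute_command("MIGRATE", "foo", "192.168.1.103", "6380")
    print(reply)  # status reply of the operation, e.g. OK or an error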

Querying the cluster
====================

1) Reading the cluster config
-----------------------------

Clients of the cluster are required to have the cluster configuration loaded
into memory. The cluster configuration is the sum of the following info:

- Number of data nodes in the cluster, for instance, 10
- A map between hash slots and nodes, so for instance:
    hash slot 1 -> node 0
    hash slot 2 -> node 5
    hash slot 3 -> node 3
    ... and so forth ...
- Physical address of nodes, and their replicas:
    node 0 addr -> 192.168.1.100
    node 0 replicas -> 192.168.1.101, 192.168.1.105
- Configuration version: the SHA1 of the whole configuration

The configuration is stored in every single data node of the cluster.

A client without the configuration in memory is required, as a first step, to
read the config. In order to do so the client needs a list of IPs that are,
with good probability, data nodes of the cluster.

The client will try to get the config from all these nodes. If no responding
node is found, an error is reported to the user.
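
A minimal sketch of this first step, assuming (the document does not specify
this) that the configuration and its version are stored under the well-known
keys __cluster__:config and __cluster__:config:version, and that redis-py is
the client library:

    import redis

    SEED_NODES = [("192.168.1.100", 6379), ("192.168.1.101", 6379)]

    def load_cluster_config():
        # Try every known node until one answers with the configuration.
        for host, port in SEED_NODES:
            try:
                node = redis.Redis(host=host, port=port, socket_timeout=2)
                config = node.get("__cluster__:config")           # hypothetical key
                version = node.get("__cluster__:config:version")  # hypothetical key
                if config is not None:
                    return config, version
            except redis.RedisError:
                continue  # node unreachable, try the next one
        raise RuntimeError("no data node reachable: cannot load the cluster config")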

2) Caching and refreshing the configuration
-------------------------------------------

A client is allowed to cache the configuration in memory or in a different way
(for instance storing the configuration into a file), but every client is
required to check at least every 10 seconds if the configuration changed,
asking for the configuration version key with a single GET call, and checking
if the configuration version matches the one loaded in memory.

Also a client is required to refresh the configuration every time a node
replies with:

    "-ERR wrong hash slot"

As this means that hash slots were reassigned in some way.

Checking the configuration every 10 seconds is not required in theory, but it
is a good protection against errors and failures that may happen in real world
environments. It is also very cheap to perform, as a GET operation from time
to time is going to have no impact on the overall performance.
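
A sketch of this periodic check, continuing with the hypothetical
__cluster__:config:version key used in the previous example:

    import time

    CHECK_INTERVAL = 10  # seconds, as required by this document
    last_check = 0.0

    def maybe_refresh_config(node, cached_version):
        # A single GET from time to time: cheap, and it bounds how stale the
        # cached configuration can be.
        global last_check
        if time.time() - last_check < CHECK_INTERVAL:
            return cached_version
        last_check = time.time()
        current = node.get("__cluster__:config:version")  # hypothetical key
        if current != cached_version:
            # The version changed: reload the whole configuration
            # (see load_cluster_config in the earlier sketch).
            _, cached_version = load_cluster_config()
        return cached_version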

3) Read query
-------------

To perform a read query the client hashes the key argument from the command
(in the initial version of Redis Cluster only single-key commands are
allowed). Using the in-memory configuration it maps the hash slot to the
node ID.

If the client is configured to support read-after-write consistency, then
the "master" node for this hash slot is queried.

Otherwise the client picks a random node among the master and the available
replicas.

4) Write query
--------------

A write query is exactly like a read query, with the difference that the
write always targets the master node, never a replica.
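
Putting the two cases together, routing a query might look like the following
sketch (the layout of the in-memory configuration, the node objects and the
read-after-write flag are assumptions of the example, not part of this
specification):

    import random

    def node_for_key(config, key, write=False, read_after_write=False):
        slot = hash_slot(key)            # see the earlier hash slot sketch
        node_id = config["slots"][slot]  # hash slot -> node ID map
        master, replicas = config["nodes"][node_id]
        if write or read_after_write:
            # Writes (and consistent reads) always go to the master.
            return master
        # Plain reads may use any of the M copies.
        return random.choice([master] + replicas)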

Creating a cluster
==================

In order to create a new cluster, the redis-cluster command line utility is
used. It gets a list of available nodes and replicas, in order to write the
initial configuration to all the nodes.

At this point the cluster is usable by clients.

Adding nodes to the cluster
===========================

The command line utility redis-cluster is used in order to add a node to the
cluster:

1) The cluster configuration is loaded.
2) A fair number of hash slots are assigned to the new data node.
3) Hash slots moved to the new node are marked as "REHASHING" in the old
   nodes, using the HASHRING command:

   HASHRING SETREHASHING 1 192.168.1.103 6380

The above command puts hash slot "1" into the rehashing state, with the
"forwarding address" set to 192.168.1.103:6380. As a result, if this node
receives a query about a key hashing to hash slot 1 that *is not present* in
the current data set, it replies with:

    "-MIGRATED 192.168.1.103:6380"

The client can then reissue the query against the new node.

If instead the hash slot is marked as rehashing but the requested key is
still present, the query is processed normally. This allows for non-blocking
rehashing.

Note that no additional memory is used by Redis in order to provide such a
feature.

4) While the hash slot is marked as "REHASHING", redis-cluster asks this node
for the list of all the keys matching the specified hash slot. Then all the
keys are moved to the new node using the MIGRATE command.
5) Once all the keys are migrated, the hash slot is deleted from the old
node's configuration with "HASHRING DEL 1", and the configuration is updated.

Using this algorithm all the hash slots are migrated to the new node, one
after the other. In a practical implementation, before starting the migration,
the redis-cluster utility should write a log into the configuration so that
in case of a crash or any other problem the utility is able to recover from
where it left off.
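
As an illustration of steps 3-5, a sketch of the loop the redis-cluster utility
might run for a single hash slot (HASHRING and MIGRATE are the commands
proposed by this draft, issued here through redis-py's generic execute_command;
every other name is an assumption of the example):

    import redis

    def migrate_slot(slot, old_node, new_host, new_port):
        # old_node is a redis.Redis connection to the node currently owning
        # the slot.
        # 3) Mark the slot as rehashing, with the new node as forwarding address.
        old_node.execute_command("HASHRING", "SETREHASHING", slot, new_host, new_port)
        # 4) Ask for the keys belonging to this slot and move them one by one.
        for key in old_node.execute_command("HASHRING", "KEYS", slot):
            old_node.execute_command("MIGRATE", key, new_host, new_port)
        # 5) Remove the slot from the old node; the cluster configuration
        #    should then be updated and redistributed.
        old_node.execute_command("HASHRING", "DEL", slot)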

Fault tolerance
===============

Fault tolerance is achieved by replicating every data node M-1 times, so that
we have one master and M-1 replicas, for a total of M nodes holding the same
hash slots. Up to M-1 nodes can go down without affecting the cluster.

The tricky part about fault tolerance is detecting when a node is failing and
signaling it to all the other clients.

When a master node is failing in a permanent way, promoting the first slave
is easy:
1) At some point a client will notice there are problems accessing a given node. It will try to refresh the config, but will notice that the config is already up to date.
2) In order to make sure the problem is not about the client's own connectivity, it will try to reach other nodes as well. If more than M-1 nodes appear to be down, it is either a client networking problem or the cluster cannot be fixed anyway, as too many nodes are down. So no action is taken, but an error is reported.
3) If instead only 1, or at most M-1, nodes appear to be down, the client promotes a slave to master and writes the new configuration to all the data nodes.

All the other clients will see the data node not working, and as a first step will try to refresh the configuration. They will successfully refresh the configuration and the cluster will work again.

Every time a slave is promoted, the information is written to a log (actually a Redis list) in all the data nodes, so that system administration tools can detect what happened and notify the administrator.

Intermittent problems
---------------------

In the above scenario a master was failing in a permanent way. Now instead
let's consider a case where a network cable is not working well, so that a
node appears to be up for a few seconds and down for a few seconds.

When this happens, recovering can be much harder, as a client may notice the
problem and promote a slave to master as a result, but then the host will be
up again and the other clients will not see the problem, writing to the old
master for up to 10 seconds (after 10 seconds all the clients are required to
perform a few GETs to check the configuration version of the cluster and
update it if needed).

One way to fix this problem is to delegate the failover mechanism to a
failover agent. When clients notice problems they will not take any active
action, but will just log the problem into a Redis list in all the reachable
nodes, wait, check for a configuration change, and retry.

The failover agent constantly monitors these logs: if some client is reporting
a failing node, it can take appropriate action, checking whether the failure
is permanent or not. If it is not, it can send a SHUTDOWN command to the
failing master if possible. The failover agent can also assess the problem
better by checking whether the failing node is reported by all the clients or
just a single one, and can verify itself that there is a real problem before
proceeding with the failover.

Redis proxy
===========

In order to make the switch to the clustered version of Redis simpler, and
because the client-side protocol is non-trivial to implement compared to the
usual Redis client lib protocol (where a minimal lib can be as small as
100 lines of code), a proxy will be provided that implements the cluster
protocol on behalf of its clients.

Every client will talk to a redis-proxy node that is responsible for speaking
the new protocol and forwarding back the replies.

In the long run the aim is to switch all the major client libraries to the
new protocol in a native way.

Supported commands
==================

Because with this design we talk directly to the data nodes and there is a
single "master" version of every value (that's the big gain of dropping "P"
from CAP!), almost all the Redis commands can be supported by the clustered
version, including MULTI/EXEC and multi-key commands, as long as all the keys
hash to the same hash slot. In order to guarantee this, key tags can be used:
when a specific pattern is present in the key name, only that part is hashed
in order to obtain the hash slot.
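
For example, with a {...} key tag convention (the braces convention itself is
an assumption of this sketch, not fixed by this document), only the tagged part
of the key would be hashed, forcing related keys onto the same slot:

    def key_tag(key):
        # If the key contains a {...} tag, only the tagged part is hashed, so
        # "user:{1000}:followers" and "user:{1000}:following" end up in the
        # same hash slot and can be used together in MULTI/EXEC.
        start = key.find("{")
        end = key.find("}", start + 1)
        if start != -1 and end > start + 1:
            return key[start + 1:end]
        return key

    slot = hash_slot(key_tag("user:{1000}:followers"))  # see the earlier sketch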

Random remarks
==============

- It's still not clear how to perform an atomic election of a slave to master.
- In normal conditions (all the nodes working) this new design is just
  K clients talking to N nodes without intermediate layers and without routing:
  this means it is horizontally scalable with O(1) lookups.
- The cluster should optionally be able to work with manual failover,
  for environments where it's desirable to do so. For instance it's possible
  to set up periodic checks on all the nodes, and switch IPs when needed,
  or other advanced configurations that cannot be the default as they
  are too environment dependent.

A few ideas about client-side slave election
============================================

Detecting failures in a collaborative way
-----------------------------------------

In order to make node failure detection and slave election a distributed
effort, without any "control program" that is in some way a single point
of failure (the cluster will not stop when this program stops, but errors are
not corrected while it is not running), it is possible to use a few
consensus-like algorithms.

For instance all the nodes may keep a list of errors detected by clients.

If Client-1 detects some failure accessing Node-3, for instance a connection
refused error or a timeout, it logs what happened with LPUSH commands against
all the other nodes. These error messages will have a timestamp and the node
ID. Something like:

    LPUSH __cluster__:errors 3:1272545939

So if the error is reported many times in a small amount of time, at some
point a client can gather enough hints that a slave election is needed.
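
A possible client-side sketch of this error reporting, and of the "many errors
in a small amount of time" check (the __cluster__:errors key and the
node_id:timestamp format come from the document; the window and threshold
values are arbitrary choices of the example):

    import time

    ERROR_KEY = "__cluster__:errors"

    def report_node_failure(reachable_nodes, failed_node_id):
        # Log the failure, with node ID and timestamp, on every reachable node.
        entry = "%d:%d" % (failed_node_id, int(time.time()))
        for node in reachable_nodes:
            node.lpush(ERROR_KEY, entry)

    def election_needed(node, failed_node_id, window=30, threshold=5):
        # Count recent reports about this node: enough of them within the
        # window is the hint that a slave election should be attempted.
        recent = 0
        for raw in node.lrange(ERROR_KEY, 0, 99):
            node_id, ts = raw.decode().split(":")
            if int(node_id) == failed_node_id and time.time() - int(ts) < window:
                recent += 1
        return recent >= threshold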

Atomic slave election
---------------------

In order to avoid races when electing a slave to master (that is, in order to
avoid that some client may still contact the old master for that node within
the 10 second time frame), the client performing the election may write a
hint into the configuration, change the configuration SHA1 accordingly and
wait for more than 10 seconds, in order to be sure all the clients will
refresh the configuration before a new access.

The config hint may be something like:

"we are switching to a new master, that is x.y.z.k:port, in a few seconds"

When a client updates the config and finds such a flag set, it starts to
continuously refresh the config until a change is noticed (this will take
at most 10-15 seconds).

The client performing the election will wait for that famous 10 second time
frame and will finally update the config in a definitive way, setting the new
slave as master. At this point all the clients are guaranteed to have the new
config, either because they already refreshed it or because at their next
query the cached config will have expired and they will update it.
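
The sequence above might look like this on the client performing the election
(the helper functions and the way the hint is represented are assumptions of
this sketch; the wait is longer than the 10 second refresh interval mandated
earlier):

    import time

    def elect_new_master(data_nodes, failed_node_id, new_master_addr):
        # 1) Write the "switching master" hint and the new configuration SHA1
        #    to every reachable data node (hypothetical helper).
        write_config_hint(data_nodes, failed_node_id, new_master_addr)
        # 2) Wait longer than the 10 second refresh interval, so every client
        #    has either seen the hint or holds an expired config.
        time.sleep(15)
        # 3) Write the final configuration with the promoted slave as master
        #    (hypothetical helper).
        write_final_config(data_nodes, failed_node_id, new_master_addr)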

EOF