antirez [Thu, 15 Nov 2012 19:11:05 +0000 (20:11 +0100)]
Safer handling of MULTI/EXEC on errors.
After the transaction starts with a MULTI, the previous behavior was to
return an error on problems such as the maxmemory limit being reached,
but still execute the transaction with the subset of successfully queued
commands on EXEC.
While it is true that the client could detect errors by distinguishing a
QUEUED reply from an error reply, most client implementations use
pipelining with MULTI/EXEC for speed, so all the commands and the EXEC
are sent without waiting for replies.
With this change:
1) EXEC fails if at least one command was not queued because of an
error. The EXECABORT error is used.
2) A generic error is always reported on EXEC.
3) The client DISCARDs the MULTI state after a failed EXEC, otherwise
pipelining multiple transactions would be basically impossible:
After a failed EXEC the next transaction would be simply queued as
the tail of the previous transaction.
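A minimal client-side sketch of the new behavior using hiredis (host,
port, and key names here are illustrative, not part of the commit):

    #include <stdio.h>
    #include <string.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) return 1;

        /* Pipeline the whole transaction without reading replies in
         * between, as most client implementations do for speed. */
        redisAppendCommand(c, "MULTI");
        redisAppendCommand(c, "SET foo bar");
        redisAppendCommand(c, "INCR counter");
        redisAppendCommand(c, "EXEC");

        for (int i = 0; i < 4; i++) {
            redisReply *reply;
            if (redisGetReply(c, (void **)&reply) != REDIS_OK) break;
            /* With this change, if any command failed to queue, the EXEC
             * reply is a single EXECABORT error instead of a partial
             * execution of the transaction. */
            if (reply->type == REDIS_REPLY_ERROR &&
                strncmp(reply->str, "EXECABORT", 9) == 0)
                printf("transaction aborted: %s\n", reply->str);
            freeReplyObject(reply);
        }
        redisFree(c);
        return 0;
    }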
antirez [Mon, 19 Nov 2012 11:02:08 +0000 (12:02 +0100)]
Children creating AOF or RDB files now report memory used by COW.
Finally Redis is able to report the amount of memory used by
copy-on-write while saving an RDB or writing an AOF file in background.
Note that this information is currently only logged (at NOTICE level)
and not shown in INFO because this is less trivial (but surely doable
with some minor form of interprocess communication).
The reason we can't capture this information on the parent before we
call wait3() is that the Linux kernel will release the child memory
ASAP, and only retain the minimal state for the process that is useful
to report the child termination to the parent.
The COW size is obtained by summing all the Private_Dirty fields found
in the "smaps" file of the proc filesystem for the child process.
All this is Linux specific and is not available on other systems.
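A minimal sketch of the Linux-specific measurement described above (the
function name is illustrative, not the actual Redis helper):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sum the Private_Dirty fields of /proc/self/smaps, in bytes.
     * Returns 0 if the file can't be read (e.g. non-Linux systems). */
    size_t private_dirty_bytes(void) {
        char line[1024];
        size_t bytes = 0;
        FILE *fp = fopen("/proc/self/smaps", "r");

        if (!fp) return 0;
        while (fgets(line, sizeof(line), fp) != NULL) {
            if (strncmp(line, "Private_Dirty:", 14) == 0) {
                /* Lines look like "Private_Dirty:      1234 kB". */
                bytes += strtoul(line + 14, NULL, 10) * 1024;
            }
        }
        fclose(fp);
        return bytes;
    }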
The AOF rewrite used to be handled in the else branch without actually
checking whether the pid really matched. This commit makes the check
explicit and logs at WARNING level if the pid returned by wait3() matches
neither the RDB nor the AOF rewrite child.
antirez [Thu, 1 Nov 2012 14:36:37 +0000 (15:36 +0100)]
32 bit build fixed on Linux.
It failed because of the way jemalloc was compiled (without passing the
right flags to make, but just to configure). Now the same set of flags
is also passed to the make command, fixing the issue.
antirez [Wed, 31 Oct 2012 08:23:05 +0000 (09:23 +0100)]
Invert two sides of if expression in SET to avoid a lookup.
Because of the short-circuit behavior of &&, inverting the two sides of
the if expression avoids a hash table lookup when the non-EX variant of
SET is called.
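A tiny self-contained illustration of the pattern (the names are made up
for the example; this is not the actual Redis code):

    #include <stdio.h>

    /* Stands in for the hash table lookup we want to avoid. */
    static int expensive_lookup(const char *key) {
        printf("lookup(%s)\n", key);
        return 0;
    }

    int main(void) {
        int has_expire = 0;   /* the non-EX variant of SET */

        /* Before: the lookup always runs, even with no expire. */
        if (expensive_lookup("foo") && has_expire) { }

        /* After: && short-circuits on the cheap flag first, so the
         * hash table lookup is skipped when no expire was requested. */
        if (has_expire && expensive_lookup("foo")) { }
        return 0;
    }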
Thanks to Weibin Yao (@yaoweibin on github) for spotting this.
Schuster [Mon, 22 Oct 2012 09:44:20 +0000 (11:44 +0200)]
redis-check-dump now understands dumps produced by Redis 2.6
(Commit message from @antirez as it was missing in the original commits;
the patch was also modified a bit to still work with 2.4 dumps and to
avoid if expressions that are always true given the range of the checked
types.)
This commit changes redis-check-dump to account for new encodings and
for the new MSTIME expire format. It also refactors the test for valid
type into a function.
The code is still compatible with Redis 2.4 generated dumps.
antirez [Mon, 22 Oct 2012 08:43:39 +0000 (10:43 +0200)]
Default memory limit for 32 bit instances moved from 3.5 GB to 3 GB.
On some systems, notably OS X, the 3.5 GB limit was too high and not able
to prevent an out of memory crash. The 3 GB limit works better and it
is still a lot of memory within the 4 GB theoretical limit, so it's not
going to bother anyone :-)
antirez [Mon, 22 Oct 2012 08:28:54 +0000 (10:28 +0200)]
Differentiate SCRIPT KILL error replies.
When calling SCRIPT KILL currently you can get two errors:
* No script in timeout (busy) state.
* The script already performed a write.
It is useful to be able to distinguish the two errors, but right now both
start with the "ERR" prefix, so fragile string matching must be used.
This commit introduces two different prefixes, -NOTBUSY and -UNKILLABLE,
used respectively when no script is busy at the moment, and when the
script already executed a write operation and cannot be killed.
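A short client-side sketch of matching on the new prefixes with hiredis,
assuming an already connected redisContext *c (as in the MULTI/EXEC sketch
above); the handling shown is illustrative:

    redisReply *reply = redisCommand(c, "SCRIPT KILL");
    if (reply != NULL && reply->type == REDIS_REPLY_ERROR) {
        if (strncmp(reply->str, "NOTBUSY", 7) == 0) {
            /* No script is currently running: nothing to kill. */
        } else if (strncmp(reply->str, "UNKILLABLE", 10) == 0) {
            /* The script already performed a write and cannot be
             * killed; SHUTDOWN NOSAVE remains the only option. */
        }
    }
    if (reply) freeReplyObject(reply);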
NanXiao [Wed, 10 Oct 2012 09:08:43 +0000 (17:08 +0800)]
Update src/redis-benchmark.c
The current implementation reads:

    if (c->pending == 0) clientDone(c);

In clientDone() the memory pointed to by c is freed, but the enclosing
loop then evaluates while(c->pending) again. Since c has already been
freed it is a dangling pointer, and dereferencing it causes a crash
(core dump) on some platforms (e.g. Solaris).
So I think the code should be modified as:

    if (c->pending == 0) {
        clientDone(c);
        break;
    }

so that the while(c->pending) condition is never evaluated on the freed
client.
antirez [Tue, 16 Oct 2012 15:35:50 +0000 (17:35 +0200)]
Fix MULTI / EXEC rendering in MONITOR output.
Before this commit it used to be like this:

    MULTI
    EXEC
    ... actual commands of the transaction ...

Because after all that is the natural order of things: transaction
commands are queued and executed *only after* EXEC is called.
However this makes debugging with MONITOR a mess, so the code was
modified to provide a coherent output.
What happens now is that MULTI is rendered in the MONITOR output as soon
as possible, while EXEC is propagated only after the transaction is
actually executed, or even when it fails because of WATCH; in that case
you'll simply see the MULTI immediately followed by the EXEC.
antirez [Thu, 11 Oct 2012 16:34:05 +0000 (18:34 +0200)]
Allow AUTH when Redis is busy because of timedout Lua script.
If the server is password protected we need to accept AUTH when there is
a server busy (-BUSY) condition, otherwise it will be impossible to send
SHUTDOWN NOSAVE or SCRIPT KILL.
antirez [Wed, 3 Oct 2012 17:14:46 +0000 (19:14 +0200)]
Hash function switched to murmurhash2.
The previously used hash function, djbhash, is not secure against
collision attacks even when the seed is randomized as there are simple
ways to find seed-independent collisions.
The new hash function appears to be safe (or much harder to exploit at
least) in this case, and has better distribution.
Better distribution does not always mean better performance. For instance in
a fast benchmark with "DEBUG POPULATE 1000000" I obtained the following
results:
1.6 seconds with djbhash
2.0 seconds with murmurhash2
This is because djbhash hashes keys that follow the pattern
`prefix:<id>`, where the ids are numerically near, into nearby buckets,
which improves locality.
However in other access patterns with keys that have no relation
murmurhash2 has some (apparently minimal) speed advantage.
On the other hand a better distribution should significantly
improve the quality of the distribution of elements returned with
dictGetRandomKey() that is used in SPOP, SRANDMEMBER, RANDOMKEY, and
other commands.
Everything considered, and under the suspicion that this commit fixes a
security issue in Redis, we are switching to the new hash function.
If some serious speed regression is found in the future we'll be able
to step back easily.
antirez [Fri, 5 Oct 2012 08:48:49 +0000 (10:48 +0200)]
Warn when configured maxmemory value seems odd.
This commit warns the user with a log at "warning" level if:
1) After the server startup the maxmemory limit was found to be < 1MB.
2) After a CONFIG SET command modifying the maxmemory setting the limit
is set to a value that is smaller than the currently used memory.
The behaviour of the Redis server is unmodified, and this will not make
the CONFIG SET command or a wrong configuration in redis.conf less
likely to create problems, but at least it will make most users aware of
a possible error they committed without resorting to external help.
However no warning is issued if, as a result of loading the AOF or RDB
file, we are very near the maxmemory setting, or key eviction will be
needed in order to go under the specified maxmemory setting. The reason
is that in servers configured as a cache with an aggressive
maxmemory-policy, most of the time restarting the server will cause this
condition to happen if persistence is not switched off.
Jokea [Thu, 30 Aug 2012 07:08:19 +0000 (15:08 +0800)]
Force expire all timer events when system clock skew is detected.
When the system time is moved back, the timers will not work properly,
hence some core functionality of Redis will stop working (e.g.
replication, bgsave, etc). See issue #633 for details.
The patch saves the previous time and when a system clock skew is detected,
it will force expire all timers.
Modified by @antirez: the previous time was moved into the eventLoop
structure to make sure the library is still thread safe as long as you
use different event loops in different threads (otherwise you need
some synchronization). More comments were added about the reasoning
behind the patch, which is worth reporting here:
/* If the system clock is moved to the future, and then set back to the
* right value, time events may be delayed in a random way. Often this
* means that scheduled operations will not be performed soon enough.
*
* Here we try to detect system clock skews, and force all the time
* events to be processed ASAP when this happens: the idea is that
* processing events earlier is less dangerous than delaying them
* indefinitely, and practice suggests it is. */
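A minimal sketch of the detection logic, in the spirit of the patch (the
field and variable names are illustrative and may not match ae.c exactly):

    time_t now = time(NULL);

    if (now < eventLoop->lastTime) {
        /* The clock jumped backwards: force every registered time event
         * to fire as soon as possible, instead of waiting for a deadline
         * that may now be far away in the future. */
        aeTimeEvent *te = eventLoop->timeEventHead;
        while (te) {
            te->when_sec = 0;
            te = te->next;
        }
    }
    eventLoop->lastTime = now;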
The new message now contains a hint about modifying the repl-timeout
configuration directive if the problem persists.
This should normally not be needed, because while the master generates
the RDB file it makes sure to send newlines to the replication channel
to prevent timeouts. However there are times when masters running on
very slow systems can completely stop for seconds during the RDB saving
process. In such a case enlarging the timeout value can fix the problem.
See issue #695 for an example of this problem in an EC2 deployment.
antirez [Wed, 3 Oct 2012 09:41:08 +0000 (11:41 +0200)]
"SORT by nosort" (skip sorting) respect sorted set ordering.
When SORT is called with the option BY set to a string constant not
including the wildcard character "*", there is no way to sort the output,
so any ordering is valid. This allows the SORT internals to optimize its
work and not really sort the output at all.
However it was odd that this option was not able to retain the natural
order of a sorted set. This feature was requested by users multiple
times, as calling SORT with GET against sorted sets can be a handy way
to mass-fetch objects.
This commit introduces two things:
1) The ability of SORT to return sorted set elements in their natural
ordering when `BY nosort` is specified, according to the `DESC / ASC` options.
2) The ability of SORT to optimize this case further when LIMIT is passed
as well, avoiding fetching the whole sorted set and directly
obtaining the specified range.
Because in this case the sorting is always deterministic, no
post-sorting activity is performed when SORT is called from a Lua
script.
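A hypothetical usage example (key and pattern names are illustrative): to
mass-fetch the objects referenced by the ten highest scored elements of a
sorted set, without any server-side sorting:

    SORT myzset BY nosort LIMIT 0 10 GET obj:* DESC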
Scripting: add helper functions redis.error_reply() and redis.status_reply().
A previous commit introduced redis.NIL. This commit adds similar helper
functions to return tables with a single field set to the specified
string, so that instead of using 'return {err="My Error"}' it is possible
to use the more idiomatic form 'return redis.error_reply("My Error")'.
Lua arrays can't contain nil elements (see
http://www.lua.org/pil/19.1.html for more information), so Lua scripts
were not able to return a multi-bulk reply containing nil bulk
elements inside.
This commit introduces a special conversion: a table with just
a "nilbulk" field set to a boolean value is converted by Redis as a nil
bulk reply, but at the same time for Lua this type is not a "nil" so can
be used inside Lua arrays.
This type is also assigned to redis.NIL, so returning redis.NIL as an
array element and returning a table with the "nilbulk" field set are
equivalent: both allow, for instance, returning a nil bulk reply as the
second element of a three elements array.
Sentinel: reply -IDONTKNOW to get-master-addr-by-name on lack of info.
If we don't have any clue about a master since it never replied to INFO
so far, reply with an -IDONTKNOW error to SENTINEL
get-master-addr-by-name requests.
Sentinel: more easy master redirection if master is a slave.
Before this commit Sentinel used to redirect the master ip/addr, when the
current instance reported to be a slave, only if this was the first INFO
output received and the role was found to be slave.
Now, if we find that the runid is different and the reported role is
slave, we also redirect to the reported master ip/addr.
This unifies the behavior of Sentinel in the case of a reboot (where it
will see the first INFO output with the wrong role and will perform the
redirection), with the behavior of Sentinel in the case of a change in
what it sees in the INFO output of the master.
antirez [Tue, 28 Aug 2012 15:45:01 +0000 (17:45 +0200)]
Sentinel: Sentinel-side support for slave priority.
The slave priority that is now published by Redis in the INFO output is
used by Sentinel in order to select the slave with the minimum priority
for promotion, and in order to consider slaves with priority set to 0 as
not able to play the role of master (they will never be promoted by
Sentinel).
The "slave-priority" field is now one of the fileds that Sentinel
publishes when describing an instance via the SENTINEL commands such as
"SENTINEL slaves mastername".
antirez [Fri, 24 Aug 2012 10:29:54 +0000 (12:29 +0200)]
Sentinel: send SCRIPT KILL on -BUSY reply and SDOWN instance.
From the point of view of Sentinel an instance replying -BUSY is down,
since it is effectively not able to reply to user requests. However
a looping script is a recoverable condition in Redis if the script has
not yet performed any write to the dataset. In that case performing a
failover is not optimal, so Sentinel now tries to restore the normal
server condition by killing the script with a SCRIPT KILL command.
If the script already performed some write before entering an infinite
(or long enough to time out) loop, SCRIPT KILL will not work and the
failover will be triggered anyway.
antirez [Fri, 3 Aug 2012 10:39:13 +0000 (12:39 +0200)]
Sentinel: SENTINEL FAILOVER command implemented.
This command can be used in order to force a Sentinel instance to start
a failover for the specified master, as leader, forcing the failover
even if the master is up.
The commit also adds some minor refactoring and other improvements to
functions already implemented that make them able to work when the
master is not in SDOWN condition. For instance, slave selection
assumed that we ask INFO of every slave every second, but this is true
only when the master is in SDOWN condition, so slave selection did not
work when the master was not in SDOWN condition.
This commit adds support to optionally execute a script when one of the
following events happen:
* The failover starts (with a slave already promoted).
* The failover ends.
* The failover is aborted.
The script is called with enough parameters (documented in the example
sentinel.conf file) to provide information about the old and new ip:port
pair of the master, the role of the sentinel (leader or observer) and
the name of the master.
The goal of the script is to inform clients of the configuration change
in a way specific to the environment Sentinel is running in, which can't
be implemented in a general way inside Sentinel itself.
Sentinel: when leader in wait-start, sense another leader as race.
When we are in wait-start state, if another leader (or any other external
entity) turns a slave into a master, abort the failover, and detect it
as an observer.
Note that the wait-start state is mainly there for this reason, but the
abort was not yet implemented.
This adds a new sentinel event -failover-abort-race.
Sentinel: abort failover when in wait-start if master is back.
When we are a Leader Sentinel in wait-start state, starting with this
commit the failover is aborted if the master returns online.
This improves the way we handle a notable case of net split, that is the
split between Sentinels and Redis servers, which will be a very common
kind of split because Sentinels will often be installed in the client's
network while servers may be in a different arm of the network.
When Sentinels and Redis servers are isolated the master is in ODOWN
condition since the Sentinels can agree about this state, however the
failover does not start since there are no good slaves to promote (in
this specific case all the slaves are unreachable).
However when the split is resolved, Sentinels may sense the slaves back
a moment before they sense that the master is back, so the failover may
start without a good reason (since the master is actually working too).
Now this condition is reversible, so the failover will be aborted
immediately if the master is detected to be working again, that
is, neither in SDOWN nor in ODOWN condition.
Sentinel: abort failover if no good slave is available.
The previous behavior of the state machine was to wait some time and
retry the slave selection, but this is not robust enough against drastic
changes in the conditions of the monitored instances.
What we do now when the slave selection fails is to abort the failover
and go back to monitoring the master. If the ODOWN condition is still
present a new failover will be triggered and so forth.
This commit also refactors the code we use to abort a failover.
When we reset the master we should start with clean timestamps for ping
replies, otherwise we'll detect a spurious +sdown event, because at the
+switch-master event the previous master instance was probably in +sdown
condition. Since we updated the address we should count time from
scratch again.
Also this commit makes sure to explicitly reset the count of pending
commands, now we can do this because of the new way the hiredis link
is closed.
Sentinel: changes to connection handling and redirection.
We now disconnect the Redis instances' hiredis links in a more robust way.
Also we change the way we perform the redirection for the +switch-master
event, that is not just an instance reset with an address change.
Using the same system we now implement the +redirect-to-master event
that is triggered by an instance that is configured to be master but
found to be a slave at the first INFO reply. In that case we monitor the
master instead, logging the incident as an event.
Sentinel: more robust failover detection as observer.
Sentinel observers detect a failover by checking whether a slave attached
to the monitored master changes its replication state from slave to master.
However, while this change may in theory only happen after a SLAVEOF NO
ONE command, in practice it is very easy to reboot a slave instance with
a wrong configuration that turns it into a master, especially if it was
a past master before a successful failover.
This commit changes the detection policy so that if an instance goes
from slave to master, but at the same time the runid has changed, we
sense a reboot, and in that case we don't detect a failover at all.
This commit also introduces the "reboot" sentinel event, that is logged
at "warning" level (so this will trigger an admin notification).
The commit also fixes a problem in the disconnect handler that assumed
that the instance object always exists, which is not the case. Now we
no longer assume that redisAsyncFree() will call the disconnection
handler before returning.
This commit implements the first, beta quality implementation of Redis
Sentinel, a distributed monitoring system for Redis with notification
and automatic failover capabilities.
SRANDMEMBER called with just the key argument can only return a single
random element from a Redis Set. However many users need to fetch
multiple unique elements from a Set. This is not a trivial problem to
handle on the client side, and for truly good performance a C
implementation was required.
After many requests for this feature it was finally implemented.
The problem implementing this command is the strategy to follow when
the number of elements the user asks for is near to the number of
elements that are already inside the set. In this case asking the
dictionary API for random elements and trying to add them to a temporary
set may result in extremely poor performance, as most add operations
will be wasted on duplicated elements.
For this reason this implementation uses a different strategy in this
case: the Set is copied, and random elements are removed from the copy
until the specified count of elements remains.
The code actually uses 4 different algorithms optimized for the
different cases.
If the count is negative, the command changes behavior and allows for
duplicated elements in the returned subset.
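A quick illustration of the two behaviours (the key name is illustrative):

    SRANDMEMBER myset 3     three distinct random elements (at most the set size)
    SRANDMEMBER myset -10   ten random elements, possibly with repetitions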
A reimplementation of blocking operation internals.
Redis provides support for blocking operations such as BLPOP or BRPOP.
These operations are identical to normal LPOP and RPOP operations as long
as there are elements in the target list, but if the list is empty they
block waiting for new data to arrive in the list.
All the clients blocked waiting for the same list are served in a FIFO
way, so the first that blocked is the first to be served when more data
is pushed into the list by another client.
The previous implementation of blocking operations was conceived to
serve clients in the context of push operations. For instance:
1) There is a client "A" blocked on list "foo".
2) The client "B" performs `LPUSH foo somevalue`.
3) The client "A" is served in the context of the "B" LPUSH,
synchronously.
Processing things in a synchronous way was useful because if "B" pushes a
value that is immediately served to the blocked client "A", from the point
of view of the database it is a NOP (no operation), that is, nothing is
replicated, nothing is written in the AOF file, and so forth.
However later we implemented two things:
1) Variadic LPUSH that could add multiple values to a list in the
context of a single call.
2) BRPOPLPUSH that was a version of BRPOP that also provided a "PUSH"
side effect when receiving data.
This forced us to make the synchronous implementation more complex. If
client "A" is waiting for data, and "B" pushes three elements in a
single call, we needed to propagate an LPUSH with a missing argument
in the AOF and replication link. We also needed to make sure to
replicate the LPUSH side of BRPOPLPUSH, but only if it in turn did not
happen to serve another blocking client on another list ;)
This was complex, but with a few mutually recursive functions
everything worked as expected... until one day we introduced scripting
in Redis.
Basically you can't "rewrite" a script to have just a partial effect on
the replicas and AOF file if the script happened to serve a few blocked
clients.
The solution to all these problems, implemented by this commit, is to
change the way we serve blocked clients. Instead of serving the blocked
clients synchronously, in the context of the command performing the PUSH
operation, it is now an asynchronous and iterative process:
1) If a key that has clients blocked waiting for data is the subject of
a list push operation, we simply mark the key as "ready" and put it into
a queue.
2) Every command pushing stuff on lists, such as a variadic LPUSH, a
script, or whatever it is, is replicated verbatim without any rewriting.
3) Every time a Redis command, a MULTI/EXEC block, or a script
completes its execution, we scan the list of keys that are ready to
serve blocked clients (because more data arrived), and serve the
blocked clients waiting on them.
4) As a result of "3" more keys may become ready again for other clients
(since BRPOPLPUSH may perform push operations while serving), so we
iterate back to step "3" if needed, as in the sketch after this list.
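A minimal self-contained sketch of the iterative serving loop described in
steps 3 and 4 (every name here is illustrative and does not claim to match
the actual Redis internals):

    #include <stdio.h>

    #define MAX_READY 128

    /* Keys that received a push while clients were blocked on them. */
    static const char *ready_keys[MAX_READY];
    static int ready_count = 0;

    static void mark_key_ready(const char *key) {
        if (ready_count < MAX_READY) ready_keys[ready_count++] = key;
    }

    /* Serve the clients blocked on 'key'. Serving a BRPOPLPUSH here may
     * push into another list and call mark_key_ready() again, which is
     * why the caller loops until the queue stays empty. */
    static void serve_blocked_clients(const char *key) {
        printf("serving clients blocked on %s\n", key);
    }

    /* Called after every command, MULTI/EXEC block or script completes. */
    static void handle_ready_keys(void) {
        while (ready_count > 0) {
            /* Snapshot the current queue and start a fresh one, so keys
             * made ready while serving are processed in the next pass. */
            const char *batch[MAX_READY];
            int i, n = ready_count;

            for (i = 0; i < n; i++) batch[i] = ready_keys[i];
            ready_count = 0;
            for (i = 0; i < n; i++) serve_blocked_clients(batch[i]);
        }
    }

    int main(void) {
        /* e.g. "LPUSH foo a b c" executed while clients block on foo. */
        mark_key_ready("foo");
        handle_ready_keys();
        return 0;
    }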
The new code has much simpler semantics and an easier to understand
implementation, with the disadvantage of not being able to "optimize out"
a PUSH+BPOP as a no op.
This commit will be tested with care before the final merge; more tests
will likely be added.
Make sure that SELECT argument is an integer or return an error.
Unfortunately we still had the lame atoi() without any error checking in
place, so "SELECT foo" would work as "SELECT 0". This was not a huge
problem per se, but some people expected that DB names could be strings
and not just numbers, and the lack of an error gave the impression that
strings were accepted, while the actual behavior was to silently select
DB 0.
Now getLongFromObjectOrReply() is used, as almost everywhere else in the
code, generating an error if the argument is not an integer or
overflows the long type.
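A sketch of the shape of the check now performed inside the SELECT
implementation (the exact error messages and surrounding code are
illustrative, based on the usual Redis internal helpers):

    long id;

    if (getLongFromObjectOrReply(c, c->argv[1], &id,
        "invalid DB index") != REDIS_OK) return;
    if (selectDb(c, (int)id) == REDIS_ERR) {
        addReplyError(c, "invalid DB index");
        return;
    }
    addReply(c, shared.ok);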
Thanks to @mipearson for reporting that on Twitter.
remove unsafe and unnecessary cast.
Until now, this cast could lead to a segmentation fault when end > UINT_MAX:

    setbit foo 0 1
    bitcount foo 0 4294967295
    => ok
    bitcount foo 0 4294967296
    => causes a segmentation fault.
Note by @antirez: the commit was modified a bit to also change the
string length type to long, since it's guaranteed to be at max 512 MB in
size, so we can work with the same type across all the code path.
REDIS_REPL_PING_SLAVE_PERIOD controls how often the master should
transmit a heartbeat (PING) to its slaves. This period, which defaults
to 10, is measured in seconds.
Redis 2.4 masters used to ping their slaves every ten seconds, just like
it says on the tin.
The Redis 2.6 masters I have been experimenting with, on the other hand,
ping their slaves *every second*. (master_last_io_seconds_ago never
approaches 10.) I think the ping period was inadvertently slashed to
one-tenth of its nominal value around the time REDIS_HZ was introduced.
This commit reintroduces correct ping schedule behaviour.
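A sketch of the kind of correction involved, assuming serverCron now runs
REDIS_HZ times per second (the names follow the Redis source, but the
snippet is illustrative rather than the exact patch):

    /* Inside serverCron(): ping slaves every repl_ping_slave_period
     * *seconds*, not every repl_ping_slave_period iterations. */
    if ((server.cronloops % (server.repl_ping_slave_period * REDIS_HZ)) == 0) {
        /* send PING to the attached slaves */
    }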