Michael Parker [Thu, 26 Jul 2012 06:51:22 +0000 (23:51 -0700)]
Use correct variable name for value to convert.
Note by @antirez: this code was never compiled because utils.c lacked the
float.h include, so we never noticed this variable was misspelled in the
past.
This should provide a noticeable speed boost when saving certain types
of databases with many sorted sets inside.
Saj Goonatilleke [Mon, 16 Jul 2012 05:33:25 +0000 (15:33 +1000)]
Truncate short write from the AOF
If Redis only manages to write out a partial buffer, the AOF file won't
load back into Redis the next time it starts up. It is better to
discard the short write than waste time running redis-check-aof.
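A minimal sketch of the idea (function and parameter names here are illustrative, not the actual Redis code): if write(2) returns less than the requested length, the file is truncated back to the last known-good size with ftruncate(2) instead of leaving a corrupt tail that redis-check-aof would have to repair.
    /* Sketch only: keep the AOF at its last known-good size on short writes. */
    #include <unistd.h>
    #include <sys/types.h>

    ssize_t aofWriteOrTruncate(int fd, const char *buf, size_t len, off_t good_size) {
        ssize_t nwritten = write(fd, buf, len);
        if (nwritten != (ssize_t)len) {
            /* Partial (or failed) write: discard it rather than leave a half
             * command at the end of the file. */
            if (ftruncate(fd, good_size) == -1) return -1; /* could not undo */
            return -1;
        }
        return nwritten;
    }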
Saj Goonatilleke [Tue, 17 Jul 2012 02:06:53 +0000 (12:06 +1000)]
New in INFO: aof_last_bgrewrite_status
Behaves like rdb_last_bgsave_status -- even down to reporting 'ok' when
no rewrite has been done yet. (You might want to check that
aof_last_rewrite_time_sec is not -1.)
Allow Pub/Sub in contexts where other commands are blocked.
Redis loading data from disk, and a Redis slave disconnected from its
master with serve-stale-data disabled, are two conditions where
commands are normally refused by Redis, returning an error.
However there is no reason to disable Pub/Sub commands as well, given
that this layer does not interact with the dataset. Allowing Pub/Sub in
as many contexts as possible is especially interesting now that Redis
Sentinel uses Pub/Sub of a Redis master as a communication channel
between Sentinels.
This commit allows Pub/Sub to be used in the above two contexts where
it was previously denied.
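A hedged sketch of the kind of check involved (the helper function and the spelled-out command names are illustrative, not the exact Redis 2.6 dispatch code): the command dispatcher keeps refusing dataset commands while loading or while a slave is stale, but lets Pub/Sub commands through because they never touch the keyspace.
    /* Sketch only: hypothetical helper used by a command dispatcher. */
    #include <strings.h>

    static int commandAllowedWhileDatasetUnavailable(const char *name) {
        /* Pub/Sub commands do not read or write the dataset, so they are
         * safe while loading or while the dataset is stale. */
        return strcasecmp(name, "subscribe") == 0 ||
               strcasecmp(name, "unsubscribe") == 0 ||
               strcasecmp(name, "psubscribe") == 0 ||
               strcasecmp(name, "punsubscribe") == 0 ||
               strcasecmp(name, "publish") == 0;
    }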
According to the C standard, char can be either signed or unsigned; it's up
to the compiler. However, Redis assumed that it was signed in a few places.
The practical effect of this patch is that Redis 2.6 will now run
correctly on every system where char is unsigned, notably the Raspberry
Pi and other ARM systems with GCC.
Thanks to Georgi Marinov (@eesn on twitter), who reported the problem
and allowed me to use his Raspberry Pi via SSH to trace and fix the issue!
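A small self-contained example of the class of problem, not the actual patched code: any value stored in a plain char and later tested for negativity behaves differently where char is unsigned, so such code must use signed char explicitly.
    #include <stdio.h>

    int main(void) {
        char c = -1;         /* plain char: signed or unsigned, compiler's choice */
        signed char sc = -1; /* explicitly signed: same behavior on every system  */

        if (c < 0) printf("plain char is signed here\n");
        else       printf("plain char is unsigned here (e.g. ARM with GCC)\n");
        if (sc < 0) printf("signed char is negative, as expected\n");
        return 0;
    }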
antirez [Tue, 26 Jun 2012 07:47:47 +0000 (09:47 +0200)]
REPLCONF internal command introduced.
The REPLCONF command is an internal command (not designed to be directly
used by normal clients) that allows a slave to set some replication
related state in the master before issuing SYNC to start the
replication.
The initial motivation for this command, and currently the only reason
it is used by the implementation, is to let the slave instance
communicate its listening port to the master, so that the master can
show all the slaves with their listening ports in the "replication"
section of the INFO output.
This allows clients to auto-discover and query all the slaves attached
to a master.
Currently only a single option of the REPLCONF command is supported, and
it is called "listening-port", so the slave now starts the replication
process with something like the following chat:
REPLCONF listening-port 6380
SYNC
Note that this works even if the master is an older version of Redis and
does not understand REPLCONF, because the slave ignores the REPLCONF
error.
In the future REPLCONF can be used for partial replication and other
replication related features where there is the need to exchange
information between master and slave.
NOTE: This commit also fixes a bug: the INFO output already carried
information about slaves, but the port was broken, and was obtained
with getpeername(2), so it was actually just the ephemeral port used
by the slave to connect to the master as a client.
antirez [Thu, 21 Jun 2012 09:50:01 +0000 (11:50 +0200)]
Fixed a timing attack on AUTH (Issue #560).
The way we compared the authentication password using strcmp() allowed
an attacker to gain information about the password using a well known
class of attacks called "timing attacks".
The bug appears to be practically not exploitable on most modern systems
running Redis: even when varying multiple bytes of the input at a time
instead of one, the difference in running time is on the order of
10 nanoseconds, making it hard to exploit even over a LAN. However
attacks always get better, so we are providing a fix ASAP.
The new implementation uses two fixed-length buffers and a constant-time
comparison function, with the goals of:
1) Completely avoiding leaks of information about the content of the
password, since the comparison is always performed between 512
characters and without conditionals.
2) Partially avoiding leaks of information about the length of the
password.
About "2" we still have a stage in the code where the real password and
the user provided password are copied in the static buffers, we also run
two strlen() operations against the two inputs, so the running time
of the comparison is a fixed amount plus a time proportional to
LENGTH(A)+LENGTH(B). This means that the absolute time of the operation
performed is still related to the length of the password in some way,
but there is no way to change the input in order to get a difference in
the execution time in the comparison that is not just proportional to
the string provided by the user (because the password length is fixed).
Thus in practical terms the attacker would have to discover LENGTH(PASSWORD)
by looking at the whole execution time of the AUTH command and guessing a
proportionality between the total execution time and the password length:
this appears to be mostly unfeasible in the real world.
Also, protecting against this attack is not very useful in the case of
Redis, as a brute force attack is feasible anyway if the password is too
short, while with a long password it is not an issue that the attacker
knows its length.
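A sketch along the lines described above (the function name and some details are illustrative): both strings are copied into fixed 512-byte buffers and every byte is compared without early exits or conditionals, accumulating the differences into a single value.
    #include <string.h>

    /* Sketch: compare two passwords in time independent of where they differ. */
    static int timeIndependentCompare(const char *a, const char *b) {
        char bufa[512] = {0}, bufb[512] = {0};
        unsigned int diff = 0;
        unsigned int j;

        /* Inputs longer than the buffers are rejected up front. */
        if (strlen(a) >= sizeof(bufa) || strlen(b) >= sizeof(bufb)) return 1;
        strcpy(bufa, a);
        strcpy(bufb, b);
        /* Always scan the full buffers, OR-ing differences without branches. */
        for (j = 0; j < sizeof(bufa); j++)
            diff |= (unsigned char)(bufa[j] ^ bufb[j]);
        diff |= (unsigned int)(strlen(a) ^ strlen(b));
        return diff; /* 0 means the two passwords are equal */
    }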
antirez [Fri, 15 Jun 2012 08:03:25 +0000 (10:03 +0200)]
Fix c->reply_bytes computation in setDeferredMultiBulkLength()
In order to implement the reply buffer limits introduced in 2.6, which are
useful to close the connection under user-selected circumstances of big
output buffers (for instance slow consumers in Pub/Sub, a blocked slave,
and so forth), Redis keeps a counter of the memory used by the objects
inside the output list stored in c->reply.
The computation was broken in the function setDeferredMultiBulkLength(),
in the case where the object was glued to the next one. This caused the
c->reply_bytes field to go out of sync, be decremented more than needed,
and wrap around to values near ULONG_MAX.
This commit fixes this bug and adds an assertion that is able to trap
this class of problems.
This problem was discovered looking at the INFO output of an unrelated
issue (issue #547).
antirez [Thu, 14 Jun 2012 13:59:25 +0000 (15:59 +0200)]
ziplistFind(): don't assume that entries are comparable by encoding.
Because Redis 2.6 introduced new integer encodings it is no longer true
that if two entries have a different encoding they are not equal.
An old ziplist can be loaded from an RDB file generated with Redis 2.4;
in this case, for instance, a small unsigned integer is encoded with a
16 bit encoding, while Redis 2.6 would use a more specific 8 bit encoding
format.
Because of this bug, hashes ended up with duplicated values, or field
lookups failed, causing many bad behaviors.
This in turn caused a crash while converting the ziplist-encoded hash into
a real hash table, because an assertion was raised on duplicated elements.
This commit fixes issue #547.
Many thanks to Pinterest's Marty Weiner and colleagues for discovering
the problem and helping us in the debugging process.
Ted Nyman [Wed, 13 Jun 2012 05:35:00 +0000 (22:35 -0700)]
Standardize punctuation in redis-cli help.
Right now there is a mix of help entries ending with and without
periods. This standardizes help entries to end without periods, which
seems to be the general custom in most Unix tools, at least.
antirez [Tue, 12 Jun 2012 13:20:16 +0000 (15:20 +0200)]
Added a new hash fuzzy tester.
The new fuzzy tester also removes elements from the hash instead of just
adding random fields. This should increase the probability of finding
bugs in the implementations of the hash type internal representations.
antirez [Mon, 11 Jun 2012 21:44:34 +0000 (23:44 +0200)]
Dump ziplist hex value on failed assertion.
The ziplist -> hashtable conversion code is triggered every time a hash
value must be promoted to a full hash table because the number or size of
its elements reached the threshold.
If a problem in the ziplist causes the same field to be present
multiple times, the assertion of successful addition of the element
to the hash table will fail, crashing the server with a failed
assertion, but providing little information about the problem.
This code adds a new logging function to perform the hex dump of binary
data, and makes sure that the ziplist -> hashtable conversion code uses
this new logging facility to dump the content of the ziplist when the
assertion fails.
This change was originally made in order to investigate issue #547.
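A rough sketch of such a hex-dump helper (the name and the output format are illustrative; the real helper writes through the Redis logger rather than stdout):
    #include <stdio.h>
    #include <stddef.h>

    /* Sketch: dump 'len' bytes of binary data as hexadecimal, 32 bytes per line. */
    void logHexDump(const char *descr, const void *value, size_t len) {
        const unsigned char *v = value;
        size_t i;

        printf("%s (hex dump of %zu bytes):\n", descr, len);
        for (i = 0; i < len; i++) {
            printf("%02x", v[i]);
            if ((i+1) % 32 == 0) printf("\n");
        }
        printf("\n");
    }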
antirez [Sat, 2 Jun 2012 21:29:57 +0000 (23:29 +0200)]
EVAL replication test: less false positives.
wait_for_condition is now used instead of the usual "after 1000" (the
way to sleep in Tcl). This should avoid finding the replica in a state
where it is still loading the RDB into memory and returns a -LOADING
error.
This test used to fail when run under Valgrind, due to the added
latencies.
Alex Mitrofanov [Sat, 2 Jun 2012 01:48:45 +0000 (18:48 -0700)]
Fixed RESTORE hash failure (Issue #532)
(additional commit notes by antirez@gmail.com):
The rdbIsObjectType() macro was not updated when the new RDB object type
of ziplist encoded hashes was added.
As a result RESTORE, which uses rdbLoadObjectType(), failed when a
ziplist-encoded hash was loaded.
This did not affect normal RDB loading because in that case we use
the lower-level function rdbLoadType().
antirez [Sat, 2 Jun 2012 08:21:57 +0000 (10:21 +0200)]
RDB type loading functions clarified in comments.
Improved comments to make clear that rdbLoadType() just loads a
general TYPE in the context of RDB, which can be an object type, an
expire type, end-of-file, and so forth, while rdbLoadObjectType()
enforces that the type is a valid object type, otherwise it returns -1.
antirez [Thu, 31 May 2012 19:45:39 +0000 (21:45 +0200)]
BITOP bug when called against non existing keys fixed.
In issue #529 a user reported a bug that can be triggered with the
following code:
flushdb
set a
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
bitop or x a b
The bug was introduced with the speed optimization in commit 8bbc076
that specializes every BITOP operation loop up to the minimum length of
the input strings.
However the computation of the minimum length contained an error when a
non-existing key was present in the input after a key of non-zero
length.
This commit fixes the bug and adds a regression test for it.
antirez [Fri, 25 May 2012 10:24:20 +0000 (12:24 +0200)]
Release notes: more info about 2.4 -> 2.6 migration.
Migration information was moved to the end of the file, since it is now
too long to stay at the top, and a warning was added at the top of
the file to remind the user to check it.
Added information about INFO fields that changed name between 2.4 and
2.6.
antirez [Fri, 25 May 2012 10:11:30 +0000 (12:11 +0200)]
Four new persistence fields in INFO. A few renamed.
The 'persistence' section of INFO output now contains four additional
fields related to RDB and AOF persistence:
rdb_last_bgsave_time_sec Duration of latest BGSAVE in sec.
rdb_current_bgsave_time_sec Duration of current BGSAVE in sec.
aof_last_rewrite_time_sec Duration of latest AOF rewrite in sec.
aof_current_rewrite_time_sec Duration of current AOF rewrite in sec.
The 'current' fields are set to -1 if a BGSAVE / AOF rewrite is not in
progress. The 'last' fields are set to -1 if no previous BGSAVE / AOF
rewrites were performed.
Additionally a few fields in the persistence section were renamed for
consistency:
After the renaming, fields in the persistence section start with rdb_ or
aof_ prefix depending on the persistence method they describe.
The field 'loading' and related fields are not prefixed because they are
unique for both the persistence methods.
antirez [Wed, 23 May 2012 20:12:50 +0000 (22:12 +0200)]
BITOP command 10x speed improvement.
This commit adds a fast path to BITOP that can be used for all the
bytes from 0 up to the minimal length among the input strings, when there
are at most 16 input keys.
Often the intersected bitmaps are roughly the same size, so this
optimization can provide a 10x speed boost to most real world usages
of the command.
Bytes are processed four full words at a time, in loops specialized
for the specific BITOP sub-command, without the need to check for
length issues with the inputs (since we run this algorithm only as far
as there is data from all the keys at the same time).
The remaining part of the string is processed in the usual way using
the slow but generic algorithm.
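A simplified sketch of the fast-path idea for the OR case with two inputs (names are illustrative; the real loop is specialized per sub-command, handles up to 16 keys, and unrolls four words per iteration): whole words are combined while every input still has data, then the generic byte loop finishes the tail.
    #include <string.h>
    #include <stddef.h>

    /* Sketch: OR two buffers word-at-a-time up to 'minlen' bytes, then
     * byte-at-a-time for the remainder. memcpy avoids alignment issues. */
    void bitopOrFast(unsigned char *dst, const unsigned char *a,
                     const unsigned char *b, size_t minlen) {
        size_t j = 0;

        while (minlen - j >= sizeof(unsigned long)) {   /* fast path */
            unsigned long wa, wb;
            memcpy(&wa, a+j, sizeof(wa));
            memcpy(&wb, b+j, sizeof(wb));
            wa |= wb;
            memcpy(dst+j, &wa, sizeof(wa));
            j += sizeof(unsigned long);
        }
        for (; j < minlen; j++) dst[j] = a[j] | b[j];   /* generic tail */
    }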
It is possible to do better than this with inputs that are not roughly
the same size: sorting the input keys by length, initializing the
result string in a smarter way, and noticing that the final part of the
output string, composed only of data from the longest string, does not
need any processing, since AND, OR and XOR against an empty string do
not alter the output (zero in the first case, and the original string in
the other two cases).
More optimizations will likely be implemented later, but this should
be enough to release Redis 2.6-RC4 with bitops merged in.
Note: this commit also adds better testing for the BITOP NOT command,
which is currently the fastest and hard to optimize further, since it
just flips the bits of a single input string.
antirez [Tue, 22 May 2012 15:40:20 +0000 (17:40 +0200)]
BITOP: handle integer encoded objects correctly.
A bug in the implementation caused BITOP to crash the server if at least
one of the source objects was integer-encoded.
The new implementation takes an additional array of Redis object
pointers and calls getDecodedObject() to get a reference to a string
encoded object, and then uses decrRefCount() to release the object.
Tests modified to cover the regression and improve coverage.
antirez [Sun, 20 May 2012 19:34:58 +0000 (21:34 +0200)]
BITCOUNT performance improved.
At Redis's default optimization level the command is now much faster,
always using a constant-time bit manipulation technique to count bits
instead of the GCC builtin popcount, and unrolling the loop.
The current implementation reaches 1.5 GB/s on an 11" MacBook Air
(1.8 GHz i7), compiled with both GCC and clang.
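The constant-time technique referred to is presumably in the family of the classic branch-free SWAR population count; here is a sketch for a single 32-bit word (the real code also unrolls the loop over many words at a time):
    #include <stdint.h>

    /* Branch-free population count of one 32-bit word: constant time,
     * no table lookups, independent of the input value. */
    static uint32_t popcount32(uint32_t v) {
        v = v - ((v >> 1) & 0x55555555);
        v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
        return (((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
    }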
antirez [Sun, 20 May 2012 09:06:29 +0000 (11:06 +0200)]
bitop.c renamed bitops.c
bitop.c contains the "Bit related string operations" so it seems more
logical to call it bitops instead of bitop.
This also makes it match the name of the test (unit/bitops.tcl).
antirez [Sun, 20 May 2012 09:03:54 +0000 (11:03 +0200)]
Bit operations tests improved.
Fuzzing tests of BITCOUNT / BITOP are iterated multiple times.
The new BITCOUNT fuzzing test uses random strings in a wider interval of
lengths including zero-len strings.
antirez [Sat, 19 May 2012 22:49:35 +0000 (00:49 +0200)]
popcount() optimization for speed.
We process the array in 32 bit words instead of byte by byte.
If the code is compiled with GCC, the __builtin_popcount() builtin
function is used instead.
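A sketch of the approach (the helper name is illustrative): the buffer is consumed 32 bits at a time, deferring to the GCC builtin when available and falling back to a per-bit loop otherwise.
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch: count the set bits in 'count' bytes, 32 bits per iteration. */
    size_t countBits(const unsigned char *p, size_t count) {
        size_t bits = 0;

        while (count >= 4) {
            uint32_t w;
            memcpy(&w, p, 4);                 /* memcpy sidesteps alignment issues */
    #ifdef __GNUC__
            bits += __builtin_popcount(w);    /* compiler builtin when available   */
    #else
            while (w) { bits += w & 1; w >>= 1; }
    #endif
            p += 4;
            count -= 4;
        }
        while (count--) {                     /* leftover bytes, one at a time     */
            unsigned char b = *p++;
            while (b) { bits += b & 1; b >>= 1; }
        }
        return bits;
    }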
antirez [Sat, 19 May 2012 14:16:25 +0000 (16:16 +0200)]
BITCOUNT refactoring.
The low level population counting function is now separated from the
BITCOUNT command implementation, so that the low level function can be
further optimized and eventually used in other contexts if needed.
antirez [Sat, 19 May 2012 08:33:20 +0000 (10:33 +0200)]
Bit-related string operations moved to bitop.c
All the general string operations are implemented in t_string.c, however
the bit operations, while targeting the string type, are better served
in a specific file where we have the implementations of the following
four commands and helper functions:
GETBIT
SETBIT
BITOP
BITCOUNT
In the future this file will probably contain more code related to
making the BITOP and BITCOUNT operations faster.
antirez [Thu, 17 May 2012 13:50:44 +0000 (15:50 +0200)]
BITOP and BITCOUNT tests.
The Redis implementation is tested against Tcl implementations of the
same operation. Both fuzzing and testing of specific aspects of the
commands behavior are performed.
antirez [Wed, 16 May 2012 14:23:09 +0000 (16:23 +0200)]
New commands: BITOP and BITCOUNT.
The motivation for these new commands is to be found in the usage of
Redis for real time statistics. See the article "Fast real time metrics
using Redis".
In general, Redis strings used as bitmaps via the SETBIT/GETBIT
commands provide a very space-efficient and fast way to store statistics.
For instance in a web application with users, every user can be
associated with a key that shows every day in which the user visited the
web service. This information can be really valuable for extracting user
behaviour information.
With Redis bitmaps doing this is very simple: just say that a given
day is 0 (the date the service was put online) and all the following days
are 1, 2, 3, and so forth. Then with SETBIT it is possible to set the bit
corresponding to the current day every time the user visits the site.
It is possible to count the set bits on the fly; this is
extremely easy using a Lua script. However a fast native bit count
operation can be useful, especially if it can operate on ranges, or when
the string is small as in the case of days (even considering many
years it is still extremely little data).
For this reason BITCOUNT was introduced. The command counts the number of
bits set to 1 in a string, with an optional range:
BITCOUNT key [start end]
The start/end parameters are similar to GETRANGE. If omitted the whole
string is tested.
Population counting is more useful when bit-level operations like AND,
OR and XOR are available. For instance, to see the number of days on
which three given users all visited the site, we can take the AND of all
their bitmaps and then count the set bits.
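For example (hypothetical key names), counting the days on which two users both visited the site could look like:
    SETBIT visits:user:1 23 1
    SETBIT visits:user:2 23 1
    BITOP AND visits:both visits:user:1 visits:user:2
    BITCOUNT visits:both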
In the special case of NOT (that inverts the bits) only one source key
can be passed.
The judicious use of BITCOUNT and BITOP combined can lead to interesting
use cases with very space efficient representation of data.
The implementation provided is still not tested or optimized for speed;
the next commits will introduce unit tests. Later the implementation will
be profiled to see if it is possible to gain a significant amount of speed
without making the code much more complex.
antirez [Thu, 24 May 2012 13:03:23 +0000 (15:03 +0200)]
Add aof_rewrite_buffer_length INFO field.
The INFO output, persistence section, already contained the field
describing the size of the current AOF buffer to flush on disk. However
the other AOF buffer, used to accumulate changes during an AOF rewrite,
was not mentioned in the INFO output.
This commit introduces a new field called aof_rewrite_buffer_length with
the length of the rewrite buffer.
antirez [Tue, 22 May 2012 11:03:41 +0000 (13:03 +0200)]
Allow an AOF rewrite buffer > 2GB (Fix for issue #504).
During the AOF rewrite process, the parent process needs to accumulate
the new writes in an in-memory buffer: when the child terminates the
AOF rewriting process, this buffer (that is, the difference between the
dataset when the rewrite was started and the current dataset) is
flushed to the new AOF file.
We used to implement this buffer using an sds.c string, but sds.c has a
2GB limit. Sometimes the dataset is big enough, the amount of writes
high enough, and the rewrite process slow enough that we overflow the 2GB
limit, causing a crash, documented on GitHub by issue #504.
In order to prevent this from happening, this commit introduces a new
system to accumulate writes, implemented by a linked list of blocks of
10 MB each, so that we also avoid paying the reallocation cost.
Note that in theory modern operating systems may implement realloc()
simply as a remapping of the old pages, thus with very good performance;
see for instance the mremap() syscall on Linux. However this is not
always true, and jemalloc by default avoids doing this because of
issues with the current implementation of mremap().
For this reason we are using a linked list of blocks instead of a single
block that gets reallocated again and again.
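A condensed sketch of the block-list approach (struct and function names are illustrative; the real code uses the Redis generic list and its own block structure): each block holds up to 10 MB, and a new block is appended only when the current tail is full, so no buffer ever needs to be reallocated.
    #include <stdlib.h>
    #include <string.h>

    #define AOF_RW_BLOCK_SIZE (1024*1024*10)   /* 10 MB per block, as described */

    typedef struct aofBlock {
        struct aofBlock *next;
        unsigned long used;                    /* bytes already stored in buf   */
        char buf[AOF_RW_BLOCK_SIZE];
    } aofBlock;

    typedef struct aofRewriteBuf {
        aofBlock *head, *tail;                 /* both NULL when empty          */
    } aofRewriteBuf;

    /* Append 'len' bytes, allocating new 10 MB blocks as needed.
     * Allocation failures are not handled in this sketch. */
    void aofRewriteBufAppend(aofRewriteBuf *rb, const char *s, size_t len) {
        while (len) {
            aofBlock *b = rb->tail;
            if (b == NULL || b->used == AOF_RW_BLOCK_SIZE) {
                aofBlock *nb = malloc(sizeof(*nb));
                nb->next = NULL;
                nb->used = 0;
                if (rb->tail) rb->tail->next = nb; else rb->head = nb;
                rb->tail = nb;
                b = nb;
            }
            size_t space = AOF_RW_BLOCK_SIZE - b->used;
            size_t chunk = len < space ? len : space;
            memcpy(b->buf + b->used, s, chunk);
            b->used += chunk;
            s += chunk;
            len -= chunk;
        }
    }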
The changes in this commit lack testing, which will be performed before
merging into the unstable branch. This fix will not enter 2.4 because it
is too invasive. However 2.4 will log a warning when the AOF rewrite
buffer is near the 2GB limit.
antirez [Thu, 24 May 2012 09:35:21 +0000 (11:35 +0200)]
Dead code removed from replication.c.
The user @jokea noticed that the following line of code in
replication.c made little sense:
addReplySds(slave,sdsempty());
Investigating a bit I found that this was introduced by commit 6208b3a7
three years ago in the early stages of Redis. The code apparently is not
useful at all, so I'm removing it.
This change will not be backported into 2.4 so that, in the rare case
this should introduce a bug, we'll have a chance to detect it in the
development branch. However following the code path it seems like the
code is not useful at all, so the risk is truly small.
antirez [Sun, 20 May 2012 21:47:45 +0000 (23:47 +0200)]
TODO file removed.
The list of things to do has long been kept in two places:
1) GitHub issues.
2) A private TODO list of random ideas; what makes sense is later
moved to GitHub issues. So GitHub is anyway the true source of things to
do.
antirez [Mon, 21 May 2012 14:50:05 +0000 (16:50 +0200)]
Use comments to split aof.c into sections.
This makes the code more readable. It is still not worth splitting the
file itself into three different files, but the logical separation
improves readability, especially since new commits are going to
introduce an additional section.
antirez [Tue, 22 May 2012 11:13:24 +0000 (13:13 +0200)]
Redis test: include bug report on crash.
Due to a change in the format of the bug report generated in case of
crash or failed assertion, the test suite was no longer able to properly
log it. Instead just a protocol error was logged by the Redis Tcl client,
which provided no clue about the actual problem.
This commit resolves the issue by logging everything from the first line
of the log containing the string REDIS BUG REPORT, till the end of the
file.
antirez [Wed, 23 May 2012 09:02:38 +0000 (11:02 +0200)]
Fixed issue #516 (ZINTERSTORE mixing sets and zsets).
Weeks ago, trying to fix a harmless GCC warning, I introduced a bug in
the ziplist-encoded implementation of sorted sets.
The bug completely broke the zuiNext() iterator, which is used in the
ZINTERSTORE and ZUNIONSTORE implementation, so those two commands are no
longer reliable starting from Redis version 2.4.12 and the latest 2.6.0-RC
releases.
This commit fixes the problem and adds a regression test.
antirez [Wed, 16 May 2012 10:22:29 +0000 (12:22 +0200)]
Deleted jemalloc.orig from /deps.
In the commit upgrading jemalloc to version 3.0.0 I added the old
version of jemalloc in the 'jemalloc.orig' directory by mistake.
This commit removes the unneeded copy of jemalloc.
antirez [Mon, 14 May 2012 15:35:51 +0000 (17:35 +0200)]
Added time.h include in redis-cli.
redis-cli.c uses the time() function to seed the PRNG, but time.h was
not included. This was not noticed since sys/time.h is included and was
enough on most systems (but not correct). With Ubuntu 12.04, GCC
generated a warning that made us aware of the issue.
antirez [Mon, 14 May 2012 14:04:41 +0000 (16:04 +0200)]
activeExpireCycle(): better precision in max time used.
activeExpireCycle() can consume no more than a few milliseconds per
iteration. This commit improves the precision of the check for the time
elapsed in two ways:
1) We check every 16 iterations of the main loop instead of every 256.
2) We reset iterations at the start of the function and not every time
we switch to the next database, so the check is correctly performed
every 16 iterations.
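A sketch of the timing check described above (helper and constant names are illustrative; the expiration work itself is omitted): the clock is sampled only once every 16 iterations of the main loop, against a limit expressed in microseconds.
    #include <sys/time.h>

    static long long ustime(void) {                 /* wall clock in microseconds */
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return ((long long)tv.tv_sec) * 1000000 + tv.tv_usec;
    }

    void expireCycleSketch(long long timelimit_us) {
        long long start = ustime();
        int iteration = 0;
        int more_work = 1;

        while (more_work) {
            /* ... expire a small batch of keys here ... */
            more_work = 0;                          /* placeholder stop condition */
            iteration++;
            /* Check the elapsed time only once every 16 iterations. */
            if ((iteration & 0xf) == 0 && (ustime() - start) > timelimit_us)
                break;
        }
    }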
A previous commit introduced the REDIS_HZ define that changes the frequency
of calls to the serverCron() Redis function. This commit improves
different related things:
1) Software watchdog: now the minimal period can be set according to
REDIS_HZ. The minimal period is two times the timer period, that is:
(1000/REDIS_HZ)*2 milliseconds
2) The incremental rehashing is now performed in the expires dictionary
as well.
3) The activeExpireCycle() function was improved in different ways:
- Now it checks if it already used too much time using microseconds
instead of milliseconds for better precision.
- The time limit is now calculated correctly: in the previous version
the division was performed before the multiplication, resulting in
a time limit of 0 if HZ was big enough (see the sketch after this list).
- Databases with less than 1% of the hash table buckets filled are
skipped, because getting random keys is too expensive in this
condition.
4) tryResizeHashTables() is now called at every timer call; we need to
match the number of calls we make to the expired keys collection cycle.
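A sketch of the corrected time-limit computation mentioned in point 3 (the percentage macro name is illustrative): the multiplication must happen before the divisions, otherwise integer truncation can drive the result to 0 when REDIS_HZ is large.
    #define REDIS_HZ 100                      /* serverCron() calls per second */
    #define EXPIRE_CYCLE_TIME_PERC 25         /* illustrative CPU percentage   */

    /* Correct: multiply first, then divide. Performing the divisions first
     * lets integer truncation collapse the result toward 0 for large HZ. */
    static const long long expire_cycle_timelimit_us =
        1000000LL * EXPIRE_CYCLE_TIME_PERC / REDIS_HZ / 100;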
antirez [Sun, 13 May 2012 14:40:29 +0000 (16:40 +0200)]
Redis timer interrupt frequency configurable as REDIS_HZ.
Redis uses a function called serverCron() that is very similar to the
timer interrupt of an operating system. This function is used to handle
a number of asynchronous things, like active expired keys collection,
clients timeouts, update of statistics, things related to the cluster
and replication, triggering of BGSAVE and AOF rewrite process, and so
forth.
In the past the timer was called once per second. At some point it was
raised to 10 times per second, but it still was fixed and could not be
changed even at compile time, because different functions called from
serverCron() assumed a given fixed frequency.
This commit makes the frequency configurable, so that it is simpler to
pick a good tradeoff between overhead of this function (that is usually
very small) and the responsiveness of Redis during a few critical
circumstances where a lot of work is done inside the timer.
An example of such a critical condition is the mass expiration of a lot
of keys in the same second. Up to a given percentage of CPU time is used
to perform expired keys collection per expire cycle. Now, changing the
REDIS_HZ macro, it is possible to do less work but more times per second
in order to block the server for less time.
If this patch works well in our tests it will enter Redis 2.6-final.
antirez [Fri, 11 May 2012 17:17:31 +0000 (19:17 +0200)]
More incremental active expired keys collection process.
If a large amount of keys all expire at about the same time, the
"active" expired keys collection cycle used to block as long as the
percentage of already expired keys was >= 25% of the total population of
keys with an expire set.
This could block the server even for many seconds in order to reclaim
memory ASAP. The new algorithm uses at most a few milliseconds per
cycle; even if this means reclaiming the memory less promptly, it also
means a more responsive server.
Compare integers in ziplist regardless of encoding
Because of the introduction of new integer encoding types for ziplists
in the 2.6 tree, the same integer value may have a different encoding in
different versions of the ziplist implementation. This means that the
encoding can NOT be used as a fast path in comparing integers.
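A sketch of the value-based comparison (types and names are illustrative; the real code works on raw ziplist entries): entries are first decoded, then compared by value for integers and byte-by-byte for strings, never by encoding byte.
    #include <stdint.h>
    #include <string.h>

    /* Sketch: a decoded ziplist entry is either an integer value or a
     * string pointer plus length. Names are hypothetical. */
    typedef struct {
        int is_int;
        int64_t ival;
        const unsigned char *sval;
        unsigned int slen;
    } decodedEntry;

    /* The same integer can carry a 16-bit encoding in a 2.4 RDB file and an
     * 8-bit encoding in 2.6, so equality is decided on decoded values only. */
    int decodedEntriesEqual(const decodedEntry *a, const decodedEntry *b) {
        if (a->is_int && b->is_int) return a->ival == b->ival;
        if (!a->is_int && !b->is_int)
            return a->slen == b->slen && memcmp(a->sval, b->sval, a->slen) == 0;
        return 0; /* integer vs string: not equal */
    }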
antirez [Wed, 2 May 2012 20:41:50 +0000 (22:41 +0200)]
syncio.c read / write functions reworked for correctness and performance.
The new implementation starts reading / writing before blocking with
aeWait(): likely the descriptor can accept writes or has buffered data
inside, so we can go faster; otherwise we get an error and wait.
This change has effects on speed but also on correctness: on socket
errors, when we perform a non-blocking connect(2), the write is attempted
ASAP and the error is returned ASAP, before waiting.
So the practical effect is that now a Redis slave is more available when
it cannot connect to the master; previously the slave kept blocking on
syncWrite() trying to send SYNC, serving commands very slowly.
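A sketch of the reworked write pattern (it assumes the Redis ae.h event library for aeWait(); the timeout bookkeeping of the real code is omitted): the non-blocking write is attempted first, waiting only when the kernel reports it would block, and hard errors are returned immediately.
    #include <unistd.h>
    #include <errno.h>
    #include "ae.h"                     /* aeWait(), AE_WRITABLE (Redis source) */

    ssize_t syncWriteSketch(int fd, const char *ptr, ssize_t size, long long timeout_ms) {
        ssize_t nwritten, total = 0;

        while (size) {
            nwritten = write(fd, ptr, size);      /* write before waiting */
            if (nwritten == -1) {
                if (errno != EAGAIN && errno != EWOULDBLOCK)
                    return -1;                    /* hard error: fail at once */
                /* Would block: only now wait for writability. */
                if (!(aeWait(fd, AE_WRITABLE, timeout_ms) & AE_WRITABLE))
                    return -1;                    /* timed out */
                continue;
            }
            ptr += nwritten;
            size -= nwritten;
            total += nwritten;
        }
        return total;
    }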
antirez [Wed, 2 May 2012 15:14:45 +0000 (17:14 +0200)]
Use specific error if master is down and slave-serve-stale-data is set to no.
We used to reply -ERR ... message ...; now the reply is
instead -MASTERDOWN ... message ..., so that it can easily be
distinguished from the other error conditions.
Every matched key in a KEYS call is checked for expiration. When the key
is set to expire, the call to `getExpire` will assert that the key also
exists in the main dictionary. This in turn causes a rehashing step to
be executed. Rehashing a dictionary when there is an iterator active may
result in the iterator emitting duplicate entries, or not emitting some
entries at all. By using a safe iterator, the rehash step is omitted.
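A sketch of the safe-iterator pattern (it assumes Redis's dict.h; the callback is illustrative): a safe iterator tells the dictionary not to perform incremental rehash steps while the iteration is in progress, so lookups performed from inside the loop cannot corrupt the scan.
    #include "dict.h"   /* Redis dictionary implementation */

    /* Sketch: visit every key with a *safe* iterator, so that lookups done by
     * the callback (which could otherwise trigger a rehash step) are harmless
     * while the iteration is active. */
    void forEachKey(dict *d, void (*fn)(void *key)) {
        dictIterator *di = dictGetSafeIterator(d);
        dictEntry *de;

        while ((de = dictNext(di)) != NULL)
            fn(de->key);
        dictReleaseIterator(di);
    }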
Set LUA_MASKCOUNT hook more selectively. Fixes issue #480.
A user reported a crash with Redis scripting (see issue #480 on
GitHub); inspection of the kindly provided stack trace showed that
server.lua_caller was probably set to NULL. The stack trace also showed
that the call to the hook was originating from a point where we just
used to set/get a few global variables in the Lua state.
What was happening is that we did not set the timeout hook selectively
only when the user script was called. Now we set it more selectively,
specifically only in the context of the lua_pcall() call, and make sure
to remove the hook when the call returns. Otherwise the hook can get
called in random contexts every time we do something with the Lua
state.
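A self-contained sketch of the pattern described (using the standard Lua C API; the hook body and the instruction count are illustrative): the LUA_MASKCOUNT hook is installed right before lua_pcall() and removed right after it returns, so it can never fire while the host program is merely reading or writing globals in the Lua state.
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    #include <stdio.h>

    /* Illustrative hook: this is where Redis checks the script time limit. */
    static void countHook(lua_State *L, lua_Debug *ar) {
        (void)L; (void)ar;
    }

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        luaL_loadstring(L, "local x = 0 for i = 1, 100000 do x = x + i end return x");

        lua_sethook(L, countHook, LUA_MASKCOUNT, 100000);  /* only around the call */
        int status = lua_pcall(L, 0, 1, 0);
        lua_sethook(L, NULL, 0, 0);                        /* removed right after  */

        if (status == 0) printf("script result: %g\n", lua_tonumber(L, -1));
        lua_close(L);
        return 0;
    }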
Re-introduce -g -rdynamic -ggdb when linking, fixing stack traces.
A previous commit removed -g -rdynamic -ggdb from the LDFLAGS, preventing
Redis from producing a stack trace with symbol names on crash.
This commit fixes the issue.
The main reason is that otherwise it is completely pointless to put
a lot of effort into printing the stack trace on crash, along with the
content of the stack and registers. Using an alternate stack broke this
feature completely.
Redis test: More reliable BRPOPLPUSH replication test.
Now it uses the new wait_for_condition testing primitive.
Also the wait_for_condition implementation was fixed in this commit to
properly escape the expr command and its argument.
Redis test: scripting EVALSHA replication test more reliable.
A new primitive wait_for_condition was introduced in the scripting
engine that makes waiting for events simpler, so that it is easier to
write tests that are more resistant to timing issues.