redis.o: redis.c fmacros.h config.h redis.h ae.h sds.h anet.h dict.h \
adlist.h zmalloc.h lzf.h pqsort.h zipmap.h staticsymbols.h sha1.h
sds.o: sds.c sds.h zmalloc.h
zipmap.o: zipmap.c zmalloc.h
zmalloc.o: zmalloc.c config.h
staticsymbols:
	tclsh utils/build-static-symbols.tcl > staticsymbols.h
test:
	tclsh8.5 tests/test_helper.tcl
bench:
	./redis-benchmark
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
<b>AppendOnlyFileHowto: Contents</b><br> <a href="#Append Only File HOWTO">Append Only File HOWTO</a><br> <a href="#General Information">General Information</a><br> <a href="#Log rewriting">Log rewriting</a><br> <a href="#Wait... but how does this work?">Wait... but how does this work?</a><br> <a href="#How durable is the append only file?">How durable is the append only file?</a>
</div>
<h1 class="wikiname">AppendOnlyFileHowto</h1>
</div>
<div class="narrow">
#sidebar <a href="RedisGuides.html">RedisGuides</a>
<h1><a name="Append Only File HOWTO">Append Only File HOWTO</a></h1><h2><a name="General Information">General Information</a></h2>The append only file is an alternative durability option for Redis. What does this mean? Let's start with some facts:<br/><br/><ul><li> By default Redis saves snapshots of the dataset on disk, in a binary file called dump.rdb. For instance you can configure Redis to save the dataset every 60 seconds if there are at least 100 changes in the dataset, or every 1000 seconds if there is at least a single change in the dataset. This is known as "Snapshotting".</li><li> Snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you type killall -9 redis-server by mistake, the latest data written to Redis will be lost. There are applications where this is not a big deal. There are also applications where this is not acceptable, and for these applications Redis <b>was</b> not an option.</li></ul>
What is the solution? To use the append only file as an alternative to snapshotting. How does it work?<br/><br/><ul><li> It is a 1.1 only feature.</li><li> You have to turn it on by editing the configuration file. Just make sure you have "appendonly yes" somewhere.</li><li> Append only files work this way: every time Redis receives a command that changes the dataset (for instance a SET or LPUSH command) it appends this command to the append only file. When you restart Redis it will first <b>re-play</b> the append only file to rebuild the state.</li></ul>
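The append-and-replay idea described above can be sketched in a few lines of Python. This is a toy model (the ToyAOF class is invented for illustration, and an in-memory list stands in for the file on disk), not Redis's actual C implementation:

```python
# Toy sketch of the append only file idea: every write command is
# appended to a log; on "restart" the log is replayed to rebuild state.

class ToyAOF:
    def __init__(self):
        self.data = {}   # the in-memory dataset
        self.log = []    # stands in for the append only file on disk

    def set(self, key, value):
        self.log.append(("SET", key, value))    # log the command...
        self.data[key] = value                  # ...then apply it

    def lpush(self, key, value):
        self.log.append(("LPUSH", key, value))
        self.data.setdefault(key, []).insert(0, value)

    def replay(self):
        """Rebuild the dataset from the log, as Redis does at startup."""
        state = {}
        for cmd, key, value in self.log:
            if cmd == "SET":
                state[key] = value
            elif cmd == "LPUSH":
                state.setdefault(key, []).insert(0, value)
        return state

db = ToyAOF()
db.set("mykey", "hello")
db.lpush("mylist", "a")
db.lpush("mylist", "b")
assert db.replay() == db.data   # replaying the log reproduces the dataset
```

The key property is the final assertion: the dataset rebuilt purely from the log is identical to the live in-memory dataset.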
<h2><a name="Log rewriting">Log rewriting</a></h2>As you can guess, the append log file gets bigger and bigger every time there is a new operation changing the dataset. Even if you always set the same key "mykey" to the values "1", "2", "3", ... up to 10000000000, in the end you'll have just a single key in the dataset, just a few bytes! But how big will the append log file be? Very, very big.<br/><br/>So Redis supports an interesting feature: it is able to rebuild the append log file, in background, without stopping the processing of client commands. The key is the command <a href="BGREWRITEAOF.html">BGREWRITEAOF</a>. This command is basically able to use the dataset in memory in order to rewrite the shortest sequence of commands able to rebuild the exact dataset that is currently in memory.<br/><br/>So from time to time, when the log gets too big, try this command. It's safe because if it fails you will not lose your old log (but you can make a backup copy given that currently 1.1 is still in beta!).<h2><a name="Wait... but how does this work?">Wait... but how does this work?</a></h2>Basically it uses the same fork() copy-on-write trick that snapshotting already uses. This is how the algorithm works:<br/><br/><ul><li> Redis forks, so now we have a child and a parent.</li><li> The child starts writing the new append log file to a temporary file.</li><li> The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes to the <b>old</b> append only file, so if the rewriting fails, we are safe).</li><li> When the child finishes rewriting the file, the parent gets a signal, and appends the in-memory buffer at the end of the file generated by the child.</li><li> Profit! Now Redis atomically renames the new file over the old one, and starts appending new data to the new file.</li></ul>
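The compaction at the heart of the rewrite can be sketched as follows. The rewrite_log function is a hypothetical helper, not Redis code, but it shows why the rewritten log can be dramatically shorter than the full command history: it is generated from the current dataset, not from the old log.

```python
# Toy sketch of BGREWRITEAOF's core idea: emit the shortest command
# sequence that recreates the dataset currently in memory, ignoring
# the (possibly huge) history of commands that produced it.

def rewrite_log(dataset):
    """Return a minimal command list that recreates `dataset`."""
    new_log = []
    for key, value in dataset.items():
        if isinstance(value, list):
            # one LPUSH per element, emitted in reverse so that
            # replaying the pushes preserves the list's order
            for item in reversed(value):
                new_log.append(("LPUSH", key, item))
        else:
            new_log.append(("SET", key, value))
    return new_log

# 10,000 SETs of the same key collapse into a single command:
old_log = [("SET", "mykey", str(i)) for i in range(10000)]
dataset = {}
for _, key, value in old_log:
    dataset[key] = value
assert rewrite_log(dataset) == [("SET", "mykey", "9999")]
```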
<h2><a name="How durable is the append only file?">How durable is the append only file?</a></h2>Check redis.conf: you can configure how often Redis will fsync() data on disk. There are three options:<br/><br/><ul><li> Fsync() every time a new command is appended to the append log file. Very, very slow, but very safe.</li><li> Fsync() once every second. Fast enough, and you can lose at most 1 second of data if there is a disaster.</li><li> Never fsync(), just put your data in the hands of the Operating System. The fastest and least safe method.</li></ul>
Warning: by default Redis will fsync() after <b>every command</b>! This is because the Redis authors want to ship a default configuration that is the safest pick. But the best compromise for most datasets is to fsync() once every second.
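In redis.conf the three policies correspond to the appendfsync directive, used together with appendonly. A minimal fragment might look like this (pick exactly one appendfsync line):

```
appendonly yes

# Choose one of the three fsync() policies:
appendfsync always      # fsync() after every command: very slow, very safe
# appendfsync everysec  # fsync() once per second: good compromise
# appendfsync no        # let the OS decide when to flush: fastest, least safe
```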
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
<b>CommandReference: Contents</b><br> <a href="#Connection handling">Connection handling</a><br> <a href="#Commands operating on all the kind of values">Commands operating on all the kind of values</a><br> <a href="#Commands operating on string values">Commands operating on string values</a><br> <a href="#Commands operating on lists">Commands operating on lists</a><br> <a href="#Commands operating on sets">Commands operating on sets</a><br> <a href="#Commands operating on sorted sets (zsets, Redis version >">Commands operating on sorted sets (zsets, Redis version ></a><br> <a href="#Commands operating on hashes">Commands operating on hashes</a><br> <a href="#Sorting">Sorting</a><br> <a href="#Transactions">Transactions</a><br> <a href="#Publish/Subscribe">Publish/Subscribe</a><br> <a href="#Persistence control commands">Persistence control commands</a><br> <a href="#Remote server control commands">Remote server control commands</a>
</div>
<h1 class="wikiname">CommandReference</h1>
<div class="narrow">
<h1><a name="Redis Command Reference">Redis Command Reference</a></h1>Every command name links to a specific wiki page describing the behavior of the command.<h2><a name="Connection handling">Connection handling</a></h2><ul><li> <a href="QuitCommand.html">QUIT</a> <code name="code" class="python">close the connection</code></li><li> <a href="AuthCommand.html">AUTH</a> <code name="code" class="python">simple password authentication if enabled</code></li></ul>
<h2><a name="Commands operating on all the kind of values">Commands operating on all the kind of values</a></h2><ul><li> <a href="ExistsCommand.html">EXISTS</a> <i>key</i> <code name="code" class="python">test if a key exists</code></li><li> <a href="DelCommand.html">DEL</a> <i>key</i> <code name="code" class="python">delete a key</code></li><li> <a href="TypeCommand.html">TYPE</a> <i>key</i> <code name="code" class="python">return the type of the value stored at key</code></li><li> <a href="KeysCommand.html">KEYS</a> <i>pattern</i> <code name="code" class="python">return all the keys matching a given pattern</code></li><li> <a href="RandomkeyCommand.html">RANDOMKEY</a> <code name="code" class="python">return a random key from the key space</code></li><li> <a href="RenameCommand.html">RENAME</a> <i>oldname</i> <i>newname</i> <code name="code" class="python">rename the old key to the new one, destroying the newname key if it already exists</code></li><li> <a href="RenamenxCommand.html">RENAMENX</a> <i>oldname</i> <i>newname</i> <code name="code" class="python">rename the old key to the new one, if the newname key does not already exist</code></li><li> <a href="DbsizeCommand.html">DBSIZE</a> <code name="code" class="python">return the number of keys in the current db</code></li><li> <a href="ExpireCommand.html">EXPIRE</a> <code name="code" class="python">set a time to live in seconds on a key</code></li><li> <a href="TtlCommand.html">TTL</a> <code name="code" class="python">get the time to live in seconds of a key</code></li><li> <a href="SelectCommand.html">SELECT</a> <i>index</i> <code name="code" class="python">Select the DB having the specified index</code></li><li> <a href="MoveCommand.html">MOVE</a> <i>key</i> <i>dbindex</i> <code name="code" class="python">Move the key from the currently selected DB to the DB having as index dbindex</code></li><li> <a href="FlushdbCommand.html">FLUSHDB</a> <code name="code" class="python">Remove all the keys of the currently selected DB</code></li><li> <a href="FlushallCommand.html">FLUSHALL</a> <code name="code" class="python">Remove all the keys from all the databases</code></li></ul>
<h2><a name="Commands operating on string values">Commands operating on string values</a></h2><ul><li> <a href="SetCommand.html">SET</a> <i>key</i> <i>value</i> <code name="code" class="python">set a key to a string value</code></li><li> <a href="GetCommand.html">GET</a> <i>key</i> <code name="code" class="python">return the string value of the key</code></li><li> <a href="GetsetCommand.html">GETSET</a> <i>key</i> <i>value</i> <code name="code" class="python">set a key to a string returning the old value of the key</code></li><li> <a href="MgetCommand.html">MGET</a> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">multi-get, return the string values of the keys</code></li><li> <a href="SetnxCommand.html">SETNX</a> <i>key</i> <i>value</i> <code name="code" class="python">set a key to a string value if the key does not exist</code></li><li> <a href="SetexCommand.html">SETEX</a> <i>key</i> <i>time</i> <i>value</i> <code name="code" class="python">Set+Expire combo command</code></li><li> <a href="MsetCommand.html">MSET</a> <i>key1</i> <i>value1</i> <i>key2</i> <i>value2</i> ... <i>keyN</i> <i>valueN</i> <code name="code" class="python">set multiple keys to multiple values in a single atomic operation</code></li><li> <a href="MsetCommand.html">MSETNX</a> <i>key1</i> <i>value1</i> <i>key2</i> <i>value2</i> ... <i>keyN</i> <i>valueN</i> <code name="code" class="python">set multiple keys to multiple values in a single atomic operation if none of the keys already exist</code></li><li> <a href="IncrCommand.html">INCR</a> <i>key</i> <code name="code" class="python">increment the integer value of key</code></li><li> <a href="IncrCommand.html">INCRBY</a> <i>key</i> <i>integer</i> <code name="code" class="python">increment the integer value of key by integer</code></li><li> <a href="IncrCommand.html">DECR</a> <i>key</i> <code name="code" class="python">decrement the integer value of key</code></li><li> <a href="IncrCommand.html">DECRBY</a> <i>key</i> <i>integer</i> <code name="code" class="python">decrement the integer value of key by integer</code></li><li> <a href="AppendCommand.html">APPEND</a> <i>key</i> <i>value</i> <code name="code" class="python">append the specified string to the string stored at key</code></li><li> <a href="SubstrCommand.html">SUBSTR</a> <i>key</i> <i>start</i> <i>end</i> <code name="code" class="python">return a substring out of a larger string</code></li></ul>
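As an illustration of the counter commands above (INCR, INCRBY, DECR, DECRBY): Redis stores the value as a string but interprets it as an integer, and a missing key counts as 0. The incrby function below is a toy Python stand-in, not a client library call:

```python
# Toy model of Redis string counters: the value lives as a string,
# arithmetic is done on its integer interpretation.

def incrby(store, key, amount=1):
    value = int(store.get(key, "0")) + amount   # missing key behaves as 0
    store[key] = str(value)                     # stored back as a string
    return value                                # Redis replies with the new value

store = {}
assert incrby(store, "counter") == 1        # INCR counter
assert incrby(store, "counter", 10) == 11   # INCRBY counter 10
assert incrby(store, "counter", -1) == 10   # DECR counter
assert store["counter"] == "10"             # still a string value
```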
<h2><a name="Commands operating on lists">Commands operating on lists</a></h2><ul><li> <a href="RpushCommand.html">RPUSH</a> <i>key</i> <i>value</i> <code name="code" class="python">Append an element to the tail of the List value at key</code></li><li> <a href="RpushCommand.html">LPUSH</a> <i>key</i> <i>value</i> <code name="code" class="python">Append an element to the head of the List value at key</code></li><li> <a href="LlenCommand.html">LLEN</a> <i>key</i> <code name="code" class="python">Return the length of the List value at key</code></li><li> <a href="LrangeCommand.html">LRANGE</a> <i>key</i> <i>start</i> <i>end</i> <code name="code" class="python">Return a range of elements from the List at key</code></li><li> <a href="LtrimCommand.html">LTRIM</a> <i>key</i> <i>start</i> <i>end</i> <code name="code" class="python">Trim the list at key to the specified range of elements</code></li><li> <a href="LindexCommand.html">LINDEX</a> <i>key</i> <i>index</i> <code name="code" class="python">Return the element at index position from the List at key</code></li><li> <a href="LsetCommand.html">LSET</a> <i>key</i> <i>index</i> <i>value</i> <code name="code" class="python">Set a new value as the element at index position of the List at key</code></li><li> <a href="LremCommand.html">LREM</a> <i>key</i> <i>count</i> <i>value</i> <code name="code" class="python">Remove the first-N, last-N, or all the elements matching value from the List at key</code></li><li> <a href="LpopCommand.html">LPOP</a> <i>key</i> <code name="code" class="python">Return and remove (atomically) the first element of the List at key</code></li><li> <a href="LpopCommand.html">RPOP</a> <i>key</i> <code name="code" class="python">Return and remove (atomically) the last element of the List at key</code></li><li> <a href="BlpopCommand.html">BLPOP</a> <i>key1</i> <i>key2</i> ... 
<i>keyN</i> <i>timeout</i> <code name="code" class="python">Blocking LPOP</code></li><li> <a href="BlpopCommand.html">BRPOP</a> <i>key1</i> <i>key2</i> ... <i>keyN</i> <i>timeout</i> <code name="code" class="python">Blocking RPOP</code></li><li> <a href="RpoplpushCommand.html">RPOPLPUSH</a> <i>srckey</i> <i>dstkey</i> <code name="code" class="python">Return and remove (atomically) the last element of the source List stored at _srckey_ and push the same element to the destination List stored at _dstkey_</code></li></ul>
<h2><a name="Commands operating on sets">Commands operating on sets</a></h2><ul><li> <a href="SaddCommand.html">SADD</a> <i>key</i> <i>member</i> <code name="code" class="python">Add the specified member to the Set value at key</code></li><li> <a href="SremCommand.html">SREM</a> <i>key</i> <i>member</i> <code name="code" class="python">Remove the specified member from the Set value at key</code></li><li> <a href="SpopCommand.html">SPOP</a> <i>key</i> <code name="code" class="python">Remove and return (pop) a random element from the Set value at key</code></li><li> <a href="SmoveCommand.html">SMOVE</a> <i>srckey</i> <i>dstkey</i> <i>member</i> <code name="code" class="python">Move the specified member from one Set to another atomically</code></li><li> <a href="ScardCommand.html">SCARD</a> <i>key</i> <code name="code" class="python">Return the number of elements (the cardinality) of the Set at key</code></li><li> <a href="SismemberCommand.html">SISMEMBER</a> <i>key</i> <i>member</i> <code name="code" class="python">Test if the specified value is a member of the Set at key</code></li><li> <a href="SinterCommand.html">SINTER</a> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">Return the intersection between the Sets stored at key1, key2, ..., keyN</code></li><li> <a href="SinterstoreCommand.html">SINTERSTORE</a> <i>dstkey</i> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">Compute the intersection between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey</code></li><li> <a href="SunionCommand.html">SUNION</a> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">Return the union between the Sets stored at key1, key2, ..., keyN</code></li><li> <a href="SunionstoreCommand.html">SUNIONSTORE</a> <i>dstkey</i> <i>key1</i> <i>key2</i> ... 
<i>keyN</i> <code name="code" class="python">Compute the union between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey</code></li><li> <a href="SdiffCommand.html">SDIFF</a> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">Return the difference between the Set stored at key1 and all the Sets key2, ..., keyN</code></li><li> <a href="SdiffstoreCommand.html">SDIFFSTORE</a> <i>dstkey</i> <i>key1</i> <i>key2</i> ... <i>keyN</i> <code name="code" class="python">Compute the difference between the Set key1 and all the Sets key2, ..., keyN, and store the resulting Set at dstkey</code></li><li> <a href="SmembersCommand.html">SMEMBERS</a> <i>key</i> <code name="code" class="python">Return all the members of the Set value at key</code></li><li> <a href="SrandmemberCommand.html">SRANDMEMBER</a> <i>key</i> <code name="code" class="python">Return a random member of the Set value at key</code></li></ul>
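The set commands above map naturally onto Python's built-in set operators. This toy model (the key names are invented for illustration) shows what the intersection, union, and difference variants compute, with the *STORE forms writing the result under a destination key:

```python
# Toy model of Redis set operations using Python sets.

sets = {
    "tags:post1": {"redis", "db", "nosql"},
    "tags:post2": {"redis", "cache"},
}

# SINTERSTORE dst tags:post1 tags:post2 -- store the intersection at dst
sets["dst"] = sets["tags:post1"] & sets["tags:post2"]
assert sets["dst"] == {"redis"}

# SUNION tags:post1 tags:post2 -- union of all the named sets
assert sets["tags:post1"] | sets["tags:post2"] == {"redis", "db", "nosql", "cache"}

# SDIFF tags:post1 tags:post2 -- members of the first set not in the others
assert sets["tags:post1"] - sets["tags:post2"] == {"db", "nosql"}
```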
<h2><a name="Commands operating on sorted sets (zsets, Redis version >">Commands operating on sorted sets (zsets, Redis version &gt; 1.1)</a></h2><ul><li> <a href="ZaddCommand.html">ZADD</a> <i>key</i> <i>score</i> <i>member</i> <code name="code" class="python">Add the specified member to the Sorted Set value at key or update the score if it already exists</code></li><li> <a href="ZremCommand.html">ZREM</a> <i>key</i> <i>member</i> <code name="code" class="python">Remove the specified member from the Sorted Set value at key</code></li><li> <a href="ZincrbyCommand.html">ZINCRBY</a> <i>key</i> <i>increment</i> <i>member</i> <code name="code" class="python">If the member already exists increment its score by _increment_, otherwise add the member setting _increment_ as score</code></li><li> <a href="ZrankCommand.html">ZRANK</a> <i>key</i> <i>member</i> <code name="code" class="python">Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from low to high</code></li><li> <a href="ZrankCommand.html">ZREVRANK</a> <i>key</i> <i>member</i> <code name="code" class="python">Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from high to low</code></li><li> <a href="ZrangeCommand.html">ZRANGE</a> <i>key</i> <i>start</i> <i>end</i> <code name="code" class="python">Return a range of elements from the sorted set at key</code></li><li> <a href="ZrangeCommand.html">ZREVRANGE</a> <i>key</i> <i>start</i> <i>end</i> <code name="code" class="python">Return a range of elements from the sorted set at key, exactly like ZRANGE, but the sorted set is traversed in reverse order, from the greatest to the smallest score</code></li><li> <a href="ZrangebyscoreCommand.html">ZRANGEBYSCORE</a> <i>key</i> <i>min</i> <i>max</i> <code name="code" class="python">Return all the elements with score >= min and score <= max (a range query) from the sorted set</code></li><li> <a href="ZcardCommand.html">ZCARD</a> <i>key</i> <code name="code" class="python">Return the cardinality (number of elements) of the sorted set at key</code></li><li> <a href="ZscoreCommand.html">ZSCORE</a> <i>key</i> <i>element</i> <code name="code" class="python">Return the score associated with the specified element of the sorted set at key</code></li><li> <a href="ZremrangebyrankCommand.html">ZREMRANGEBYRANK</a> <i>key</i> <i>min</i> <i>max</i> <code name="code" class="python">Remove all the elements with rank >= min and rank <= max from the sorted set</code></li><li> <a href="ZremrangebyscoreCommand.html">ZREMRANGEBYSCORE</a> <i>key</i> <i>min</i> <i>max</i> <code name="code" class="python">Remove all the elements with score >= min and score <= max from the sorted set</code></li><li> <a href="ZunionCommand.html">ZUNION / ZINTER</a> <i>dstkey</i> <i>N</i> <i>key1</i> ... <i>keyN</i> WEIGHTS <i>w1</i> ... <i>wN</i> AGGREGATE SUM|MIN|MAX <code name="code" class="python">Perform a union or intersection over a number of sorted sets with optional weight and aggregate</code></li></ul>
<h2><a name="Commands operating on hashes">Commands operating on hashes</a></h2><ul><li> <a href="HsetCommand.html">HSET</a> <i>key</i> <i>field</i> <i>value</i> <code name="code" class="python">Set the hash field to the specified value. Creates the hash if needed.</code></li><li> <a href="HgetCommand.html">HGET</a> <i>key</i> <i>field</i> <code name="code" class="python">Retrieve the value of the specified hash field.</code></li><li> <a href="HmsetCommand.html">HMSET</a> <i>key</i> <i>field1</i> <i>value1</i> ... <i>fieldN</i> <i>valueN</i> <code name="code" class="python">Set the hash fields to their respective values.</code></li><li> <a href="HincrbyCommand.html">HINCRBY</a> <i>key</i> <i>field</i> <i>integer</i> <code name="code" class="python">Increment the integer value of the hash at _key_ on _field_ with _integer_.</code></li><li> <a href="HexistsCommand.html">HEXISTS</a> <i>key</i> <i>field</i> <code name="code" class="python">Test for existence of a specified field in a hash</code></li><li> <a href="HdelCommand.html">HDEL</a> <i>key</i> <i>field</i> <code name="code" class="python">Remove the specified field from a hash</code></li><li> <a href="HlenCommand.html">HLEN</a> <i>key</i> <code name="code" class="python">Return the number of items in a hash.</code></li><li> <a href="HgetallCommand.html">HKEYS</a> <i>key</i> <code name="code" class="python">Return all the fields in a hash.</code></li><li> <a href="HgetallCommand.html">HVALS</a> <i>key</i> <code name="code" class="python">Return all the values in a hash.</code></li><li> <a href="HgetallCommand.html">HGETALL</a> <i>key</i> <code name="code" class="python">Return all the fields and associated values in a hash.</code></li></ul>
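A Redis hash behaves like a small dictionary stored under a single key. This toy Python model (hset is a stand-in helper, not a client API, and the key and field names are invented) mirrors HSET, HGET, HLEN, and HKEYS:

```python
# Toy model of Redis hashes: a dict of dicts, one inner dict per key.

hashes = {}

def hset(key, field, value):
    hashes.setdefault(key, {})[field] = value   # creates the hash if needed

hset("user:1", "name", "alice")   # HSET user:1 name alice
hset("user:1", "age", "30")       # HSET user:1 age 30

assert hashes["user:1"]["name"] == "alice"          # HGET user:1 name
assert len(hashes["user:1"]) == 2                   # HLEN user:1
assert sorted(hashes["user:1"]) == ["age", "name"]  # HKEYS user:1
```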
<h2><a name="Sorting">Sorting</a></h2><ul><li> <a href="SortCommand.html">SORT</a> <i>key</i> BY <i>pattern</i> LIMIT <i>start</i> <i>end</i> GET <i>pattern</i> ASC|DESC ALPHA <code name="code" class="python">Sort a Set or a List accordingly to the specified parameters</code></li></ul>
<h2><a name="Transactions">Transactions</a></h2><ul><li> <a href="MultiExecCommand.html">MULTI/EXEC/DISCARD</a> <code name="code" class="python">Redis atomic transactions</code></li></ul>
<h2><a name="Publish/Subscribe">Publish/Subscribe</a></h2><ul><li> <a href="PublishSubscribe.html">SUBSCRIBE/UNSUBSCRIBE/PUBLISH</a> <code name="code" class="python">Redis Publish/Subscribe messaging paradigm implementation</code></li></ul>
<h2><a name="Persistence control commands">Persistence control commands</a></h2><ul><li> <a href="SaveCommand.html">SAVE</a> <code name="code" class="python">Synchronously save the DB on disk</code></li><li> <a href="BgsaveCommand.html">BGSAVE</a> <code name="code" class="python">Asynchronously save the DB on disk</code></li><li> <a href="LastsaveCommand.html">LASTSAVE</a> <code name="code" class="python">Return the UNIX time stamp of the last successful save of the dataset on disk</code></li><li> <a href="ShutdownCommand.html">SHUTDOWN</a> <code name="code" class="python">Synchronously save the DB on disk, then shut down the server</code></li><li> <a href="BgrewriteaofCommand.html">BGREWRITEAOF</a> <code name="code" class="python">Rewrite the append only file in background when it gets too big</code></li></ul>
<h2><a name="Remote server control commands">Remote server control commands</a></h2><ul><li> <a href="InfoCommand.html">INFO</a> <code name="code" class="python">Provide information and statistics about the server</code></li><li> <a href="MonitorCommand.html">MONITOR</a> <code name="code" class="python">Dump all the received requests in real time</code></li><li> <a href="SlaveofCommand.html">SLAVEOF</a> <code name="code" class="python">Change the replication settings</code></li></ul>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>ExpireCommand: Contents</b><br> <a href="#EXPIRE _key_ _seconds_">EXPIRE _key_ _seconds_</a><br> <a href="#EXPIREAT _key_ _unixtime_ (Redis >">EXPIREAT _key_ _unixtime_ (Redis ></a><br> <a href="#How the expire is removed from a key">How the expire is removed from a key</a><br> <a href="#Restrictions with write operations against volatile keys">Restrictions with write operations against volatile keys</a><br> <a href="#Setting the timeout again on already volatile keys">Setting the timeout again on already volatile keys</a><br> <a href="#Enhanced Lazy Expiration algorithm">Enhanced Lazy Expiration algorithm</a><br> <a href="#Version 1.0">Version 1.0</a><br> <a href="#Version 1.1">Version 1.1</a><br> <a href="#Return value">Return value</a>
+<b>ExpireCommand: Contents</b><br> <a href="#EXPIRE _key_ _seconds_">EXPIRE _key_ _seconds_</a><br> <a href="#EXPIREAT _key_ _unixtime_ (Redis >">EXPIREAT _key_ _unixtime_ (Redis ></a><br> <a href="#How the expire is removed from a key">How the expire is removed from a key</a><br> <a href="#Restrictions with write operations against volatile keys">Restrictions with write operations against volatile keys</a><br> <a href="#Setting the timeout again on already volatile keys">Setting the timeout again on already volatile keys</a><br> <a href="#Enhanced Lazy Expiration algorithm">Enhanced Lazy Expiration algorithm</a><br> <a href="#Version 1.0">Version 1.0</a><br> <a href="#Version 1.1">Version 1.1</a><br> <a href="#Return value">Return value</a><br> <a href="#FAQ: Can you explain better why Redis deletes keys with an EXPIRE on write operations?">FAQ: Can you explain better why Redis deletes keys with an EXPIRE on write operations?</a>
</div>
<h1 class="wikiname">ExpireCommand</h1>
<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Integer reply</a>, specifically:<br/><br/><pre class="codeblock python python" name="code">
1: the timeout was set.
0: the timeout was not set since the key already has an associated timeout, or the key does not exist.
+</pre><h2><a name="FAQ: Can you explain better why Redis deletes keys with an EXPIRE on write operations?">FAQ: Can you explain better why Redis deletes keys with an EXPIRE on write operations?</a></h2>
+OK, let's start with the problem:
+<pre class="codeblock python python python" name="code">
+redis> set a 100
+OK
+redis> expire a 360
+(integer) 1
+redis> incr a
+(integer) 1
</pre>
-
+I set a key to the value of 100, then set an expire of 360 seconds on it, and then incremented the key (well before the 360 second timeout expired, of course). The obvious result would be 101; instead, the key is set to the value of 1. Why?
+There is a very important reason for this, involving the Append Only File and replication. Let's rework our example a bit, adding the notion of time to the mix:
+<pre class="codeblock python python python python" name="code">
+SET a 100
+EXPIRE a 5
+... wait 10 seconds ...
+INCR a
+</pre>
+Imagine a Redis version that does not implement the "delete keys with an expire set on write operations" semantic.
+Running the above example with the 10 second pause leads to 'a' being set to the value of 1, as the key no longer exists when INCR is called 10 seconds later.<br/><br/>Instead, if we drop the 10 second pause, the result is that 'a' is set to 101.<br/><br/>And in practice timing changes! For instance the client may wait 10 seconds before INCR, but the sequence written in the Append Only File (and later replayed back as fast as possible when Redis restarts) will not have the pause. Even if we add a timestamp in the AOF, when the time difference is smaller than our timer resolution, we have a race condition.<br/><br/>The same happens with master-slave replication. Again, consider the example above: the client will use the same sequence of commands without the 10 second pause, but the replication link may slow down for a few seconds due to a network problem. Result? The master will contain 'a' set to 101, the slave 'a' set to 1.<br/><br/>The only way to avoid this, while still having reliable, non time dependent timeouts on keys, is to destroy a volatile key when a write operation is attempted against it.<br/><br/>After all, Redis is one of the rare fully persistent databases offering EXPIRE. This comes at a cost :)
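A toy sketch of this "delete volatile keys on write" semantic (illustrative Python only; `ToyStore` and its methods are invented names, not Redis code) shows why the result no longer depends on timing:

```python
import time

# Toy store that, like Redis here, destroys a volatile key on any write
# operation, regardless of whether the timeout has actually elapsed.
class ToyStore:
    def __init__(self):
        self.data = {}
        self.expires = {}  # key -> absolute unix time

    def set(self, key, value):
        self.data[key] = str(value)
        self.expires.pop(key, None)  # SET clears any pending expire

    def expire(self, key, seconds):
        if key in self.data:
            self.expires[key] = time.time() + seconds
            return 1
        return 0

    def incr(self, key):
        # Write against a volatile key: delete it first. This is what
        # makes AOF replay and replication deterministic.
        if key in self.expires:
            del self.data[key]
            del self.expires[key]
        new = int(self.data.get(key, "0")) + 1
        self.data[key] = str(new)
        return new

store = ToyStore()
store.set("a", 100)
store.expire("a", 360)
print(store.incr("a"))  # 1, not 101: the key was volatile
```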
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>FAQ: Contents</b><br> <a href="#Isn't this key-value thing just hype?">Isn't this key-value thing just hype?</a><br> <a href="#Can I backup a Redis DB while the server is working?">Can I backup a Redis DB while the server is working?</a><br> <a href="#What's the Redis memory footprint?">What's the Redis memory footprint?</a><br> <a href="#I like Redis high level operations and features, but I don't like it takes everything in memory and I can't have a dataset larger the memory. Plans to change this?">I like Redis high level operations and features, but I don't like it takes everything in memory and I can't have a dataset larger the memory. Plans to change this?</a><br> <a href="#Why Redis takes the whole dataset in RAM?">Why Redis takes the whole dataset in RAM?</a><br> <a href="#If my dataset is too big for RAM and I don't want to use consistent hashing or other ways to distribute the dataset across different nodes, what I can do to use Redis anyway?">If my dataset is too big for RAM and I don't want to use consistent hashing or other ways to distribute the dataset across different nodes, what I can do to use Redis anyway?</a><br> <a href="#Do you plan to implement Virtual Memory in Redis? Why don't just let the Operating System handle it for you?">Do you plan to implement Virtual Memory in Redis? 
Why don't just let the Operating System handle it for you?</a><br> <a href="#I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!">I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!</a><br> <a href="#What happens if Redis runs out of memory?">What happens if Redis runs out of memory?</a><br> <a href="#How much time it takes to load a big database at server startup?">How much time it takes to load a big database at server startup?</a><br> <a href="#Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!">Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!</a><br> <a href="#Are Redis on disk snapshots atomic?">Are Redis on disk snapshots atomic?</a><br> <a href="#Redis is single threaded, how can I exploit multiple CPU / cores?">Redis is single threaded, how can I exploit multiple CPU / cores?</a><br> <a href="#I'm using some form of key hashing for partitioning, but what about SORT BY?">I'm using some form of key hashing for partitioning, but what about SORT BY?</a><br> <a href="#What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set?">What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set?</a><br> <a href="#What Redis means actually?">What Redis means actually?</a><br> <a href="#Why did you started the Redis project?">Why did you started the Redis project?</a>
+<b>FAQ: Contents</b><br> <a href="#Isn't this key-value thing just hype?">Isn't this key-value thing just hype?</a><br> <a href="#Can I backup a Redis DB while the server is working?">Can I backup a Redis DB while the server is working?</a><br> <a href="#What's the Redis memory footprint?">What's the Redis memory footprint?</a><br> <a href="#I like Redis high level operations and features, but I don't like it takes everything in memory and I can't have a dataset larger the memory. Plans to change this?">I like Redis high level operations and features, but I don't like that it takes everything in memory and I can't have a dataset larger than memory. Plans to change this?</a><br> <a href="#Why Redis takes the whole dataset in RAM?">Why does Redis take the whole dataset in RAM?</a><br> <a href="#If my dataset is too big for RAM and I don't want to use consistent hashing or other ways to distribute the dataset across different nodes, what I can do to use Redis anyway?">If my dataset is too big for RAM and I don't want to use consistent hashing or other ways to distribute the dataset across different nodes, what can I do to use Redis anyway?</a><br> <a href="#Do you plan to implement Virtual Memory in Redis? Why don't just let the Operating System handle it for you?">Do you plan to implement Virtual Memory in Redis? Why not just let the Operating System handle it for you?</a><br> <a href="#Is there something I can do to lower the Redis memory usage?">Is there something I can do to lower the Redis memory usage?</a><br> <a href="#I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!">I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!</a><br> <a href="#What happens if Redis runs out of memory?">What happens if Redis runs out of memory?</a><br> <a href="#Does Redis use more memory running in 64 bit boxes? Can I use 32 bit Redis in 64 bit systems?">Does Redis use more memory running in 64 bit boxes? 
Can I use 32 bit Redis in 64 bit systems?</a><br> <a href="#How much time it takes to load a big database at server startup?">How much time does it take to load a big database at server startup?</a><br> <a href="#Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!">Background saving is failing with a fork() error under Linux even if I have a lot of free RAM!</a><br> <a href="#Are Redis on disk snapshots atomic?">Are Redis on disk snapshots atomic?</a><br> <a href="#Redis is single threaded, how can I exploit multiple CPU / cores?">Redis is single threaded, how can I exploit multiple CPU / cores?</a><br> <a href="#I'm using some form of key hashing for partitioning, but what about SORT BY?">I'm using some form of key hashing for partitioning, but what about SORT BY?</a><br> <a href="#What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set?">What is the maximum number of keys a single Redis instance can hold? And what is the max number of elements in a List, Set, Ordered Set?</a><br> <a href="#What Redis means actually?">What does Redis actually mean?</a><br> <a href="#Why did you started the Redis project?">Why did you start the Redis project?</a>
</div>
<h1 class="wikiname">FAQ</h1>
memory is full of pointers, reference counters and other metadata. Add
to this malloc fragmentation and the need to return word-aligned chunks of
memory and you have a clear picture of what happens. So this means to
-have 10 times the I/O between memory and disk than otherwise needed.<h1><a name="I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!">I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!</a></h1>This may happen and it's prefectly ok. Redis objects are small C structures allocated and freed a lot of times. This costs a lot of CPU so instead of being freed, released objects are taken into a free list and reused when needed. This memory is taken exactly by this free objects ready to be reused.<h1><a name="What happens if Redis runs out of memory?">What happens if Redis runs out of memory?</a></h1>With modern operating systems malloc() returning NULL is not common, usually the server will start swapping and Redis performances will be disastrous so you'll know it's time to use more Redis servers or get more RAM.<br/><br/>The INFO command (work in progress in this days) will report the amount of memory Redis is using so you can write scripts that monitor your Redis servers checking for critical conditions.<br/><br/>You can also use the "maxmemory" option in the config file to put a limit to the memory Redis can use. If this limit is reached Redis will start to reply with an error to write commands (but will continue to accept read-only commands).<h1><a name="How much time it takes to load a big database at server startup?">How much time it takes to load a big database at server startup?</a></h1>Just an example on normal hardware: It takes about 45 seconds to restore a 2 GB database on a fairly standard system, no RAID. 
This can give you some kind of feeling about the order of magnitude of the time needed to load data when you restart the server.<h1><a name="Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!">Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!</a></h1>Short answer: <code name="code" class="python">echo 1 > /proc/sys/vm/overcommit_memory</code> :)<br/><br/>And now the long one:<br/><br/>Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent being a copy, but actually thanks to the copy-on-write semantic implemented by most modern operating systems the parent and child process will <i>share</i> the common memory pages. A page will be duplicated only when it changes in the child or in the parent. 
Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the <code name="code" class="python">overcommit_memory</code> setting is set to zero fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.<br/><br/>Setting <code name="code" class="python">overcommit_memory</code> to 1 says Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.<h1><a name="Are Redis on disk snapshots atomic?">Are Redis on disk snapshots atomic?</a></h1>Yes, redis background saving process is always fork(2)ed when the server is outside of the execution of a command, so every command reported to be atomic in RAM is also atomic from the point of view of the disk snapshot.<h1><a name="Redis is single threaded, how can I exploit multiple CPU / cores?">Redis is single threaded, how can I exploit multiple CPU / cores?</a></h1>Simply start multiple instances of Redis in different ports in the same box and threat them as different servers! Given that Redis is a distributed database anyway in order to scale you need to think in terms of multiple computational units. At some point a single box may not be enough anyway.<br/><br/>In general key-value databases are very scalable because of the property that different keys can stay on different servers independently.<br/><br/>In Redis there are client libraries such Redis-rb (the Ruby client) that are able to handle multiple servers automatically using <i>consistent hashing</i>. We are going to implement consistent hashing in all the other major client libraries. If you use a different language you can implement it yourself otherwise just hash the key before to SET / GET it from a given server. 
For example imagine to have N Redis servers, server-0, server-1, ..., server-N. You want to store the key "foo", what's the right server where to put "foo" in order to distribute keys evenly among different servers? Just perform the <i>crc</i> = CRC32("foo"), then <i>servernum</i> = <i>crc</i> % N (the rest of the division for N). This will give a number between 0 and N-1 for every key. Connect to this server and store the key. The same for gets.<br/><br/>This is a basic way of performing key partitioning, consistent hashing is much better and this is why after Redis 1.0 will be released we'll try to implement this in every widely used client library starting from Python and PHP (Ruby already implements this support).<h1><a name="I'm using some form of key hashing for partitioning, but what about SORT BY?">I'm using some form of key hashing for partitioning, but what about SORT BY?</a></h1>With <a href="SortCommand.html">SORT</a> BY you need that all the <i>weight keys</i> are in the same Redis instance of the list/set you are trying to sort. In order to make this possible we developed a concept called <i>key tags</i>. A key tag is a special pattern inside a key that, if preset, is the only part of the key hashed in order to select the server for this key. For example in order to hash the key "foo" I simply perform the CRC32 checksum of the whole string, but if this key has a pattern in the form of the characters {...} I only hash this substring. So for example for the key "foo{bared}" the key hashing code will simply perform the CRC32 of "bared". This way using key tags you can ensure that related keys will be stored on the same Redis instance just using the same key tag for all this keys. Redis-rb already implements key tags.<h1><a name="What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set?">What is the maximum number of keys a single Redis instance can hold? 
and what the max number of elements in a List, Set, Ordered Set?</a></h1>In theory Redis can handle up to 2<sup>32 keys, and was tested in practice to handle at least 150 million of keys per instance. We are working in order to experiment with larger values.<br/><br/>Every list, set, and ordered set, can hold 2</sup>32 elements.<br/><br/>Actually Redis internals are ready to allow up to 2<sup>64 elements but the current disk dump format don't support this, and there is a lot time to fix this issues in the future as currently even with 128 GB of RAM it's impossible to reach 2</sup>32 elements.<h1><a name="What Redis means actually?">What Redis means actually?</a></h1>Redis means two things:
+have 10 times the I/O between memory and disk than otherwise needed.<h1><a name="Is there something I can do to lower the Redis memory usage?">Is there something I can do to lower the Redis memory usage?</a></h1>Yes, try to compile it with a 32 bit target if you are using a 64 bit box.<br/><br/>If you are using Redis >= 1.3, try using the Hash data type, it can save a lot of memory.<br/><br/>If you are using hashes or any other type with values bigger than 128 bytes, also try this to lower the RSS usage (Resident Set Size): export MMAP_THRESHOLD=4096<h1><a name="I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!">I have an empty Redis server but INFO and logs are reporting megabytes of memory in use!</a></h1>This may happen and it's perfectly ok. Redis objects are small C structures that are allocated and freed many times. This costs a lot of CPU, so instead of being freed, released objects are placed into a free list and reused when needed. This memory is taken exactly by these free objects ready to be reused.<h1><a name="What happens if Redis runs out of memory?">What happens if Redis runs out of memory?</a></h1>With modern operating systems malloc() returning NULL is not common; usually the server will start swapping and Redis performance will be disastrous, so you'll know it's time to use more Redis servers or get more RAM.<br/><br/>The INFO command (a work in progress these days) reports the amount of memory Redis is using, so you can write scripts that monitor your Redis servers checking for critical conditions.<br/><br/>You can also use the "maxmemory" option in the config file to put a limit on the memory Redis can use. If this limit is reached Redis will start to reply with an error to write commands (but will continue to accept read-only commands).<h1><a name="Does Redis use more memory running in 64 bit boxes? Can I use 32 bit Redis in 64 bit systems?">Does Redis use more memory running in 64 bit boxes? 
Can I use 32 bit Redis in 64 bit systems?</a></h1>Redis uses a lot more memory when compiled for a 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference.<br/><br/>You can run 32 bit Redis binaries on a 64 bit Linux or Mac OS X system without problems. For OS X just use <b>make 32bit</b>. For Linux, instead, make sure you have <b>libc6-dev-i386</b> installed, then use <b>make 32bit</b> if you are using the latest Git version. For Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32".<br/><br/>If your application is already able to perform application-level sharding, it is very advisable to run N 32 bit Redis instances against a big 64 bit box (with more than 4GB of RAM) instead of a single 64 bit instance, as this is much more memory efficient. <h1><a name="How much time it takes to load a big database at server startup?">How much time does it take to load a big database at server startup?</a></h1>Just an example on normal hardware: it takes about 45 seconds to restore a 2 GB database on a fairly standard system with no RAID. This can give you some feeling for the order of magnitude of the time needed to load data when you restart the server.<h1><a name="Background saving is failing with a fork() error under Linux even if I've a lot of free RAM!">Background saving is failing with a fork() error under Linux even if I have a lot of free RAM!</a></h1>Short answer: <code name="code" class="python">echo 1 > /proc/sys/vm/overcommit_memory</code> :)<br/><br/>And now the long one:<br/><br/>The Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. 
In theory the child should use as much memory as the parent, being a copy, but actually, thanks to the copy-on-write semantic implemented by most modern operating systems, the parent and child process will <i>share</i> the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the <code name="code" class="python">overcommit_memory</code> setting is set to zero, fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail.<br/><br/>Setting <code name="code" class="python">overcommit_memory</code> to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.<h1><a name="Are Redis on disk snapshots atomic?">Are Redis on disk snapshots atomic?</a></h1>Yes, the Redis background saving process is always fork(2)ed when the server is outside the execution of a command, so every command reported to be atomic in RAM is also atomic from the point of view of the disk snapshot.<h1><a name="Redis is single threaded, how can I exploit multiple CPU / cores?">Redis is single threaded, how can I exploit multiple CPU / cores?</a></h1>Simply start multiple instances of Redis on different ports on the same box and treat them as different servers! Given that Redis is a distributed database anyway, in order to scale you need to think in terms of multiple computational units. 
At some point a single box may not be enough anyway.<br/><br/>In general key-value databases are very scalable because of the property that different keys can stay on different servers independently.<br/><br/>For Redis there are client libraries, such as Redis-rb (the Ruby client), that are able to handle multiple servers automatically using <i>consistent hashing</i>. We are going to implement consistent hashing in all the other major client libraries. If you use a different language you can implement it yourself; otherwise just hash the key before SETting / GETting it from a given server. For example, imagine you have N Redis servers, server-0, server-1, ..., server-(N-1). You want to store the key "foo": what's the right server where to put "foo" in order to distribute keys evenly among different servers? Just compute <i>crc</i> = CRC32("foo"), then <i>servernum</i> = <i>crc</i> % N (the remainder of the division by N). This will give a number between 0 and N-1 for every key. Connect to this server and store the key. The same goes for reads.<br/><br/>This is a basic way of performing key partitioning; consistent hashing is much better, and this is why, after Redis 1.0 is released, we'll try to implement this in every widely used client library starting from Python and PHP (Ruby already implements this support).<h1><a name="I'm using some form of key hashing for partitioning, but what about SORT BY?">I'm using some form of key hashing for partitioning, but what about SORT BY?</a></h1>With <a href="SortCommand.html">SORT</a> BY you need all the <i>weight keys</i> to be in the same Redis instance as the list/set you are trying to sort. In order to make this possible we developed a concept called <i>key tags</i>. A key tag is a special pattern inside a key that, if present, is the only part of the key hashed in order to select the server for this key. 
For example, in order to hash the key "foo" I simply perform the CRC32 checksum of the whole string, but if the key contains a pattern in the form of the characters {...} only that substring is hashed. So for example for the key "foo{bared}" the key hashing code will simply perform the CRC32 of "bared". This way, using key tags, you can ensure that related keys will be stored on the same Redis instance, just by using the same key tag for all these keys. Redis-rb already implements key tags.<h1><a name="What is the maximum number of keys a single Redis instance can hold? and what the max number of elements in a List, Set, Ordered Set?">What is the maximum number of keys a single Redis instance can hold? And what is the max number of elements in a List, Set, Ordered Set?</a></h1>In theory Redis can handle up to 2<sup>32</sup> keys, and was tested in practice to handle at least 150 million keys per instance. We are working to experiment with larger values.<br/><br/>Every list, set, and sorted set can hold 2<sup>32</sup> elements.<br/><br/>Actually Redis internals are ready to allow up to 2<sup>64</sup> elements, but the current disk dump format doesn't support this, and there is plenty of time to fix this issue in the future, as currently even with 128 GB of RAM it's impossible to reach 2<sup>32</sup> elements.<h1><a name="What Redis means actually?">What does Redis actually mean?</a></h1>Redis means two things:
<ul><li> it's a joke on the word Redistribute (instead of using just a relational DB, redistribute your workload among Redis servers)</li><li> it means REmote DIctionary Server</li></ul>
<h1><a name="Why did you started the Redis project?">Why did you start the Redis project?</a></h1>In order to scale <a href="http://lloogg.com" target="_blank">LLOOGG</a>. But after I got the basic server working, I liked the idea of sharing the work with other guys, and Redis was turned into an open source project.
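The key tag scheme described in the SORT BY answer above can be sketched in a few lines of Python (a sketch only; `shard_for` is an invented helper name, and zlib's CRC32 stands in for whatever checksum a client library actually uses):

```python
import zlib

# If the key contains {...}, hash only the substring inside the braces;
# otherwise hash the whole key. The checksum modulo the number of
# servers picks the instance, giving a value between 0 and N-1.
def shard_for(key, num_servers):
    start = key.find("{")
    end = key.find("}", start + 1)
    if start != -1 and end > start + 1:
        key = key[start + 1:end]  # e.g. "foo{bared}" hashes only "bared"
    crc = zlib.crc32(key.encode()) & 0xFFFFFFFF
    return crc % num_servers

# Keys sharing a tag always land on the same instance:
assert shard_for("foo{bared}", 4) == shard_for("bar{bared}", 4)
```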
-
</div>
</div>
<i>Time complexity: O(1)</i><blockquote>Increment or decrement the number stored at <i>key</i> by one. If the key does not exist or contains a value of the wrong type, set the key to the value of "0" before performing the increment or decrement operation.</blockquote>
<blockquote>INCRBY and DECRBY work just like INCR and DECR, but instead of incrementing/decrementing by 1, they increment/decrement by <i>integer</i>.</blockquote>
<blockquote>INCR commands are limited to 64 bit signed integers.</blockquote>
-<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Integer reply</a>, this commands will reply with the new value of <i>key</i> after the increment or decrement.
-
+Note: this is actually a string operation; that is, Redis has no "integer" type. The string stored at the key is simply parsed as a base 10 64 bit signed integer, incremented, and then converted back to a string.<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Integer reply</a>; this command will reply with the new value of <i>key</i> after the increment or decrement.
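The note above can be illustrated with a short sketch (a plain Python dict stands in for the Redis keyspace; `incr_by` is an invented helper, not client API):

```python
# Toy sketch of INCR/INCRBY string semantics: parse the stored string as
# a base-10 integer, add the step, and store the result back as a string.
def incr_by(db, key, amount=1):
    try:
        value = int(db.get(key, "0"))  # missing key counts as "0"
    except ValueError:
        value = 0  # wrong type: reset to "0" before the operation
    value += amount
    db[key] = str(value)  # stored back as a string, not an integer
    return value

db = {}
assert incr_by(db, "counter") == 1       # missing key starts from "0"
assert incr_by(db, "counter", 10) == 11  # INCRBY-style step
assert db["counter"] == "11"             # the value is a string
```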
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>IntroductionToRedisDataTypes: Contents</b><br> <a href="#Redis keys">Redis keys</a><br> <a href="#The string type">The string type</a><br> <a href="#The List type">The List type</a><br> <a href="#First steps with Redis lists">First steps with Redis lists</a><br> <a href="#Pushing IDs instead of the actual data in Redis lists">Pushing IDs instead of the actual data in Redis lists</a><br> <a href="#Redis Sets">Redis Sets</a><br> <a href="#A digression. How to get unique identifiers for strings">A digression. How to get unique identifiers for strings</a><br> <a href="#Sorted sets">Sorted sets</a><br> <a href="#Operating on ranges">Operating on ranges</a><br> <a href="#Back to the reddit example">Back to the reddit example</a><br> <a href="#Updating the scores of a sorted set">Updating the scores of a sorted set</a>
+<b>IntroductionToRedisDataTypes: Contents</b><br> <a href="#A fifteen minutes introduction to Redis data types">A fifteen minutes introduction to Redis data types</a><br> <a href="#Redis keys">Redis keys</a><br> <a href="#The string type">The string type</a><br> <a href="#The List type">The List type</a><br> <a href="#First steps with Redis lists">First steps with Redis lists</a><br> <a href="#Pushing IDs instead of the actual data in Redis lists">Pushing IDs instead of the actual data in Redis lists</a><br> <a href="#Redis Sets">Redis Sets</a><br> <a href="#A digression. How to get unique identifiers for strings">A digression. How to get unique identifiers for strings</a><br> <a href="#Sorted sets">Sorted sets</a><br> <a href="#Operating on ranges">Operating on ranges</a><br> <a href="#Back to the reddit example">Back to the reddit example</a><br> <a href="#Updating the scores of a sorted set">Updating the scores of a sorted set</a>
</div>
<h1 class="wikiname">IntroductionToRedisDataTypes</h1>
</div>
<div class="narrow">
- = A fifteen minutes introduction to Redis data types =<br/><br/>As you already probably know Redis is not a plain key-value store, actually it is a <b>data structures server</b>, supporting different kind of values. That is, you can't just set strings as values of keys. All the following data types are supported as values:<br/><br/><ul><li> Binary-safe strings.</li><li> Lists of binary-safe strings.</li><li> Sets of binary-safe strings, that are collection of unique unsorted elements. You can think at this as a Ruby hash where all the keys are set to the 'true' value.</li><li> Sorted sets, similar to Sets but where every element is associated to a floating number score. The elements are taken sorted by score. You can think at this as Ruby hashes where the key is the element and the value is the score, but where elements are always taken in order without requiring a sorting operation.</li></ul>
+ #sidebar <a href="RedisGuides.html">RedisGuides</a>
+<h1><a name="A fifteen minutes introduction to Redis data types">A fifteen minutes introduction to Redis data types</a></h1>As you probably already know, Redis is not a plain key-value store: it is actually a <b>data structures server</b>, supporting different kinds of values. That is, you are not limited to plain strings as the values of keys. All the following data types are supported as values:<br/><br/><ul><li> Binary-safe strings.</li><li> Lists of binary-safe strings.</li><li> Sets of binary-safe strings, that is, collections of unique, unsorted elements. You can think of a Set as a Ruby hash where all the keys are set to the 'true' value.</li><li> Sorted sets, similar to Sets but where every element is associated with a floating point number score. Elements are retrieved sorted by score. You can think of a Sorted set as a Ruby hash where the key is the element and the value is the score, but where elements are always retrieved in score order without requiring a sorting operation.</li></ul>
It's not always trivial to grasp how these data types work and which one to use in order to solve a given problem from the <a href="CommandReference.html">Redis command reference</a>, so this document is a crash course on Redis data types and their most common usage patterns.<br/><br/>For all the examples we'll use the <b>redis-cli</b> utility, a simple but handy command line tool to issue commands against the Redis server.<h2><a name="Redis keys">Redis keys</a></h2>Before starting to talk about the different kinds of values supported by Redis, it is better to point out that keys are not binary-safe strings in Redis, but just strings not containing a space or a newline character. For instance "foo", "123456789" or "foo_bar" are valid keys, while "hello world" or "hello\n" are not.<br/><br/>Actually there is nothing inside the Redis internals preventing the use of binary keys; it's just a matter of protocol, and in fact the new protocol introduced with Redis 1.2 (1.2 betas are 1.1.x) in order to implement commands like MSET is totally binary safe. Still, for now, consider this a hard limit, as the database is only tested with "normal" keys.<br/><br/>A few other rules about keys:<br/><br/><ul><li> Very long keys are not a good idea. For instance a key of 1024 bytes is a bad idea not only memory-wise, but also because looking the key up in the dataset may require several costly key comparisons.</li><li> Very short keys are often not a good idea either. There is no point in writing "u:1000:pwd" as a key if you can write "user:1000:password" instead; the latter is more readable and the added space is tiny compared to the space used by the key object itself.</li><li> Try to stick with a schema. For instance "object-type:id:field" can be a nice idea, as in "user:1000:password". I like to use dots for multi-word fields, as in "comment:1234:reply.to".</li></ul>
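The "object-type:id:field" schema from the last rule can be captured by a tiny helper (a hypothetical function of my own, shown for illustration, not part of any Redis client):

```python
def redis_key(obj_type, obj_id, field):
    """Build a key following the "object-type:id:field" schema
    suggested above. Hypothetical helper, for illustration only."""
    return f"{obj_type}:{obj_id}:{field}"

redis_key("user", 1000, "password")      # -> "user:1000:password"
redis_key("comment", 1234, "reply.to")   # -> "comment:1234:reply.to"
```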
<h2><a name="The string type">The string type</a></h2>This is the simplest Redis type. If you use only this type, Redis will be something like a memcached server with persistence.<br/><br/>Let's play a bit with the string type:<br/><br/><pre class="codeblock python" name="code">
$ ./redis-cli set mykey "my binary safe value"
OK
$ ./redis-cli set counter 100
OK
$ ./redis-cli incr counter
(integer) 101
$ ./redis-cli incr counter
(integer) 102
$ ./redis-cli incrby counter 10
(integer) 112
-</pre>The <a href="IncrCommand.html">INCR</a> command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like <a href="IncrCommand.html">INCRBY</a>, <a href="IncrCommand.html">DECR</a> and <a href="IncrCommand.html">DECRBY</a>. Actually internally it's always the same command, acting in a slightly different way.<br/><br/>What means that INCR is atomic? That even multiple clients issuing INCR against the same key will never incur into a race condition. For instance it can't never happen that client 1 read "10", client 2 read "10" at the same time, both increment to 11, and set the new value of 11. The final value will always be of 12 ad the read-increment-set operation is performed while all the other clients are not executing a command at the same time.<br/><br/>Another interesting operation on string is the <a href="GetsetCommand.html">GETSET</a> command, that does just what its name suggests: Set a key to a new value, returning the old value, as result. Why this is useful? Example: you have a system that increments a Redis key using the <a href="IncrCommand.html">INCR</a> command every time your web site receives a new visit. You want to collect this information one time every hour, without loosing a single key. You can GETSET the key assigning it the new value of "0" and reading the old value back.<h2><a name="The List type">The List type</a></h2>To explain the List data type it's better to start with a little of theory, as the term <b>List</b> is often used in an improper way by information technology folks. 
For instance "Python Lists" are not what the name may suggest (Linked Lists), but them are actually Arrays (the same data type is called Array in Ruby actually).<br/><br/>From a very general point of view a List is just a sequence of ordered elements: 10,20,1,2,3 is a list, but when a list of items is implemented using an Array and when instead a <b>Linked List</b> is used for the implementation, the properties change a lot.<br/><br/>Redis lists are implemented via Linked Lists, this means that even if you have million of elements inside a list, the operation of adding a new element in the head or in the tail of the list is performed <b>in constant time</b>. Adding a new element with the <a href="LpopCommand.html">LPOP</a> command to the head of a ten elements list is the same speed as adding an element to the head of a 10 million elements list.<br/><br/>What's the downside? That accessing an element <b>by index</b> is very fast in lists implemented with an Array and not so fast in lists implemented by linked lists.<br/><br/>Redis Lists are implemented with linked lists because for a database system is crucial to be able to add elements to a very long list in a very fast way. Another strong advantage is, as you'll see in a moment, that Redis Lists can be taken at constant length in constant time.<h3><a name="First steps with Redis lists">First steps with Redis lists</a></h3>The <a href="RpushCommand.html">LPUSH</a> command add a new element into a list, on the left (on head), while the <a href="RpushCommand.html">RPUSH</a> command add a new element into alist, ot the right (on tail). Finally the <a href="LrangeCommand.html">LRANGE</a> command extract ranges of elements from lists:<br/><br/><pre class="codeblock python python python" name="code">
+</pre>The <a href="IncrCommand.html">INCR</a> command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new string value. There are other similar commands like <a href="IncrCommand.html">INCRBY</a>, <a href="IncrCommand.html">DECR</a> and <a href="IncrCommand.html">DECRBY</a>. Internally it's always the same command, acting in a slightly different way.<br/><br/>What does it mean that INCR is atomic? It means that even multiple clients issuing INCR against the same key will never incur a race condition. For instance, it can never happen that client 1 reads "10" and client 2 reads "10" at the same time, both increment it to 11, and set the new value to 11. The final value will always be 12, as the read-increment-set operation is performed while no other client is executing a command at the same time.<br/><br/>Another interesting operation on strings is the <a href="GetsetCommand.html">GETSET</a> command, which does just what its name suggests: it sets a key to a new value, returning the old value as the result. Why is this useful? Example: you have a system that increments a Redis key using the <a href="IncrCommand.html">INCR</a> command every time your web site receives a new visit. You want to collect this information once every hour, without losing a single count. You can GETSET the key, assigning it the new value "0" and reading the old value back.<h2><a name="The List type">The List type</a></h2>To explain the List data type it's better to start with a little theory, as the term <b>List</b> is often used in an improper way by information technology folks. 
For instance "Python Lists" are not what the name may suggest (Linked Lists); they are actually Arrays (the same data type is called Array in Ruby, in fact).<br/><br/>From a very general point of view a List is just a sequence of ordered elements: 10,20,1,2,3 is a list. But the properties of a list implemented using an Array are very different from those of a list implemented using a <b>Linked List</b>.<br/><br/>Redis lists are implemented via Linked Lists. This means that even if you have millions of elements inside a list, the operation of adding a new element at the head or at the tail of the list is performed <b>in constant time</b>. Adding a new element with the <a href="LpushCommand.html">LPUSH</a> command to the head of a ten-element list is just as fast as adding an element to the head of a 10 million element list.<br/><br/>What's the downside? Accessing an element <b>by index</b> is very fast in lists implemented with an Array, and not so fast in lists implemented with linked lists.<br/><br/>Redis Lists are implemented with linked lists because for a database system it is crucial to be able to add elements to a very long list in a very fast way. Another strong advantage is, as you'll see in a moment, that Redis Lists can be capped at a constant length in constant time.<h3><a name="First steps with Redis lists">First steps with Redis lists</a></h3>The <a href="RpushCommand.html">LPUSH</a> command adds a new element to a list, on the left (at the head), while the <a href="RpushCommand.html">RPUSH</a> command adds a new element to a list, on the right (at the tail). Finally the <a href="LrangeCommand.html">LRANGE</a> command extracts ranges of elements from lists:<br/><br/><pre class="codeblock python python python" name="code">
$ ./redis-cli rpush messages "Hello how are you?"
OK
$ ./redis-cli rpush messages "Fine thanks. I'm having fun with Redis"
OK
$ ./redis-cli zremrangebyscore hackers 1940 1960
(integer) 2
</pre><a href="ZremrangebyscoreCommand.html">ZREMRANGEBYSCORE</a> is not the best command name, but it can be very useful, and it returns the number of removed elements.<h3><a name="Back to the reddit example">Back to the reddit example</a></h3>For the last time, back to the Reddit example. Now we have a decent plan to populate a sorted set in order to generate the home page. A sorted set can contain all the news that is not older than a few days (we remove old entries from time to time using ZREMRANGEBYSCORE). A background job gets all the elements from this sorted set, gets the user votes and the time of each news item, and computes the score to populate the <b>reddit.home.page</b> sorted set with the news IDs and associated scores. To show the home page we just have to perform a blazingly fast call to ZRANGE.<br/><br/>From time to time we'll remove news that is too old from the <b>reddit.home.page</b> sorted set as well, in order for our system to always work against a limited set of news.<h3><a name="Updating the scores of a sorted set">Updating the scores of a sorted set</a></h3>Just a final note before finishing this tutorial. Sorted set scores can be updated at any time. Simply calling ZADD again against an element already included in the sorted set will update its score (and position) in O(log(N)), so sorted sets are suitable even when there are tons of updates.<br/><br/>This tutorial is in no way complete; these are just the basics to get started with Redis. Read the <a href="CommandReference.html">Command Reference</a> to discover a lot more.<br/><br/>Thanks for reading. Salvatore.
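The background job described above can be sketched with a plain Python dict standing in for the <b>reddit.home.page</b> sorted set. The scoring formula here is an assumption for illustration, not reddit's real one; note how re-adding a member simply overwrites its score, just like ZADD does:

```python
import time

def news_score(votes, posted_at, now=None):
    # Hypothetical score: votes discounted by the age of the news item.
    now = time.time() if now is None else now
    age_hours = (now - posted_at) / 3600.0
    return votes / (1.0 + age_hours)

home_page = {}  # member -> score, standing in for a sorted set
home_page["news:1"] = news_score(10, posted_at=0, now=3600)     # one hour old
home_page["news:1"] = news_score(25, posted_at=0, now=3600)     # "ZADD" again: score updated in place
home_page["news:2"] = news_score(10, posted_at=3600, now=3600)  # brand new item
ranked = sorted(home_page, key=home_page.get, reverse=True)     # highest score first
```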
+
</div>
</div>
</div>
<div class="narrow">
- == List Commands ==<br/><br/><ul><li> <a href="RpushCommand.html">RPUSH</a></li><li> <a href="RpushCommand.html">LPUSH</a></li><li> <a href="LlenCommand.html">LLEN</a></li><li> <a href="LrangeCommand.html">LRANGE</a></li><li> <a href="LtrimCommand.html">LTRIM</a></li><li> <a href="LindexCommand.html">LINDEX</a></li><li> <a href="LsetCommand.html">LSET</a></li><li> <a href="LremCommand.html">LREM</a></li><li> <a href="LpopCommand.html">LPOP</a></li><li> <a href="LpopCommand.html">RPOP</a></li><li> <a href="RpoplpushCommand.html">RPOPLPUSH</a></li><li> <a href="SortCommand.html">SORT</a></li></ul>
+ == List Commands ==<br/><br/><ul><li> <a href="RpushCommand.html">RPUSH</a></li><li> <a href="RpushCommand.html">LPUSH</a></li><li> <a href="LlenCommand.html">LLEN</a></li><li> <a href="LrangeCommand.html">LRANGE</a></li><li> <a href="LtrimCommand.html">LTRIM</a></li><li> <a href="LindexCommand.html">LINDEX</a></li><li> <a href="LsetCommand.html">LSET</a></li><li> <a href="LremCommand.html">LREM</a></li><li> <a href="LpopCommand.html">LPOP</a></li><li> <a href="LpopCommand.html">RPOP</a></li><li> <a href="BlpopCommand.html">BLPOP</a></li><li> <a href="BlpopCommand.html">BRPOP</a></li><li> <a href="RpoplpushCommand.html">RPOPLPUSH</a></li><li> <a href="SortCommand.html">SORT</a></li></ul>
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>LrangeCommand: Contents</b><br> <a href="#LRANGE _key_ _start_ _end_">LRANGE _key_ _start_ _end_</a><br> <a href="#Return value">Return value</a>
+<b>LrangeCommand: Contents</b><br> <a href="#LRANGE _key_ _start_ _end_">LRANGE _key_ _start_ _end_</a><br> <a href="#Consistency with range functions in various programming languages">Consistency with range functions in various programming languages</a><br> <a href="#Out-of-range indexes">Out-of-range indexes</a><br> <a href="#Return value">Return value</a>
</div>
<h1 class="wikiname">LrangeCommand</h1>
<div class="narrow">
#sidebar <a href="ListCommandsSidebar.html">ListCommandsSidebar</a><h1><a name="LRANGE _key_ _start_ _end_">LRANGE _key_ _start_ _end_</a></h1>
-<i>Time complexity: O(n) (with n being the length of the range)</i><blockquote>Return the specified elements of the list stored at the specifiedkey. Start and end are zero-based indexes. 0 is the first elementof the list (the list head), 1 the next element and so on.</blockquote>
-<blockquote>For example LRANGE foobar 0 2 will return the first three elementsof the list.</blockquote>
-<blockquote>_start_ and <i>end</i> can also be negative numbers indicating offsetsfrom the end of the list. For example -1 is the last element ofthe list, -2 the penultimate element and so on.</blockquote>
-<blockquote>Indexes out of range will not produce an error: if start is overthe end of the list, or start <code name="code" class="python">></code> end, an empty list is returned.If end is over the end of the list Redis will threat it just likethe last element of the list.</blockquote>
-<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>, specifically a list of elements in the specified range.
-
+<i>Time complexity: O(start+n) (with n being the length of the range and start being the start offset)</i>Return the specified elements of the list stored at the specified
+key. Start and end are zero-based indexes. 0 is the first element
+of the list (the list head), 1 the next element and so on.<br/><br/>For example LRANGE foobar 0 2 will return the first three elements
+of the list.<br/><br/><i>start</i> and <i>end</i> can also be negative numbers indicating offsets
+from the end of the list. For example -1 is the last element of
+the list, -2 the penultimate element and so on.<h2><a name="Consistency with range functions in various programming languages">Consistency with range functions in various programming languages</a></h2>Note that if you have a list of numbers from 0 to 100, LRANGE 0 10 will return
+11 elements, that is, the rightmost item is included. This <b>may or may not</b> be consistent with the
+behavior of range-related functions in your programming language of choice (think of Ruby's Range.new, Array#slice or Python's range() function).<br/><br/>LRANGE behavior is consistent with that of Tcl.<h2><a name="Out-of-range indexes">Out-of-range indexes</a></h2>Indexes out of range will not produce an error: if start is over
+the end of the list, or start <code name="code" class="python">></code> end, an empty list is returned.
+If end is over the end of the list, Redis will treat it just like
+the last element of the list.<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>, specifically a list of elements in the specified range.
</div>
</div>
<div class="narrow">
#sidebar <a href="ListCommandsSidebar.html">ListCommandsSidebar</a><h1><a name="LSET _key_ _index_ _value_">LSET _key_ _index_ _value_</a></h1>
<i>Time complexity: O(N) (with N being the length of the list)</i><blockquote>Set the list element at <i>index</i> (see LINDEX for information about the <i>index</i> argument) to the new <i>value</i>. Out of range indexes will generate an error. Note that setting the first or last element of the list is O(1).</blockquote>
-<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Status code reply</a>
-
+<blockquote>Similarly to other list commands accepting indexes, the index can be negative to access elements starting from the end of the list. So -1 is the last element, -2 is the penultimate, and so forth.</blockquote><h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Status code reply</a>
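The LSET index rules (negative offsets allowed, out-of-range is an error) can be modeled the same way (a sketch of the semantics, not Redis code):

```python
def lset(lst, index, value):
    """Model of LSET semantics: negative indexes count from the end,
    and an out-of-range index is an error rather than extending the list."""
    n = len(lst)
    if index < 0:
        index += n          # -1 is the last element, -2 the penultimate
    if not 0 <= index < n:
        raise IndexError("index out of range")
    lst[index] = value
```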
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>QuickStart: Contents</b><br> <a href="#Obtain the latest version">Obtain the latest version</a><br> <a href="#Compile">Compile</a><br> <a href="#Run the server">Run the server</a><br> <a href="#Play with the built in client">Play with the built in client</a><br> <a href="#Further reading">Further reading</a>
+<b>QuickStart: Contents</b><br> <a href="#Quick Start">Quick Start</a><br> <a href="#Obtain the latest version">Obtain the latest version</a><br> <a href="#Compile">Compile</a><br> <a href="#Run the server">Run the server</a><br> <a href="#Play with the built in client">Play with the built in client</a><br> <a href="#Further reading">Further reading</a>
</div>
<h1 class="wikiname">QuickStart</h1>
</div>
<div class="narrow">
- = Quick Start =<br/><br/>This quickstart is a five minutes howto on how to get started with Redis. For more information on Redis check <a href="http://code.google.com/p/redis/wiki/index" target="_blank">Redis Documentation Index</a>.<h2><a name="Obtain the latest version">Obtain the latest version</a></h2>The latest stable source distribution of Redis can be obtained <a href="http://code.google.com/p/redis/downloads/list" target="_blank">at this location as a tarball</a>.<br/><br/><pre class="codeblock python" name="code">
+ #sidebar <a href="RedisGuides.html">RedisGuides</a>
+<h1><a name="Quick Start">Quick Start</a></h1>This quickstart is a five-minute howto on getting started with Redis. For more information on Redis check the <a href="http://code.google.com/p/redis/wiki/index" target="_blank">Redis Documentation Index</a>.<h2><a name="Obtain the latest version">Obtain the latest version</a></h2>The latest stable source distribution of Redis can be obtained <a href="http://code.google.com/p/redis/downloads/list" target="_blank">at this location as a tarball</a>.<br/><br/><pre class="codeblock python" name="code">
$ wget http://redis.googlecode.com/files/redis-1.02.tar.gz
</pre>The unstable source code, with more features but not ready for production, can be downloaded using git:<br/><br/><pre class="codeblock python python" name="code">
$ git clone git://github.com/antirez/redis.git
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>Redis_1_2_0_Changelog: Contents</b><br> <a href="#CHANGELOG for Redis 1.1.90">CHANGELOG for Redis 1.1.90</a>
+<b>Redis_1_2_0_Changelog: Contents</b><br> <a href="#What's new in Redis 1.2">What's new in Redis 1.2</a><br> <a href="#New persistence mode: Append Only File">New persistence mode: Append Only File</a><br> <a href="#New data type: sorted sets">New data type: sorted sets</a><br> <a href="#Specialized integer objects encoding">Specialized integer objects encoding</a><br> <a href="#MSET and MSETNX">MSET and MSETNX</a><br> <a href="#Better Performances">Better Performances</a><br> <a href="#Solaris Support">Solaris Support</a><br> <a href="#Support for the new generation protocol">Support for the new generation protocol</a><br> <a href="#A few new commands about already supported data types">A few new commands about already supported data types</a><br> <a href="#Bug fixing">Bug fixing</a><br> <a href="#CHANGELOG for Redis 1.1.90">CHANGELOG for Redis 1.1.90</a>
</div>
<h1 class="wikiname">Redis_1_2_0_Changelog</h1>
</div>
<div class="narrow">
- <h1><a name="CHANGELOG for Redis 1.1.90">CHANGELOG for Redis 1.1.90</a></h1><ul><li> 2009-09-10 in-memory specialized object encoding. (antirez)</li><li> 2009-09-17 maxmemory fixed in 64 systems for values > 4GB. (antirez)</li><li> 2009-10-07 multi-bulk protocol implemented. (antriez)</li><li> 2009-10-16 MSET and MSETNX commands implemented (antirez)</li><li> 2009-10-21 SRANDMEMBER added (antirez)</li><li> 2009-10-23 Fixed compilation in mac os x snow leopard when compiling a 32 bit binary. (antirez)</li><li> 2009-10-23 New data type: Sorted sets and Z-commands (antirez)</li><li> 2009-10-26 Solaris fixed (Alan Harder)</li><li> 2009-10-29 Fixed Issue a number of open issues (antirez)</li><li> 2009-10-30 New persistence mode: append only file (antirez)</li><li> 2009-11-01 SORT STORE option (antirez)</li><li> 2009-11-03 redis-cli now accepts a -r (repeat) switch. (antirez)</li><li> 2009-11-04 masterauth option merged (Anthony Lauzon)</li><li> 2009-11-04 redis-test is now a better Redis citizen, testing everything against DB 9 and 10 and only if this DBs are empty. (antirez)</li><li> 2009-11-10 Implemented a much better lazy expiring algorithm for EXPIRE (antirez)</li><li> 2009-11-11 RPUSHLPOP (antirez from an idea of @ezmobius)</li><li> 2009-11-12 Merge git://github.com/ianxm/redis (Can't remmber what this implements, sorry)</li><li> 2009-11-17 multi-bulk reply support for redis-bench, LRANGE speed tests (antirez)</li><li> 2009-11-17 support for writev implemented. (Stefano Barbato)</li><li> 2009-11-19 debug mode (-D) in redis-bench (antirez)</li><li> 2009-11-21 SORT GET # implemented (antirez)</li><li> 2009-11-23 ae.c made modular, with support for epoll. (antirez)</li><li> 2009-11-26 background append log rebuilding (antirez)</li><li> 2009-11-28 Added support for kqueue. (Harish Mallipeddi)</li><li> 2009-11-29 SORT support for sorted sets (antirez, thanks to @tobi for the idea)</li></ul>
+ <h1><a name="What's new in Redis 1.2">What's new in Redis 1.2</a></h1><h2><a name="New persistence mode: Append Only File">New persistence mode: Append Only File</a></h2>The Append Only File is an alternative way to save your data in Redis that is fully durable! Unlike the snapshotting (default) persistence mode, where the database is saved asynchronously from time to time, the Append Only File saves every change as soon as possible in a text-only file that works like a journal. Redis will play this file back at startup, reloading the whole dataset into memory. The Redis Append Only File supports background log compaction. For more info read the <a href="AppendOnlyFileHowto.html">Append Only File HOWTO</a>.<h2><a name="New data type: sorted sets">New data type: sorted sets</a></h2>Sorted sets are collections of elements (like Sets) with an associated score (in the form of a double precision floating point number). Elements in a sorted set are taken in order, so for instance taking the greatest element is an O(1) operation, while insertion and deletion are O(log(N)). Sorted sets are implemented using a dual-ported data structure consisting of a hash table and a skip list. For more information please read the <a href="IntroductionToRedisDataTypes.html">Introduction To Redis Data Types</a>.<h2><a name="Specialized integer objects encoding">Specialized integer objects encoding</a></h2>Redis 1.2 will use less memory than Redis 1.0 for String values, and List or Set elements, that happen to be representable as 32 or 64 bit signed integers (it depends on your arch's bits for the long C type). This is totally transparent from the point of view of the user, but will save a lot of memory (30% less in datasets where there are many integers).<h2><a name="MSET and MSETNX">MSET and MSETNX</a></h2>That is, setting multiple keys in one command, atomically. 
For more information see the <a href="MsetCommand.html">MSET command</a> wiki page.<h2><a name="Better Performances">Better Performances</a></h2><ul><li> 100x faster SAVE and BGSAVE! There was a problem in the LZF lib configuration that is now resolved, with this impressive speedup as the effect. The saving child will also no longer use 100% of the CPU.</li><li> Glued output buffer and writev(). Many commands producing large outputs, like LRANGE, will now be up to 10 times faster, thanks to the new output buffer gluing algorithm and the (optional) use of the writev(2) syscall.</li><li> Support for epoll and kqueue / kevent: 10,000 clients scalability.</li><li> Much better EXPIRE support: it's now possible to work with very large sets of keys expiring in a very short time without incurring memory problems (the new algorithm expires keys in an adaptive way, so it will get more aggressive if there are a lot of expiring keys).</li></ul>
+<h2><a name="Solaris Support">Solaris Support</a></h2>Redis will now compile and work on Solaris without problems. Warning: the Solaris user base is very small, so Redis running on Solaris may not be as tested and stable as it is on Linux and Mac OS X.<h2><a name="Support for the new generation protocol">Support for the new generation protocol</a></h2><ul><li> Redis is now able to accept commands in a new, fully binary safe way: with the new protocol keys are binary safe, not only values, and there is no distinction between bulk commands and inline commands. The new protocol is currently used only for MSET and MSETNX, but at some point it will hopefully replace the old one. See the Multi Bulk Commands section in the <a href="ProtocolSpecification.html">Redis Protocol Specification</a> for more information.</li></ul>
+<h2><a name="A few new commands about already supported data types">A few new commands about already supported data types</a></h2><ul><li> <a href="SrandmemberCommand.html">SRANDMEMBER</a></li><li> The <a href="SortCommand.html">SORT command</a> now supports the <b>STORE</b> and <b>GET #</b> forms; the first can be used to save sorted lists, sets or sorted sets into keys for caching. Check the manual page for more information about the <b>GET #</b> form.</li><li> The new <a href="RpoplpushCommand.html">RPOPLPUSH command</a> can do many interesting tricks, and a few of these are documented in the wiki page of the command.</li></ul>
+<h2><a name="Bug fixing">Bug fixing</a></h2>Of course many bugs are now fixed and, I bet, a few others introduced: this is how software works, after all, so make sure to report issues on the Redis mailing list or in the Google Code issue tracker.<br/><br/>Enjoy!
+antirez<h1><a name="CHANGELOG for Redis 1.1.90">CHANGELOG for Redis 1.1.90</a></h1><ul><li> 2009-09-10 in-memory specialized object encoding. (antirez)</li><li> 2009-09-17 maxmemory fixed in 64 bit systems for values > 4GB. (antirez)</li><li> 2009-10-07 multi-bulk protocol implemented. (antirez)</li><li> 2009-10-16 MSET and MSETNX commands implemented (antirez)</li><li> 2009-10-21 SRANDMEMBER added (antirez)</li><li> 2009-10-23 Fixed compilation on Mac OS X Snow Leopard when compiling a 32 bit binary. (antirez)</li><li> 2009-10-23 New data type: Sorted sets and Z-commands (antirez)</li><li> 2009-10-26 Solaris fixed (Alan Harder)</li><li> 2009-10-29 Fixed a number of open issues (antirez)</li><li> 2009-10-30 New persistence mode: append only file (antirez)</li><li> 2009-11-01 SORT STORE option (antirez)</li><li> 2009-11-03 redis-cli now accepts a -r (repeat) switch. (antirez)</li><li> 2009-11-04 masterauth option merged (Anthony Lauzon)</li><li> 2009-11-04 redis-test is now a better Redis citizen, testing everything against DB 9 and 10 and only if these DBs are empty. (antirez)</li><li> 2009-11-10 Implemented a much better lazy expiring algorithm for EXPIRE (antirez)</li><li> 2009-11-11 RPOPLPUSH (antirez from an idea of @ezmobius)</li><li> 2009-11-12 Merge git://github.com/ianxm/redis (Can't remember what this implements, sorry)</li><li> 2009-11-17 multi-bulk reply support for redis-bench, LRANGE speed tests (antirez)</li><li> 2009-11-17 support for writev implemented. (Stefano Barbato)</li><li> 2009-11-19 debug mode (-D) in redis-bench (antirez)</li><li> 2009-11-21 SORT GET # implemented (antirez)</li><li> 2009-11-23 ae.c made modular, with support for epoll. (antirez)</li><li> 2009-11-26 background append log rebuilding (antirez)</li><li> 2009-11-28 Added support for kqueue. (Harish Mallipeddi)</li><li> 2009-11-29 SORT support for sorted sets (antirez, thanks to @tobi for the idea)</li></ul>
</div>
</div>
</div>
<div class="narrow">
- <h1><a name="Redis Replication Howto">Redis Replication Howto</a></h1><h2><a name="General Information">General Information</a></h2>Redis replication is a very simple to use and configure master-slave replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:<br/><br/><ul><li> A master can have multiple slaves.</li><li> Slaves are able to accept other slaves connections, so instead to connect a number of slaves against the same master it is also possible to connect some of the slaves to other slaves in a graph-alike structure.</li><li> Redis replication is non-blocking on the master side, this means that the master will continue to serve queries while one or more slaves are performing the first synchronization. Instead replication is blocking on the slave side: while the slave is performing the first synchronization it can't reply to queries.</li><li> Replications can be used both for scalability, in order to have multiple slaves for read-only queries (for example heavy <a href="SortCommand.html">SORT</a> operations can be launched against slaves), or simply for data redundancy.</li><li> It is possible to use replication to avoid the saving process on the master side: just configure your master redis.conf in order to avoid saving at all (just comment al the "save" directives), then connect a slave configured to save from time to time.</li></ul>
-<h2><a name="How Redis replication works">How Redis replication works</a></h2>In order to start the replication, or after the connection closes in order resynchronize with the master, the client connects to the master and issues the SYNC command.<br/><br/>The master starts a background saving, and at the same time starts to collect all the new commands received that had the effect to modify the dataset. When the background saving completed the master starts the transfer of the database file to the slave, that saves it on disk, and then load it in memory. At this point the master starts to send all the accumulated commands, and all the new commands received from clients, that had the effect of a dataset modification.<br/><br/>You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the SYNC command. You'll see a bulk transfer and then every command received by the master will be re-issued in the telnet session.<br/><br/>Slaves are able to automatically reconnect when the master <code name="code" class="python"><-></code> slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests it performs a single background saving in order to serve all them.<h2><a name="Configuration">Configuration</a></h2>To configure replication is trivial: just add the following line to the slave configuration file:
+ #sidebar <a href="RedisGuides.html">RedisGuides</a>
+<h1><a name="Redis Replication Howto">Redis Replication Howto</a></h1><h2><a name="General Information">General Information</a></h2>Redis replication is a master-slave replication that is very simple to use and configure, and allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:<br/><br/><ul><li> A master can have multiple slaves.</li><li> Slaves are able to accept connections from other slaves, so instead of connecting a number of slaves to the same master it is also possible to connect some of the slaves to other slaves in a graph-like structure.</li><li> Redis replication is non-blocking on the master side: this means that the master will continue to serve queries while one or more slaves are performing the first synchronization. Replication is instead blocking on the slave side: while the slave is performing the first synchronization it can't reply to queries.</li><li> Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example heavy <a href="SortCommand.html">SORT</a> operations can be launched against slaves), and for data redundancy.</li><li> It is possible to use replication to avoid the saving process on the master side: just configure your master redis.conf to avoid saving at all (comment out all the "save" directives), then connect a slave configured to save from time to time.</li></ul>
+<h2><a name="How Redis replication works">How Redis replication works</a></h2>In order to start the replication, or to resynchronize with the master after the connection closes, the slave connects to the master and issues the SYNC command.<br/><br/>The master starts a background saving, and at the same time starts to collect all the new commands received that had the effect of modifying the dataset. When the background saving is complete, the master transfers the database file to the slave, which saves it on disk and then loads it into memory. At this point the master starts to send the slave all the accumulated commands, and all the new commands received from clients that had the effect of a dataset modification, as a stream of commands, in the same format as the Redis protocol itself.<br/><br/>You can try it yourself via telnet. Connect to the Redis port while the server is doing some work and issue the SYNC command. You'll see a bulk transfer and then every command received by the master will be re-issued in the telnet session.<br/><br/>Slaves are able to automatically reconnect when the master <code name="code" class="python"><-></code> slave link goes down for some reason. If the master receives multiple concurrent slave synchronization requests it performs a single background saving in order to serve all of them.<h2><a name="Configuration">Configuration</a></h2>Configuring replication is trivial: just add the following line to the slave configuration file:
<pre class="codeblock python" name="code">
slaveof 192.168.1.1 6379
</pre>
Of course you need to replace 192.168.1.1 6379 with your master ip address (or hostname) and port.
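The command stream the master sends after the bulk transfer uses the same multi-bulk format as the regular Redis protocol. A minimal Python sketch of that encoding (illustrative only, not code from the Redis source; it assumes ASCII arguments so character count equals byte count):

```python
def encode_command(*args):
    # Redis multi-bulk format: "*<argc>" CRLF, then for each argument
    # "$<length>" CRLF followed by the argument itself and CRLF.
    parts = ["*%d\r\n" % len(args)]
    for arg in args:
        s = str(arg)
        parts.append("$%d\r\n%s\r\n" % (len(s), s))
    return "".join(parts)

print(repr(encode_command("SET", "foo", "bar")))
```

This is exactly the shape of the traffic you would see re-issued in the telnet session described above.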
+
</div>
</div>
<div class="narrow">
#sidebar <a href="ControlCommandsSidebar.html">ControlCommandsSidebar</a><h3><a name="SAVE">SAVE</a></h3>
-<blockquote>Save the DB on disk. The server hangs while the saving is notcompleted, no connection is served in the meanwhile. An OK codeis returned when the DB was fully stored in disk.</blockquote>
+<blockquote>Save the whole dataset on disk (this means that all the databases are saved, as well as keys with an EXPIRE set; the expire is preserved). The server hangs while the saving is not completed; no connection is served in the meanwhile. An OK code is returned when the DB was fully stored on disk.</blockquote>
+<blockquote>The background variant of this command is <a href="BgsaveCommand.html">BGSAVE</a> that is able to perform the saving in the background while the server continues serving other clients.</blockquote>
<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Status code reply</a>
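The fork-based idea behind the background variant can be sketched in a few lines of Python (a toy model only, not the server's actual rdb saving code; it serializes JSON instead of the rdb binary format and requires a Unix-like OS for os.fork):

```python
import json
import os
import tempfile

def bgsave(dataset, path):
    # Fork a child that serializes a point-in-time snapshot of the
    # dataset; the parent returns immediately and keeps serving.
    pid = os.fork()
    if pid == 0:                      # child: owns a copy of memory at fork time
        with open(path, "w") as f:
            json.dump(dataset, f)
        os._exit(0)                   # exit without running parent cleanup
    return pid                        # parent: child pid, to wait on later

dataset = {"counter": 5}
path = os.path.join(tempfile.mkdtemp(), "dump.json")
pid = bgsave(dataset, path)
dataset["counter"] = 6                # mutation after the fork...
os.waitpid(pid, 0)
with open(path) as f:
    snapshot = json.load(f)
print(snapshot)                       # ...is not visible in the snapshot
```

The key property shown here is that writes performed after the fork do not leak into the file being saved, which is what makes the background save a consistent snapshot.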
-
</div>
</div>
<div class="narrow">
#sidebar <a href="SetCommandsSidebar.html">SetCommandsSidebar</a><h1><a name="SMEMBERS _key_">SMEMBERS _key_</a></h1>
-<i>Time complexity O(N)</i><blockquote>Return all the members (elements) of the set value stored at <i>key</i>. Thisis just syntax glue for <a href="SintersectCommand.html">SINTERSECT</a>.</blockquote>
+<i>Time complexity O(N)</i><blockquote>Return all the members (elements) of the set value stored at <i>key</i>. This is just syntax glue for <a href="SintersectCommand.html">SINTER</a>.</blockquote>
<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>
-
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>SortCommand: Contents</b><br> <a href="#Sorting by external keys">Sorting by external keys</a><br> <a href="#Retrieving external keys">Retrieving external keys</a><br> <a href="#Storing the result of a SORT operation">Storing the result of a SORT operation</a><br> <a href="#Return value">Return value</a>
+<b>SortCommand: Contents</b><br> <a href="#Sorting by external keys">Sorting by external keys</a><br> <a href="#Retrieving external keys">Retrieving external keys</a><br> <a href="#Storing the result of a SORT operation">Storing the result of a SORT operation</a><br> <a href="#SORT and Hashes: BY and GET by hash field">SORT and Hashes: BY and GET by hash field</a><br> <a href="#Return value">Return value</a>
</div>
<h1 class="wikiname">SortCommand</h1>
SORT mylist BY weight_* STORE resultkey
</pre><blockquote>An interesting pattern using SORT ... STORE consists in associatingan <a href="ExpireCommand.html">EXPIRE</a> timeout to the resulting key so that inapplications where the result of a sort operation can be cached forsome time other clients will use the cached list instead to call SORTfor every request. When the key will timeout an updated version ofthe cache can be created using SORT ... STORE again.</blockquote>
<blockquote>Note that implementing this pattern it is important to avoid that multipleclients will try to rebuild the cached version of the cacheat the same time, so some form of locking should be implemented(for instance using <a href="SetnxCommand.html">SETNX</a>).</blockquote>
-<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>, specifically a list of sorted elements.
-
+<h2><a name="SORT and Hashes: BY and GET by hash field">SORT and Hashes: BY and GET by hash field</a></h2>
+<blockquote>It's possible to use BY and GET options against Hash fields using the following syntax:</blockquote><pre class="codeblock python" name="code">
+SORT mylist BY weight_*->fieldname
+SORT mylist GET object_*->fieldname
+</pre>
+<blockquote>The two-character string -> is used to signal the name of the Hash field. The key is substituted as documented above for SORT BY and GET against normal keys, and the Hash stored at the resulting key is accessed in order to retrieve the specified field.</blockquote><h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>, specifically a list of sorted elements.
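The substitution rule can be illustrated with a small helper (a hypothetical function written for this page, not the server's implementation): split an optional -> suffix off the pattern, then replace * with the element being sorted.

```python
def resolve_pattern(pattern, element):
    # Split "key_pattern->field" into its two halves; sep is ""
    # when no "->" is present (plain key lookup, no hash field).
    key_pattern, sep, field = pattern.partition("->")
    key = key_pattern.replace("*", element, 1)
    return key, (field if sep else None)

print(resolve_pattern("weight_*->rank", "apple"))   # hash field lookup
print(resolve_pattern("weight_*", "apple"))         # plain key lookup
```

For element "apple", "weight_*->rank" resolves to key "weight_apple" and field "rank", while "weight_*" resolves to the plain key "weight_apple".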
</div>
</div>
</div>
<div class="narrow">
- == Sorted Set Commands ==<br/><br/><ul><li> <a href="ZaddCommand.html">ZADD</a></li><li> <a href="ZremCommand.html">ZREM</a></li><li> <a href="ZincrbyCommand.html">ZINCRBY</a></li><li> <a href="ZrangeCommand.html">ZRANGE</a></li><li> <a href="ZrangeCommand.html">ZREVRANGE</a></li><li> <a href="ZrangebyscoreCommand.html">ZRANGEBYSCORE</a></li><li> <a href="ZcardCommand.html">ZCARD</a></li><li> <a href="ZscoreCommand.html">ZSCORE</a></li><li> <a href="SortCommand.html">SORT</a></li></ul>
+ == Sorted Set Commands ==<br/><br/><ul><li> <a href="ZaddCommand.html">ZADD</a></li><li> <a href="ZremCommand.html">ZREM</a></li><li> <a href="ZincrbyCommand.html">ZINCRBY</a></li><li> <a href="ZrankCommand.html">ZRANK</a></li><li> <a href="ZrankCommand.html">ZREVRANK</a></li><li> <a href="ZrangeCommand.html">ZRANGE</a></li><li> <a href="ZrangeCommand.html">ZREVRANGE</a></li><li> <a href="ZrangebyscoreCommand.html">ZRANGEBYSCORE</a></li><li> <a href="ZremrangebyrankCommand.html">ZREMRANGEBYRANK</a></li><li> <a href="ZremrangebyscoreCommand.html">ZREMRANGEBYSCORE</a> </li><li> <a href="ZcardCommand.html">ZCARD</a></li><li> <a href="ZscoreCommand.html">ZSCORE</a></li><li> <a href="ZunionCommand.html">ZUNION / ZINTER</a></li><li> <a href="SortCommand.html">SORT</a></li></ul>
</div>
</div>
</div>
<div class="narrow">
- <h1><a name="Redis Sponsorship History">Redis Sponsorship History</a></h1>This is a list of companies that sponsorship Redis developments, with details about the sponsored features. <b>Thanks for helping the project!</b>.<br/><br/>If your company is considering a sponsorship please read the <a href="SponsorshipHowto.html">How to Sponsor</a> page.<br/><br/><ul><li> <a href="http://citrusbyte.com" target="_blank"><img src="http://redis.googlecode.com/files/citrusbyte_logo.png" border="0"></img></a><br></br> 18 Dec 2009, part of Virtual Memory.</li><li> <a href="http://www.hitmeister.de/" target="_blank"><img src="http://redis.googlecode.com/files/logo_hitmeister_2.png" border="0"></img></a><br></br> 15 Dec 2009, part of Redis Cluster.</li><li> <a href="http://engineyard.com" target="_blank"><img src="http://redis.googlecode.com/files/engine_yard_logo.jpg" border="0"></img></a><br></br> 13 Dec 2009, for blocking POP (BLPOP) and part of the Virtual Memory implementation.</li></ul>
+ <h1><a name="Redis Sponsorship History">Redis Sponsorship History</a></h1><b>Important notice: since 15 March 2010 I joined VMware, which is sponsoring all my work on Redis.</b> Thank you to all the companies and people donating in the past. No further donations are accepted.<br/><br/>This is a list of companies that sponsored Redis development, with details about the sponsored features. <b>Thanks for helping the project!</b><br/><br/><ul><li> <a href="http://www.linode.com/?r=5cf1759a154c981368394fca9918970f60b6a2b3" target="_blank"><img src="http://www.linode.com/images/linode_logo10.gif" border="0"></img></a><br></br> 15 January 2010, provided Virtual Machines for Redis testing in a virtualized environment.</li><li> <a href="https://manage.slicehost.com/customers/new?referrer=d6272cc9e5f38cd2513e760e4d22bd9d" target="_blank"><img src="http://wiki.slicehost.com/lib/exe/fetch.php?w=&h=&cache=cache&media=slicehost.gif" border="0"></img></a><br></br> 14 January 2010, provided Virtual Machines for Redis testing in a virtualized environment.</li><li> <a href="http://citrusbyte.com" target="_blank"><img src="http://redis.googlecode.com/files/citrusbyte_logo.png" border="0"></img></a><br></br> 18 Dec 2009, part of Virtual Memory.</li><li> <a href="http://www.hitmeister.de/" target="_blank"><img src="http://redis.googlecode.com/files/logo_hitmeister_2.png" border="0"></img></a><br></br> 15 Dec 2009, part of Redis Cluster.</li><li> <a href="http://engineyard.com" target="_blank"><img src="http://redis.googlecode.com/files/engine_yard_logo.jpg" border="0"></img></a><br></br> 13 Dec 2009, for blocking POP (BLPOP) and part of the Virtual Memory implementation.</li></ul>
<b>Also thanks to the following people or organizations that donated to the Project:</b>
<ul><li> Emil Vladev</li><li> <a href="http://bradjasper.com/" target="_blank">Brad Jasper</a></li><li> <a href="http://www.mrkris.com/" target="_blank">Mrkris</a></li></ul>
</div>
+++ /dev/null
-
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
-<html>
- <head>
- <link type="text/css" rel="stylesheet" href="style.css" />
- </head>
- <body>
- <div id="page">
-
- <div id='header'>
- <a href="index.html">
- <img style="border:none" alt="Redis Documentation" src="redis.png">
- </a>
- </div>
-
- <div id="pagecontent">
- <div class="index">
-<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>SponsorshipHowto: Contents</b><br> <a href="#Other donations">Other donations</a>
- </div>
-
- <h1 class="wikiname">SponsorshipHowto</h1>
-
- <div class="summary">
-
- </div>
-
- <div class="narrow">
- = How to sponsor my work on Redis =<br/><br/>I'm accepting sponsorships for Redis development, the idea is that a company using Redis and willing to donate some money can receive something back: visibility in the Redis site, and prioritization of features planned in the TODO list that are somewhat more important for this company compared to other features.<br/><br/>In the last year I spent 50% of my time working on Redis. At the same time Redis is released under the very liberal BSD license, this is very important for users as it prevents <a href="http://monty-says.blogspot.com/2009/10/importance-of-license-model-of-mysql-or.html" target="_blank">that in the future the project will be killed</a>, but at the same time it's not possible to build a business model selling licenses like it happens for MySQL. The alternative is to run a consultancy company, but this means to use more time to work with customers than working to the Redis code base itself, or sponsorship, that I think is the best option currently to ensure fast development of the project.<br/><br/>So, if you are considering a donation, thank you! This is a set of simple rules I developed in order to make sure I'm fair with everybody willing to help the project:<br/><br/><ul><li> 1. Every company can donate any amount of money, even 10$, in order to support Redis development.</li><li> 2. Every company donating an amount equal or greater than 1000$ will be featured in the home page for at least 6 months, and anyway for all the time the sponsored feature takes to reach a <b>stable release</b> of Redis.</li><li> 3. Every company donating at least 100$ will anyway be featured in the "Sponsors" page forever, this page is linked near to the logos of the current sponsors in the front page (the logos about point 2 of this list).</li><li> 4. A sponsoring company can donate for sponsorship of a feature already in the TODO list. 
If a feature not planned is needed we should first get in touch, discuss if this is a good idea, put it in the TODO list, and then the sponsorship can start, but I've to be genuinely convinced this feature will be good and of general interest ;)</li><li> 5. Not really a sponsorship/donation, but in rare case of a vertical, self-contained feature, I could develop it as a patch for the current stable Redis distribution for a "donation" proportional to the work needed to develop the feature, but in order to have the patch for the next release of Redis there will be to donate again for the porting work and so forth.</li><li> 6. Features for which I receive a good sponsorship (proportionally to the work required to implement the sponsored feature) are prioritized and will get developed faster than other features, possibly changing the development roadmap.</li><li> 7. To sponsor a specific feature is not a must, a company can just donate to the project as a whole.</li></ul>
-If you want to get in touch with me about this issues please drop me an email to my gmail account (username is antirez) or direct-message me @antirez on Twitter. Thanks in advance for the help!<h1><a name="Other donations">Other donations</a></h1>If you just feel like donating a small amount to Redis the simplest way is to use paypal, my paypal address is <b>antirez@invece.org</b>. Please specify in the donation if you don't like to have your name / company name published in the donations history (the amount will not be published anyway).
- </div>
-
- </div>
- </div>
- </body>
-</html>
-
</div>
<div class="narrow">
- == String Commands ==<br/><br/><ul><li> <a href="SetCommand.html">SET</a></li><li> <a href="GetCommand.html">GET</a></li><li> <a href="GetsetCommand.html">GETSET</a></li><li> <a href="MgetCommand.html">MGET</a></li><li> <a href="SetnxCommand.html">SETNX</a></li><li> <a href="MsetCommand.html">MSET</a></li><li> <a href="MsetCommand.html">MSETNX</a></li><li> <a href="IncrCommand.html">INCR</a></li><li> <a href="IncrCommand.html">INCRBY</a></li><li> <a href="IncrCommand.html">DECR</a></li><li> <a href="IncrCommand.html">DECRBY</a></li></ul>
+ == String Commands ==<br/><br/><ul><li> <a href="SetCommand.html">SET</a></li><li> <a href="GetCommand.html">GET</a></li><li> <a href="GetsetCommand.html">GETSET</a></li><li> <a href="MgetCommand.html">MGET</a></li><li> <a href="SetnxCommand.html">SETNX</a></li><li> <a href="SetexCommand.html">SETEX</a></li><li> <a href="MsetCommand.html">MSET</a></li><li> <a href="MsetCommand.html">MSETNX</a></li><li> <a href="IncrCommand.html">INCR</a></li><li> <a href="IncrCommand.html">INCRBY</a></li><li> <a href="IncrCommand.html">DECR</a></li><li> <a href="IncrCommand.html">DECRBY</a></li><li> <a href="AppendCommand.html">APPEND</a></li><li> <a href="SubstrCommand.html">SUBSTR</a></li></ul>
</div>
</div>
"string" if the key contains a String value
"list" if the key contains a List value
"set" if the key contains a Set value
+"zset" if the key contains a Sorted Set value
+"hash" if the key contains a Hash value
</pre><h2><a name="See also">See also</a></h2>
<ul><li> <a href="DataTypes.html">Redis Data Types</a></li></ul>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>ZrangebyscoreCommand: Contents</b><br> <a href="#ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis >">ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis ></a><br> <a href="#Return value">Return value</a>
+<b>ZrangebyscoreCommand: Contents</b><br> <a href="#ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis >">ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis ></a><br> <a href="#ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` `[`WITHSCORES`]` (Redis >">ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` `[`WITHSCORES`]` (Redis ></a><br> <a href="#Return value">Return value</a>
</div>
<h1 class="wikiname">ZrangebyscoreCommand</h1>
<div class="narrow">
#sidebar <a href="SortedSetCommandsSidebar.html">SortedSetCommandsSidebar</a><h1><a name="ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis >">ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` (Redis ></a></h1> 1.1) =
+<h1><a name="ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` `[`WITHSCORES`]` (Redis >">ZRANGEBYSCORE _key_ _min_ _max_ `[`LIMIT _offset_ _count_`]` `[`WITHSCORES`]` (Redis ></a></h1> 1.3.4) =
<i>Time complexity: O(log(N))+O(M) with N being the number of elements in the sorted set and M the number of elements returned by the command, so if M is constant (for instance you always ask for the first ten elements with LIMIT) you can consider it O(log(N))</i><blockquote>Return the all the elements in the sorted set at key with a score between_min_ and <i>max</i> (including elements with score equal to min or max).</blockquote>
<blockquote>The elements having the same score are returned sorted lexicographically asASCII strings (this follows from a property of Redis sorted sets and does notinvolve further computation).</blockquote>
<blockquote>Using the optional LIMIT it's possible to get only a range of the matchingelements in an SQL-alike way. Note that if <i>offset</i> is large the commandsneeds to traverse the list for <i>offset</i> elements and this adds up to theO(M) figure.</blockquote>
<h2><a name="Return value">Return value</a></h2><a href="ReplyTypes.html">Multi bulk reply</a>, specifically a list of elements in the specified score range.
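The O(log(N))+O(M) figure can be made concrete with a toy model (an assumed Python sketch for illustration; the server actually uses a skiplist, not a sorted array): binary-search the first element with score >= min, then walk forward until the score exceeds max.

```python
import bisect

def zrangebyscore(zset, mn, mx, offset=0, count=None):
    # zset: list of (score, member) pairs kept sorted.
    # O(log N) seek to the first score >= mn...
    i = bisect.bisect_left(zset, (mn,))
    members = []
    # ...then an O(M) forward walk over the matching range.
    for score, member in zset[i:]:
        if score > mx:
            break
        members.append(member)
    members = members[offset:]        # LIMIT offset still traverses these
    return members if count is None else members[:count]

z = sorted([(1, "a"), (2, "b"), (2, "c"), (3, "d"), (5, "e")])
print(zrangebyscore(z, 2, 3))
print(zrangebyscore(z, 1, 5, offset=1, count=2))
```

Note how the offset elements are collected and then discarded, which is why a large <i>offset</i> adds to the O(M) cost as described above.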
-
</div>
</div>
<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
-<b>index: Contents</b><br> <a href="#Redis Documentation">Redis Documentation</a><br> <a href="#HOWTOs about selected features">HOWTOs about selected features</a><br> <a href="#Hacking">Hacking</a><br> <a href="#Videos">Videos</a>
+<b>index: Contents</b><br> <a href="#HOWTOs about selected features">HOWTOs about selected features</a><br> <a href="#Hacking">Hacking</a><br> <a href="#Videos">Videos</a>
</div>
<h1 class="wikiname">index</h1>
</div>
<div class="narrow">
- <h1><a name="Redis Documentation">Redis Documentation</a></h1>Hello! The followings are pointers to different parts of the Redis Documentation.<br/><br/><ul><li> <a href="README.html">The README</a> is the best starting point to know more about the project.</li><li> <a href="QuickStart.html">This short Quick Start</a> provides a five minutes step-by-step istructions on how to download, compile, run and test the basic workings of a Redis server.</li><li> <a href="CommandReference.html">The command reference</a> is a description of all the Redis commands with links to command specific pages.</li><li> <a href="TwitterAlikeExample.html">This is a tuturial about creating a Twitter clone using *only* Redis as database, no relational DB at all is used</a>, it is a good start to understand the key-value database paradigm.</li><li> <a href="IntroductionToRedisDataTypes.html">A Fifteen Minutes Introduction to the Redis Data Types</a> explains how Redis data types work and the basic patterns of working with Redis.</li><li> <a href="Features.html">The features page</a> (currently in draft) is a good start to understand the strength and limitations of Redis.</li><li> <a href="Benchmarks.html">The benchmark page</a> is about the speed performances of Redis.</li><li> <a href="FAQ.html">Our FAQ</a> contains of course some answers to common questions about Redis.</li><li> <b><a href="SponsorshipHowto.html">How to donate</a></b> to the project sponsoring features.</li></ul>
-<h1><a name="HOWTOs about selected features">HOWTOs about selected features</a></h1><ul><li> <a href="ReplicationHowto.html">The Redis Replication HOWTO</a> is what you need to read in order to understand how Redis master <code name="code" class="python"><-></code> slave replication works.</li><li> <a href="AppendOnlyFileHowto.html">The Append Only File HOWTO</a> explains how the alternative Redis durability mode works. AOF is an alternative to snapshotting on disk from time to time (the default).</li></ul>
+ = Redis Documentation =<br/><br/><a href="http://pyha.ru/wiki/index.php?title=Redis:index" target="_blank">Russian Translation</a>Hello! The following are pointers to different parts of the Redis Documentation.<br/><br/><ul><li> New! You can now <a href="http://try.redis-db.com" target="_blank">try Redis directly in your browser!</a></li><li> <a href="README.html">The README</a> is the best starting point to know more about the project.</li><li> <a href="QuickStart.html">This short Quick Start</a> provides five-minute step-by-step instructions on how to download, compile, run and test the basic workings of a Redis server.</li><li> <a href="CommandReference.html">The command reference</a> is a description of all the Redis commands with links to command specific pages. You can also download the <a href="http://go2.wordpress.com/?id=725X1342&site=masonoise.wordpress.com&url=http%3A%2F%2Fmasonoise.files.wordpress.com%2F2010%2F03%2Fredis-cheatsheet-v1.pdf" target="_blank">Redis Commands Cheat-Sheet</a> provided by Mason Jones (btw some commands may be missing, the primary source is the wiki).</li><li> <a href="TwitterAlikeExample.html">This is a tutorial about creating a Twitter clone using *only* Redis as database, no relational DB at all is used</a>; it is a good start to understand the key-value database paradigm.</li><li> <a href="IntroductionToRedisDataTypes.html">A Fifteen Minutes Introduction to the Redis Data Types</a> explains how Redis data types work and the basic patterns of working with Redis.</li><li> <a href="http://simonwillison.net/static/2010/redis-tutorial/" target="_blank">The Simon Willison Redis Tutorial</a> is a <b>must read</b>: very good documentation where you will find a lot of real world ideas and use cases.</li><li> <a href="Features.html">The features page</a> (currently in draft) is a good start to understand the strengths and limitations of Redis.</li><li> <a href="Benchmarks.html">The benchmark page</a> is about the speed performance of Redis.</li><li> <a href="FAQ.html">Our FAQ</a> contains of course some answers to common questions about Redis.</li><li> <a href="http://www.rediscookbook.org/" target="_blank">The Redis Cookbook</a> is a collaborative effort to provide some good recipes ;)</li></ul>
+<h1><a name="HOWTOs about selected features">HOWTOs about selected features</a></h1><ul><li> <a href="ReplicationHowto.html">The Redis Replication HOWTO</a> is what you need to read in order to understand how Redis master <code name="code" class="python"><-></code> slave replication works.</li><li> <a href="AppendOnlyFileHowto.html">The Append Only File HOWTO</a> explains how the alternative Redis durability mode works. AOF is an alternative to snapshotting on disk from time to time (the default).</li><li> <a href="VirtualMemoryUserGuide.html">Virtual Memory User Guide</a>. A simple to understand guide about using and configuring the Redis Virtual Memory.</li></ul>
<h1><a name="Hacking">Hacking</a></h1>
<ul><li> <a href="ProtocolSpecification.html">The Protocol Specification</a> is all you need in order to implement a Redis client library for a missing language. PHP, Python, Ruby and Erlang are already supported.</li></ul>
+<ul><li> Look at <a href="RedisInternals.html">Redis Internals</a> if you are interested in the implementation details of the Redis server.</li></ul>
<h1><a name="Videos">Videos</a></h1><ul><li> <a href="http://mwrc2009.confreaks.com/13-mar-2009-19-24-redis-key-value-nirvana-ezra-zygmuntowicz.html" target="_blank">watch the Ezra Zygmuntowicz talk about Redis</a> to know the most important Redis ideas in few minutes.</li></ul>
</div>
+++ /dev/null
-# Redis configuration file example
-
-# Note on units: when memory size is needed, it is possible to specifiy
-# it in the usual form of 1k 5GB 4M and so forth:
-#
-# 1k => 1000 bytes
-# 1kb => 1024 bytes
-# 1m => 1000000 bytes
-# 1mb => 1024*1024 bytes
-# 1g => 1000000000 bytes
-# 1gb => 1024*1024*1024 bytes
-#
-# units are case insensitive so 1GB 1Gb 1gB are all the same.
-
-# By default Redis does not run as a daemon. Use 'yes' if you need it.
-# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
-daemonize no
-
-# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
-# default. You can specify a custom pid file location here.
-pidfile redis.pid
-
-# Accept connections on the specified port, default is 6379
-port 6379
-
-# If you want you can bind a single interface, if the bind option is not
-# specified all the interfaces will listen for incoming connections.
-#
-# bind 127.0.0.1
-
-# Close the connection after a client is idle for N seconds (0 to disable)
-timeout 300
-
-# Set server verbosity to 'debug'
-# it can be one of:
-# debug (a lot of information, useful for development/testing)
-# verbose (many rarely useful info, but not a mess like the debug level)
-# notice (moderately verbose, what you want in production probably)
-# warning (only very important / critical messages are logged)
-loglevel verbose
-
-# Specify the log file name. Also 'stdout' can be used to force
-# Redis to log on the standard output. Note that if you use standard
-# output for logging but daemonize, logs will be sent to /dev/null
-logfile stdout
-
-# Set the number of databases. The default database is DB 0, you can select
-# a different one on a per-connection basis using SELECT <dbid> where
-# dbid is a number between 0 and 'databases'-1
-databases 16
-
-################################ SNAPSHOTTING #################################
-#
-# Save the DB on disk:
-#
-# save <seconds> <changes>
-#
-# Will save the DB if both the given number of seconds and the given
-# number of write operations against the DB occurred.
-#
-# In the example below the behaviour will be to save:
-# after 900 sec (15 min) if at least 1 key changed
-# after 300 sec (5 min) if at least 10 keys changed
-# after 60 sec if at least 10000 keys changed
-#
-# Note: you can disable saving at all commenting all the "save" lines.
-
-save 900 1
-save 300 10
-save 60 10000
-
-# Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
-# If you want to save some CPU in the saving child set it to 'no' but
-# the dataset will likely be bigger if you have compressible values or keys.
-rdbcompression yes
-
-# The filename where to dump the DB
-dbfilename dump.rdb
-
-# The working directory.
-#
-# The DB will be written inside this directory, with the filename specified
-# above using the 'dbfilename' configuration directive.
-#
-# Also the Append Only File will be created inside this directory.
-#
-# Note that you must specify a directory here, not a file name.
-dir ./test/tmp
-
-################################# REPLICATION #################################
-
-# Master-Slave replication. Use slaveof to make a Redis instance a copy of
-# another Redis server. Note that the configuration is local to the slave
-# so for example it is possible to configure the slave to save the DB with a
-# different interval, or to listen to another port, and so on.
-#
-# slaveof <masterip> <masterport>
-
-# If the master is password protected (using the "requirepass" configuration
-# directive below) it is possible to tell the slave to authenticate before
-# starting the replication synchronization process, otherwise the master will
-# refuse the slave request.
-#
-# masterauth <master-password>
-
-################################## SECURITY ###################################
-
-# Require clients to issue AUTH <PASSWORD> before processing any other
-# commands. This might be useful in environments in which you do not trust
-# others with access to the host running redis-server.
-#
-# This should stay commented out for backward compatibility and because most
-# people do not need auth (e.g. they run their own servers).
-#
-# Warning: since Redis is pretty fast an outside user can try up to
-# 150k passwords per second against a good box. This means that you should
-# use a very strong password otherwise it will be very easy to break.
-#
-# requirepass foobared
-
-################################### LIMITS ####################################
-
-# Set the max number of connected clients at the same time. By default there
-# is no limit, and it's up to the number of file descriptors the Redis process
-# is able to open. The special value '0' means no limits.
-# Once the limit is reached Redis will close all new connections, sending
-# an error 'max number of clients reached'.
-#
-# maxclients 128
-
-# Don't use more memory than the specified amount of bytes.
-# When the memory limit is reached Redis will try to remove keys with an
-# EXPIRE set. It will start by freeing keys that are going to expire
-# soon, preserving keys with a longer time to live.
-# Redis will also try to remove objects from free lists if possible.
-#
-# If all this fails, Redis will start to reply with errors to commands
-# that will use more memory, like SET, LPUSH, and so on, and will continue
-# to reply to most read-only commands like GET.
-#
-# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
-# 'state' server or cache, not as a real DB. When Redis is used as a real
-# database the memory usage will grow over the weeks, it will be obvious if
-# it is going to use too much memory in the long run, and you'll have the time
-# to upgrade. With maxmemory after the limit is reached you'll start to get
-# errors for write operations, and this may even lead to DB inconsistency.
-#
-# maxmemory <bytes>
-
-############################## APPEND ONLY MODE ###############################
-
-# By default Redis asynchronously dumps the dataset on disk. If you can live
-# with the idea that the latest records will be lost if something like a crash
-# happens this is the preferred way to run Redis. If instead you care a lot
-# about your data and can't afford that even a single record gets lost you should
-# enable the append only mode: when this mode is enabled Redis will append
-# every write operation received in the file appendonly.aof. This file will
-# be read on startup in order to rebuild the full dataset in memory.
-#
-# Note that you can have both the async dumps and the append only file if you
-# like (you have to comment the "save" statements above to disable the dumps).
-# Still if append only mode is enabled Redis will load the data from the
-# log file at startup ignoring the dump.rdb file.
-#
-# IMPORTANT: Check BGREWRITEAOF to learn how to rewrite the append
-# log file in background when it gets too big.
-
-appendonly no
-
-# The name of the append only file (default: "appendonly.aof")
-# appendfilename appendonly.aof
-
-# The fsync() call tells the Operating System to actually write data on disk
-# instead of waiting for more data in the output buffer. Some OSes will really
-# flush data to disk, others will just try to do it ASAP.
-#
-# Redis supports three different modes:
-#
-# no: don't fsync, just let the OS flush the data when it wants. Faster.
-# always: fsync after every write to the append only log. Slow, safest.
-# everysec: fsync only if one second passed since the last fsync. Compromise.
-#
-# The default is "everysec", usually the right compromise between
-# speed and data safety. It's up to you to decide if you can relax this to
-# "no", which will let the operating system flush the output buffer when
-# it wants, for better performance (but if you can live with the idea of
-# some data loss consider the default persistence mode that's snapshotting),
-# or on the contrary, use "always" that's very slow but a bit safer than
-# everysec.
-#
-# If unsure, use "everysec".
-
-# appendfsync always
-appendfsync everysec
-# appendfsync no
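The three appendfsync policies above can be sketched in a few lines. This is a minimal Python illustration of the trade-off, not Redis internals; the `AOFWriter` name and its interface are invented for the example.

```python
import os
import tempfile

# Sketch of the three appendfsync policies described above, applied to a
# hypothetical append-only log. AOFWriter is illustrative, not a Redis API.
class AOFWriter:
    def __init__(self, path, policy="everysec"):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.policy = policy
        self.last_fsync = 0.0

    def append(self, data: bytes, now: float):
        os.write(self.fd, data)          # data reaches the OS buffer cache
        if self.policy == "always":
            os.fsync(self.fd)            # durable after every write: slow, safest
        elif self.policy == "everysec" and now - self.last_fsync >= 1.0:
            os.fsync(self.fd)            # at most ~1 second of writes can be lost
            self.last_fsync = now
        # policy "no": never fsync here; the OS flushes whenever it wants

fd, path = tempfile.mkstemp()
os.close(fd)
w = AOFWriter(path, policy="everysec")
w.append(b"*1\r\n$4\r\nPING\r\n", now=0.0)   # buffered only
w.append(b"*1\r\n$4\r\nPING\r\n", now=1.5)   # triggers an fsync
```

With "everysec" the second append crosses the one-second boundary and forces the flush, which is why the worst case is roughly one second of lost writes.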
-
-################################ VIRTUAL MEMORY ###############################
-
-# Virtual Memory allows Redis to work with datasets bigger than the actual
-# amount of RAM available to hold the whole dataset in memory.
-# In order to do so, frequently used keys are kept in memory while the other
-# keys are swapped to a swap file, similarly to what operating systems do
-# with memory pages.
-#
-# To enable VM just set 'vm-enabled' to yes, and set the following three
-# VM parameters accordingly to your needs.
-
-vm-enabled no
-# vm-enabled yes
-
-# This is the path of the Redis swap file. As you can guess, swap files
-# can't be shared by different Redis instances, so make sure to use a distinct
-# swap file for every Redis process you are running. Redis will complain if the
-# swap file is already in use.
-#
-# The best kind of storage for the Redis swap file (that's accessed at random)
-# is a Solid State Disk (SSD).
-#
-# *** WARNING *** if you are using a shared hosting the default of putting
-# the swap file under /tmp is not secure. Create a dir with access granted
-# only to Redis user and configure Redis to create the swap file there.
-vm-swap-file /tmp/redis.swap
-
-# vm-max-memory configures the VM to use at max the specified amount of
-# RAM. Everything that does not fit will be swapped on disk *if* possible, that
-# is, if there is still enough contiguous space in the swap file.
-#
-# With vm-max-memory 0 the system will swap everything it can. Not a good
-# default: specify in bytes the max amount of RAM you can afford, but it's
-# better to leave some margin. For instance, specify an amount of RAM
-# that's more or less between 60 and 80% of your free RAM.
-vm-max-memory 0
-
-# The Redis swap file is split into pages. An object can be saved using multiple
-# contiguous pages, but pages can't be shared between different objects.
-# So if your page is too big, small objects swapped out on disk will waste
-# a lot of space. If your page is too small, there is less space in the swap
-# file (assuming you configured the same number of total swap file pages).
-#
-# If you use a lot of small objects, use a page size of 64 or 32 bytes.
-# If you use a lot of big objects, use a bigger page size.
-# If unsure, use the default :)
-vm-page-size 32
-
-# Number of total memory pages in the swap file.
-# Given that the page table (a bitmap of free/used pages) is kept in memory,
-# every 8 pages on disk will consume 1 byte of RAM.
-#
-# The total swap size is vm-page-size * vm-pages
-#
-# With the default of 32-bytes memory pages and 134217728 pages Redis will
-# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
-#
-# It's better to use the smallest acceptable value for your application,
-# but the default is large in order to work in most conditions.
-vm-pages 134217728
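The arithmetic in the comment above is easy to verify: the swap size is `vm-page-size * vm-pages`, and the page table costs one bit per page (one byte per 8 pages).

```python
# Check the defaults quoted in the comment above: a 4 GB swap file and
# 16 MB of RAM for the page table.
vm_page_size = 32          # bytes, as configured above
vm_pages = 134217728       # number of pages, as configured above

swap_bytes = vm_page_size * vm_pages    # total swap file size
page_table_bytes = vm_pages // 8        # 1 bit of RAM per on-disk page

assert swap_bytes == 4 * 1024**3        # 4 GB
assert page_table_bytes == 16 * 1024**2 # 16 MB
```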
-
-# Max number of VM I/O threads running at the same time.
-# These threads are used to read/write data from/to the swap file. Since they
-# also encode and decode objects from disk to memory and the reverse, a bigger
-# number of threads can help with big objects even if they can't help with
-# I/O itself, as the physical device may not be able to cope with many
-# read/write operations at the same time.
-#
-# The special value of 0 turns off threaded I/O and enables the blocking
-# Virtual Memory implementation.
-vm-max-threads 4
-
-############################### ADVANCED CONFIG ###############################
-
-# Glue small output buffers together in order to send small replies in a
-# single TCP packet. Uses a bit more CPU but most of the time it is a win
-# in terms of number of queries per second. Use 'yes' if unsure.
-glueoutputbuf yes
-
-# Hashes are encoded in a special way (much more memory efficient) when they
-# have at most a given number of elements, and the biggest element does not
-# exceed a given threshold. You can configure these limits with the following
-# configuration directives.
-hash-max-zipmap-entries 64
-hash-max-zipmap-value 512
-
-# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
-# order to help rehashing the main Redis hash table (the one mapping top-level
-# keys to values). The hash table implementation Redis uses (see dict.c)
-# performs lazy rehashing: the more operations you run against a hash table
-# that is rehashing, the more rehashing "steps" are performed, so if the
-# server is idle the rehashing never completes and some more memory is used
-# by the hash table.
-#
-# The default is to use this millisecond 10 times every second in order to
-# actively rehash the main dictionaries, freeing memory when possible.
-#
-# If unsure:
-# use "activerehashing no" if you have hard latency requirements and it is
-# not acceptable in your environment that Redis can reply from time to time
-# to queries with a 2 millisecond delay.
-#
-# use "activerehashing yes" if you don't have such hard requirements but
-# want to free memory asap when possible.
-activerehashing yes
-
-################################## INCLUDES ###################################
-
-# Include one or more other config files here. This is useful if you
-# have a standard template that goes to all Redis servers but also need
-# to customize a few per-server settings. Include files can include
-# other files, so use this wisely.
-#
-# include /path/to/local.conf
-# include /path/to/other.conf
+++ /dev/null
-# Tcl client library - used by the test-redis.tcl script for now
-# Copyright (C) 2009 Salvatore Sanfilippo
-# Released under the BSD license like Redis itself
-#
-# Example usage:
-#
-# set r [redis 127.0.0.1 6379]
-# $r lpush mylist foo
-# $r lpush mylist bar
-# $r lrange mylist 0 -1
-# $r close
-#
-# Non blocking usage example:
-#
-# proc handlePong {r type reply} {
-# puts "PONG $type '$reply'"
-# if {$reply ne "PONG"} {
-# $r ping [list handlePong]
-# }
-# }
-#
-# set r [redis]
-# $r blocking 0
-# $r get fo [list handlePong]
-#
-# vwait forever
-
-package require Tcl 8.5
-package provide redis 0.1
-
-namespace eval redis {}
-set ::redis::id 0
-array set ::redis::fd {}
-array set ::redis::blocking {}
-array set ::redis::callback {}
-array set ::redis::state {} ;# State in non-blocking reply reading
-array set ::redis::statestack {} ;# Stack of states, for nested mbulks
-array set ::redis::bulkarg {}
-array set ::redis::multibulkarg {}
-
-# Flag commands requiring last argument as a bulk write operation
-foreach redis_bulk_cmd {
- set setnx rpush lpush lset lrem sadd srem sismember echo getset smove zadd zrem zscore zincrby append zrank zrevrank hget hdel hexists setex
-} {
- set ::redis::bulkarg($redis_bulk_cmd) {}
-}
-
-# Flag commands requiring a multi-bulk write operation
-foreach redis_multibulk_cmd {
- mset msetnx hset hsetnx hmset hmget
-} {
- set ::redis::multibulkarg($redis_multibulk_cmd) {}
-}
-
-unset redis_bulk_cmd
-unset redis_multibulk_cmd
-
-proc redis {{server 127.0.0.1} {port 6379}} {
- set fd [socket $server $port]
- fconfigure $fd -translation binary
- set id [incr ::redis::id]
- set ::redis::fd($id) $fd
- set ::redis::blocking($id) 1
- ::redis::redis_reset_state $id
- interp alias {} ::redis::redisHandle$id {} ::redis::__dispatch__ $id
-}
-
-proc ::redis::__dispatch__ {id method args} {
- set fd $::redis::fd($id)
- set blocking $::redis::blocking($id)
- if {$blocking == 0} {
- if {[llength $args] == 0} {
- error "Please provide a callback in non-blocking mode"
- }
- set callback [lindex $args end]
- set args [lrange $args 0 end-1]
- }
- if {[info command ::redis::__method__$method] eq {}} {
- if {[info exists ::redis::bulkarg($method)]} {
- set cmd "$method "
- append cmd [join [lrange $args 0 end-1]]
- append cmd " [string length [lindex $args end]]\r\n"
- append cmd [lindex $args end]
- ::redis::redis_writenl $fd $cmd
- } elseif {[info exists ::redis::multibulkarg($method)]} {
- set cmd "*[expr {[llength $args]+1}]\r\n"
- append cmd "$[string length $method]\r\n$method\r\n"
- foreach a $args {
- append cmd "$[string length $a]\r\n$a\r\n"
- }
- ::redis::redis_write $fd $cmd
- flush $fd
- } else {
- set cmd "$method "
- append cmd [join $args]
- ::redis::redis_writenl $fd $cmd
- }
- if {$blocking} {
- ::redis::redis_read_reply $fd
- } else {
- # Every well formed reply read will pop an element from this
- # list and use it as a callback. So pipelining is supported
- # in non blocking mode.
- lappend ::redis::callback($id) $callback
- fileevent $fd readable [list ::redis::redis_readable $fd $id]
- }
- } else {
- uplevel 1 [list ::redis::__method__$method $id $fd] $args
- }
-}
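The `__dispatch__` proc above builds requests in the Redis wire protocol: an inline command for simple cases, a bulk form where the last argument is length-prefixed, and the multi-bulk form (`*<argc>` followed by `$<len>`-prefixed arguments) for commands like MSET and HSET. A minimal Python sketch of the multi-bulk encoding, with a hypothetical `encode_multibulk` helper:

```python
# Sketch of the multi-bulk request framing emitted by __dispatch__ for
# commands flagged in ::redis::multibulkarg. encode_multibulk is an
# illustrative helper, not part of any Redis client library.
def encode_multibulk(*args: str) -> bytes:
    out = [f"*{len(args)}\r\n".encode()]   # argument count
    for a in args:
        data = a.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))  # length-prefixed arg
    return b"".join(out)

wire = encode_multibulk("SET", "mykey", "hello")
# b'*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$5\r\nhello\r\n'
```

Because every argument is length-prefixed, the payload can contain spaces, newlines, or arbitrary binary data, which the inline form cannot.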
-
-proc ::redis::__method__blocking {id fd val} {
- set ::redis::blocking($id) $val
- fconfigure $fd -blocking $val
-}
-
-proc ::redis::__method__close {id fd} {
- catch {close $fd}
- catch {unset ::redis::fd($id)}
- catch {unset ::redis::blocking($id)}
- catch {unset ::redis::state($id)}
- catch {unset ::redis::statestack($id)}
- catch {unset ::redis::callback($id)}
- catch {interp alias {} ::redis::redisHandle$id {}}
-}
-
-proc ::redis::__method__channel {id fd} {
- return $fd
-}
-
-proc ::redis::redis_write {fd buf} {
- puts -nonewline $fd $buf
-}
-
-proc ::redis::redis_writenl {fd buf} {
- redis_write $fd $buf
- redis_write $fd "\r\n"
- flush $fd
-}
-
-proc ::redis::redis_readnl {fd len} {
- set buf [read $fd $len]
- read $fd 2 ; # discard CR LF
- return $buf
-}
-
-proc ::redis::redis_bulk_read {fd} {
- set count [redis_read_line $fd]
- if {$count == -1} return {}
- set buf [redis_readnl $fd $count]
- return $buf
-}
-
-proc ::redis::redis_multi_bulk_read fd {
- set count [redis_read_line $fd]
- if {$count == -1} return {}
- set l {}
- for {set i 0} {$i < $count} {incr i} {
- lappend l [redis_read_reply $fd]
- }
- return $l
-}
-
-proc ::redis::redis_read_line fd {
- string trim [gets $fd]
-}
-
-proc ::redis::redis_read_reply fd {
- set type [read $fd 1]
- switch -exact -- $type {
- : -
- + {redis_read_line $fd}
- - {return -code error [redis_read_line $fd]}
- $ {redis_bulk_read $fd}
- * {redis_multi_bulk_read $fd}
- default {return -code error "Bad protocol, $type as reply type byte"}
- }
-}
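`redis_read_reply` dispatches on the first byte of the reply: `+` status, `-` error, `:` integer, `$` bulk, `*` multi-bulk, with `-1` lengths meaning nil. A minimal blocking Python sketch of the same parser, reading from any binary file-like object (the `read_reply` name is invented for the example):

```python
import io

# Blocking reply parser mirroring redis_read_reply above: the first byte
# selects the reply type, and bulk/multi-bulk replies carry a length line.
def read_reply(f):
    t = f.read(1)
    line = f.readline()[:-2].decode()   # strip the trailing CRLF
    if t in (b"+", b":"):
        return line                     # status/integer, returned as a string
    if t == b"-":
        raise RuntimeError(line)        # error reply
    if t == b"$":
        n = int(line)
        if n == -1:
            return None                 # nil bulk reply
        data = f.read(n)
        f.read(2)                       # discard CRLF after the payload
        return data.decode()
    if t == b"*":
        n = int(line)
        return None if n == -1 else [read_reply(f) for _ in range(n)]
    raise RuntimeError("Bad protocol, %r as reply type byte" % t)

stream = io.BytesIO(b"*2\r\n$3\r\nfoo\r\n$3\r\nbar\r\n")
# read_reply(stream) → ['foo', 'bar']
```

Note that, like the Tcl code, integers come back as strings; multi-bulk parsing simply recurses once per element.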
-
-proc ::redis::redis_reset_state id {
- set ::redis::state($id) [dict create buf {} mbulk -1 bulk -1 reply {}]
- set ::redis::statestack($id) {}
-}
-
-proc ::redis::redis_call_callback {id type reply} {
- set cb [lindex $::redis::callback($id) 0]
- set ::redis::callback($id) [lrange $::redis::callback($id) 1 end]
- uplevel #0 $cb [list ::redis::redisHandle$id $type $reply]
- ::redis::redis_reset_state $id
-}
-
-# Read a reply in non-blocking mode.
-proc ::redis::redis_readable {fd id} {
- if {[eof $fd]} {
- redis_call_callback $id eof {}
- ::redis::__method__close $id $fd
- return
- }
- if {[dict get $::redis::state($id) bulk] == -1} {
- set line [gets $fd]
- if {$line eq {}} return ;# No complete line available, return
- switch -exact -- [string index $line 0] {
- : -
- + {redis_call_callback $id reply [string range $line 1 end-1]}
- - {redis_call_callback $id err [string range $line 1 end-1]}
- $ {
- dict set ::redis::state($id) bulk \
- [expr [string range $line 1 end-1]+2]
- if {[dict get $::redis::state($id) bulk] == 1} {
- # We got a $-1, hack the state to play well with this.
- dict set ::redis::state($id) bulk 2
- dict set ::redis::state($id) buf "\r\n"
- ::redis::redis_readable $fd $id
- }
- }
- * {
- dict set ::redis::state($id) mbulk [string range $line 1 end-1]
- # Handle *-1
- if {[dict get $::redis::state($id) mbulk] == -1} {
- redis_call_callback $id reply {}
- }
- }
- default {
- redis_call_callback $id err \
-                    "Bad protocol, [string index $line 0] as reply type byte"
- }
- }
- } else {
- set totlen [dict get $::redis::state($id) bulk]
- set buflen [string length [dict get $::redis::state($id) buf]]
- set toread [expr {$totlen-$buflen}]
- set data [read $fd $toread]
- set nread [string length $data]
- dict append ::redis::state($id) buf $data
- # Check if we read a complete bulk reply
- if {[string length [dict get $::redis::state($id) buf]] ==
- [dict get $::redis::state($id) bulk]} {
- if {[dict get $::redis::state($id) mbulk] == -1} {
- redis_call_callback $id reply \
- [string range [dict get $::redis::state($id) buf] 0 end-2]
- } else {
- dict with ::redis::state($id) {
- lappend reply [string range $buf 0 end-2]
- incr mbulk -1
- set bulk -1
- }
- if {[dict get $::redis::state($id) mbulk] == 0} {
- redis_call_callback $id reply \
- [dict get $::redis::state($id) reply]
- }
- }
- }
- }
-}
+++ /dev/null
-proc error_and_quit {config_file error} {
- puts "!!COULD NOT START REDIS-SERVER\n"
- puts "CONFIGURATION:"
- puts [exec cat $config_file]
- puts "\nERROR:"
- puts [string trim $error]
- exit 1
-}
-
-proc kill_server config {
- set pid [dict get $config pid]
-
- # check for leaks
- catch {
- if {[string match {*Darwin*} [exec uname -a]]} {
- test {Check for memory leaks} {
- exec leaks $pid
- } {*0 leaks*}
- }
- }
-
- # kill server and wait for the process to be totally exited
- exec kill $pid
- while 1 {
- # with a non-zero exit status, the process is gone
- if {[catch {exec ps -p $pid | grep redis-server} result]} {
- break
- }
- after 10
- }
-}
-
-proc start_server {filename overrides {code undefined}} {
- set data [split [exec cat "test/assets/$filename"] "\n"]
- set config {}
- foreach line $data {
- if {[string length $line] > 0 && [string index $line 0] ne "#"} {
- set elements [split $line " "]
- set directive [lrange $elements 0 0]
- set arguments [lrange $elements 1 end]
- dict set config $directive $arguments
- }
- }
-
- # use a different directory every time a server is started
- dict set config dir [tmpdir server]
-
- # start every server on a different port
- dict set config port [incr ::port]
-
- # apply overrides from arguments
- foreach override $overrides {
- set directive [lrange $override 0 0]
- set arguments [lrange $override 1 end]
- dict set config $directive $arguments
- }
-
- # write new configuration to temporary file
- set config_file [tmpfile redis.conf]
- set fp [open $config_file w+]
- foreach directive [dict keys $config] {
- puts -nonewline $fp "$directive "
- puts $fp [dict get $config $directive]
- }
- close $fp
-
- set stdout [format "%s/%s" [dict get $config "dir"] "stdout"]
- set stderr [format "%s/%s" [dict get $config "dir"] "stderr"]
- exec ./redis-server $config_file > $stdout 2> $stderr &
- after 10
-
- # check that the server actually started
- if {[file size $stderr] > 0} {
- error_and_quit $config_file [exec cat $stderr]
- }
-
- set line [exec head -n1 $stdout]
- if {[string match {*already in use*} $line]} {
- error_and_quit $config_file $line
- }
-
- while 1 {
- # check that the server actually started and is ready for connections
- if {[exec cat $stdout | grep "ready to accept" | wc -l] > 0} {
- break
- }
- after 10
- }
-
- # find out the pid
- regexp {^\[(\d+)\]} [exec head -n1 $stdout] _ pid
-
- # create the client object
- set host $::host
- set port $::port
- if {[dict exists $config bind]} { set host [dict get $config bind] }
- if {[dict exists $config port]} { set port [dict get $config port] }
- set client [redis $host $port]
-
- # select the right db when we don't have to authenticate
- if {![dict exists $config requirepass]} {
- $client select 9
- }
-
- # setup config dict
- dict set ret "config" $config_file
- dict set ret "pid" $pid
- dict set ret "stdout" $stdout
- dict set ret "stderr" $stderr
- dict set ret "client" $client
-
- if {$code ne "undefined"} {
- # append the client to the client stack
- lappend ::clients $client
-
- # execute provided block
- catch { uplevel 1 $code } err
-
- # pop the client object
- set ::clients [lrange $::clients 0 end-1]
-
- kill_server $ret
-
- if {[string length $err] > 0} {
- puts "Error executing the suite, aborting..."
- puts $err
- exit 1
- }
- } else {
- set _ $ret
- }
-}
+++ /dev/null
-set ::passed 0
-set ::failed 0
-set ::testnum 0
-
-proc test {name code okpattern} {
- incr ::testnum
- # if {$::testnum < $::first || $::testnum > $::last} return
- puts -nonewline [format "#%03d %-70s " $::testnum $name]
- flush stdout
- set retval [uplevel 1 $code]
- if {$okpattern eq $retval || [string match $okpattern $retval]} {
- puts "PASSED"
- incr ::passed
- } else {
- puts "!! ERROR expected\n'$okpattern'\nbut got\n'$retval'"
- incr ::failed
- }
- if {$::traceleaks} {
- if {![string match {*0 leaks*} [exec leaks redis-server]]} {
- puts "--------- Test $::testnum LEAKED! --------"
- exit 1
- }
- }
-}
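The `test` proc above passes a test when the result either equals the expected value or matches it as a glob pattern (Tcl's `string match`), which is why expectations like `{ERR*}` work. A Python sketch of the same check, using `fnmatch` as a stand-in for `string match` (the `check` helper is invented for the example):

```python
from fnmatch import fnmatchcase

# Sketch of the pass/fail check in the `test` proc: exact equality first,
# then a case-sensitive glob match, as with Tcl's [string match].
def check(result: str, okpattern: str) -> bool:
    return okpattern == result or fnmatchcase(result, okpattern)
```

This is what lets the suites below assert only a stable prefix of an error message, e.g. `check("ERR wrong number of arguments", "ERR*")` passes.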
+++ /dev/null
-set ::tmpcounter 0
-set ::tmproot "./test/tmp"
-file mkdir $::tmproot
-
-# returns a dirname unique to this process to write to
-proc tmpdir {basename} {
- set dir [file join $::tmproot $basename.[pid].[incr ::tmpcounter]]
- file mkdir $dir
- set _ $dir
-}
-
-# return a filename unique to this process to write to
-proc tmpfile {basename} {
- file join $::tmproot $basename.[pid].[incr ::tmpcounter]
-}
+++ /dev/null
-proc randstring {min max {type binary}} {
- set len [expr {$min+int(rand()*($max-$min+1))}]
- set output {}
- if {$type eq {binary}} {
- set minval 0
- set maxval 255
- } elseif {$type eq {alpha}} {
- set minval 48
- set maxval 122
- } elseif {$type eq {compr}} {
- set minval 48
- set maxval 52
- }
- while {$len} {
- append output [format "%c" [expr {$minval+int(rand()*($maxval-$minval+1))}]]
- incr len -1
- }
- return $output
-}
-
-# Useful for some tests
-proc zlistAlikeSort {a b} {
- if {[lindex $a 0] > [lindex $b 0]} {return 1}
- if {[lindex $a 0] < [lindex $b 0]} {return -1}
- string compare [lindex $a 1] [lindex $b 1]
-}
-
-proc waitForBgsave r {
- while 1 {
- set i [$r info]
- if {[string match {*bgsave_in_progress:1*} $i]} {
- puts -nonewline "\nWaiting for background save to finish... "
- flush stdout
- after 1000
- } else {
- break
- }
- }
-}
-
-proc waitForBgrewriteaof r {
- while 1 {
- set i [$r info]
- if {[string match {*bgrewriteaof_in_progress:1*} $i]} {
- puts -nonewline "\nWaiting for background AOF rewrite to finish... "
- flush stdout
- after 1000
- } else {
- break
- }
- }
-}
-
-proc randomInt {max} {
- expr {int(rand()*$max)}
-}
-
-proc randpath args {
- set path [expr {int(rand()*[llength $args])}]
- uplevel 1 [lindex $args $path]
-}
-
-proc randomValue {} {
- randpath {
- # Small enough to likely collide
- randomInt 1000
- } {
- # 32 bit compressible signed/unsigned
- randpath {randomInt 2000000000} {randomInt 4000000000}
- } {
- # 64 bit
- randpath {randomInt 1000000000000}
- } {
- # Random string
- randpath {randstring 0 256 alpha} \
- {randstring 0 256 compr} \
- {randstring 0 256 binary}
- }
-}
-
-proc randomKey {} {
- randpath {
- # Small enough to likely collide
- randomInt 1000
- } {
- # 32 bit compressible signed/unsigned
- randpath {randomInt 2000000000} {randomInt 4000000000}
- } {
- # 64 bit
- randpath {randomInt 1000000000000}
- } {
- # Random string
- randpath {randstring 1 256 alpha} \
- {randstring 1 256 compr}
- }
-}
-
-proc createComplexDataset {r ops} {
- for {set j 0} {$j < $ops} {incr j} {
- set k [randomKey]
- set f [randomValue]
- set v [randomValue]
- randpath {
- set d [expr {rand()}]
- } {
- set d [expr {rand()}]
- } {
- set d [expr {rand()}]
- } {
- set d [expr {rand()}]
- } {
- set d [expr {rand()}]
- } {
- randpath {set d +inf} {set d -inf}
- }
- set t [$r type $k]
-
- if {$t eq {none}} {
- randpath {
- $r set $k $v
- } {
- $r lpush $k $v
- } {
- $r sadd $k $v
- } {
- $r zadd $k $d $v
- } {
- $r hset $k $f $v
- }
- set t [$r type $k]
- }
-
- switch $t {
- {string} {
- # Nothing to do
- }
- {list} {
- randpath {$r lpush $k $v} \
- {$r rpush $k $v} \
- {$r lrem $k 0 $v} \
- {$r rpop $k} \
- {$r lpop $k}
- }
- {set} {
- randpath {$r sadd $k $v} \
- {$r srem $k $v}
- }
- {zset} {
- randpath {$r zadd $k $d $v} \
- {$r zrem $k $v}
- }
- {hash} {
- randpath {$r hset $k $f $v} \
- {$r hdel $k $f}
- }
- }
- }
-}
+++ /dev/null
-# Redis test suite. Copyright (C) 2009 Salvatore Sanfilippo antirez@gmail.com
-# This software is released under the BSD License. See the COPYING file for
-# more information.
-
-set tcl_precision 17
-source test/support/redis.tcl
-source test/support/server.tcl
-source test/support/tmpfile.tcl
-source test/support/test.tcl
-source test/support/util.tcl
-
-set ::host 127.0.0.1
-set ::port 16379
-set ::traceleaks 0
-
-proc execute_tests name {
- set cur $::testnum
- source "test/$name.tcl"
-}
-
-# set up a list to hold a stack of clients. the proc "r" provides easy
-# access to the client at the top of the stack
-set ::clients {}
-proc r {args} {
- set client [lindex $::clients end]
- $client {*}$args
-}
-
-proc main {} {
- execute_tests "unit/auth"
- execute_tests "unit/protocol"
- execute_tests "unit/basic"
- execute_tests "unit/type/list"
- execute_tests "unit/type/set"
- execute_tests "unit/type/zset"
- execute_tests "unit/type/hash"
- execute_tests "unit/sort"
- execute_tests "unit/expire"
- execute_tests "unit/other"
-
- puts "\n[expr $::passed+$::failed] tests, $::passed passed, $::failed failed"
- if {$::failed > 0} {
- puts "\n*** WARNING!!! $::failed FAILED TESTS ***\n"
- }
-
- # clean up tmp
- exec rm -rf {*}[glob test/tmp/redis.conf.*]
- exec rm -rf {*}[glob test/tmp/server.*]
-}
-
-main
+++ /dev/null
-start_server default.conf {{requirepass foobar}} {
- test {AUTH fails when a wrong password is given} {
- catch {r auth wrong!} err
- format $err
- } {ERR*invalid password}
-
- test {Arbitrary command gives an error when AUTH is required} {
- catch {r set foo bar} err
- format $err
- } {ERR*operation not permitted}
-
- test {AUTH succeeds when the right password is given} {
- r auth foobar
- } {OK}
-}
+++ /dev/null
-start_server default.conf {} {
- test {DEL all keys to start with a clean DB} {
- foreach key [r keys *] {r del $key}
- r dbsize
- } {0}
-
- test {SET and GET an item} {
- r set x foobar
- r get x
- } {foobar}
-
- test {SET and GET an empty item} {
- r set x {}
- r get x
- } {}
-
- test {DEL against a single item} {
- r del x
- r get x
- } {}
-
- test {Vararg DEL} {
- r set foo1 a
- r set foo2 b
- r set foo3 c
- list [r del foo1 foo2 foo3 foo4] [r mget foo1 foo2 foo3]
- } {3 {{} {} {}}}
-
- test {KEYS with pattern} {
- foreach key {key_x key_y key_z foo_a foo_b foo_c} {
- r set $key hello
- }
- lsort [r keys foo*]
- } {foo_a foo_b foo_c}
-
- test {KEYS to get all keys} {
- lsort [r keys *]
- } {foo_a foo_b foo_c key_x key_y key_z}
-
- test {DBSIZE} {
- r dbsize
- } {6}
-
- test {DEL all keys} {
- foreach key [r keys *] {r del $key}
- r dbsize
- } {0}
-
- test {Very big payload in GET/SET} {
- set buf [string repeat "abcd" 1000000]
- r set foo $buf
- r get foo
- } [string repeat "abcd" 1000000]
-
- test {Very big payload random access} {
- set err {}
- array set payload {}
- for {set j 0} {$j < 100} {incr j} {
- set size [expr 1+[randomInt 100000]]
- set buf [string repeat "pl-$j" $size]
- set payload($j) $buf
- r set bigpayload_$j $buf
- }
- for {set j 0} {$j < 1000} {incr j} {
- set index [randomInt 100]
- set buf [r get bigpayload_$index]
- if {$buf != $payload($index)} {
- set err "Values differ: I set '$payload($index)' but I read back '$buf'"
- break
- }
- }
- unset payload
- set _ $err
- } {}
-
- test {SET 10000 numeric keys and access all them in reverse order} {
- set err {}
- for {set x 0} {$x < 10000} {incr x} {
- r set $x $x
- }
- set sum 0
- for {set x 9999} {$x >= 0} {incr x -1} {
- set val [r get $x]
- if {$val ne $x} {
-                set err "Element at position $x is $val instead of $x"
- break
- }
- }
- set _ $err
- } {}
-
- test {DBSIZE should be 10101 now} {
- r dbsize
- } {10101}
-
- test {INCR against non existing key} {
- set res {}
- append res [r incr novar]
- append res [r get novar]
- } {11}
-
- test {INCR against key created by incr itself} {
- r incr novar
- } {2}
-
- test {INCR against key originally set with SET} {
- r set novar 100
- r incr novar
- } {101}
-
- test {INCR over 32bit value} {
- r set novar 17179869184
- r incr novar
- } {17179869185}
-
- test {INCRBY over 32bit value with over 32bit increment} {
- r set novar 17179869184
- r incrby novar 17179869184
- } {34359738368}
-
- test {INCR fails against key with spaces (no integer encoded)} {
- r set novar " 11 "
- catch {r incr novar} err
- format $err
- } {ERR*}
-
- test {INCR fails against a key holding a list} {
- r rpush mylist 1
- catch {r incr mylist} err
- r rpop mylist
- format $err
- } {ERR*}
-
- test {DECRBY over 32bit value with over 32bit increment, negative res} {
- r set novar 17179869184
- r decrby novar 17179869185
- } {-1}
-
- test {SETNX target key missing} {
- r setnx novar2 foobared
- r get novar2
- } {foobared}
-
- test {SETNX target key exists} {
- r setnx novar2 blabla
- r get novar2
- } {foobared}
-
- test {SETNX will overwrite EXPIREing key} {
- r set x 10
- r expire x 10000
- r setnx x 20
- r get x
- } {20}
-
- test {EXISTS} {
- set res {}
- r set newkey test
- append res [r exists newkey]
- r del newkey
- append res [r exists newkey]
- } {10}
-
- test {Zero length value in key. SET/GET/EXISTS} {
- r set emptykey {}
- set res [r get emptykey]
- append res [r exists emptykey]
- r del emptykey
- append res [r exists emptykey]
- } {10}
-
- test {Commands pipelining} {
- set fd [r channel]
- puts -nonewline $fd "SET k1 4\r\nxyzk\r\nGET k1\r\nPING\r\n"
- flush $fd
- set res {}
- append res [string match OK* [::redis::redis_read_reply $fd]]
- append res [::redis::redis_read_reply $fd]
- append res [string match PONG* [::redis::redis_read_reply $fd]]
- format $res
- } {1xyzk1}
-
- test {Non existing command} {
- catch {r foobaredcommand} err
- string match ERR* $err
- } {1}
-
- test {RENAME basic usage} {
- r set mykey hello
- r rename mykey mykey1
- r rename mykey1 mykey2
- r get mykey2
- } {hello}
-
- test {RENAME source key should no longer exist} {
- r exists mykey
- } {0}
-
- test {RENAME against already existing key} {
- r set mykey a
- r set mykey2 b
- r rename mykey2 mykey
- set res [r get mykey]
- append res [r exists mykey2]
- } {b0}
-
- test {RENAMENX basic usage} {
- r del mykey
- r del mykey2
- r set mykey foobar
- r renamenx mykey mykey2
- set res [r get mykey2]
- append res [r exists mykey]
- } {foobar0}
-
- test {RENAMENX against already existing key} {
- r set mykey foo
- r set mykey2 bar
- r renamenx mykey mykey2
- } {0}
-
- test {RENAMENX against already existing key (2)} {
- set res [r get mykey]
- append res [r get mykey2]
- } {foobar}
-
- test {RENAME against non existing source key} {
- catch {r rename nokey foobar} err
- format $err
- } {ERR*}
-
- test {RENAME where source and dest key is the same} {
- catch {r rename mykey mykey} err
- format $err
- } {ERR*}
-
- test {DEL all keys again (DB 0)} {
- foreach key [r keys *] {
- r del $key
- }
- r dbsize
- } {0}
-
- test {DEL all keys again (DB 1)} {
- r select 10
- foreach key [r keys *] {
- r del $key
- }
- set res [r dbsize]
- r select 9
- format $res
- } {0}
-
- test {MOVE basic usage} {
- r set mykey foobar
- r move mykey 10
- set res {}
- lappend res [r exists mykey]
- lappend res [r dbsize]
- r select 10
- lappend res [r get mykey]
- lappend res [r dbsize]
- r select 9
- format $res
- } [list 0 0 foobar 1]
-
- test {MOVE against key existing in the target DB} {
- r set mykey hello
- r move mykey 10
- } {0}
-
- test {SET/GET keys in different DBs} {
- r set a hello
- r set b world
- r select 10
- r set a foo
- r set b bared
- r select 9
- set res {}
- lappend res [r get a]
- lappend res [r get b]
- r select 10
- lappend res [r get a]
- lappend res [r get b]
- r select 9
- format $res
- } {hello world foo bared}
-
- test {MGET} {
- r flushdb
- r set foo BAR
- r set bar FOO
- r mget foo bar
- } {BAR FOO}
-
- test {MGET against non existing key} {
- r mget foo baazz bar
- } {BAR {} FOO}
-
- test {MGET against non-string key} {
- r sadd myset ciao
- r sadd myset bau
- r mget foo baazz bar myset
- } {BAR {} FOO {}}
-
- test {RANDOMKEY} {
- r flushdb
- r set foo x
- r set bar y
- set foo_seen 0
- set bar_seen 0
- for {set i 0} {$i < 100} {incr i} {
- set rkey [r randomkey]
- if {$rkey eq {foo}} {
- set foo_seen 1
- }
- if {$rkey eq {bar}} {
- set bar_seen 1
- }
- }
- list $foo_seen $bar_seen
- } {1 1}
-
- test {RANDOMKEY against empty DB} {
- r flushdb
- r randomkey
- } {}
-
- test {RANDOMKEY regression 1} {
- r flushdb
- r set x 10
- r del x
- r randomkey
- } {}
-
- test {GETSET (set new value)} {
- list [r getset foo xyz] [r get foo]
- } {{} xyz}
-
- test {GETSET (replace old value)} {
- r set foo bar
- list [r getset foo xyz] [r get foo]
- } {bar xyz}
-
- test {MSET base case} {
- r mset x 10 y "foo bar" z "x x x x x x x\n\n\r\n"
- r mget x y z
- } [list 10 {foo bar} "x x x x x x x\n\n\r\n"]
-
- test {MSET wrong number of args} {
- catch {r mset x 10 y "foo bar" z} err
- format $err
- } {*wrong number*}
-
- test {MSETNX with already existent key} {
- list [r msetnx x1 xxx y2 yyy x 20] [r exists x1] [r exists y2]
- } {0 0 0}
-
- test {MSETNX with not existing keys} {
- list [r msetnx x1 xxx y2 yyy] [r get x1] [r get y2]
- } {1 xxx yyy}
-
- test {MSETNX should remove all the volatile keys even on failure} {
- r mset x 1 y 2 z 3
- r expire y 10000
- r expire z 10000
- list [r msetnx x A y B z C] [r mget x y z]
- } {0 {1 {} {}}}
-}
+++ /dev/null
-start_server default.conf {} {
- test {EXPIRE - don't set timeouts multiple times} {
- r set x foobar
- set v1 [r expire x 5]
- set v2 [r ttl x]
- set v3 [r expire x 10]
- set v4 [r ttl x]
- list $v1 $v2 $v3 $v4
- } {1 5 0 5}
-
- test {EXPIRE - It should still be possible to read 'x'} {
- r get x
- } {foobar}
-
- test {EXPIRE - After 6 seconds the key should no longer be here} {
- after 6000
- list [r get x] [r exists x]
- } {{} 0}
-
- test {EXPIRE - Delete on write policy} {
- r del x
- r lpush x foo
- r expire x 1000
- r lpush x bar
- r lrange x 0 -1
- } {bar}
-
- test {EXPIREAT - Check for EXPIRE alike behavior} {
- r del x
- r set x foo
- r expireat x [expr [clock seconds]+15]
- r ttl x
- } {1[345]}
-
- test {SETEX - Set + Expire combo operation. Check for TTL} {
- r setex x 12 test
- r ttl x
- } {1[012]}
-
- test {SETEX - Check value} {
- r get x
- } {test}
-
- test {SETEX - Overwrite old key} {
- r setex y 1 foo
- r get y
- } {foo}
-
- test {SETEX - Wait for the key to expire} {
- after 3000
- r get y
- } {}
-
- test {SETEX - Wrong time parameter} {
- catch {r setex z -10 foo} e
- set _ $e
- } {*invalid expire*}
-}
+++ /dev/null
-start_server default.conf {} {
- test {SAVE - make sure there are all the types as values} {
- # Wait for a background saving in progress to terminate
- waitForBgsave r
- r lpush mysavelist hello
- r lpush mysavelist world
- r set myemptykey {}
- r set mynormalkey {blablablba}
- r zadd mytestzset 10 a
- r zadd mytestzset 20 b
- r zadd mytestzset 30 c
- r save
- } {OK}
-
- foreach fuzztype {binary alpha compr} {
- test "FUZZ stresser with data model $fuzztype" {
- set err 0
- for {set i 0} {$i < 10000} {incr i} {
- set fuzz [randstring 0 512 $fuzztype]
- r set foo $fuzz
- set got [r get foo]
- if {$got ne $fuzz} {
- set err [list $fuzz $got]
- break
- }
- }
- set _ $err
- } {0}
- }
-
- test {BGSAVE} {
- waitForBgsave r
- r flushdb
- r save
- r set x 10
- r bgsave
- waitForBgsave r
- r debug reload
- r get x
- } {10}
-
- test {SELECT an out of range DB} {
- catch {r select 1000000} err
- set _ $err
- } {*invalid*}
-
- if {![catch {package require sha1}]} {
- test {Check consistency of different data types after a reload} {
- r flushdb
- createComplexDataset r 10000
- set sha1 [r debug digest]
- r debug reload
- set sha1_after [r debug digest]
- expr {$sha1 eq $sha1_after}
- } {1}
-
- test {Same dataset digest if saving/reloading as AOF?} {
- r bgrewriteaof
- waitForBgrewriteaof r
- r debug loadaof
- set sha1_after [r debug digest]
- expr {$sha1 eq $sha1_after}
- } {1}
- }
-
- test {EXPIRES after a reload (snapshot + append only file)} {
- r flushdb
- r set x 10
- r expire x 1000
- r save
- r debug reload
- set ttl [r ttl x]
- set e1 [expr {$ttl > 900 && $ttl <= 1000}]
- r bgrewriteaof
- waitForBgrewriteaof r
- set ttl [r ttl x]
- set e2 [expr {$ttl > 900 && $ttl <= 1000}]
- list $e1 $e2
- } {1 1}
-
- test {PIPELINING stresser (also a regression for the old epoll bug)} {
- set fd2 [socket $::host $::port]
- fconfigure $fd2 -encoding binary -translation binary
- puts -nonewline $fd2 "SELECT 9\r\n"
- flush $fd2
- gets $fd2
-
- for {set i 0} {$i < 100000} {incr i} {
- set q {}
- set val "0000${i}0000"
- append q "SET key:$i [string length $val]\r\n$val\r\n"
- puts -nonewline $fd2 $q
- set q {}
- append q "GET key:$i\r\n"
- puts -nonewline $fd2 $q
- }
- flush $fd2
-
- for {set i 0} {$i < 100000} {incr i} {
- gets $fd2 line
- gets $fd2 count
- set count [string range $count 1 end]
- set val [read $fd2 $count]
- read $fd2 2
- }
- close $fd2
- set _ 1
- } {1}
-
- test {MULTI / EXEC basics} {
- r del mylist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- r multi
- set v1 [r lrange mylist 0 -1]
- set v2 [r ping]
- set v3 [r exec]
- list $v1 $v2 $v3
- } {QUEUED QUEUED {{a b c} PONG}}
-
- test {DISCARD} {
- r del mylist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- r multi
- set v1 [r del mylist]
- set v2 [r discard]
- set v3 [r lrange mylist 0 -1]
- list $v1 $v2 $v3
- } {QUEUED OK {a b c}}
-
- test {APPEND basics} {
- list [r append foo bar] [r get foo] \
- [r append foo 100] [r get foo]
- } {3 bar 6 bar100}
-
- test {APPEND basics, integer encoded values} {
- set res {}
- r del foo
- r append foo 1
- r append foo 2
- lappend res [r get foo]
- r set foo 1
- r append foo 2
- lappend res [r get foo]
- } {12 12}
-
- test {APPEND fuzzing} {
- set err {}
- foreach type {binary alpha compr} {
- set buf {}
- r del x
- for {set i 0} {$i < 1000} {incr i} {
- set bin [randstring 0 10 $type]
- append buf $bin
- r append x $bin
- }
- if {$buf != [r get x]} {
- set err "Expected '$buf' found '[r get x]'"
- break
- }
- }
- set _ $err
- } {}
-
- test {SUBSTR basics} {
- set res {}
- r set foo "Hello World"
- lappend res [r substr foo 0 3]
- lappend res [r substr foo 0 -1]
- lappend res [r substr foo -4 -1]
- lappend res [r substr foo 5 3]
- lappend res [r substr foo 5 5000]
- lappend res [r substr foo -5000 10000]
- set _ $res
- } {Hell {Hello World} orld {} { World} {Hello World}}
-
- test {SUBSTR against integer encoded values} {
- r set foo 123
- r substr foo 0 -2
- } {12}
-
- test {SUBSTR fuzzing} {
- set err {}
- for {set i 0} {$i < 1000} {incr i} {
- set bin [randstring 0 1024 binary]
- set _start [set start [randomInt 1500]]
- set _end [set end [randomInt 1500]]
- if {$_start < 0} {set _start "end-[abs($_start)-1]"}
- if {$_end < 0} {set _end "end-[abs($_end)-1]"}
- set s1 [string range $bin $_start $_end]
- r set bin $bin
- set s2 [r substr bin $start $end]
- if {$s1 != $s2} {
- set err "String mismatch"
- break
- }
- }
- set _ $err
- } {}
-
- # Leave the user with a clean DB before exiting
- test {FLUSHDB} {
- set aux {}
- r select 9
- r flushdb
- lappend aux [r dbsize]
- r select 10
- r flushdb
- lappend aux [r dbsize]
- } {0 0}
-
- test {Perform a final SAVE to leave a clean DB on disk} {
- r save
- } {OK}
-}
+++ /dev/null
-start_server default.conf {} {
- test {Handle an empty query well} {
- set fd [r channel]
- puts -nonewline $fd "\r\n"
- flush $fd
- r ping
- } {PONG}
-
- test {Negative multi bulk command does not create problems} {
- set fd [r channel]
- puts -nonewline $fd "*-10\r\n"
- flush $fd
- r ping
- } {PONG}
-
- test {Negative multi bulk payload} {
- set fd [r channel]
- puts -nonewline $fd "SET x -10\r\n"
- flush $fd
- gets $fd
- } {*invalid bulk*}
-
- test {Too big bulk payload} {
- set fd [r channel]
- puts -nonewline $fd "SET x 2000000000\r\n"
- flush $fd
- gets $fd
- } {*invalid bulk*count*}
-
- test {Multi bulk request not followed by bulk args} {
- set fd [r channel]
- puts -nonewline $fd "*1\r\nfoo\r\n"
- flush $fd
- gets $fd
- } {*protocol error*}
-
- test {Generic wrong number of args} {
- catch {r ping x y z} err
- set _ $err
- } {*wrong*arguments*ping*}
-}
+++ /dev/null
-start_server default.conf {} {
- test {SORT ALPHA against integer encoded strings} {
- r del mylist
- r lpush mylist 2
- r lpush mylist 1
- r lpush mylist 3
- r lpush mylist 10
- r sort mylist alpha
- } {1 10 2 3}
-
- test {Create a random list and a random set} {
- set tosort {}
- array set seenrand {}
- for {set i 0} {$i < 10000} {incr i} {
- while 1 {
- # Make sure all the weights are different because
- # Redis does not use a stable sort but Tcl does.
- randpath {
- set rint [expr int(rand()*1000000)]
- } {
- set rint [expr rand()]
- }
- if {![info exists seenrand($rint)]} break
- }
- set seenrand($rint) x
- r lpush tosort $i
- r sadd tosort-set $i
- r set weight_$i $rint
- r hset wobj_$i weight $rint
- lappend tosort [list $i $rint]
- }
- set sorted [lsort -index 1 -real $tosort]
- set res {}
- for {set i 0} {$i < 10000} {incr i} {
- lappend res [lindex $sorted $i 0]
- }
- format {}
- } {}
-
- test {SORT with BY against the newly created list} {
- r sort tosort {BY weight_*}
- } $res
-
- test {SORT with BY (hash field) against the newly created list} {
- r sort tosort {BY wobj_*->weight}
- } $res
-
- test {SORT with GET (key+hash) with sanity check of each element (list)} {
- set err {}
- set l1 [r sort tosort GET # GET weight_*]
- set l2 [r sort tosort GET # GET wobj_*->weight]
- foreach {id1 w1} $l1 {id2 w2} $l2 {
- set realweight [r get weight_$id1]
- if {$id1 != $id2} {
- set err "ID mismatch $id1 != $id2"
- break
- }
- if {$realweight != $w1 || $realweight != $w2} {
- set err "Weights mismatch! w1: $w1 w2: $w2 real: $realweight"
- break
- }
- }
- set _ $err
- } {}
-
- test {SORT with BY, but against the newly created set} {
- r sort tosort-set {BY weight_*}
- } $res
-
- test {SORT with BY (hash field), but against the newly created set} {
- r sort tosort-set {BY wobj_*->weight}
- } $res
-
- test {SORT with BY and STORE against the newly created list} {
- r sort tosort {BY weight_*} store sort-res
- r lrange sort-res 0 -1
- } $res
-
- test {SORT with BY (hash field) and STORE against the newly created list} {
- r sort tosort {BY wobj_*->weight} store sort-res
- r lrange sort-res 0 -1
- } $res
-
- test {SORT direct, numeric, against the newly created list} {
- r sort tosort
- } [lsort -integer $res]
-
- test {SORT decreasing sort} {
- r sort tosort {DESC}
- } [lsort -decreasing -integer $res]
-
- test {SORT speed, sorting 10000 elements list using BY, 100 times} {
- set start [clock clicks -milliseconds]
- for {set i 0} {$i < 100} {incr i} {
- set sorted [r sort tosort {BY weight_* LIMIT 0 10}]
- }
- set elapsed [expr [clock clicks -milliseconds]-$start]
- puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
- flush stdout
- format {}
- } {}
-
- test {SORT speed, as above but against hash field} {
- set start [clock clicks -milliseconds]
- for {set i 0} {$i < 100} {incr i} {
- set sorted [r sort tosort {BY wobj_*->weight LIMIT 0 10}]
- }
- set elapsed [expr [clock clicks -milliseconds]-$start]
- puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
- flush stdout
- format {}
- } {}
-
- test {SORT speed, sorting 10000 elements list directly, 100 times} {
- set start [clock clicks -milliseconds]
- for {set i 0} {$i < 100} {incr i} {
- set sorted [r sort tosort {LIMIT 0 10}]
- }
- set elapsed [expr [clock clicks -milliseconds]-$start]
- puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
- flush stdout
- format {}
- } {}
-
- test {SORT speed, pseudo-sorting 10000 elements list, BY <const>, 100 times} {
- set start [clock clicks -milliseconds]
- for {set i 0} {$i < 100} {incr i} {
- set sorted [r sort tosort {BY nokey LIMIT 0 10}]
- }
- set elapsed [expr [clock clicks -milliseconds]-$start]
- puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
- flush stdout
- format {}
- } {}
-
- test {SORT regression for issue #19, sorting floats} {
- r flushdb
- foreach x {1.1 5.10 3.10 7.44 2.1 5.75 6.12 0.25 1.15} {
- r lpush mylist $x
- }
- r sort mylist
- } [lsort -real {1.1 5.10 3.10 7.44 2.1 5.75 6.12 0.25 1.15}]
-
- test {SORT with GET #} {
- r del mylist
- r lpush mylist 1
- r lpush mylist 2
- r lpush mylist 3
- r mset weight_1 10 weight_2 5 weight_3 30
- r sort mylist BY weight_* GET #
- } {2 1 3}
-
- test {SORT with constant GET} {
- r sort mylist GET foo
- } {{} {} {}}
-
- test {SORT against sorted sets} {
- r del zset
- r zadd zset 1 a
- r zadd zset 5 b
- r zadd zset 2 c
- r zadd zset 10 d
- r zadd zset 3 e
- r sort zset alpha desc
- } {e d c b a}
-
- test {Sorted sets +inf and -inf handling} {
- r del zset
- r zadd zset -100 a
- r zadd zset 200 b
- r zadd zset -300 c
- r zadd zset 1000000 d
- r zadd zset +inf max
- r zadd zset -inf min
- r zrange zset 0 -1
- } {min c a b d max}
-}
+++ /dev/null
-start_server default.conf {} {
- test {HSET/HLEN - Small hash creation} {
- array set smallhash {}
- for {set i 0} {$i < 8} {incr i} {
- set key [randstring 0 8 alpha]
- set val [randstring 0 8 alpha]
- if {[info exists smallhash($key)]} {
- incr i -1
- continue
- }
- r hset smallhash $key $val
- set smallhash($key) $val
- }
- list [r hlen smallhash]
- } {8}
-
- test {Is the small hash encoded with a zipmap?} {
- r debug object smallhash
- } {*zipmap*}
-
- test {HSET/HLEN - Big hash creation} {
- array set bighash {}
- for {set i 0} {$i < 1024} {incr i} {
- set key [randstring 0 8 alpha]
- set val [randstring 0 8 alpha]
- if {[info exists bighash($key)]} {
- incr i -1
- continue
- }
- r hset bighash $key $val
- set bighash($key) $val
- }
- list [r hlen bighash]
- } {1024}
-
- test {Is the big hash encoded with a hash table?} {
- r debug object bighash
- } {*hashtable*}
-
- test {HGET against the small hash} {
- set err {}
- foreach k [array names smallhash *] {
- if {$smallhash($k) ne [r hget smallhash $k]} {
- set err "$smallhash($k) != [r hget smallhash $k]"
- break
- }
- }
- set _ $err
- } {}
-
- test {HGET against the big hash} {
- set err {}
- foreach k [array names bighash *] {
- if {$bighash($k) ne [r hget bighash $k]} {
- set err "$bighash($k) != [r hget bighash $k]"
- break
- }
- }
- set _ $err
- } {}
-
- test {HGET against non existing key} {
- set rv {}
- lappend rv [r hget smallhash __123123123__]
- lappend rv [r hget bighash __123123123__]
- set _ $rv
- } {{} {}}
-
- test {HSET in update and insert mode} {
- set rv {}
- set k [lindex [array names smallhash *] 0]
- lappend rv [r hset smallhash $k newval1]
- set smallhash($k) newval1
- lappend rv [r hget smallhash $k]
- lappend rv [r hset smallhash __foobar123__ newval]
- set k [lindex [array names bighash *] 0]
- lappend rv [r hset bighash $k newval2]
- set bighash($k) newval2
- lappend rv [r hget bighash $k]
- lappend rv [r hset bighash __foobar123__ newval]
- lappend rv [r hdel smallhash __foobar123__]
- lappend rv [r hdel bighash __foobar123__]
- set _ $rv
- } {0 newval1 1 0 newval2 1 1 1}
-
- test {HSETNX target key missing - small hash} {
- r hsetnx smallhash __123123123__ foo
- r hget smallhash __123123123__
- } {foo}
-
- test {HSETNX target key exists - small hash} {
- r hsetnx smallhash __123123123__ bar
- set result [r hget smallhash __123123123__]
- r hdel smallhash __123123123__
- set _ $result
- } {foo}
-
- test {HSETNX target key missing - big hash} {
- r hsetnx bighash __123123123__ foo
- r hget bighash __123123123__
- } {foo}
-
- test {HSETNX target key exists - big hash} {
- r hsetnx bighash __123123123__ bar
- set result [r hget bighash __123123123__]
- r hdel bighash __123123123__
- set _ $result
- } {foo}
-
- test {HMSET wrong number of args} {
- catch {r hmset smallhash key1 val1 key2} err
- format $err
- } {*wrong number*}
-
- test {HMSET - small hash} {
- set args {}
- foreach {k v} [array get smallhash] {
- set newval [randstring 0 8 alpha]
- set smallhash($k) $newval
- lappend args $k $newval
- }
- r hmset smallhash {*}$args
- } {OK}
-
- test {HMSET - big hash} {
- set args {}
- foreach {k v} [array get bighash] {
- set newval [randstring 0 8 alpha]
- set bighash($k) $newval
- lappend args $k $newval
- }
- r hmset bighash {*}$args
- } {OK}
-
- test {HMGET against non existing key and fields} {
- set rv {}
- lappend rv [r hmget doesntexist __123123123__ __456456456__]
- lappend rv [r hmget smallhash __123123123__ __456456456__]
- lappend rv [r hmget bighash __123123123__ __456456456__]
- set _ $rv
- } {{{} {}} {{} {}} {{} {}}}
-
- test {HMGET - small hash} {
- set keys {}
- set vals {}
- foreach {k v} [array get smallhash] {
- lappend keys $k
- lappend vals $v
- }
- set err {}
- set result [r hmget smallhash {*}$keys]
- if {$vals ne $result} {
- set err "$vals != $result"
- }
- set _ $err
- } {}
-
- test {HMGET - big hash} {
- set keys {}
- set vals {}
- foreach {k v} [array get bighash] {
- lappend keys $k
- lappend vals $v
- }
- set err {}
- set result [r hmget bighash {*}$keys]
- if {$vals ne $result} {
- set err "$vals != $result"
- }
- set _ $err
- } {}
-
- test {HKEYS - small hash} {
- lsort [r hkeys smallhash]
- } [lsort [array names smallhash *]]
-
- test {HKEYS - big hash} {
- lsort [r hkeys bighash]
- } [lsort [array names bighash *]]
-
- test {HVALS - small hash} {
- set vals {}
- foreach {k v} [array get smallhash] {
- lappend vals $v
- }
- set _ [lsort $vals]
- } [lsort [r hvals smallhash]]
-
- test {HVALS - big hash} {
- set vals {}
- foreach {k v} [array get bighash] {
- lappend vals $v
- }
- set _ [lsort $vals]
- } [lsort [r hvals bighash]]
-
- test {HGETALL - small hash} {
- lsort [r hgetall smallhash]
- } [lsort [array get smallhash]]
-
- test {HGETALL - big hash} {
- lsort [r hgetall bighash]
- } [lsort [array get bighash]]
-
- test {HDEL and return value} {
- set rv {}
- lappend rv [r hdel smallhash nokey]
- lappend rv [r hdel bighash nokey]
- set k [lindex [array names smallhash *] 0]
- lappend rv [r hdel smallhash $k]
- lappend rv [r hdel smallhash $k]
- lappend rv [r hget smallhash $k]
- unset smallhash($k)
- set k [lindex [array names bighash *] 0]
- lappend rv [r hdel bighash $k]
- lappend rv [r hdel bighash $k]
- lappend rv [r hget bighash $k]
- unset bighash($k)
- set _ $rv
- } {0 0 1 0 {} 1 0 {}}
-
- test {HEXISTS} {
- set rv {}
- set k [lindex [array names smallhash *] 0]
- lappend rv [r hexists smallhash $k]
- lappend rv [r hexists smallhash nokey]
- set k [lindex [array names bighash *] 0]
- lappend rv [r hexists bighash $k]
- lappend rv [r hexists bighash nokey]
- } {1 0 1 0}
-
- test {Is a zipmap encoded Hash promoted on big payload?} {
- r hset smallhash foo [string repeat a 1024]
- r debug object smallhash
- } {*hashtable*}
-
- test {HINCRBY against non existing database key} {
- r del htest
- list [r hincrby htest foo 2]
- } {2}
-
- test {HINCRBY against non existing hash key} {
- set rv {}
- r hdel smallhash tmp
- r hdel bighash tmp
- lappend rv [r hincrby smallhash tmp 2]
- lappend rv [r hget smallhash tmp]
- lappend rv [r hincrby bighash tmp 2]
- lappend rv [r hget bighash tmp]
- } {2 2 2 2}
-
- test {HINCRBY against hash key created by hincrby itself} {
- set rv {}
- lappend rv [r hincrby smallhash tmp 3]
- lappend rv [r hget smallhash tmp]
- lappend rv [r hincrby bighash tmp 3]
- lappend rv [r hget bighash tmp]
- } {5 5 5 5}
-
- test {HINCRBY against hash key originally set with HSET} {
- r hset smallhash tmp 100
- r hset bighash tmp 100
- list [r hincrby smallhash tmp 2] [r hincrby bighash tmp 2]
- } {102 102}
-
- test {HINCRBY over 32bit value} {
- r hset smallhash tmp 17179869184
- r hset bighash tmp 17179869184
- list [r hincrby smallhash tmp 1] [r hincrby bighash tmp 1]
- } {17179869185 17179869185}
-
- test {HINCRBY over 32bit value with over 32bit increment} {
- r hset smallhash tmp 17179869184
- r hset bighash tmp 17179869184
- list [r hincrby smallhash tmp 17179869184] [r hincrby bighash tmp 17179869184]
- } {34359738368 34359738368}
-
- test {HINCRBY fails against hash value with spaces} {
- r hset smallhash str " 11 "
- r hset bighash str " 11 "
- catch {r hincrby smallhash str 1} smallerr
- catch {r hincrby bighash str 1} bigerr
- set rv {}
- lappend rv [string match "ERR*not an integer*" $smallerr]
- lappend rv [string match "ERR*not an integer*" $bigerr]
- } {1 1}
-}
+++ /dev/null
-start_server default.conf {} {
- test {Basic LPUSH, RPUSH, LLEN, LINDEX} {
- set res [r lpush mylist a]
- append res [r lpush mylist b]
- append res [r rpush mylist c]
- append res [r llen mylist]
- append res [r rpush anotherlist d]
- append res [r lpush anotherlist e]
- append res [r llen anotherlist]
- append res [r lindex mylist 0]
- append res [r lindex mylist 1]
- append res [r lindex mylist 2]
- append res [r lindex anotherlist 0]
- append res [r lindex anotherlist 1]
- list $res [r lindex mylist 100]
- } {1233122baced {}}
-
- test {DEL a list} {
- r del mylist
- r exists mylist
- } {0}
-
- test {Create a long list and check every single element with LINDEX} {
- set ok 0
- for {set i 0} {$i < 1000} {incr i} {
- r rpush mylist $i
- }
- for {set i 0} {$i < 1000} {incr i} {
- if {[r lindex mylist $i] eq $i} {incr ok}
- if {[r lindex mylist [expr (-$i)-1]] eq [expr 999-$i]} {
- incr ok
- }
- }
- format $ok
- } {2000}
-
- test {Test elements with LINDEX in random access} {
- set ok 0
- for {set i 0} {$i < 1000} {incr i} {
- set rint [expr int(rand()*1000)]
- if {[r lindex mylist $rint] eq $rint} {incr ok}
- if {[r lindex mylist [expr (-$rint)-1]] eq [expr 999-$rint]} {
- incr ok
- }
- }
- format $ok
- } {2000}
-
- test {Check if the list is still ok after a DEBUG RELOAD} {
- r debug reload
- set ok 0
- for {set i 0} {$i < 1000} {incr i} {
- set rint [expr int(rand()*1000)]
- if {[r lindex mylist $rint] eq $rint} {incr ok}
- if {[r lindex mylist [expr (-$rint)-1]] eq [expr 999-$rint]} {
- incr ok
- }
- }
- format $ok
- } {2000}
-
- test {LLEN against non-list value error} {
- r del mylist
- r set mylist foobar
- catch {r llen mylist} err
- format $err
- } {ERR*}
-
- test {LLEN against non existing key} {
- r llen not-a-key
- } {0}
-
- test {LINDEX against non-list value error} {
- catch {r lindex mylist 0} err
- format $err
- } {ERR*}
-
- test {LINDEX against non existing key} {
- r lindex not-a-key 10
- } {}
-
- test {LPUSH against non-list value error} {
- catch {r lpush mylist 0} err
- format $err
- } {ERR*}
-
- test {RPUSH against non-list value error} {
- catch {r rpush mylist 0} err
- format $err
- } {ERR*}
-
- test {RPOPLPUSH base case} {
- r del mylist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- r rpush mylist d
- set v1 [r rpoplpush mylist newlist]
- set v2 [r rpoplpush mylist newlist]
- set l1 [r lrange mylist 0 -1]
- set l2 [r lrange newlist 0 -1]
- list $v1 $v2 $l1 $l2
- } {d c {a b} {c d}}
-
- test {RPOPLPUSH with the same list as src and dst} {
- r del mylist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- set l1 [r lrange mylist 0 -1]
- set v [r rpoplpush mylist mylist]
- set l2 [r lrange mylist 0 -1]
- list $l1 $v $l2
- } {{a b c} c {c a b}}
-
- test {RPOPLPUSH target list already exists} {
- r del mylist
- r del newlist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- r rpush mylist d
- r rpush newlist x
- set v1 [r rpoplpush mylist newlist]
- set v2 [r rpoplpush mylist newlist]
- set l1 [r lrange mylist 0 -1]
- set l2 [r lrange newlist 0 -1]
- list $v1 $v2 $l1 $l2
- } {d c {a b} {c d x}}
-
- test {RPOPLPUSH against non existing key} {
- r del mylist
- r del newlist
- set v1 [r rpoplpush mylist newlist]
- list $v1 [r exists mylist] [r exists newlist]
- } {{} 0 0}
-
- test {RPOPLPUSH against non list src key} {
- r del mylist
- r del newlist
- r set mylist x
- catch {r rpoplpush mylist newlist} err
- list [r type mylist] [r exists newlist] [string range $err 0 2]
- } {string 0 ERR}
-
- test {RPOPLPUSH against non list dst key} {
- r del mylist
- r del newlist
- r rpush mylist a
- r rpush mylist b
- r rpush mylist c
- r rpush mylist d
- r set newlist x
- catch {r rpoplpush mylist newlist} err
- list [r lrange mylist 0 -1] [r type newlist] [string range $err 0 2]
- } {{a b c d} string ERR}
-
- test {RPOPLPUSH against non existing src key} {
- r del mylist
- r del newlist
- r rpoplpush mylist newlist
- } {}
-
- test {Basic LPOP/RPOP} {
- r del mylist
- r rpush mylist 1
- r rpush mylist 2
- r lpush mylist 0
- list [r lpop mylist] [r rpop mylist] [r lpop mylist] [r llen mylist]
- } [list 0 2 1 0]
-
- test {LPOP/RPOP against empty list} {
- r lpop mylist
- } {}
-
- test {LPOP against non list value} {
- r set notalist foo
- catch {r lpop notalist} err
- format $err
- } {ERR*kind*}
-
- test {Mass LPUSH/LPOP} {
- set sum 0
- for {set i 0} {$i < 1000} {incr i} {
- r lpush mylist $i
- incr sum $i
- }
- set sum2 0
- for {set i 0} {$i < 500} {incr i} {
- incr sum2 [r lpop mylist]
- incr sum2 [r rpop mylist]
- }
- expr $sum == $sum2
- } {1}
-
- test {LRANGE basics} {
- for {set i 0} {$i < 10} {incr i} {
- r rpush mylist $i
- }
- list [r lrange mylist 1 -2] \
- [r lrange mylist -3 -1] \
- [r lrange mylist 4 4]
- } {{1 2 3 4 5 6 7 8} {7 8 9} 4}
-
- test {LRANGE inverted indexes} {
- r lrange mylist 6 2
- } {}
-
- test {LRANGE out of range indexes including the full list} {
- r lrange mylist -1000 1000
- } {0 1 2 3 4 5 6 7 8 9}
-
- test {LRANGE against non existing key} {
- r lrange nosuchkey 0 1
- } {}
-
- test {LTRIM basics} {
- r del mylist
- for {set i 0} {$i < 100} {incr i} {
- r lpush mylist $i
- r ltrim mylist 0 4
- }
- r lrange mylist 0 -1
- } {99 98 97 96 95}
-
- test {LTRIM stress testing} {
- set mylist {}
- set err {}
- for {set i 0} {$i < 20} {incr i} {
- lappend mylist $i
- }
-
- for {set j 0} {$j < 100} {incr j} {
- # Fill the list
- r del mylist
- for {set i 0} {$i < 20} {incr i} {
- r rpush mylist $i
- }
- # Trim at random
- set a [randomInt 20]
- set b [randomInt 20]
- r ltrim mylist $a $b
- if {[r lrange mylist 0 -1] ne [lrange $mylist $a $b]} {
- set err "[r lrange mylist 0 -1] != [lrange $mylist $a $b]"
- break
- }
- }
- set _ $err
- } {}
-
- test {LSET} {
- r del mylist
- foreach x {99 98 97 96 95} {
- r rpush mylist $x
- }
- r lset mylist 1 foo
- r lset mylist -1 bar
- r lrange mylist 0 -1
- } {99 foo 97 96 bar}
-
- test {LSET out of range index} {
- catch {r lset mylist 10 foo} err
- format $err
- } {ERR*range*}
-
- test {LSET against non existing key} {
- catch {r lset nosuchkey 10 foo} err
- format $err
- } {ERR*key*}
-
- test {LSET against non list value} {
- r set nolist foobar
- catch {r lset nolist 0 foo} err
- format $err
- } {ERR*value*}
-
- test {LREM, remove all the occurrences} {
- r flushdb
- r rpush mylist foo
- r rpush mylist bar
- r rpush mylist foobar
- r rpush mylist foobared
- r rpush mylist zap
- r rpush mylist bar
- r rpush mylist test
- r rpush mylist foo
- set res [r lrem mylist 0 bar]
- list [r lrange mylist 0 -1] $res
- } {{foo foobar foobared zap test foo} 2}
-
- test {LREM, remove the first occurrence} {
- set res [r lrem mylist 1 foo]
- list [r lrange mylist 0 -1] $res
- } {{foobar foobared zap test foo} 1}
-
- test {LREM, remove non existing element} {
- set res [r lrem mylist 1 nosuchelement]
- list [r lrange mylist 0 -1] $res
- } {{foobar foobared zap test foo} 0}
-
- test {LREM, starting from tail with negative count} {
- r flushdb
- r rpush mylist foo
- r rpush mylist bar
- r rpush mylist foobar
- r rpush mylist foobared
- r rpush mylist zap
- r rpush mylist bar
- r rpush mylist test
- r rpush mylist foo
- r rpush mylist foo
- set res [r lrem mylist -1 bar]
- list [r lrange mylist 0 -1] $res
- } {{foo bar foobar foobared zap test foo foo} 1}
-
- test {LREM, starting from tail with negative count (2)} {
- set res [r lrem mylist -2 foo]
- list [r lrange mylist 0 -1] $res
- } {{foo bar foobar foobared zap test} 2}
-
- test {LREM, deleting objects that may be encoded as integers} {
- r lpush myotherlist 1
- r lpush myotherlist 2
- r lpush myotherlist 3
- r lrem myotherlist 1 2
- r llen myotherlist
- } {2}
-}
+++ /dev/null
-start_server default.conf {} {
- test {SADD, SCARD, SISMEMBER, SMEMBERS basics} {
- r sadd myset foo
- r sadd myset bar
- list [r scard myset] [r sismember myset foo] \
- [r sismember myset bar] [r sismember myset bla] \
- [lsort [r smembers myset]]
- } {2 1 1 0 {bar foo}}
-
- test {SADD adding the same element multiple times} {
- r sadd myset foo
- r sadd myset foo
- r sadd myset foo
- r scard myset
- } {2}
-
- test {SADD against non set} {
- r lpush mylist foo
- catch {r sadd mylist bar} err
- format $err
- } {ERR*kind*}
-
- test {SREM basics} {
- r sadd myset ciao
- r srem myset foo
- lsort [r smembers myset]
- } {bar ciao}
-
- test {Mass SADD and SINTER with two sets} {
- for {set i 0} {$i < 1000} {incr i} {
- r sadd set1 $i
- r sadd set2 [expr $i+995]
- }
- lsort [r sinter set1 set2]
- } {995 996 997 998 999}
-
- test {SUNION with two sets} {
- lsort [r sunion set1 set2]
- } [lsort -uniq "[r smembers set1] [r smembers set2]"]
-
- test {SINTERSTORE with two sets} {
- r sinterstore setres set1 set2
- lsort [r smembers setres]
- } {995 996 997 998 999}
-
- test {SINTERSTORE with two sets, after a DEBUG RELOAD} {
- r debug reload
- r sinterstore setres set1 set2
- lsort [r smembers setres]
- } {995 996 997 998 999}
-
- test {SUNIONSTORE with two sets} {
- r sunionstore setres set1 set2
- lsort [r smembers setres]
- } [lsort -uniq "[r smembers set1] [r smembers set2]"]
-
- test {SUNIONSTORE against non existing keys} {
- r set setres xxx
- list [r sunionstore setres foo111 bar222] [r exists setres]
- } {0 0}
-
- test {SINTER against three sets} {
- r sadd set3 999
- r sadd set3 995
- r sadd set3 1000
- r sadd set3 2000
- lsort [r sinter set1 set2 set3]
- } {995 999}
-
- test {SINTERSTORE with three sets} {
- r sinterstore setres set1 set2 set3
- lsort [r smembers setres]
- } {995 999}
-
- test {SUNION with non existing keys} {
- lsort [r sunion nokey1 set1 set2 nokey2]
- } [lsort -uniq "[r smembers set1] [r smembers set2]"]
-
- test {SDIFF with two sets} {
- for {set i 5} {$i < 1000} {incr i} {
- r sadd set4 $i
- }
- lsort [r sdiff set1 set4]
- } {0 1 2 3 4}
-
- test {SDIFF with three sets} {
- r sadd set5 0
- lsort [r sdiff set1 set4 set5]
- } {1 2 3 4}
-
- test {SDIFFSTORE with three sets} {
- r sdiffstore sres set1 set4 set5
- lsort [r smembers sres]
- } {1 2 3 4}
-
- test {SPOP basics} {
- r del myset
- r sadd myset 1
- r sadd myset 2
- r sadd myset 3
- list [lsort [list [r spop myset] [r spop myset] [r spop myset]]] [r scard myset]
- } {{1 2 3} 0}
-
- test {SRANDMEMBER} {
- r del myset
- r sadd myset a
- r sadd myset b
- r sadd myset c
- unset -nocomplain myset
- array set myset {}
- for {set i 0} {$i < 100} {incr i} {
- set myset([r srandmember myset]) 1
- }
- lsort [array names myset]
- } {a b c}
-
- test {SMOVE basics} {
- r sadd myset1 a
- r sadd myset1 b
- r sadd myset1 c
- r sadd myset2 x
- r sadd myset2 y
- r sadd myset2 z
- r smove myset1 myset2 a
- list [lsort [r smembers myset2]] [lsort [r smembers myset1]]
- } {{a x y z} {b c}}
-
- test {SMOVE non existing key} {
- list [r smove myset1 myset2 foo] [lsort [r smembers myset2]] [lsort [r smembers myset1]]
- } {0 {a x y z} {b c}}
-
- test {SMOVE non existing src set} {
- list [r smove noset myset2 foo] [lsort [r smembers myset2]]
- } {0 {a x y z}}
-
- test {SMOVE non existing dst set} {
- list [r smove myset2 myset3 y] [lsort [r smembers myset2]] [lsort [r smembers myset3]]
- } {1 {a x z} y}
-
- test {SMOVE wrong src key type} {
- r set x 10
- catch {r smove x myset2 foo} err
- format $err
- } {ERR*}
-
- test {SMOVE wrong dst key type} {
- r set x 10
- catch {r smove myset2 x foo} err
- format $err
- } {ERR*}
-}
+++ /dev/null
-start_server default.conf {} {
- test {ZSET basic ZADD and score update} {
- r zadd ztmp 10 x
- r zadd ztmp 20 y
- r zadd ztmp 30 z
- set aux1 [r zrange ztmp 0 -1]
- r zadd ztmp 1 y
- set aux2 [r zrange ztmp 0 -1]
- list $aux1 $aux2
- } {{x y z} {y x z}}
-
- test {ZCARD basics} {
- r zcard ztmp
- } {3}
-
- test {ZCARD non existing key} {
- r zcard ztmp-blabla
- } {0}
-
- test {ZRANK basics} {
- r zadd zranktmp 10 x
- r zadd zranktmp 20 y
- r zadd zranktmp 30 z
- list [r zrank zranktmp x] [r zrank zranktmp y] [r zrank zranktmp z]
- } {0 1 2}
-
- test {ZREVRANK basics} {
- list [r zrevrank zranktmp x] [r zrevrank zranktmp y] [r zrevrank zranktmp z]
- } {2 1 0}
-
- test {ZRANK - after deletion} {
- r zrem zranktmp y
- list [r zrank zranktmp x] [r zrank zranktmp z]
- } {0 1}
-
- test {ZSCORE} {
- set aux {}
- set err {}
- for {set i 0} {$i < 1000} {incr i} {
- set score [expr rand()]
- lappend aux $score
- r zadd zscoretest $score $i
- }
- for {set i 0} {$i < 1000} {incr i} {
- if {[r zscore zscoretest $i] != [lindex $aux $i]} {
- set err "Expected score was [lindex $aux $i] but got [r zscore zscoretest $i] for element $i"
- break
- }
- }
- set _ $err
- } {}
-
- test {ZSCORE after a DEBUG RELOAD} {
- set aux {}
- set err {}
- r del zscoretest
- for {set i 0} {$i < 1000} {incr i} {
- set score [expr rand()]
- lappend aux $score
- r zadd zscoretest $score $i
- }
- r debug reload
- for {set i 0} {$i < 1000} {incr i} {
- if {[r zscore zscoretest $i] != [lindex $aux $i]} {
- set err "Expected score was [lindex $aux $i] but got [r zscore zscoretest $i] for element $i"
- break
- }
- }
- set _ $err
- } {}
-
- test {ZRANGE and ZREVRANGE basics} {
- list [r zrange ztmp 0 -1] [r zrevrange ztmp 0 -1] \
- [r zrange ztmp 1 -1] [r zrevrange ztmp 1 -1]
- } {{y x z} {z x y} {x z} {x y}}
-
- test {ZRANGE WITHSCORES} {
- r zrange ztmp 0 -1 withscores
- } {y 1 x 10 z 30}
-
- test {ZSETs stress tester - sorting is working well?} {
- set delta 0
- for {set test 0} {$test < 2} {incr test} {
- unset -nocomplain auxarray
- array set auxarray {}
- set auxlist {}
- r del myzset
- for {set i 0} {$i < 1000} {incr i} {
- if {$test == 0} {
- set score [expr rand()]
- } else {
- set score [expr int(rand()*10)]
- }
- set auxarray($i) $score
- r zadd myzset $score $i
- # Random update
- if {[expr rand()] < .2} {
- set j [expr int(rand()*1000)]
- if {$test == 0} {
- set score [expr rand()]
- } else {
- set score [expr int(rand()*10)]
- }
- set auxarray($j) $score
- r zadd myzset $score $j
- }
- }
- foreach {item score} [array get auxarray] {
- lappend auxlist [list $score $item]
- }
- set sorted [lsort -command zlistAlikeSort $auxlist]
- set auxlist {}
- foreach x $sorted {
- lappend auxlist [lindex $x 1]
- }
- set fromredis [r zrange myzset 0 -1]
- set delta 0
- for {set i 0} {$i < [llength $fromredis]} {incr i} {
- if {[lindex $fromredis $i] != [lindex $auxlist $i]} {
- incr delta
- }
- }
- }
- format $delta
- } {0}
-
- test {ZINCRBY - can create a new sorted set} {
- r del zset
- r zincrby zset 1 foo
- list [r zrange zset 0 -1] [r zscore zset foo]
- } {foo 1}
-
- test {ZINCRBY - increment and decrement} {
- r zincrby zset 2 foo
- r zincrby zset 1 bar
- set v1 [r zrange zset 0 -1]
- r zincrby zset 10 bar
- r zincrby zset -5 foo
- r zincrby zset -5 bar
- set v2 [r zrange zset 0 -1]
- list $v1 $v2 [r zscore zset foo] [r zscore zset bar]
- } {{bar foo} {foo bar} -2 6}
-
- test {ZRANGEBYSCORE and ZCOUNT basics} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- list [r zrangebyscore zset 2 4] [r zrangebyscore zset (2 (4] \
- [r zcount zset 2 4] [r zcount zset (2 (4]
- } {{b c d} c 3 1}
-
- test {ZRANGEBYSCORE withscores} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- r zrangebyscore zset 2 4 withscores
- } {b 2 c 3 d 4}
-
- test {ZRANGEBYSCORE fuzzy test, 100 ranges in 1000 elements sorted set} {
- set err {}
- r del zset
- for {set i 0} {$i < 1000} {incr i} {
- r zadd zset [expr rand()] $i
- }
- for {set i 0} {$i < 100} {incr i} {
- set min [expr rand()]
- set max [expr rand()]
- if {$min > $max} {
- set aux $min
- set min $max
- set max $aux
- }
- set low [r zrangebyscore zset -inf $min]
- set ok [r zrangebyscore zset $min $max]
- set high [r zrangebyscore zset $max +inf]
- set lowx [r zrangebyscore zset -inf ($min]
- set okx [r zrangebyscore zset ($min ($max]
- set highx [r zrangebyscore zset ($max +inf]
-
- if {[r zcount zset -inf $min] != [llength $low]} {
- append err "Error, len does not match zcount\n"
- }
- if {[r zcount zset $min $max] != [llength $ok]} {
- append err "Error, len does not match zcount\n"
- }
- if {[r zcount zset $max +inf] != [llength $high]} {
- append err "Error, len does not match zcount\n"
- }
- if {[r zcount zset -inf ($min] != [llength $lowx]} {
- append err "Error, len does not match zcount\n"
- }
- if {[r zcount zset ($min ($max] != [llength $okx]} {
- append err "Error, len does not match zcount\n"
- }
- if {[r zcount zset ($max +inf] != [llength $highx]} {
- append err "Error, len does not match zcount\n"
- }
-
- foreach x $low {
- set score [r zscore zset $x]
- if {$score > $min} {
- append err "Error, score for $x is $score > $min\n"
- }
- }
- foreach x $lowx {
- set score [r zscore zset $x]
- if {$score >= $min} {
- append err "Error, score for $x is $score >= $min\n"
- }
- }
- foreach x $ok {
- set score [r zscore zset $x]
- if {$score < $min || $score > $max} {
- append err "Error, score for $x is $score outside $min-$max range\n"
- }
- }
- foreach x $okx {
- set score [r zscore zset $x]
- if {$score <= $min || $score >= $max} {
- append err "Error, score for $x is $score outside $min-$max open range\n"
- }
- }
- foreach x $high {
- set score [r zscore zset $x]
- if {$score < $max} {
- append err "Error, score for $x is $score < $max\n"
- }
- }
- foreach x $highx {
- set score [r zscore zset $x]
- if {$score <= $max} {
- append err "Error, score for $x is $score <= $max\n"
- }
- }
- }
- set _ $err
- } {}
-
- test {ZRANGEBYSCORE with LIMIT} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- list \
- [r zrangebyscore zset 0 10 LIMIT 0 2] \
- [r zrangebyscore zset 0 10 LIMIT 2 3] \
- [r zrangebyscore zset 0 10 LIMIT 2 10] \
- [r zrangebyscore zset 0 10 LIMIT 20 10]
- } {{a b} {c d e} {c d e} {}}
-
- test {ZRANGEBYSCORE with LIMIT and withscores} {
- r del zset
- r zadd zset 10 a
- r zadd zset 20 b
- r zadd zset 30 c
- r zadd zset 40 d
- r zadd zset 50 e
- r zrangebyscore zset 20 50 LIMIT 2 3 withscores
- } {d 40 e 50}
-
- test {ZREMRANGEBYSCORE basics} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- list [r zremrangebyscore zset 2 4] [r zrange zset 0 -1]
- } {3 {a e}}
-
- test {ZREMRANGEBYSCORE from -inf to +inf} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- list [r zremrangebyscore zset -inf +inf] [r zrange zset 0 -1]
- } {5 {}}
-
- test {ZREMRANGEBYRANK basics} {
- r del zset
- r zadd zset 1 a
- r zadd zset 2 b
- r zadd zset 3 c
- r zadd zset 4 d
- r zadd zset 5 e
- list [r zremrangebyrank zset 1 3] [r zrange zset 0 -1]
- } {3 {a e}}
-
- test {ZUNION against non-existing key doesn't set destination} {
- r del zseta
- list [r zunion dst_key 1 zseta] [r exists dst_key]
- } {0 0}
-
- test {ZUNION basics} {
- r del zseta zsetb zsetc
- r zadd zseta 1 a
- r zadd zseta 2 b
- r zadd zseta 3 c
- r zadd zsetb 1 b
- r zadd zsetb 2 c
- r zadd zsetb 3 d
- list [r zunion zsetc 2 zseta zsetb] [r zrange zsetc 0 -1 withscores]
- } {4 {a 1 b 3 d 3 c 5}}
-
- test {ZUNION with weights} {
- list [r zunion zsetc 2 zseta zsetb weights 2 3] [r zrange zsetc 0 -1 withscores]
- } {4 {a 2 b 7 d 9 c 12}}
-
- test {ZUNION with AGGREGATE MIN} {
- list [r zunion zsetc 2 zseta zsetb aggregate min] [r zrange zsetc 0 -1 withscores]
- } {4 {a 1 b 1 c 2 d 3}}
-
- test {ZUNION with AGGREGATE MAX} {
- list [r zunion zsetc 2 zseta zsetb aggregate max] [r zrange zsetc 0 -1 withscores]
- } {4 {a 1 b 2 c 3 d 3}}
-
- test {ZINTER basics} {
- list [r zinter zsetc 2 zseta zsetb] [r zrange zsetc 0 -1 withscores]
- } {2 {b 3 c 5}}
-
- test {ZINTER with weights} {
- list [r zinter zsetc 2 zseta zsetb weights 2 3] [r zrange zsetc 0 -1 withscores]
- } {2 {b 7 c 12}}
-
- test {ZINTER with AGGREGATE MIN} {
- list [r zinter zsetc 2 zseta zsetb aggregate min] [r zrange zsetc 0 -1 withscores]
- } {2 {b 1 c 2}}
-
- test {ZINTER with AGGREGATE MAX} {
- list [r zinter zsetc 2 zseta zsetb aggregate max] [r zrange zsetc 0 -1 withscores]
- } {2 {b 2 c 3}}
-
- test {ZSETs skiplist implementation backlink consistency test} {
- set diff 0
- set elements 10000
- for {set j 0} {$j < $elements} {incr j} {
- r zadd myzset [expr rand()] "Element-$j"
- r zrem myzset "Element-[expr int(rand()*$elements)]"
- }
- set l1 [r zrange myzset 0 -1]
- set l2 [r zrevrange myzset 0 -1]
- for {set j 0} {$j < [llength $l1]} {incr j} {
- if {[lindex $l1 $j] ne [lindex $l2 end-$j]} {
- incr diff
- }
- }
- format $diff
- } {0}
-
- test {ZSETs ZRANK augmented skip list stress testing} {
- set err {}
- r del myzset
- for {set k 0} {$k < 10000} {incr k} {
- set i [expr {$k%1000}]
- if {[expr rand()] < .2} {
- r zrem myzset $i
- } else {
- set score [expr rand()]
- r zadd myzset $score $i
- }
- set card [r zcard myzset]
- if {$card > 0} {
- set index [randomInt $card]
- set ele [lindex [r zrange myzset $index $index] 0]
- set rank [r zrank myzset $ele]
- if {$rank != $index} {
- set err "$ele RANK is wrong! ($rank != $index)"
- break
- }
- }
- }
- set _ $err
- } {}
-}
--- /dev/null
+# Redis configuration file example
+
+# Note on units: when a memory size is needed, it is possible to specify
+# it in the usual form of 1k 5GB 4M and so forth:
+#
+# 1k => 1000 bytes
+# 1kb => 1024 bytes
+# 1m => 1000000 bytes
+# 1mb => 1024*1024 bytes
+# 1g => 1000000000 bytes
+# 1gb => 1024*1024*1024 bytes
+#
+# units are case insensitive so 1GB 1Gb 1gB are all the same.
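The unit table above can be sketched in a few lines (Python here, purely illustrative; the real parsing happens in Redis's C config loader):

```python
def parse_memory(value: str) -> int:
    """Parse a Redis-style memory size: bare suffixes are powers of 10,
    'b'-suffixed ones are powers of 2. Case insensitive."""
    units = {
        "k": 10**3, "kb": 2**10,
        "m": 10**6, "mb": 2**20,
        "g": 10**9, "gb": 2**30,
    }
    v = value.strip().lower()
    # Try the longest suffixes first so "kb" is not matched as bare "b"-less "k".
    for suffix in sorted(units, key=len, reverse=True):
        if v.endswith(suffix):
            return int(v[: -len(suffix)]) * units[suffix]
    return int(v)

print(parse_memory("1k"))   # 1000
print(parse_memory("1GB"))  # 1073741824
```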
+
+# By default Redis does not run as a daemon. Use 'yes' if you need it.
+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+daemonize no
+
+# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
+# default. You can specify a custom pid file location here.
+pidfile redis.pid
+
+# Accept connections on the specified port, default is 6379
+port 6379
+
+# If you want you can bind a single interface; if the bind option is not
+# specified, all the interfaces will listen for incoming connections.
+#
+# bind 127.0.0.1
+
+# Close the connection after a client is idle for N seconds (0 to disable)
+timeout 300
+
+# Set server verbosity to 'debug'
+# it can be one of:
+# debug (a lot of information, useful for development/testing)
+# verbose (lots of rarely useful info, but not a mess like the debug level)
+# notice (moderately verbose, what you want in production probably)
+# warning (only very important / critical messages are logged)
+loglevel verbose
+
+# Specify the log file name. Also 'stdout' can be used to force
+# Redis to log on the standard output. Note that if you use standard
+# output for logging but daemonize, logs will be sent to /dev/null
+logfile stdout
+
+# Set the number of databases. The default database is DB 0, you can select
+# a different one on a per-connection basis using SELECT <dbid> where
+# dbid is a number between 0 and 'databases'-1
+databases 16
+
+################################ SNAPSHOTTING #################################
+#
+# Save the DB on disk:
+#
+# save <seconds> <changes>
+#
+# Will save the DB if both the given number of seconds and the given
+# number of write operations against the DB occurred.
+#
+# In the example below the behaviour will be to save:
+# after 900 sec (15 min) if at least 1 key changed
+# after 300 sec (5 min) if at least 10 keys changed
+# after 60 sec if at least 10000 keys changed
+#
+# Note: you can disable saving entirely by commenting out all the "save" lines.
+
+save 900 1
+save 300 10
+save 60 10000
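The save-point rule described above (trigger a snapshot when *both* the elapsed time and the change count of any line are reached) can be sketched like so (an illustrative Python model, not Redis's actual C code):

```python
def should_save(save_points, elapsed_seconds, changes):
    """Return True if any 'save <seconds> <changes>' point is satisfied,
    i.e. at least <seconds> have elapsed AND at least <changes> occurred."""
    return any(elapsed_seconds >= s and changes >= c for s, c in save_points)

# The three default save points from the configuration above.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

should_save(SAVE_POINTS, 70, 20000)  # True: the "save 60 10000" point matches
should_save(SAVE_POINTS, 30, 5)      # False: no point is satisfied yet
```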
+
+# Compress string objects using LZF when dumping .rdb databases?
+# By default that's set to 'yes' as it's almost always a win.
+# If you want to save some CPU in the saving child set it to 'no' but
+# the dataset will likely be bigger if you have compressible values or keys.
+rdbcompression yes
+
+# The filename where to dump the DB
+dbfilename dump.rdb
+
+# The working directory.
+#
+# The DB will be written inside this directory, with the filename specified
+# above using the 'dbfilename' configuration directive.
+#
+# Also the Append Only File will be created inside this directory.
+#
+# Note that you must specify a directory here, not a file name.
+dir ./tests/tmp
+
+################################# REPLICATION #################################
+
+# Master-Slave replication. Use slaveof to make a Redis instance a copy of
+# another Redis server. Note that the configuration is local to the slave
+# so for example it is possible to configure the slave to save the DB with a
+# different interval, or to listen to another port, and so on.
+#
+# slaveof <masterip> <masterport>
+
+# If the master is password protected (using the "requirepass" configuration
+# directive below) it is possible to tell the slave to authenticate before
+# starting the replication synchronization process, otherwise the master will
+# refuse the slave request.
+#
+# masterauth <master-password>
+
+################################## SECURITY ###################################
+
+# Require clients to issue AUTH <PASSWORD> before processing any other
+# commands. This might be useful in environments in which you do not trust
+# others with access to the host running redis-server.
+#
+# This should stay commented out for backward compatibility and because most
+# people do not need auth (e.g. they run their own servers).
+#
+# Warning: since Redis is pretty fast an outside user can try up to
+# 150k passwords per second against a good box. This means that you should
+# use a very strong password otherwise it will be very easy to break.
+#
+# requirepass foobared
+
+################################### LIMITS ####################################
+
+# Set the max number of connected clients at the same time. By default there
+# is no limit, and it's up to the number of file descriptors the Redis process
+# is able to open. The special value '0' means no limits.
+# Once the limit is reached Redis will close all the new connections, sending
+# an error 'max number of clients reached'.
+#
+# maxclients 128
+
+# Don't use more memory than the specified amount of bytes.
+# When the memory limit is reached Redis will try to remove keys with an
+# EXPIRE set. It will try to start freeing keys that are going to expire
+# in little time and preserve keys with a longer time to live.
+# Redis will also try to remove objects from free lists if possible.
+#
+# If all this fails, Redis will start to reply with errors to commands
+# that will use more memory, like SET, LPUSH, and so on, and will continue
+# to reply to most read-only commands like GET.
+#
+# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
+# 'state' server or cache, not as a real DB. When Redis is used as a real
+# database the memory usage will grow over the weeks, it will be obvious if
+# it is going to use too much memory in the long run, and you'll have the time
+# to upgrade. With maxmemory after the limit is reached you'll start to get
+# errors for write operations, and this may even lead to DB inconsistency.
+#
+# maxmemory <bytes>
+
+############################## APPEND ONLY MODE ###############################
+
+# By default Redis asynchronously dumps the dataset on disk. If you can live
+# with the idea that the latest records will be lost if something like a crash
+# happens, this is the preferred way to run Redis. If instead you care a lot
+# about your data and can't afford that even a single record may be lost, you
+# should enable the append only mode: when this mode is enabled Redis will
+# append every write operation received to the file appendonly.aof. This file
+# will be read on startup in order to rebuild the full dataset in memory.
+#
+# Note that you can have both the async dumps and the append only file if you
+# like (you have to comment the "save" statements above to disable the dumps).
+# Still if append only mode is enabled Redis will load the data from the
+# log file at startup ignoring the dump.rdb file.
+#
+# IMPORTANT: See BGREWRITEAOF to learn how to rewrite the append
+# log file in the background when it gets too big.
+
+appendonly no
+
+# The name of the append only file (default: "appendonly.aof")
+# appendfilename appendonly.aof
+
+# The fsync() call tells the Operating System to actually write data on disk
+# instead of waiting for more data in the output buffer. Some OSes will really
+# flush data on disk, some others will just try to do it ASAP.
+#
+# Redis supports three different modes:
+#
+# no: don't fsync, just let the OS flush the data when it wants. Faster.
+# always: fsync after every write to the append only log. Slow, safest.
+# everysec: fsync only if one second passed since the last fsync. Compromise.
+#
+# The default is "everysec", which is usually the right compromise between
+# speed and data safety. It's up to you to understand if you can relax this to
+# "no", which will let the operating system flush the output buffer when
+# it wants, for better performance (but if you can live with the idea of
+# some data loss consider the default persistence mode that's snapshotting),
+# or on the contrary, use "always" that's very slow but a bit safer than
+# everysec.
+#
+# If unsure, use "everysec".
+
+# appendfsync always
+appendfsync everysec
+# appendfsync no
+
+################################ VIRTUAL MEMORY ###############################
+
+# Virtual Memory allows Redis to work with datasets bigger than the actual
+# amount of RAM needed to hold the whole dataset in memory.
+# In order to do so frequently used keys are kept in memory while the other keys
+# are swapped into a swap file, similarly to what operating systems do
+# with memory pages.
+#
+# To enable VM just set 'vm-enabled' to yes, and set the following three
+# VM parameters accordingly to your needs.
+
+vm-enabled no
+# vm-enabled yes
+
+# This is the path of the Redis swap file. As you can guess, swap files
+# can't be shared by different Redis instances, so make sure to use a distinct
+# swap file for every redis process you are running. Redis will complain if the
+# swap file is already in use.
+#
+# The best kind of storage for the Redis swap file (that's accessed at random)
+# is a Solid State Disk (SSD).
+#
+# *** WARNING *** if you are using shared hosting the default of putting
+# the swap file under /tmp is not secure. Create a dir with access granted
+# only to the Redis user and configure Redis to create the swap file there.
+vm-swap-file /tmp/redis.swap
+
+# vm-max-memory configures the VM to use at max the specified amount of
+# RAM. Everything that does not fit will be swapped on disk *if* possible, that
+# is, if there is still enough contiguous space in the swap file.
+#
+# With vm-max-memory 0 the system will swap everything it can. Not a good
+# default: just specify the max amount of RAM in bytes, but it's
+# better to leave some margin. For instance specify an amount of RAM
+# that's more or less between 60 and 80% of your free RAM.
+vm-max-memory 0
+
+# The Redis swap file is split into pages. An object can be saved using multiple
+# contiguous pages, but pages can't be shared between different objects.
+# So if your page is too big, small objects swapped out on disk will waste
+# a lot of space. If your page is too small, there is less space in the swap
+# file (assuming you configured the same number of total swap file pages).
+#
+# If you use a lot of small objects, use a page size of 64 or 32 bytes.
+# If you use a lot of big objects, use a bigger page size.
+# If unsure, use the default :)
+vm-page-size 32
+
+# Number of total memory pages in the swap file.
+# Given that the page table (a bitmap of free/used pages) is kept in memory,
+# every 8 pages on disk will consume 1 byte of RAM.
+#
+# The total swap size is vm-page-size * vm-pages
+#
+# With the default of 32-byte memory pages and 134217728 pages Redis will
+# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
+#
+# It's better to use the smallest acceptable value for your application,
+# but the default is large in order to work in most conditions.
+vm-pages 134217728
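The sizing arithmetic in the comment above checks out; a quick sanity check (Python, not part of Redis):

```python
vm_page_size = 32        # bytes per page (vm-page-size above)
vm_pages = 134217728     # total pages in the swap file (vm-pages above)

swap_bytes = vm_page_size * vm_pages   # total swap file size
page_table_bytes = vm_pages // 8       # 1 bit of RAM per on-disk page

print(swap_bytes // 2**30, "GB swap file")        # 4 GB swap file
print(page_table_bytes // 2**20, "MB page table") # 16 MB page table
```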
+
+# Max number of VM I/O threads running at the same time.
+# These threads are used to read/write data from/to the swap file, and since
+# they also encode and decode objects from disk to memory or the reverse, a
+# bigger number of threads can help with big objects even if they can't help
+# with I/O itself, as the physical device may not be able to cope with many
+# read/write operations at the same time.
+#
+# The special value of 0 turns off threaded I/O and enables the blocking
+# Virtual Memory implementation.
+vm-max-threads 4
+
+############################### ADVANCED CONFIG ###############################
+
+# Glue small output buffers together in order to send small replies in a
+# single TCP packet. Uses a bit more CPU but most of the time it is a win
+# in terms of number of queries per second. Use 'yes' if unsure.
+glueoutputbuf yes
+
+# Hashes are encoded in a special way (much more memory efficient) when they
+# have at max a given number of elements, and the biggest element does not
+# exceed a given threshold. You can configure these limits with the following
+# configuration directives.
+hash-max-zipmap-entries 64
+hash-max-zipmap-value 512
+
+# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
+# order to help rehashing the main Redis hash table (the one mapping top-level
+# keys to values). The hash table implementation Redis uses (see dict.c)
+# performs a lazy rehashing: the more operations you run against a hash table
+# that is rehashing, the more rehashing "steps" are performed, so if the
+# server is idle the rehashing is never complete and some more memory is used
+# by the hash table.
+#
+# The default is to use this millisecond 10 times every second in order to
+# actively rehash the main dictionaries, freeing memory when possible.
+#
+# If unsure:
+# use "activerehashing no" if you have hard latency requirements and it is
+# not a good thing in your environment that Redis can reply from time to time
+# to queries with a 2 milliseconds delay.
+#
+# use "activerehashing yes" if you don't have such hard requirements but
+# want to free memory asap when possible.
+activerehashing yes
+
+################################## INCLUDES ###################################
+
+# Include one or more other config files here. This is useful if you
+# have a standard template that goes to all Redis servers but also need
+# to customize a few per-server settings. Include files can include
+# other files, so use this wisely.
+#
+# include /path/to/local.conf
+# include /path/to/other.conf
--- /dev/null
+# Tcl client library - used by test-redis.tcl script for now
+# Copyright (C) 2009 Salvatore Sanfilippo
+# Released under the BSD license like Redis itself
+#
+# Example usage:
+#
+# set r [redis 127.0.0.1 6379]
+# $r lpush mylist foo
+# $r lpush mylist bar
+# $r lrange mylist 0 -1
+# $r close
+#
+# Non blocking usage example:
+#
+# proc handlePong {r type reply} {
+# puts "PONG $type '$reply'"
+# if {$reply ne "PONG"} {
+# $r ping [list handlePong]
+# }
+# }
+#
+# set r [redis]
+# $r blocking 0
+# $r ping [list handlePong]
+#
+# vwait forever
+
+package require Tcl 8.5
+package provide redis 0.1
+
+namespace eval redis {}
+set ::redis::id 0
+array set ::redis::fd {}
+array set ::redis::blocking {}
+array set ::redis::callback {}
+array set ::redis::state {} ;# State in non-blocking reply reading
+array set ::redis::statestack {} ;# Stack of states, for nested mbulks
+array set ::redis::bulkarg {}
+array set ::redis::multibulkarg {}
+
+# Flag commands requiring last argument as a bulk write operation
+foreach redis_bulk_cmd {
+ set setnx rpush lpush lset lrem sadd srem sismember echo getset smove zadd zrem zscore zincrby append zrank zrevrank hget hdel hexists setex
+} {
+ set ::redis::bulkarg($redis_bulk_cmd) {}
+}
+
+# Flag commands whose arguments must be sent as a multi-bulk write operation
+foreach redis_multibulk_cmd {
+ mset msetnx hset hsetnx hmset hmget
+} {
+ set ::redis::multibulkarg($redis_multibulk_cmd) {}
+}
+
+unset redis_bulk_cmd
+unset redis_multibulk_cmd
+
+proc redis {{server 127.0.0.1} {port 6379}} {
+ set fd [socket $server $port]
+ fconfigure $fd -translation binary
+ set id [incr ::redis::id]
+ set ::redis::fd($id) $fd
+ set ::redis::blocking($id) 1
+ ::redis::redis_reset_state $id
+ interp alias {} ::redis::redisHandle$id {} ::redis::__dispatch__ $id
+}
+
+proc ::redis::__dispatch__ {id method args} {
+ set fd $::redis::fd($id)
+ set blocking $::redis::blocking($id)
+ if {$blocking == 0} {
+ if {[llength $args] == 0} {
+ error "Please provide a callback in non-blocking mode"
+ }
+ set callback [lindex $args end]
+ set args [lrange $args 0 end-1]
+ }
+ if {[info command ::redis::__method__$method] eq {}} {
+ if {[info exists ::redis::bulkarg($method)]} {
+ set cmd "$method "
+ append cmd [join [lrange $args 0 end-1]]
+ append cmd " [string length [lindex $args end]]\r\n"
+ append cmd [lindex $args end]
+ ::redis::redis_writenl $fd $cmd
+ } elseif {[info exists ::redis::multibulkarg($method)]} {
+ set cmd "*[expr {[llength $args]+1}]\r\n"
+ append cmd "$[string length $method]\r\n$method\r\n"
+ foreach a $args {
+ append cmd "$[string length $a]\r\n$a\r\n"
+ }
+ ::redis::redis_write $fd $cmd
+ flush $fd
+ } else {
+ set cmd "$method "
+ append cmd [join $args]
+ ::redis::redis_writenl $fd $cmd
+ }
+ if {$blocking} {
+ ::redis::redis_read_reply $fd
+ } else {
+ # Every well formed reply read will pop an element from this
+ # list and use it as a callback. So pipelining is supported
+ # in non blocking mode.
+ lappend ::redis::callback($id) $callback
+ fileevent $fd readable [list ::redis::redis_readable $fd $id]
+ }
+ } else {
+ uplevel 1 [list ::redis::__method__$method $id $fd] $args
+ }
+}
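The multi-bulk encoding that `__dispatch__` builds for commands like MSET and HSET (`*<argc>` followed by `$<len>`-prefixed arguments, each CRLF-terminated) can be sketched outside Tcl as well. This is an illustrative Python model of the wire format, not part of the library:

```python
def encode_multibulk(*args: str) -> bytes:
    """Encode a command in the Redis multi-bulk (unified) wire format:
    *<argc>\\r\\n then, per argument, $<len>\\r\\n<arg>\\r\\n."""
    out = [b"*%d\r\n" % len(args)]
    for a in args:
        data = a.encode()
        out.append(b"$%d\r\n" % len(data))
        out.append(data + b"\r\n")
    return b"".join(out)

print(encode_multibulk("SET", "foo", "bar"))
# b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'
```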
+
+proc ::redis::__method__blocking {id fd val} {
+ set ::redis::blocking($id) $val
+ fconfigure $fd -blocking $val
+}
+
+proc ::redis::__method__close {id fd} {
+ catch {close $fd}
+ catch {unset ::redis::fd($id)}
+ catch {unset ::redis::blocking($id)}
+ catch {unset ::redis::state($id)}
+ catch {unset ::redis::statestack($id)}
+ catch {unset ::redis::callback($id)}
+ catch {interp alias {} ::redis::redisHandle$id {}}
+}
+
+proc ::redis::__method__channel {id fd} {
+ return $fd
+}
+
+proc ::redis::redis_write {fd buf} {
+ puts -nonewline $fd $buf
+}
+
+proc ::redis::redis_writenl {fd buf} {
+ redis_write $fd $buf
+ redis_write $fd "\r\n"
+ flush $fd
+}
+
+proc ::redis::redis_readnl {fd len} {
+ set buf [read $fd $len]
+ read $fd 2 ; # discard CR LF
+ return $buf
+}
+
+proc ::redis::redis_bulk_read {fd} {
+ set count [redis_read_line $fd]
+    if {$count == -1} {return {}}
+ set buf [redis_readnl $fd $count]
+ return $buf
+}
+
+proc ::redis::redis_multi_bulk_read fd {
+ set count [redis_read_line $fd]
+    if {$count == -1} {return {}}
+ set l {}
+ for {set i 0} {$i < $count} {incr i} {
+ lappend l [redis_read_reply $fd]
+ }
+ return $l
+}
+
+proc ::redis::redis_read_line fd {
+ string trim [gets $fd]
+}
+
+proc ::redis::redis_read_reply fd {
+ set type [read $fd 1]
+ switch -exact -- $type {
+ : -
+ + {redis_read_line $fd}
+ - {return -code error [redis_read_line $fd]}
+ $ {redis_bulk_read $fd}
+ * {redis_multi_bulk_read $fd}
+ default {return -code error "Bad protocol, $type as reply type byte"}
+ }
+}
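`redis_read_reply` dispatches on the first byte of each reply: `+` status, `-` error, `:` integer, `$` bulk, `*` multi-bulk. The same logic can be sketched against any file-like byte stream (an illustrative Python model, simplified and not the client library itself):

```python
def read_reply(f):
    """Minimal parser for the Redis reply types, keyed on the first byte."""
    type_byte = f.read(1)
    line = b""
    while not line.endswith(b"\r\n"):   # read one CRLF-terminated line
        line += f.read(1)
    line = line[:-2]
    if type_byte in b"+:":              # status or integer reply
        return line.decode()
    if type_byte == b"-":               # error reply
        raise RuntimeError(line.decode())
    if type_byte == b"$":               # bulk reply; $-1 is the nil reply
        n = int(line)
        if n == -1:
            return None
        data = f.read(n)
        f.read(2)                       # discard trailing CRLF
        return data.decode()
    if type_byte == b"*":               # multi-bulk: n nested replies
        n = int(line)
        return None if n == -1 else [read_reply(f) for _ in range(n)]
    raise RuntimeError(f"Bad protocol, {type_byte!r} as reply type byte")
```

For example, feeding it `b"*2\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"` via `io.BytesIO` yields `["foo", "bar"]`.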
+
+proc ::redis::redis_reset_state id {
+ set ::redis::state($id) [dict create buf {} mbulk -1 bulk -1 reply {}]
+ set ::redis::statestack($id) {}
+}
+
+proc ::redis::redis_call_callback {id type reply} {
+ set cb [lindex $::redis::callback($id) 0]
+ set ::redis::callback($id) [lrange $::redis::callback($id) 1 end]
+ uplevel #0 $cb [list ::redis::redisHandle$id $type $reply]
+ ::redis::redis_reset_state $id
+}
+
+# Read a reply in non-blocking mode.
+proc ::redis::redis_readable {fd id} {
+ if {[eof $fd]} {
+ redis_call_callback $id eof {}
+ ::redis::__method__close $id $fd
+ return
+ }
+ if {[dict get $::redis::state($id) bulk] == -1} {
+ set line [gets $fd]
+ if {$line eq {}} return ;# No complete line available, return
+ switch -exact -- [string index $line 0] {
+ : -
+ + {redis_call_callback $id reply [string range $line 1 end-1]}
+ - {redis_call_callback $id err [string range $line 1 end-1]}
+ $ {
+ dict set ::redis::state($id) bulk \
+ [expr [string range $line 1 end-1]+2]
+ if {[dict get $::redis::state($id) bulk] == 1} {
+ # We got a $-1, hack the state to play well with this.
+ dict set ::redis::state($id) bulk 2
+ dict set ::redis::state($id) buf "\r\n"
+ ::redis::redis_readable $fd $id
+ }
+ }
+ * {
+ dict set ::redis::state($id) mbulk [string range $line 1 end-1]
+ # Handle *-1
+ if {[dict get $::redis::state($id) mbulk] == -1} {
+ redis_call_callback $id reply {}
+ }
+ }
+ default {
+ redis_call_callback $id err \
+                    "Bad protocol, [string index $line 0] as reply type byte"
+ }
+ }
+ } else {
+ set totlen [dict get $::redis::state($id) bulk]
+ set buflen [string length [dict get $::redis::state($id) buf]]
+ set toread [expr {$totlen-$buflen}]
+ set data [read $fd $toread]
+ set nread [string length $data]
+ dict append ::redis::state($id) buf $data
+ # Check if we read a complete bulk reply
+ if {[string length [dict get $::redis::state($id) buf]] ==
+ [dict get $::redis::state($id) bulk]} {
+ if {[dict get $::redis::state($id) mbulk] == -1} {
+ redis_call_callback $id reply \
+ [string range [dict get $::redis::state($id) buf] 0 end-2]
+ } else {
+ dict with ::redis::state($id) {
+ lappend reply [string range $buf 0 end-2]
+ incr mbulk -1
+ set bulk -1
+ }
+ if {[dict get $::redis::state($id) mbulk] == 0} {
+ redis_call_callback $id reply \
+ [dict get $::redis::state($id) reply]
+ }
+ }
+ }
+ }
+}
--- /dev/null
+proc error_and_quit {config_file error} {
+ puts "!!COULD NOT START REDIS-SERVER\n"
+ puts "CONFIGURATION:"
+ puts [exec cat $config_file]
+ puts "\nERROR:"
+ puts [string trim $error]
+ exit 1
+}
+
+proc kill_server config {
+ set pid [dict get $config pid]
+
+ # check for leaks
+ catch {
+ if {[string match {*Darwin*} [exec uname -a]]} {
+ test {Check for memory leaks} {
+ exec leaks $pid
+ } {*0 leaks*}
+ }
+ }
+
+ # kill server and wait for the process to be totally exited
+ exec kill $pid
+ while 1 {
+ # with a non-zero exit status, the process is gone
+ if {[catch {exec ps -p $pid | grep redis-server} result]} {
+ break
+ }
+ after 10
+ }
+}
+
+proc start_server {filename overrides {code undefined}} {
+ set data [split [exec cat "tests/assets/$filename"] "\n"]
+ set config {}
+ foreach line $data {
+ if {[string length $line] > 0 && [string index $line 0] ne "#"} {
+ set elements [split $line " "]
+ set directive [lrange $elements 0 0]
+ set arguments [lrange $elements 1 end]
+ dict set config $directive $arguments
+ }
+ }
+
+ # use a different directory every time a server is started
+ dict set config dir [tmpdir server]
+
+ # start every server on a different port
+ dict set config port [incr ::port]
+
+ # apply overrides from arguments
+ foreach override $overrides {
+ set directive [lrange $override 0 0]
+ set arguments [lrange $override 1 end]
+ dict set config $directive $arguments
+ }
+
+ # write new configuration to temporary file
+ set config_file [tmpfile redis.conf]
+ set fp [open $config_file w+]
+ foreach directive [dict keys $config] {
+ puts -nonewline $fp "$directive "
+ puts $fp [dict get $config $directive]
+ }
+ close $fp
+
+ set stdout [format "%s/%s" [dict get $config "dir"] "stdout"]
+ set stderr [format "%s/%s" [dict get $config "dir"] "stderr"]
+ exec ./redis-server $config_file > $stdout 2> $stderr &
+ after 500
+
+ # check that the server actually started
+ if {[file size $stderr] > 0} {
+ error_and_quit $config_file [exec cat $stderr]
+ }
+
+ set line [exec head -n1 $stdout]
+ if {[string match {*already in use*} $line]} {
+ error_and_quit $config_file $line
+ }
+
+ while 1 {
+ # check that the server actually started and is ready for connections
+ if {[exec cat $stdout | grep "ready to accept" | wc -l] > 0} {
+ break
+ }
+ after 10
+ }
+
+ # find out the pid
+ regexp {^\[(\d+)\]} [exec head -n1 $stdout] _ pid
+
+ # create the client object
+ set host $::host
+ set port $::port
+ if {[dict exists $config bind]} { set host [dict get $config bind] }
+ if {[dict exists $config port]} { set port [dict get $config port] }
+ set client [redis $host $port]
+
+ # select the right db when we don't have to authenticate
+ if {![dict exists $config requirepass]} {
+ $client select 9
+ }
+
+ # setup config dict
+ dict set ret "config" $config_file
+ dict set ret "pid" $pid
+ dict set ret "stdout" $stdout
+ dict set ret "stderr" $stderr
+ dict set ret "client" $client
+
+ if {$code ne "undefined"} {
+ # append the client to the client stack
+ lappend ::clients $client
+
+ # execute provided block
+ catch { uplevel 1 $code } err
+
+ # pop the client object
+ set ::clients [lrange $::clients 0 end-1]
+
+ kill_server $ret
+
+ if {[string length $err] > 0} {
+ puts "Error executing the suite, aborting..."
+ puts $err
+ exit 1
+ }
+ } else {
+ set _ $ret
+ }
+}
--- /dev/null
+set ::passed 0
+set ::failed 0
+set ::testnum 0
+
+proc test {name code okpattern} {
+ incr ::testnum
+ # if {$::testnum < $::first || $::testnum > $::last} return
+ puts -nonewline [format "#%03d %-68s " $::testnum $name]
+ flush stdout
+ set retval [uplevel 1 $code]
+ if {$okpattern eq $retval || [string match $okpattern $retval]} {
+ puts "PASSED"
+ incr ::passed
+ } else {
+ puts "!! ERROR expected\n'$okpattern'\nbut got\n'$retval'"
+ incr ::failed
+ }
+ if {$::traceleaks} {
+ if {![string match {*0 leaks*} [exec leaks redis-server]]} {
+ puts "--------- Test $::testnum LEAKED! --------"
+ exit 1
+ }
+ }
+}
--- /dev/null
+set ::tmpcounter 0
+set ::tmproot "./tests/tmp"
+file mkdir $::tmproot
+
+# returns a dirname unique to this process to write to
+proc tmpdir {basename} {
+ set dir [file join $::tmproot $basename.[pid].[incr ::tmpcounter]]
+ file mkdir $dir
+ set _ $dir
+}
+
+# returns a filename unique to this process to write to
+proc tmpfile {basename} {
+ file join $::tmproot $basename.[pid].[incr ::tmpcounter]
+}
--- /dev/null
+proc randstring {min max {type binary}} {
+ set len [expr {$min+int(rand()*($max-$min+1))}]
+ set output {}
+ if {$type eq {binary}} {
+ set minval 0
+ set maxval 255
+ } elseif {$type eq {alpha}} {
+ set minval 48
+ set maxval 122
+ } elseif {$type eq {compr}} {
+ set minval 48
+ set maxval 52
+ }
+ while {$len} {
+ append output [format "%c" [expr {$minval+int(rand()*($maxval-$minval+1))}]]
+ incr len -1
+ }
+ return $output
+}
+
+# Useful for some tests
+proc zlistAlikeSort {a b} {
+ if {[lindex $a 0] > [lindex $b 0]} {return 1}
+ if {[lindex $a 0] < [lindex $b 0]} {return -1}
+ string compare [lindex $a 1] [lindex $b 1]
+}
+
+proc waitForBgsave r {
+ while 1 {
+ set i [$r info]
+ if {[string match {*bgsave_in_progress:1*} $i]} {
+ puts -nonewline "\nWaiting for background save to finish... "
+ flush stdout
+ after 1000
+ } else {
+ break
+ }
+ }
+}
+
+proc waitForBgrewriteaof r {
+ while 1 {
+ set i [$r info]
+ if {[string match {*bgrewriteaof_in_progress:1*} $i]} {
+ puts -nonewline "\nWaiting for background AOF rewrite to finish... "
+ flush stdout
+ after 1000
+ } else {
+ break
+ }
+ }
+}
+
+proc randomInt {max} {
+ expr {int(rand()*$max)}
+}
+
+proc randpath args {
+ set path [expr {int(rand()*[llength $args])}]
+ uplevel 1 [lindex $args $path]
+}
+
+proc randomValue {} {
+ randpath {
+ # Small enough to likely collide
+ randomInt 1000
+ } {
+ # 32 bit compressible signed/unsigned
+ randpath {randomInt 2000000000} {randomInt 4000000000}
+ } {
+ # 64 bit
+ randpath {randomInt 1000000000000}
+ } {
+ # Random string
+ randpath {randstring 0 256 alpha} \
+ {randstring 0 256 compr} \
+ {randstring 0 256 binary}
+ }
+}
+
+proc randomKey {} {
+ randpath {
+ # Small enough to likely collide
+ randomInt 1000
+ } {
+ # 32 bit compressible signed/unsigned
+ randpath {randomInt 2000000000} {randomInt 4000000000}
+ } {
+ # 64 bit
+ randpath {randomInt 1000000000000}
+ } {
+ # Random string
+ randpath {randstring 1 256 alpha} \
+ {randstring 1 256 compr}
+ }
+}
+
+proc createComplexDataset {r ops} {
+ for {set j 0} {$j < $ops} {incr j} {
+ set k [randomKey]
+ set f [randomValue]
+ set v [randomValue]
+ randpath {
+ set d [expr {rand()}]
+ } {
+ set d [expr {rand()}]
+ } {
+ set d [expr {rand()}]
+ } {
+ set d [expr {rand()}]
+ } {
+ set d [expr {rand()}]
+ } {
+ randpath {set d +inf} {set d -inf}
+ }
+ set t [$r type $k]
+
+ if {$t eq {none}} {
+ randpath {
+ $r set $k $v
+ } {
+ $r lpush $k $v
+ } {
+ $r sadd $k $v
+ } {
+ $r zadd $k $d $v
+ } {
+ $r hset $k $f $v
+ }
+ set t [$r type $k]
+ }
+
+ switch $t {
+ {string} {
+ # Nothing to do
+ }
+ {list} {
+ randpath {$r lpush $k $v} \
+ {$r rpush $k $v} \
+ {$r lrem $k 0 $v} \
+ {$r rpop $k} \
+ {$r lpop $k}
+ }
+ {set} {
+ randpath {$r sadd $k $v} \
+ {$r srem $k $v}
+ }
+ {zset} {
+ randpath {$r zadd $k $d $v} \
+ {$r zrem $k $v}
+ }
+ {hash} {
+ randpath {$r hset $k $f $v} \
+ {$r hdel $k $f}
+ }
+ }
+ }
+}
--- /dev/null
+# Redis test suite. Copyright (C) 2009 Salvatore Sanfilippo antirez@gmail.com
+# This software is released under the BSD License. See the COPYING file for
+# more information.
+
+set tcl_precision 17
+source tests/support/redis.tcl
+source tests/support/server.tcl
+source tests/support/tmpfile.tcl
+source tests/support/test.tcl
+source tests/support/util.tcl
+
+set ::host 127.0.0.1
+set ::port 16379
+set ::traceleaks 0
+
+proc execute_tests name {
+ set cur $::testnum
+ source "tests/$name.tcl"
+}
+
+# Set up a list to hold a stack of clients. The proc "r" provides easy
+# access to the client at the top of the stack.
+set ::clients {}
+proc r {args} {
+ set client [lindex $::clients end]
+ $client {*}$args
+}
+
+proc main {} {
+ execute_tests "unit/auth"
+ execute_tests "unit/protocol"
+ execute_tests "unit/basic"
+ execute_tests "unit/type/list"
+ execute_tests "unit/type/set"
+ execute_tests "unit/type/zset"
+ execute_tests "unit/type/hash"
+ execute_tests "unit/sort"
+ execute_tests "unit/expire"
+ execute_tests "unit/other"
+
+ puts "\n[expr $::passed+$::failed] tests, $::passed passed, $::failed failed"
+ if {$::failed > 0} {
+ puts "\n*** WARNING!!! $::failed FAILED TESTS ***\n"
+ }
+
+ # clean up tmp
+ exec rm -rf {*}[glob tests/tmp/redis.conf.*]
+ exec rm -rf {*}[glob tests/tmp/server.*]
+}
+
+main
--- /dev/null
+start_server default.conf {{requirepass foobar}} {
+ test {AUTH fails when a wrong password is given} {
+ catch {r auth wrong!} err
+ format $err
+ } {ERR*invalid password}
+
+ test {Arbitrary command gives an error when AUTH is required} {
+ catch {r set foo bar} err
+ format $err
+ } {ERR*operation not permitted}
+
+ test {AUTH succeeds when the right password is given} {
+ r auth foobar
+ } {OK}
+}
--- /dev/null
+start_server default.conf {} {
+ test {DEL all keys to start with a clean DB} {
+ foreach key [r keys *] {r del $key}
+ r dbsize
+ } {0}
+
+ test {SET and GET an item} {
+ r set x foobar
+ r get x
+ } {foobar}
+
+ test {SET and GET an empty item} {
+ r set x {}
+ r get x
+ } {}
+
+ test {DEL against a single item} {
+ r del x
+ r get x
+ } {}
+
+ test {Vararg DEL} {
+ r set foo1 a
+ r set foo2 b
+ r set foo3 c
+ list [r del foo1 foo2 foo3 foo4] [r mget foo1 foo2 foo3]
+ } {3 {{} {} {}}}
+
+ test {KEYS with pattern} {
+ foreach key {key_x key_y key_z foo_a foo_b foo_c} {
+ r set $key hello
+ }
+ lsort [r keys foo*]
+ } {foo_a foo_b foo_c}
+
+ test {KEYS to get all keys} {
+ lsort [r keys *]
+ } {foo_a foo_b foo_c key_x key_y key_z}
+
+ test {DBSIZE} {
+ r dbsize
+ } {6}
+
+ test {DEL all keys} {
+ foreach key [r keys *] {r del $key}
+ r dbsize
+ } {0}
+
+ test {Very big payload in GET/SET} {
+ set buf [string repeat "abcd" 1000000]
+ r set foo $buf
+ r get foo
+ } [string repeat "abcd" 1000000]
+
+ test {Very big payload random access} {
+ set err {}
+ array set payload {}
+ for {set j 0} {$j < 100} {incr j} {
+ set size [expr 1+[randomInt 100000]]
+ set buf [string repeat "pl-$j" $size]
+ set payload($j) $buf
+ r set bigpayload_$j $buf
+ }
+ for {set j 0} {$j < 1000} {incr j} {
+ set index [randomInt 100]
+ set buf [r get bigpayload_$index]
+ if {$buf != $payload($index)} {
+ set err "Values differ: I set '$payload($index)' but I read back '$buf'"
+ break
+ }
+ }
+ unset payload
+ set _ $err
+ } {}
+
+ test {SET 10000 numeric keys and access all them in reverse order} {
+ set err {}
+ for {set x 0} {$x < 10000} {incr x} {
+ r set $x $x
+ }
+ set sum 0
+ for {set x 9999} {$x >= 0} {incr x -1} {
+ set val [r get $x]
+ if {$val ne $x} {
+ set err "Element at position $x is $val instead of $x"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {DBSIZE should be 10101 now} {
+ r dbsize
+ } {10101}
+
+ test {INCR against non existing key} {
+ set res {}
+ append res [r incr novar]
+ append res [r get novar]
+ } {11}
+
+ test {INCR against key created by incr itself} {
+ r incr novar
+ } {2}
+
+ test {INCR against key originally set with SET} {
+ r set novar 100
+ r incr novar
+ } {101}
+
+ test {INCR over 32bit value} {
+ r set novar 17179869184
+ r incr novar
+ } {17179869185}
+
+ test {INCRBY over 32bit value with over 32bit increment} {
+ r set novar 17179869184
+ r incrby novar 17179869184
+ } {34359738368}
+
+ test {INCR fails against key with spaces (no integer encoded)} {
+ r set novar " 11 "
+ catch {r incr novar} err
+ format $err
+ } {ERR*}
+
+ test {INCR fails against a key holding a list} {
+ r rpush mylist 1
+ catch {r incr mylist} err
+ r rpop mylist
+ format $err
+ } {ERR*}
+
+ test {DECRBY over 32bit value with over 32bit increment, negative res} {
+ r set novar 17179869184
+ r decrby novar 17179869185
+ } {-1}
+
+ test {SETNX target key missing} {
+ r setnx novar2 foobared
+ r get novar2
+ } {foobared}
+
+ test {SETNX target key exists} {
+ r setnx novar2 blabla
+ r get novar2
+ } {foobared}
+
+ test {SETNX will overwrite EXPIREing key} {
+ r set x 10
+ r expire x 10000
+ r setnx x 20
+ r get x
+ } {20}
+
+ test {EXISTS} {
+ set res {}
+ r set newkey test
+ append res [r exists newkey]
+ r del newkey
+ append res [r exists newkey]
+ } {10}
+
+ test {Zero length value in key. SET/GET/EXISTS} {
+ r set emptykey {}
+ set res [r get emptykey]
+ append res [r exists emptykey]
+ r del emptykey
+ append res [r exists emptykey]
+ } {10}
+
+ test {Commands pipelining} {
+ set fd [r channel]
+ puts -nonewline $fd "SET k1 4\r\nxyzk\r\nGET k1\r\nPING\r\n"
+ flush $fd
+ set res {}
+ append res [string match OK* [::redis::redis_read_reply $fd]]
+ append res [::redis::redis_read_reply $fd]
+ append res [string match PONG* [::redis::redis_read_reply $fd]]
+ format $res
+ } {1xyzk1}
+
+ test {Non existing command} {
+ catch {r foobaredcommand} err
+ string match ERR* $err
+ } {1}
+
+ test {RENAME basic usage} {
+ r set mykey hello
+ r rename mykey mykey1
+ r rename mykey1 mykey2
+ r get mykey2
+ } {hello}
+
+ test {RENAME source key should no longer exist} {
+ r exists mykey
+ } {0}
+
+ test {RENAME against already existing key} {
+ r set mykey a
+ r set mykey2 b
+ r rename mykey2 mykey
+ set res [r get mykey]
+ append res [r exists mykey2]
+ } {b0}
+
+ test {RENAMENX basic usage} {
+ r del mykey
+ r del mykey2
+ r set mykey foobar
+ r renamenx mykey mykey2
+ set res [r get mykey2]
+ append res [r exists mykey]
+ } {foobar0}
+
+ test {RENAMENX against already existing key} {
+ r set mykey foo
+ r set mykey2 bar
+ r renamenx mykey mykey2
+ } {0}
+
+ test {RENAMENX against already existing key (2)} {
+ set res [r get mykey]
+ append res [r get mykey2]
+ } {foobar}
+
+ test {RENAME against non existing source key} {
+ catch {r rename nokey foobar} err
+ format $err
+ } {ERR*}
+
+ test {RENAME where source and dest key is the same} {
+ catch {r rename mykey mykey} err
+ format $err
+ } {ERR*}
+
+ test {DEL all keys again (DB 0)} {
+ foreach key [r keys *] {
+ r del $key
+ }
+ r dbsize
+ } {0}
+
+ test {DEL all keys again (DB 10)} {
+ r select 10
+ foreach key [r keys *] {
+ r del $key
+ }
+ set res [r dbsize]
+ r select 9
+ format $res
+ } {0}
+
+ test {MOVE basic usage} {
+ r set mykey foobar
+ r move mykey 10
+ set res {}
+ lappend res [r exists mykey]
+ lappend res [r dbsize]
+ r select 10
+ lappend res [r get mykey]
+ lappend res [r dbsize]
+ r select 9
+ format $res
+ } [list 0 0 foobar 1]
+
+ test {MOVE against key existing in the target DB} {
+ r set mykey hello
+ r move mykey 10
+ } {0}
+
+ test {SET/GET keys in different DBs} {
+ r set a hello
+ r set b world
+ r select 10
+ r set a foo
+ r set b bared
+ r select 9
+ set res {}
+ lappend res [r get a]
+ lappend res [r get b]
+ r select 10
+ lappend res [r get a]
+ lappend res [r get b]
+ r select 9
+ format $res
+ } {hello world foo bared}
+
+ test {MGET} {
+ r flushdb
+ r set foo BAR
+ r set bar FOO
+ r mget foo bar
+ } {BAR FOO}
+
+ test {MGET against non existing key} {
+ r mget foo baazz bar
+ } {BAR {} FOO}
+
+ test {MGET against non-string key} {
+ r sadd myset ciao
+ r sadd myset bau
+ r mget foo baazz bar myset
+ } {BAR {} FOO {}}
+
+ test {RANDOMKEY} {
+ r flushdb
+ r set foo x
+ r set bar y
+ set foo_seen 0
+ set bar_seen 0
+ for {set i 0} {$i < 100} {incr i} {
+ set rkey [r randomkey]
+ if {$rkey eq {foo}} {
+ set foo_seen 1
+ }
+ if {$rkey eq {bar}} {
+ set bar_seen 1
+ }
+ }
+ list $foo_seen $bar_seen
+ } {1 1}
+
+ test {RANDOMKEY against empty DB} {
+ r flushdb
+ r randomkey
+ } {}
+
+ test {RANDOMKEY regression 1} {
+ r flushdb
+ r set x 10
+ r del x
+ r randomkey
+ } {}
+
+ test {GETSET (set new value)} {
+ list [r getset foo xyz] [r get foo]
+ } {{} xyz}
+
+ test {GETSET (replace old value)} {
+ r set foo bar
+ list [r getset foo xyz] [r get foo]
+ } {bar xyz}
+
+ test {MSET base case} {
+ r mset x 10 y "foo bar" z "x x x x x x x\n\n\r\n"
+ r mget x y z
+ } [list 10 {foo bar} "x x x x x x x\n\n\r\n"]
+
+ test {MSET wrong number of args} {
+ catch {r mset x 10 y "foo bar" z} err
+ format $err
+ } {*wrong number*}
+
+ test {MSETNX with already existent key} {
+ list [r msetnx x1 xxx y2 yyy x 20] [r exists x1] [r exists y2]
+ } {0 0 0}
+
+ test {MSETNX with not existing keys} {
+ list [r msetnx x1 xxx y2 yyy] [r get x1] [r get y2]
+ } {1 xxx yyy}
+
+ test {MSETNX should remove all the volatile keys even on failure} {
+ r mset x 1 y 2 z 3
+ r expire y 10000
+ r expire z 10000
+ list [r msetnx x A y B z C] [r mget x y z]
+ } {0 {1 {} {}}}
+}
--- /dev/null
+start_server default.conf {} {
+ test {EXPIRE - don't set timeouts multiple times} {
+ r set x foobar
+ set v1 [r expire x 5]
+ set v2 [r ttl x]
+ set v3 [r expire x 10]
+ set v4 [r ttl x]
+ list $v1 $v2 $v3 $v4
+ } {1 5 0 5}
+
+ test {EXPIRE - It should still be possible to read 'x'} {
+ r get x
+ } {foobar}
+
+ test {EXPIRE - After 6 seconds the key should no longer be here} {
+ after 6000
+ list [r get x] [r exists x]
+ } {{} 0}
+
+ test {EXPIRE - Delete on write policy} {
+ r del x
+ r lpush x foo
+ r expire x 1000
+ r lpush x bar
+ r lrange x 0 -1
+ } {bar}
+
+ test {EXPIREAT - Check for EXPIRE alike behavior} {
+ r del x
+ r set x foo
+ r expireat x [expr [clock seconds]+15]
+ r ttl x
+ } {1[345]}
+
+ test {SETEX - Set + Expire combo operation. Check for TTL} {
+ r setex x 12 test
+ r ttl x
+ } {1[012]}
+
+ test {SETEX - Check value} {
+ r get x
+ } {test}
+
+ test {SETEX - Overwrite old key} {
+ r setex y 1 foo
+ r get y
+ } {foo}
+
+ test {SETEX - Wait for the key to expire} {
+ after 3000
+ r get y
+ } {}
+
+ test {SETEX - Wrong time parameter} {
+ catch {r setex z -10 foo} e
+ set _ $e
+ } {*invalid expire*}
+}
--- /dev/null
+start_server default.conf {} {
+ test {SAVE - make sure there are all the types as values} {
+ # Wait for a background saving in progress to terminate
+ waitForBgsave r
+ r lpush mysavelist hello
+ r lpush mysavelist world
+ r set myemptykey {}
+ r set mynormalkey {blablablba}
+ r zadd mytestzset 10 a
+ r zadd mytestzset 20 b
+ r zadd mytestzset 30 c
+ r save
+ } {OK}
+
+ foreach fuzztype {binary alpha compr} {
+ test "FUZZ stresser with data model $fuzztype" {
+ set err 0
+ for {set i 0} {$i < 10000} {incr i} {
+ set fuzz [randstring 0 512 $fuzztype]
+ r set foo $fuzz
+ set got [r get foo]
+ if {$got ne $fuzz} {
+ set err [list $fuzz $got]
+ break
+ }
+ }
+ set _ $err
+ } {0}
+ }
+
+ test {BGSAVE} {
+ waitForBgsave r
+ r flushdb
+ r save
+ r set x 10
+ r bgsave
+ waitForBgsave r
+ r debug reload
+ r get x
+ } {10}
+
+ test {SELECT an out of range DB} {
+ catch {r select 1000000} err
+ set _ $err
+ } {*invalid*}
+
+ if {![catch {package require sha1}]} {
+ test {Check consistency of different data types after a reload} {
+ r flushdb
+ createComplexDataset r 10000
+ set sha1 [r debug digest]
+ r debug reload
+ set sha1_after [r debug digest]
+ expr {$sha1 eq $sha1_after}
+ } {1}
+
+ test {Same dataset digest if saving/reloading as AOF?} {
+ r bgrewriteaof
+ waitForBgrewriteaof r
+ r debug loadaof
+ set sha1_after [r debug digest]
+ expr {$sha1 eq $sha1_after}
+ } {1}
+ }
+
+ test {EXPIRES after a reload (snapshot + append only file)} {
+ r flushdb
+ r set x 10
+ r expire x 1000
+ r save
+ r debug reload
+ set ttl [r ttl x]
+ set e1 [expr {$ttl > 900 && $ttl <= 1000}]
+ r bgrewriteaof
+ waitForBgrewriteaof r
+ set ttl [r ttl x]
+ set e2 [expr {$ttl > 900 && $ttl <= 1000}]
+ list $e1 $e2
+ } {1 1}
+
+ test {PIPELINING stresser (also a regression for the old epoll bug)} {
+ set fd2 [socket $::host $::port]
+ fconfigure $fd2 -encoding binary -translation binary
+ puts -nonewline $fd2 "SELECT 9\r\n"
+ flush $fd2
+ gets $fd2
+
+ for {set i 0} {$i < 100000} {incr i} {
+ set q {}
+ set val "0000${i}0000"
+ append q "SET key:$i [string length $val]\r\n$val\r\n"
+ puts -nonewline $fd2 $q
+ set q {}
+ append q "GET key:$i\r\n"
+ puts -nonewline $fd2 $q
+ }
+ flush $fd2
+
+ for {set i 0} {$i < 100000} {incr i} {
+ gets $fd2 line
+ gets $fd2 count
+ set count [string range $count 1 end]
+ set val [read $fd2 $count]
+ read $fd2 2
+ }
+ close $fd2
+ set _ 1
+ } {1}
+
+ test {MULTI / EXEC basics} {
+ r del mylist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ r multi
+ set v1 [r lrange mylist 0 -1]
+ set v2 [r ping]
+ set v3 [r exec]
+ list $v1 $v2 $v3
+ } {QUEUED QUEUED {{a b c} PONG}}
+
+ test {DISCARD} {
+ r del mylist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ r multi
+ set v1 [r del mylist]
+ set v2 [r discard]
+ set v3 [r lrange mylist 0 -1]
+ list $v1 $v2 $v3
+ } {QUEUED OK {a b c}}
+
+ test {APPEND basics} {
+ list [r append foo bar] [r get foo] \
+ [r append foo 100] [r get foo]
+ } {3 bar 6 bar100}
+
+ test {APPEND basics, integer encoded values} {
+ set res {}
+ r del foo
+ r append foo 1
+ r append foo 2
+ lappend res [r get foo]
+ r set foo 1
+ r append foo 2
+ lappend res [r get foo]
+ } {12 12}
+
+ test {APPEND fuzzing} {
+ set err {}
+ foreach type {binary alpha compr} {
+ set buf {}
+ r del x
+ for {set i 0} {$i < 1000} {incr i} {
+ set bin [randstring 0 10 $type]
+ append buf $bin
+ r append x $bin
+ }
+ if {$buf != [r get x]} {
+ set err "Expected '$buf' found '[r get x]'"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {SUBSTR basics} {
+ set res {}
+ r set foo "Hello World"
+ lappend res [r substr foo 0 3]
+ lappend res [r substr foo 0 -1]
+ lappend res [r substr foo -4 -1]
+ lappend res [r substr foo 5 3]
+ lappend res [r substr foo 5 5000]
+ lappend res [r substr foo -5000 10000]
+ set _ $res
+ } {Hell {Hello World} orld {} { World} {Hello World}}
+
+ test {SUBSTR against integer encoded values} {
+ r set foo 123
+ r substr foo 0 -2
+ } {12}
+
+ test {SUBSTR fuzzing} {
+ set err {}
+ for {set i 0} {$i < 1000} {incr i} {
+ set bin [randstring 0 1024 binary]
+ set _start [set start [randomInt 1500]]
+ set _end [set end [randomInt 1500]]
+ if {$_start < 0} {set _start "end-[expr {abs($_start)-1}]"}
+ if {$_end < 0} {set _end "end-[expr {abs($_end)-1}]"}
+ set s1 [string range $bin $_start $_end]
+ r set bin $bin
+ set s2 [r substr bin $start $end]
+ if {$s1 != $s2} {
+ set err "String mismatch"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ # Leave the user with a clean DB before exiting
+ test {FLUSHDB} {
+ set aux {}
+ r select 9
+ r flushdb
+ lappend aux [r dbsize]
+ r select 10
+ r flushdb
+ lappend aux [r dbsize]
+ } {0 0}
+
+ test {Perform a final SAVE to leave a clean DB on disk} {
+ r save
+ } {OK}
+}
--- /dev/null
+start_server default.conf {} {
+ test {Handle an empty query well} {
+ set fd [r channel]
+ puts -nonewline $fd "\r\n"
+ flush $fd
+ r ping
+ } {PONG}
+
+ test {Negative multi bulk command does not create problems} {
+ set fd [r channel]
+ puts -nonewline $fd "*-10\r\n"
+ flush $fd
+ r ping
+ } {PONG}
+
+ test {Negative multi bulk payload} {
+ set fd [r channel]
+ puts -nonewline $fd "SET x -10\r\n"
+ flush $fd
+ gets $fd
+ } {*invalid bulk*}
+
+ test {Too big bulk payload} {
+ set fd [r channel]
+ puts -nonewline $fd "SET x 2000000000\r\n"
+ flush $fd
+ gets $fd
+ } {*invalid bulk*count*}
+
+ test {Multi bulk request not followed by bulk args} {
+ set fd [r channel]
+ puts -nonewline $fd "*1\r\nfoo\r\n"
+ flush $fd
+ gets $fd
+ } {*protocol error*}
+
+ test {Generic wrong number of args} {
+ catch {r ping x y z} err
+ set _ $err
+ } {*wrong*arguments*ping*}
+}
--- /dev/null
+start_server default.conf {} {
+ test {SORT ALPHA against integer encoded strings} {
+ r del mylist
+ r lpush mylist 2
+ r lpush mylist 1
+ r lpush mylist 3
+ r lpush mylist 10
+ r sort mylist alpha
+ } {1 10 2 3}
+
+ test {Create a random list and a random set} {
+ set tosort {}
+ array set seenrand {}
+ for {set i 0} {$i < 10000} {incr i} {
+ while 1 {
+ # Make sure all the weights are different because
+ # Redis does not use a stable sort but Tcl does.
+ randpath {
+ set rint [expr int(rand()*1000000)]
+ } {
+ set rint [expr rand()]
+ }
+ if {![info exists seenrand($rint)]} break
+ }
+ set seenrand($rint) x
+ r lpush tosort $i
+ r sadd tosort-set $i
+ r set weight_$i $rint
+ r hset wobj_$i weight $rint
+ lappend tosort [list $i $rint]
+ }
+ set sorted [lsort -index 1 -real $tosort]
+ set res {}
+ for {set i 0} {$i < 10000} {incr i} {
+ lappend res [lindex $sorted $i 0]
+ }
+ format {}
+ } {}
+
+ test {SORT with BY against the newly created list} {
+ r sort tosort {BY weight_*}
+ } $res
+
+ test {SORT with BY (hash field) against the newly created list} {
+ r sort tosort {BY wobj_*->weight}
+ } $res
+
+ test {SORT with GET (key+hash) with sanity check of each element (list)} {
+ set err {}
+ set l1 [r sort tosort GET # GET weight_*]
+ set l2 [r sort tosort GET # GET wobj_*->weight]
+ foreach {id1 w1} $l1 {id2 w2} $l2 {
+ set realweight [r get weight_$id1]
+ if {$id1 != $id2} {
+ set err "ID mismatch $id1 != $id2"
+ break
+ }
+ if {$realweight != $w1 || $realweight != $w2} {
+ set err "Weights mismatch! w1: $w1 w2: $w2 real: $realweight"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {SORT with BY, but against the newly created set} {
+ r sort tosort-set {BY weight_*}
+ } $res
+
+ test {SORT with BY (hash field), but against the newly created set} {
+ r sort tosort-set {BY wobj_*->weight}
+ } $res
+
+ test {SORT with BY and STORE against the newly created list} {
+ r sort tosort {BY weight_*} store sort-res
+ r lrange sort-res 0 -1
+ } $res
+
+ test {SORT with BY (hash field) and STORE against the newly created list} {
+ r sort tosort {BY wobj_*->weight} store sort-res
+ r lrange sort-res 0 -1
+ } $res
+
+ test {SORT direct, numeric, against the newly created list} {
+ r sort tosort
+ } [lsort -integer $res]
+
+ test {SORT decreasing sort} {
+ r sort tosort {DESC}
+ } [lsort -decreasing -integer $res]
+
+ test {SORT speed, sorting 10000 elements list using BY, 100 times} {
+ set start [clock clicks -milliseconds]
+ for {set i 0} {$i < 100} {incr i} {
+ set sorted [r sort tosort {BY weight_* LIMIT 0 10}]
+ }
+ set elapsed [expr [clock clicks -milliseconds]-$start]
+ puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
+ flush stdout
+ format {}
+ } {}
+
+ test {SORT speed, as above but against hash field} {
+ set start [clock clicks -milliseconds]
+ for {set i 0} {$i < 100} {incr i} {
+ set sorted [r sort tosort {BY wobj_*->weight LIMIT 0 10}]
+ }
+ set elapsed [expr [clock clicks -milliseconds]-$start]
+ puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
+ flush stdout
+ format {}
+ } {}
+
+ test {SORT speed, sorting 10000 elements list directly, 100 times} {
+ set start [clock clicks -milliseconds]
+ for {set i 0} {$i < 100} {incr i} {
+ set sorted [r sort tosort {LIMIT 0 10}]
+ }
+ set elapsed [expr [clock clicks -milliseconds]-$start]
+ puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
+ flush stdout
+ format {}
+ } {}
+
+ test {SORT speed, pseudo-sorting 10000 elements list, BY <const>, 100 times} {
+ set start [clock clicks -milliseconds]
+ for {set i 0} {$i < 100} {incr i} {
+ set sorted [r sort tosort {BY nokey LIMIT 0 10}]
+ }
+ set elapsed [expr [clock clicks -milliseconds]-$start]
+ puts -nonewline "\n Average time to sort: [expr double($elapsed)/100] milliseconds "
+ flush stdout
+ format {}
+ } {}
+
+ test {SORT regression for issue #19, sorting floats} {
+ r flushdb
+ foreach x {1.1 5.10 3.10 7.44 2.1 5.75 6.12 0.25 1.15} {
+ r lpush mylist $x
+ }
+ r sort mylist
+ } [lsort -real {1.1 5.10 3.10 7.44 2.1 5.75 6.12 0.25 1.15}]
+
+ test {SORT with GET #} {
+ r del mylist
+ r lpush mylist 1
+ r lpush mylist 2
+ r lpush mylist 3
+ r mset weight_1 10 weight_2 5 weight_3 30
+ r sort mylist BY weight_* GET #
+ } {2 1 3}
+
+ test {SORT with constant GET} {
+ r sort mylist GET foo
+ } {{} {} {}}
+
+ test {SORT against sorted sets} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 5 b
+ r zadd zset 2 c
+ r zadd zset 10 d
+ r zadd zset 3 e
+ r sort zset alpha desc
+ } {e d c b a}
+
+ test {Sorted sets +inf and -inf handling} {
+ r del zset
+ r zadd zset -100 a
+ r zadd zset 200 b
+ r zadd zset -300 c
+ r zadd zset 1000000 d
+ r zadd zset +inf max
+ r zadd zset -inf min
+ r zrange zset 0 -1
+ } {min c a b d max}
+}
--- /dev/null
+start_server default.conf {} {
+ test {HSET/HLEN - Small hash creation} {
+ array set smallhash {}
+ for {set i 0} {$i < 8} {incr i} {
+ set key [randstring 0 8 alpha]
+ set val [randstring 0 8 alpha]
+ if {[info exists smallhash($key)]} {
+ incr i -1
+ continue
+ }
+ r hset smallhash $key $val
+ set smallhash($key) $val
+ }
+ list [r hlen smallhash]
+ } {8}
+
+ test {Is the small hash encoded with a zipmap?} {
+ r debug object smallhash
+ } {*zipmap*}
+
+ test {HSET/HLEN - Big hash creation} {
+ array set bighash {}
+ for {set i 0} {$i < 1024} {incr i} {
+ set key [randstring 0 8 alpha]
+ set val [randstring 0 8 alpha]
+ if {[info exists bighash($key)]} {
+ incr i -1
+ continue
+ }
+ r hset bighash $key $val
+ set bighash($key) $val
+ }
+ list [r hlen bighash]
+ } {1024}
+
+ test {Is the big hash encoded with a hash table?} {
+ r debug object bighash
+ } {*hashtable*}
+
+ test {HGET against the small hash} {
+ set err {}
+ foreach k [array names smallhash *] {
+ if {$smallhash($k) ne [r hget smallhash $k]} {
+ set err "$smallhash($k) != [r hget smallhash $k]"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {HGET against the big hash} {
+ set err {}
+ foreach k [array names bighash *] {
+ if {$bighash($k) ne [r hget bighash $k]} {
+ set err "$bighash($k) != [r hget bighash $k]"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {HGET against non existing key} {
+ set rv {}
+ lappend rv [r hget smallhash __123123123__]
+ lappend rv [r hget bighash __123123123__]
+ set _ $rv
+ } {{} {}}
+
+ test {HSET in update and insert mode} {
+ set rv {}
+ set k [lindex [array names smallhash *] 0]
+ lappend rv [r hset smallhash $k newval1]
+ set smallhash($k) newval1
+ lappend rv [r hget smallhash $k]
+ lappend rv [r hset smallhash __foobar123__ newval]
+ set k [lindex [array names bighash *] 0]
+ lappend rv [r hset bighash $k newval2]
+ set bighash($k) newval2
+ lappend rv [r hget bighash $k]
+ lappend rv [r hset bighash __foobar123__ newval]
+ lappend rv [r hdel smallhash __foobar123__]
+ lappend rv [r hdel bighash __foobar123__]
+ set _ $rv
+ } {0 newval1 1 0 newval2 1 1 1}
+
+ test {HSETNX target key missing - small hash} {
+ r hsetnx smallhash __123123123__ foo
+ r hget smallhash __123123123__
+ } {foo}
+
+ test {HSETNX target key exists - small hash} {
+ r hsetnx smallhash __123123123__ bar
+ set result [r hget smallhash __123123123__]
+ r hdel smallhash __123123123__
+ set _ $result
+ } {foo}
+
+ test {HSETNX target key missing - big hash} {
+ r hsetnx bighash __123123123__ foo
+ r hget bighash __123123123__
+ } {foo}
+
+ test {HSETNX target key exists - big hash} {
+ r hsetnx bighash __123123123__ bar
+ set result [r hget bighash __123123123__]
+ r hdel bighash __123123123__
+ set _ $result
+ } {foo}
+
+ test {HMSET wrong number of args} {
+ catch {r hmset smallhash key1 val1 key2} err
+ format $err
+ } {*wrong number*}
+
+ test {HMSET - small hash} {
+ set args {}
+ foreach {k v} [array get smallhash] {
+ set newval [randstring 0 8 alpha]
+ set smallhash($k) $newval
+ lappend args $k $newval
+ }
+ r hmset smallhash {*}$args
+ } {OK}
+
+ test {HMSET - big hash} {
+ set args {}
+ foreach {k v} [array get bighash] {
+ set newval [randstring 0 8 alpha]
+ set bighash($k) $newval
+ lappend args $k $newval
+ }
+ r hmset bighash {*}$args
+ } {OK}
+
+ test {HMGET against non existing key and fields} {
+ set rv {}
+ lappend rv [r hmget doesntexist __123123123__ __456456456__]
+ lappend rv [r hmget smallhash __123123123__ __456456456__]
+ lappend rv [r hmget bighash __123123123__ __456456456__]
+ set _ $rv
+ } {{{} {}} {{} {}} {{} {}}}
+
+ test {HMGET - small hash} {
+ set keys {}
+ set vals {}
+ foreach {k v} [array get smallhash] {
+ lappend keys $k
+ lappend vals $v
+ }
+ set err {}
+ set result [r hmget smallhash {*}$keys]
+ if {$vals ne $result} {
+ set err "$vals != $result"
+ break
+ }
+ set _ $err
+ } {}
+
+ test {HMGET - big hash} {
+ set keys {}
+ set vals {}
+ foreach {k v} [array get bighash] {
+ lappend keys $k
+ lappend vals $v
+ }
+ set err {}
+ set result [r hmget bighash {*}$keys]
+ if {$vals ne $result} {
+ set err "$vals != $result"
+ break
+ }
+ set _ $err
+ } {}
+
+ test {HKEYS - small hash} {
+ lsort [r hkeys smallhash]
+ } [lsort [array names smallhash *]]
+
+ test {HKEYS - big hash} {
+ lsort [r hkeys bighash]
+ } [lsort [array names bighash *]]
+
+ test {HVALS - small hash} {
+ set vals {}
+ foreach {k v} [array get smallhash] {
+ lappend vals $v
+ }
+ set _ [lsort $vals]
+ } [lsort [r hvals smallhash]]
+
+ test {HVALS - big hash} {
+ set vals {}
+ foreach {k v} [array get bighash] {
+ lappend vals $v
+ }
+ set _ [lsort $vals]
+ } [lsort [r hvals bighash]]
+
+ test {HGETALL - small hash} {
+ lsort [r hgetall smallhash]
+ } [lsort [array get smallhash]]
+
+ test {HGETALL - big hash} {
+ lsort [r hgetall bighash]
+ } [lsort [array get bighash]]
+
+ test {HDEL and return value} {
+ set rv {}
+ lappend rv [r hdel smallhash nokey]
+ lappend rv [r hdel bighash nokey]
+ set k [lindex [array names smallhash *] 0]
+ lappend rv [r hdel smallhash $k]
+ lappend rv [r hdel smallhash $k]
+ lappend rv [r hget smallhash $k]
+ unset smallhash($k)
+ set k [lindex [array names bighash *] 0]
+ lappend rv [r hdel bighash $k]
+ lappend rv [r hdel bighash $k]
+ lappend rv [r hget bighash $k]
+ unset bighash($k)
+ set _ $rv
+ } {0 0 1 0 {} 1 0 {}}
+
+ test {HEXISTS} {
+ set rv {}
+ set k [lindex [array names smallhash *] 0]
+ lappend rv [r hexists smallhash $k]
+ lappend rv [r hexists smallhash nokey]
+ set k [lindex [array names bighash *] 0]
+ lappend rv [r hexists bighash $k]
+ lappend rv [r hexists bighash nokey]
+ } {1 0 1 0}
+
+ test {Is a zipmap encoded Hash promoted on big payload?} {
+ r hset smallhash foo [string repeat a 1024]
+ r debug object smallhash
+ } {*hashtable*}
+
+ test {HINCRBY against non existing database key} {
+ r del htest
+ list [r hincrby htest foo 2]
+ } {2}
+
+ test {HINCRBY against non existing hash key} {
+ set rv {}
+ r hdel smallhash tmp
+ r hdel bighash tmp
+ lappend rv [r hincrby smallhash tmp 2]
+ lappend rv [r hget smallhash tmp]
+ lappend rv [r hincrby bighash tmp 2]
+ lappend rv [r hget bighash tmp]
+ } {2 2 2 2}
+
+ test {HINCRBY against hash key created by hincrby itself} {
+ set rv {}
+ lappend rv [r hincrby smallhash tmp 3]
+ lappend rv [r hget smallhash tmp]
+ lappend rv [r hincrby bighash tmp 3]
+ lappend rv [r hget bighash tmp]
+ } {5 5 5 5}
+
+ test {HINCRBY against hash key originally set with HSET} {
+ r hset smallhash tmp 100
+ r hset bighash tmp 100
+ list [r hincrby smallhash tmp 2] [r hincrby bighash tmp 2]
+ } {102 102}
+
+ test {HINCRBY over 32bit value} {
+ r hset smallhash tmp 17179869184
+ r hset bighash tmp 17179869184
+ list [r hincrby smallhash tmp 1] [r hincrby bighash tmp 1]
+ } {17179869185 17179869185}
+
+ test {HINCRBY over 32bit value with over 32bit increment} {
+ r hset smallhash tmp 17179869184
+ r hset bighash tmp 17179869184
+ list [r hincrby smallhash tmp 17179869184] [r hincrby bighash tmp 17179869184]
+ } {34359738368 34359738368}
+
+ test {HINCRBY fails against hash value with spaces} {
+ r hset smallhash str " 11 "
+ r hset bighash str " 11 "
+ catch {r hincrby smallhash str 1} smallerr
+        catch {r hincrby bighash str 1} bigerr
+ set rv {}
+ lappend rv [string match "ERR*not an integer*" $smallerr]
+ lappend rv [string match "ERR*not an integer*" $bigerr]
+ } {1 1}
+}
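The HINCRBY behaviour these tests pin down (missing keys and fields start from 0, values past 32 bits are fine, stored strings with surrounding spaces are rejected) can be sketched in pure Python. This is illustration only, not part of the patch; `hincrby` and the `db` dict are hypothetical stand-ins for the server implementation.

```python
def hincrby(db, key, field, increment):
    """Sketch of HINCRBY semantics: a missing hash or field behaves as 0,
    values are arbitrary-precision integers, and a stored value that is not
    a plain integer string raises an error."""
    h = db.setdefault(key, {})
    old = h.get(field, "0")
    # Redis rejects values like " 11 "; Python's int() would silently strip
    # the spaces, so validate the raw string form explicitly.
    if old != old.strip() or not old.lstrip("-").isdigit():
        raise ValueError("hash value is not an integer")
    value = int(old) + increment
    h[field] = str(value)
    return value
```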
--- /dev/null
+start_server default.conf {} {
+    test {Basic LPUSH, RPUSH, LLEN, LINDEX} {
+ set res [r lpush mylist a]
+ append res [r lpush mylist b]
+ append res [r rpush mylist c]
+ append res [r llen mylist]
+ append res [r rpush anotherlist d]
+ append res [r lpush anotherlist e]
+ append res [r llen anotherlist]
+ append res [r lindex mylist 0]
+ append res [r lindex mylist 1]
+ append res [r lindex mylist 2]
+ append res [r lindex anotherlist 0]
+ append res [r lindex anotherlist 1]
+ list $res [r lindex mylist 100]
+ } {1233122baced {}}
+
+ test {DEL a list} {
+ r del mylist
+ r exists mylist
+ } {0}
+
+ test {Create a long list and check every single element with LINDEX} {
+ set ok 0
+ for {set i 0} {$i < 1000} {incr i} {
+ r rpush mylist $i
+ }
+ for {set i 0} {$i < 1000} {incr i} {
+ if {[r lindex mylist $i] eq $i} {incr ok}
+ if {[r lindex mylist [expr (-$i)-1]] eq [expr 999-$i]} {
+ incr ok
+ }
+ }
+ format $ok
+ } {2000}
+
+ test {Test elements with LINDEX in random access} {
+ set ok 0
+ for {set i 0} {$i < 1000} {incr i} {
+ set rint [expr int(rand()*1000)]
+ if {[r lindex mylist $rint] eq $rint} {incr ok}
+ if {[r lindex mylist [expr (-$rint)-1]] eq [expr 999-$rint]} {
+ incr ok
+ }
+ }
+ format $ok
+ } {2000}
+
+ test {Check if the list is still ok after a DEBUG RELOAD} {
+ r debug reload
+ set ok 0
+ for {set i 0} {$i < 1000} {incr i} {
+ set rint [expr int(rand()*1000)]
+ if {[r lindex mylist $rint] eq $rint} {incr ok}
+ if {[r lindex mylist [expr (-$rint)-1]] eq [expr 999-$rint]} {
+ incr ok
+ }
+ }
+ format $ok
+ } {2000}
+
+ test {LLEN against non-list value error} {
+ r del mylist
+ r set mylist foobar
+ catch {r llen mylist} err
+ format $err
+ } {ERR*}
+
+ test {LLEN against non existing key} {
+ r llen not-a-key
+ } {0}
+
+ test {LINDEX against non-list value error} {
+ catch {r lindex mylist 0} err
+ format $err
+ } {ERR*}
+
+ test {LINDEX against non existing key} {
+ r lindex not-a-key 10
+ } {}
+
+ test {LPUSH against non-list value error} {
+ catch {r lpush mylist 0} err
+ format $err
+ } {ERR*}
+
+ test {RPUSH against non-list value error} {
+ catch {r rpush mylist 0} err
+ format $err
+ } {ERR*}
+
+ test {RPOPLPUSH base case} {
+ r del mylist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ r rpush mylist d
+ set v1 [r rpoplpush mylist newlist]
+ set v2 [r rpoplpush mylist newlist]
+ set l1 [r lrange mylist 0 -1]
+ set l2 [r lrange newlist 0 -1]
+ list $v1 $v2 $l1 $l2
+ } {d c {a b} {c d}}
+
+ test {RPOPLPUSH with the same list as src and dst} {
+ r del mylist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ set l1 [r lrange mylist 0 -1]
+ set v [r rpoplpush mylist mylist]
+ set l2 [r lrange mylist 0 -1]
+ list $l1 $v $l2
+ } {{a b c} c {c a b}}
+
+ test {RPOPLPUSH target list already exists} {
+ r del mylist
+ r del newlist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ r rpush mylist d
+ r rpush newlist x
+ set v1 [r rpoplpush mylist newlist]
+ set v2 [r rpoplpush mylist newlist]
+ set l1 [r lrange mylist 0 -1]
+ set l2 [r lrange newlist 0 -1]
+ list $v1 $v2 $l1 $l2
+ } {d c {a b} {c d x}}
+
+ test {RPOPLPUSH against non existing key} {
+ r del mylist
+ r del newlist
+ set v1 [r rpoplpush mylist newlist]
+ list $v1 [r exists mylist] [r exists newlist]
+ } {{} 0 0}
+
+ test {RPOPLPUSH against non list src key} {
+ r del mylist
+ r del newlist
+ r set mylist x
+ catch {r rpoplpush mylist newlist} err
+ list [r type mylist] [r exists newlist] [string range $err 0 2]
+ } {string 0 ERR}
+
+ test {RPOPLPUSH against non list dst key} {
+ r del mylist
+ r del newlist
+ r rpush mylist a
+ r rpush mylist b
+ r rpush mylist c
+ r rpush mylist d
+ r set newlist x
+ catch {r rpoplpush mylist newlist} err
+ list [r lrange mylist 0 -1] [r type newlist] [string range $err 0 2]
+ } {{a b c d} string ERR}
+
+ test {RPOPLPUSH against non existing src key} {
+ r del mylist
+ r del newlist
+ r rpoplpush mylist newlist
+ } {}
+
+ test {Basic LPOP/RPOP} {
+ r del mylist
+ r rpush mylist 1
+ r rpush mylist 2
+ r lpush mylist 0
+ list [r lpop mylist] [r rpop mylist] [r lpop mylist] [r llen mylist]
+ } [list 0 2 1 0]
+
+ test {LPOP/RPOP against empty list} {
+ r lpop mylist
+ } {}
+
+ test {LPOP against non list value} {
+ r set notalist foo
+ catch {r lpop notalist} err
+ format $err
+ } {ERR*kind*}
+
+ test {Mass LPUSH/LPOP} {
+ set sum 0
+ for {set i 0} {$i < 1000} {incr i} {
+ r lpush mylist $i
+ incr sum $i
+ }
+ set sum2 0
+ for {set i 0} {$i < 500} {incr i} {
+ incr sum2 [r lpop mylist]
+ incr sum2 [r rpop mylist]
+ }
+ expr $sum == $sum2
+ } {1}
+
+ test {LRANGE basics} {
+ for {set i 0} {$i < 10} {incr i} {
+ r rpush mylist $i
+ }
+ list [r lrange mylist 1 -2] \
+ [r lrange mylist -3 -1] \
+ [r lrange mylist 4 4]
+ } {{1 2 3 4 5 6 7 8} {7 8 9} 4}
+
+ test {LRANGE inverted indexes} {
+ r lrange mylist 6 2
+ } {}
+
+ test {LRANGE out of range indexes including the full list} {
+ r lrange mylist -1000 1000
+ } {0 1 2 3 4 5 6 7 8 9}
+
+ test {LRANGE against non existing key} {
+ r lrange nosuchkey 0 1
+ } {}
+
+ test {LTRIM basics} {
+ r del mylist
+ for {set i 0} {$i < 100} {incr i} {
+ r lpush mylist $i
+ r ltrim mylist 0 4
+ }
+ r lrange mylist 0 -1
+ } {99 98 97 96 95}
+
+ test {LTRIM stress testing} {
+ set mylist {}
+ set err {}
+ for {set i 0} {$i < 20} {incr i} {
+ lappend mylist $i
+ }
+
+ for {set j 0} {$j < 100} {incr j} {
+ # Fill the list
+ r del mylist
+ for {set i 0} {$i < 20} {incr i} {
+ r rpush mylist $i
+ }
+ # Trim at random
+ set a [randomInt 20]
+ set b [randomInt 20]
+ r ltrim mylist $a $b
+ if {[r lrange mylist 0 -1] ne [lrange $mylist $a $b]} {
+ set err "[r lrange mylist 0 -1] != [lrange $mylist $a $b]"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {LSET} {
+ r del mylist
+ foreach x {99 98 97 96 95} {
+ r rpush mylist $x
+ }
+ r lset mylist 1 foo
+ r lset mylist -1 bar
+ r lrange mylist 0 -1
+ } {99 foo 97 96 bar}
+
+ test {LSET out of range index} {
+ catch {r lset mylist 10 foo} err
+ format $err
+ } {ERR*range*}
+
+ test {LSET against non existing key} {
+ catch {r lset nosuchkey 10 foo} err
+ format $err
+ } {ERR*key*}
+
+ test {LSET against non list value} {
+ r set nolist foobar
+ catch {r lset nolist 0 foo} err
+ format $err
+ } {ERR*value*}
+
+ test {LREM, remove all the occurrences} {
+ r flushdb
+ r rpush mylist foo
+ r rpush mylist bar
+ r rpush mylist foobar
+ r rpush mylist foobared
+ r rpush mylist zap
+ r rpush mylist bar
+ r rpush mylist test
+ r rpush mylist foo
+ set res [r lrem mylist 0 bar]
+ list [r lrange mylist 0 -1] $res
+ } {{foo foobar foobared zap test foo} 2}
+
+ test {LREM, remove the first occurrence} {
+ set res [r lrem mylist 1 foo]
+ list [r lrange mylist 0 -1] $res
+ } {{foobar foobared zap test foo} 1}
+
+ test {LREM, remove non existing element} {
+ set res [r lrem mylist 1 nosuchelement]
+ list [r lrange mylist 0 -1] $res
+ } {{foobar foobared zap test foo} 0}
+
+ test {LREM, starting from tail with negative count} {
+ r flushdb
+ r rpush mylist foo
+ r rpush mylist bar
+ r rpush mylist foobar
+ r rpush mylist foobared
+ r rpush mylist zap
+ r rpush mylist bar
+ r rpush mylist test
+ r rpush mylist foo
+ r rpush mylist foo
+ set res [r lrem mylist -1 bar]
+ list [r lrange mylist 0 -1] $res
+ } {{foo bar foobar foobared zap test foo foo} 1}
+
+ test {LREM, starting from tail with negative count (2)} {
+ set res [r lrem mylist -2 foo]
+ list [r lrange mylist 0 -1] $res
+ } {{foo bar foobar foobared zap test} 2}
+
+ test {LREM, deleting objects that may be encoded as integers} {
+ r lpush myotherlist 1
+ r lpush myotherlist 2
+ r lpush myotherlist 3
+ r lrem myotherlist 1 2
+ r llen myotherlist
+ } {2}
+}
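The LREM count convention exercised by the tests above (count > 0 removes from the head, count < 0 from the tail, count == 0 removes all occurrences) can be sketched in pure Python; this hypothetical `lrem` operates on a plain list rather than a server key and is not part of the patch.

```python
def lrem(lst, count, value):
    """Sketch of LREM: remove up to |count| occurrences of value, scanning
    from the head (count > 0) or the tail (count < 0); count == 0 removes
    every occurrence.  Returns the number of elements removed."""
    removed = 0
    if count >= 0:
        limit = count if count > 0 else len(lst)
        i = 0
        while i < len(lst) and removed < limit:
            if lst[i] == value:
                del lst[i]          # list shifts left; do not advance i
                removed += 1
            else:
                i += 1
    else:
        i = len(lst) - 1
        while i >= 0 and removed < -count:
            if lst[i] == value:
                del lst[i]
                removed += 1
            i -= 1
    return removed
```

The assertions in the Tcl tests above map directly onto this sketch: removing all `bar` elements returns 2, and a negative count starts the scan from the tail.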
--- /dev/null
+start_server default.conf {} {
+ test {SADD, SCARD, SISMEMBER, SMEMBERS basics} {
+ r sadd myset foo
+ r sadd myset bar
+ list [r scard myset] [r sismember myset foo] \
+ [r sismember myset bar] [r sismember myset bla] \
+ [lsort [r smembers myset]]
+ } {2 1 1 0 {bar foo}}
+
+ test {SADD adding the same element multiple times} {
+ r sadd myset foo
+ r sadd myset foo
+ r sadd myset foo
+ r scard myset
+ } {2}
+
+ test {SADD against non set} {
+ r lpush mylist foo
+ catch {r sadd mylist bar} err
+ format $err
+ } {ERR*kind*}
+
+ test {SREM basics} {
+ r sadd myset ciao
+ r srem myset foo
+ lsort [r smembers myset]
+ } {bar ciao}
+
+ test {Mass SADD and SINTER with two sets} {
+ for {set i 0} {$i < 1000} {incr i} {
+ r sadd set1 $i
+ r sadd set2 [expr $i+995]
+ }
+ lsort [r sinter set1 set2]
+ } {995 996 997 998 999}
+
+ test {SUNION with two sets} {
+ lsort [r sunion set1 set2]
+ } [lsort -uniq "[r smembers set1] [r smembers set2]"]
+
+ test {SINTERSTORE with two sets} {
+ r sinterstore setres set1 set2
+ lsort [r smembers setres]
+ } {995 996 997 998 999}
+
+ test {SINTERSTORE with two sets, after a DEBUG RELOAD} {
+ r debug reload
+ r sinterstore setres set1 set2
+ lsort [r smembers setres]
+ } {995 996 997 998 999}
+
+ test {SUNIONSTORE with two sets} {
+ r sunionstore setres set1 set2
+ lsort [r smembers setres]
+ } [lsort -uniq "[r smembers set1] [r smembers set2]"]
+
+ test {SUNIONSTORE against non existing keys} {
+ r set setres xxx
+        list [r sunionstore setres foo111 bar222] [r exists setres]
+ } {0 0}
+
+ test {SINTER against three sets} {
+ r sadd set3 999
+ r sadd set3 995
+ r sadd set3 1000
+ r sadd set3 2000
+ lsort [r sinter set1 set2 set3]
+ } {995 999}
+
+ test {SINTERSTORE with three sets} {
+ r sinterstore setres set1 set2 set3
+ lsort [r smembers setres]
+ } {995 999}
+
+ test {SUNION with non existing keys} {
+ lsort [r sunion nokey1 set1 set2 nokey2]
+ } [lsort -uniq "[r smembers set1] [r smembers set2]"]
+
+ test {SDIFF with two sets} {
+ for {set i 5} {$i < 1000} {incr i} {
+ r sadd set4 $i
+ }
+ lsort [r sdiff set1 set4]
+ } {0 1 2 3 4}
+
+ test {SDIFF with three sets} {
+ r sadd set5 0
+ lsort [r sdiff set1 set4 set5]
+ } {1 2 3 4}
+
+ test {SDIFFSTORE with three sets} {
+ r sdiffstore sres set1 set4 set5
+ lsort [r smembers sres]
+ } {1 2 3 4}
+
+ test {SPOP basics} {
+ r del myset
+ r sadd myset 1
+ r sadd myset 2
+ r sadd myset 3
+ list [lsort [list [r spop myset] [r spop myset] [r spop myset]]] [r scard myset]
+ } {{1 2 3} 0}
+
+ test {SRANDMEMBER} {
+ r del myset
+ r sadd myset a
+ r sadd myset b
+ r sadd myset c
+ unset -nocomplain myset
+ array set myset {}
+ for {set i 0} {$i < 100} {incr i} {
+ set myset([r srandmember myset]) 1
+ }
+ lsort [array names myset]
+ } {a b c}
+
+ test {SMOVE basics} {
+ r sadd myset1 a
+ r sadd myset1 b
+ r sadd myset1 c
+ r sadd myset2 x
+ r sadd myset2 y
+ r sadd myset2 z
+ r smove myset1 myset2 a
+ list [lsort [r smembers myset2]] [lsort [r smembers myset1]]
+ } {{a x y z} {b c}}
+
+ test {SMOVE non existing key} {
+ list [r smove myset1 myset2 foo] [lsort [r smembers myset2]] [lsort [r smembers myset1]]
+ } {0 {a x y z} {b c}}
+
+ test {SMOVE non existing src set} {
+ list [r smove noset myset2 foo] [lsort [r smembers myset2]]
+ } {0 {a x y z}}
+
+ test {SMOVE non existing dst set} {
+ list [r smove myset2 myset3 y] [lsort [r smembers myset2]] [lsort [r smembers myset3]]
+ } {1 {a x z} y}
+
+ test {SMOVE wrong src key type} {
+ r set x 10
+ catch {r smove x myset2 foo} err
+ format $err
+ } {ERR*}
+
+ test {SMOVE wrong dst key type} {
+ r set x 10
+ catch {r smove myset2 x foo} err
+ format $err
+ } {ERR*}
+}
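The set-algebra commands covered above (SINTER, SUNION, SDIFF) reduce to standard set operations, with one Redis-specific rule these tests rely on: a non-existing key behaves as the empty set. A minimal sketch in Python, with `db` a hypothetical dict mapping key names to sets (not part of the patch):

```python
def smembers(db, key):
    # A missing key is treated as the empty set.
    return db.get(key, set())

def sinter(db, *keys):
    out = set(smembers(db, keys[0]))
    for k in keys[1:]:
        out &= smembers(db, k)
    return out

def sunion(db, *keys):
    out = set()
    for k in keys:
        out |= smembers(db, k)
    return out

def sdiff(db, *keys):
    # Members of the first set not present in any of the following sets.
    out = set(smembers(db, keys[0]))
    for k in keys[1:]:
        out -= smembers(db, k)
    return out
```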
--- /dev/null
+start_server default.conf {} {
+ test {ZSET basic ZADD and score update} {
+ r zadd ztmp 10 x
+ r zadd ztmp 20 y
+ r zadd ztmp 30 z
+ set aux1 [r zrange ztmp 0 -1]
+ r zadd ztmp 1 y
+ set aux2 [r zrange ztmp 0 -1]
+ list $aux1 $aux2
+ } {{x y z} {y x z}}
+
+ test {ZCARD basics} {
+ r zcard ztmp
+ } {3}
+
+ test {ZCARD non existing key} {
+ r zcard ztmp-blabla
+ } {0}
+
+ test {ZRANK basics} {
+ r zadd zranktmp 10 x
+ r zadd zranktmp 20 y
+ r zadd zranktmp 30 z
+ list [r zrank zranktmp x] [r zrank zranktmp y] [r zrank zranktmp z]
+ } {0 1 2}
+
+ test {ZREVRANK basics} {
+ list [r zrevrank zranktmp x] [r zrevrank zranktmp y] [r zrevrank zranktmp z]
+ } {2 1 0}
+
+ test {ZRANK - after deletion} {
+ r zrem zranktmp y
+ list [r zrank zranktmp x] [r zrank zranktmp z]
+ } {0 1}
+
+ test {ZSCORE} {
+ set aux {}
+ set err {}
+ for {set i 0} {$i < 1000} {incr i} {
+ set score [expr rand()]
+ lappend aux $score
+ r zadd zscoretest $score $i
+ }
+ for {set i 0} {$i < 1000} {incr i} {
+ if {[r zscore zscoretest $i] != [lindex $aux $i]} {
+ set err "Expected score was [lindex $aux $i] but got [r zscore zscoretest $i] for element $i"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {ZSCORE after a DEBUG RELOAD} {
+ set aux {}
+ set err {}
+ r del zscoretest
+ for {set i 0} {$i < 1000} {incr i} {
+ set score [expr rand()]
+ lappend aux $score
+ r zadd zscoretest $score $i
+ }
+ r debug reload
+ for {set i 0} {$i < 1000} {incr i} {
+ if {[r zscore zscoretest $i] != [lindex $aux $i]} {
+ set err "Expected score was [lindex $aux $i] but got [r zscore zscoretest $i] for element $i"
+ break
+ }
+ }
+ set _ $err
+ } {}
+
+ test {ZRANGE and ZREVRANGE basics} {
+ list [r zrange ztmp 0 -1] [r zrevrange ztmp 0 -1] \
+ [r zrange ztmp 1 -1] [r zrevrange ztmp 1 -1]
+ } {{y x z} {z x y} {x z} {x y}}
+
+ test {ZRANGE WITHSCORES} {
+ r zrange ztmp 0 -1 withscores
+ } {y 1 x 10 z 30}
+
+ test {ZSETs stress tester - sorting is working well?} {
+ set delta 0
+ for {set test 0} {$test < 2} {incr test} {
+ unset -nocomplain auxarray
+ array set auxarray {}
+ set auxlist {}
+ r del myzset
+ for {set i 0} {$i < 1000} {incr i} {
+ if {$test == 0} {
+ set score [expr rand()]
+ } else {
+ set score [expr int(rand()*10)]
+ }
+ set auxarray($i) $score
+ r zadd myzset $score $i
+ # Random update
+ if {[expr rand()] < .2} {
+ set j [expr int(rand()*1000)]
+ if {$test == 0} {
+ set score [expr rand()]
+ } else {
+ set score [expr int(rand()*10)]
+ }
+ set auxarray($j) $score
+ r zadd myzset $score $j
+ }
+ }
+ foreach {item score} [array get auxarray] {
+ lappend auxlist [list $score $item]
+ }
+ set sorted [lsort -command zlistAlikeSort $auxlist]
+ set auxlist {}
+ foreach x $sorted {
+ lappend auxlist [lindex $x 1]
+ }
+ set fromredis [r zrange myzset 0 -1]
+ set delta 0
+ for {set i 0} {$i < [llength $fromredis]} {incr i} {
+ if {[lindex $fromredis $i] != [lindex $auxlist $i]} {
+ incr delta
+ }
+ }
+ }
+ format $delta
+ } {0}
+
+ test {ZINCRBY - can create a new sorted set} {
+ r del zset
+ r zincrby zset 1 foo
+ list [r zrange zset 0 -1] [r zscore zset foo]
+ } {foo 1}
+
+ test {ZINCRBY - increment and decrement} {
+ r zincrby zset 2 foo
+ r zincrby zset 1 bar
+ set v1 [r zrange zset 0 -1]
+ r zincrby zset 10 bar
+ r zincrby zset -5 foo
+ r zincrby zset -5 bar
+ set v2 [r zrange zset 0 -1]
+ list $v1 $v2 [r zscore zset foo] [r zscore zset bar]
+ } {{bar foo} {foo bar} -2 6}
+
+ test {ZRANGEBYSCORE and ZCOUNT basics} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ list [r zrangebyscore zset 2 4] [r zrangebyscore zset (2 (4] \
+ [r zcount zset 2 4] [r zcount zset (2 (4]
+ } {{b c d} c 3 1}
+
+ test {ZRANGEBYSCORE withscores} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ r zrangebyscore zset 2 4 withscores
+ } {b 2 c 3 d 4}
+
+ test {ZRANGEBYSCORE fuzzy test, 100 ranges in 1000 elements sorted set} {
+ set err {}
+ r del zset
+ for {set i 0} {$i < 1000} {incr i} {
+ r zadd zset [expr rand()] $i
+ }
+ for {set i 0} {$i < 100} {incr i} {
+ set min [expr rand()]
+ set max [expr rand()]
+ if {$min > $max} {
+ set aux $min
+ set min $max
+ set max $aux
+ }
+ set low [r zrangebyscore zset -inf $min]
+ set ok [r zrangebyscore zset $min $max]
+ set high [r zrangebyscore zset $max +inf]
+ set lowx [r zrangebyscore zset -inf ($min]
+ set okx [r zrangebyscore zset ($min ($max]
+ set highx [r zrangebyscore zset ($max +inf]
+
+ if {[r zcount zset -inf $min] != [llength $low]} {
+ append err "Error, len does not match zcount\n"
+ }
+ if {[r zcount zset $min $max] != [llength $ok]} {
+ append err "Error, len does not match zcount\n"
+ }
+ if {[r zcount zset $max +inf] != [llength $high]} {
+ append err "Error, len does not match zcount\n"
+ }
+ if {[r zcount zset -inf ($min] != [llength $lowx]} {
+ append err "Error, len does not match zcount\n"
+ }
+ if {[r zcount zset ($min ($max] != [llength $okx]} {
+ append err "Error, len does not match zcount\n"
+ }
+ if {[r zcount zset ($max +inf] != [llength $highx]} {
+ append err "Error, len does not match zcount\n"
+ }
+
+ foreach x $low {
+ set score [r zscore zset $x]
+ if {$score > $min} {
+ append err "Error, score for $x is $score > $min\n"
+ }
+ }
+ foreach x $lowx {
+ set score [r zscore zset $x]
+ if {$score >= $min} {
+ append err "Error, score for $x is $score >= $min\n"
+ }
+ }
+ foreach x $ok {
+ set score [r zscore zset $x]
+ if {$score < $min || $score > $max} {
+ append err "Error, score for $x is $score outside $min-$max range\n"
+ }
+ }
+ foreach x $okx {
+ set score [r zscore zset $x]
+ if {$score <= $min || $score >= $max} {
+ append err "Error, score for $x is $score outside $min-$max open range\n"
+ }
+ }
+ foreach x $high {
+ set score [r zscore zset $x]
+ if {$score < $max} {
+ append err "Error, score for $x is $score < $max\n"
+ }
+ }
+ foreach x $highx {
+ set score [r zscore zset $x]
+ if {$score <= $max} {
+ append err "Error, score for $x is $score <= $max\n"
+ }
+ }
+ }
+ set _ $err
+ } {}
+
+ test {ZRANGEBYSCORE with LIMIT} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ list \
+ [r zrangebyscore zset 0 10 LIMIT 0 2] \
+ [r zrangebyscore zset 0 10 LIMIT 2 3] \
+ [r zrangebyscore zset 0 10 LIMIT 2 10] \
+ [r zrangebyscore zset 0 10 LIMIT 20 10]
+ } {{a b} {c d e} {c d e} {}}
+
+ test {ZRANGEBYSCORE with LIMIT and withscores} {
+ r del zset
+ r zadd zset 10 a
+ r zadd zset 20 b
+ r zadd zset 30 c
+ r zadd zset 40 d
+ r zadd zset 50 e
+ r zrangebyscore zset 20 50 LIMIT 2 3 withscores
+ } {d 40 e 50}
+
+ test {ZREMRANGEBYSCORE basics} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ list [r zremrangebyscore zset 2 4] [r zrange zset 0 -1]
+ } {3 {a e}}
+
+ test {ZREMRANGEBYSCORE from -inf to +inf} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ list [r zremrangebyscore zset -inf +inf] [r zrange zset 0 -1]
+ } {5 {}}
+
+ test {ZREMRANGEBYRANK basics} {
+ r del zset
+ r zadd zset 1 a
+ r zadd zset 2 b
+ r zadd zset 3 c
+ r zadd zset 4 d
+ r zadd zset 5 e
+ list [r zremrangebyrank zset 1 3] [r zrange zset 0 -1]
+ } {3 {a e}}
+
+ test {ZUNION against non-existing key doesn't set destination} {
+ r del zseta
+ list [r zunion dst_key 1 zseta] [r exists dst_key]
+ } {0 0}
+
+ test {ZUNION basics} {
+ r del zseta zsetb zsetc
+ r zadd zseta 1 a
+ r zadd zseta 2 b
+ r zadd zseta 3 c
+ r zadd zsetb 1 b
+ r zadd zsetb 2 c
+ r zadd zsetb 3 d
+ list [r zunion zsetc 2 zseta zsetb] [r zrange zsetc 0 -1 withscores]
+ } {4 {a 1 b 3 d 3 c 5}}
+
+ test {ZUNION with weights} {
+ list [r zunion zsetc 2 zseta zsetb weights 2 3] [r zrange zsetc 0 -1 withscores]
+ } {4 {a 2 b 7 d 9 c 12}}
+
+ test {ZUNION with AGGREGATE MIN} {
+ list [r zunion zsetc 2 zseta zsetb aggregate min] [r zrange zsetc 0 -1 withscores]
+ } {4 {a 1 b 1 c 2 d 3}}
+
+ test {ZUNION with AGGREGATE MAX} {
+ list [r zunion zsetc 2 zseta zsetb aggregate max] [r zrange zsetc 0 -1 withscores]
+ } {4 {a 1 b 2 c 3 d 3}}
+
+ test {ZINTER basics} {
+ list [r zinter zsetc 2 zseta zsetb] [r zrange zsetc 0 -1 withscores]
+ } {2 {b 3 c 5}}
+
+ test {ZINTER with weights} {
+ list [r zinter zsetc 2 zseta zsetb weights 2 3] [r zrange zsetc 0 -1 withscores]
+ } {2 {b 7 c 12}}
+
+ test {ZINTER with AGGREGATE MIN} {
+ list [r zinter zsetc 2 zseta zsetb aggregate min] [r zrange zsetc 0 -1 withscores]
+ } {2 {b 1 c 2}}
+
+ test {ZINTER with AGGREGATE MAX} {
+ list [r zinter zsetc 2 zseta zsetb aggregate max] [r zrange zsetc 0 -1 withscores]
+ } {2 {b 2 c 3}}
+
+ test {ZSETs skiplist implementation backlink consistency test} {
+ set diff 0
+ set elements 10000
+ for {set j 0} {$j < $elements} {incr j} {
+ r zadd myzset [expr rand()] "Element-$j"
+ r zrem myzset "Element-[expr int(rand()*$elements)]"
+ }
+ set l1 [r zrange myzset 0 -1]
+ set l2 [r zrevrange myzset 0 -1]
+ for {set j 0} {$j < [llength $l1]} {incr j} {
+ if {[lindex $l1 $j] ne [lindex $l2 end-$j]} {
+ incr diff
+ }
+ }
+ format $diff
+ } {0}
+
+ test {ZSETs ZRANK augmented skip list stress testing} {
+ set err {}
+ r del myzset
+ for {set k 0} {$k < 10000} {incr k} {
+ set i [expr {$k%1000}]
+ if {[expr rand()] < .2} {
+ r zrem myzset $i
+ } else {
+ set score [expr rand()]
+ r zadd myzset $score $i
+ }
+ set card [r zcard myzset]
+ if {$card > 0} {
+ set index [randomInt $card]
+ set ele [lindex [r zrange myzset $index $index] 0]
+ set rank [r zrank myzset $ele]
+ if {$rank != $index} {
+ set err "$ele RANK is wrong! ($rank != $index)"
+ break
+ }
+ }
+ }
+ set _ $err
+ } {}
+}
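The ZRANGEBYSCORE bound syntax stressed by the fuzzy test above (a leading `(` makes a bound exclusive, `-inf`/`+inf` are open ends) can be sketched in pure Python. `parse_bound` and `zrangebyscore` are hypothetical stand-ins, operating on a list of (member, score) pairs rather than a server-side sorted set; the tie-break on member name is a simplification.

```python
def parse_bound(spec):
    """A leading '(' marks the bound exclusive; -inf/+inf map to IEEE
    infinities (float() parses both spellings directly)."""
    exclusive = spec.startswith("(")
    if exclusive:
        spec = spec[1:]
    return float(spec), exclusive

def zrangebyscore(scored, minspec, maxspec):
    """Return members whose score lies in the requested (possibly
    half-open) interval, in ascending score order."""
    lo, lo_ex = parse_bound(minspec)
    hi, hi_ex = parse_bound(maxspec)
    out = []
    for member, score in sorted(scored, key=lambda p: (p[1], p[0])):
        above = score > lo if lo_ex else score >= lo
        below = score < hi if hi_ex else score <= hi
        if above and below:
            out.append(member)
    return out
```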