<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
<link type="text/css" rel="stylesheet" href="style.css" />
</head>
<body>
<div id="page">

<div id='header'>
<a href="index.html">
<img style="border:none" alt="Redis Documentation" src="redis.png">
</a>
</div>

<div id="pagecontent">
<div class="index">
<!-- This is a (PRE) block. Make sure it's left aligned or your toc title will be off. -->
<b>Benchmarks: Contents</b><br>&nbsp;&nbsp;<a href="#How Fast is Redis?">How Fast is Redis?</a><br>&nbsp;&nbsp;<a href="#Latency percentiles">Latency percentiles</a>
</div>

<h1 class="wikiname">Benchmarks</h1>

<div class="summary">

</div>

<div class="narrow">
<h1><a name="How Fast is Redis?">How Fast is Redis?</a></h1>Redis includes the <code name="code" class="python">redis-benchmark</code> utility that simulates <a href="SETs.html">SETs</a>/GETs performed by N clients at the same time, sending M total queries (it is similar to Apache's <code name="code" class="python">ab</code> utility). Below you'll find the full output of the benchmark executed against a Linux box.<br/><br/><ul><li> The test was done with 50 simultaneous clients performing 100000 requests.</li><li> The value SET and GET is a 256-byte string.</li><li> The Linux box runs <b>Linux 2.6</b> on a <b>Xeon X3320 at 2.5 GHz</b>.</li><li> The test was executed using the loopback interface (127.0.0.1).</li></ul>
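As a rough sketch, a run equivalent to the one above can usually be reproduced with a command line like the following, where <code name="code" class="python">-c</code> selects the number of clients, <code name="code" class="python">-n</code> the total number of requests and <code name="code" class="python">-d</code> the payload size in bytes (option names may vary slightly between redis-benchmark versions):<br/><br/><pre class="codeblock python" name="code">
./redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -d 256
</pre>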
Results: <b>about 110000 <a href="SETs.html">SETs</a> per second, about 81000 GETs per second.</b><h1><a name="Latency percentiles">Latency percentiles</a></h1><pre class="codeblock python" name="code">
./redis-benchmark -n 100000

====== SET ======
100007 requests completed in 0.88 seconds
50 parallel clients
3 bytes payload
keep alive: 1

58.50% &lt;= 0 milliseconds
99.17% &lt;= 1 milliseconds
99.58% &lt;= 2 milliseconds
99.85% &lt;= 3 milliseconds
99.90% &lt;= 6 milliseconds
100.00% &lt;= 9 milliseconds
114293.71 requests per second

====== GET ======
100000 requests completed in 1.23 seconds
50 parallel clients
3 bytes payload
keep alive: 1

43.12% &lt;= 0 milliseconds
96.82% &lt;= 1 milliseconds
98.62% &lt;= 2 milliseconds
100.00% &lt;= 3 milliseconds
81234.77 requests per second

====== INCR ======
100018 requests completed in 1.46 seconds
50 parallel clients
3 bytes payload
keep alive: 1

32.32% &lt;= 0 milliseconds
96.67% &lt;= 1 milliseconds
99.14% &lt;= 2 milliseconds
99.83% &lt;= 3 milliseconds
99.88% &lt;= 4 milliseconds
99.89% &lt;= 5 milliseconds
99.96% &lt;= 9 milliseconds
100.00% &lt;= 18 milliseconds
68458.59 requests per second

====== LPUSH ======
100004 requests completed in 1.14 seconds
50 parallel clients
3 bytes payload
keep alive: 1

62.27% &lt;= 0 milliseconds
99.74% &lt;= 1 milliseconds
99.85% &lt;= 2 milliseconds
99.86% &lt;= 3 milliseconds
99.89% &lt;= 5 milliseconds
99.93% &lt;= 7 milliseconds
99.96% &lt;= 9 milliseconds
100.00% &lt;= 22 milliseconds
100.00% &lt;= 208 milliseconds
88109.25 requests per second

====== LPOP ======
100001 requests completed in 1.39 seconds
50 parallel clients
3 bytes payload
keep alive: 1

54.83% &lt;= 0 milliseconds
97.34% &lt;= 1 milliseconds
99.95% &lt;= 2 milliseconds
99.96% &lt;= 3 milliseconds
99.96% &lt;= 4 milliseconds
100.00% &lt;= 9 milliseconds
100.00% &lt;= 208 milliseconds
71994.96 requests per second
</pre>Notes: changing the payload from 256 to 1024 or 4096 bytes does not change the numbers significantly (but reply packets are glued together up to 1024 bytes, so GETs may be slower with big payloads). The same is true for the number of clients: from 50 to 256 clients I got the same numbers. With only 10 clients it starts to get a bit slower (an example command line for varying these parameters follows the sample outputs below).<br/><br/>You can expect different results from different boxes. For example, a low profile box like an <b>Intel Core Duo T5500 clocked at 1.66 GHz running Linux 2.6</b> will output the following:
<pre class="codeblock python" name="code">
./redis-benchmark -q -n 100000
SET: 53684.38 requests per second
GET: 45497.73 requests per second
INCR: 39370.47 requests per second
LPUSH: 34803.41 requests per second
LPOP: 37367.20 requests per second
</pre>Another one using a 64-bit box, a Xeon L5420 clocked at 2.5 GHz:<br/><br/><pre class="codeblock python" name="code">
./redis-benchmark -q -n 100000
PING: 111731.84 requests per second
SET: 108114.59 requests per second
GET: 98717.67 requests per second
INCR: 95241.91 requests per second
LPUSH: 104712.05 requests per second
LPOP: 93722.59 requests per second
</pre>
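As noted above, the payload size and the number of clients can be changed directly from the command line. A minimal sketch, assuming the <code name="code" class="python">-c</code> (clients), <code name="code" class="python">-d</code> (payload size in bytes) and <code name="code" class="python">-q</code> (quiet, one line per test) options of your redis-benchmark build:<br/><br/><pre class="codeblock python" name="code">
# 100 clients, 1024 bytes payload, quiet output
./redis-benchmark -q -c 100 -n 100000 -d 1024
</pre>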
</div>

</div>
</div>
</body>
</html>