# Redis configuration file example

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
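#
# For example (a sketch; the config path is illustrative): to run Redis in
# the background, set 'daemonize yes' above and start the server pointing
# at this file:
#
#   ./redis-server /path/to/redis.conf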

# When run as a daemon, Redis writes a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379
port 6379

# If you want you can bind a single interface; if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300

# Set server verbosity level.
# It can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug

# Specify the log file name. Also 'stdout' can be used to force
# the daemon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
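#
# For example, a redis-cli session that works against DB 1 instead of the
# default DB 0 (a sketch; the key name and value are illustrative):
#
#   SELECT 1
#   SET mykey somevalue
#   GET mykey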

################################ SNAPSHOTTING #################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
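#
# A snapshot can also be requested at any time from a client, independently
# of the save points above (a redis-cli sketch; BGSAVE saves in the
# background while the server keeps serving queries):
#
#   BGSAVE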

# Compress string objects using LZF when dumping .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# The filename where to dump the DB
dbfilename dump.rdb

# By default save/load the DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
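#
# For example (a sketch; the address is illustrative), to make this instance
# a copy of a master running on 10.0.0.1, port 6379:
#
# slaveof 10.0.0.1 6379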

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared
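#
# When a password is set, clients must authenticate before issuing any other
# command (a redis-cli sketch; 'foobared' is just the example password above):
#
#   AUTH foobared
#   PING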

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limits.
# Once the limit is reached Redis will close all the new connections, sending
# an error 'max number of clients reached'.
#
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in a short time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory, after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
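#
# For example, to cap memory usage at roughly 100 megabytes (a sketch; note
# that this directive takes a plain number of bytes):
#
# maxmemory 104857600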

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want a single record to be lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received to the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check BGREWRITEAOF to see how to rewrite the append
# log file in background when it gets too big.
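#
# For example, the rewrite can be triggered manually from redis-cli
# (a sketch):
#
#   BGREWRITEAOF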

appendonly no

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really
# flush data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" as it's the safest of the options. It's up to you to
# understand if you can relax this to "everysec", which will fsync every second,
# or to "no", which will let the operating system flush the output buffer when
# it wants, for better performance (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).

appendfsync always
# appendfsync everysec
# appendfsync no

############################### ADVANCED CONFIG ###############################

# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the time it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes

# Use object sharing. Can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least double the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before Redis 1.0-stable. Still, please try this feature in
# your development environment so that we can test it better.
shareobjects no
shareobjectspoolsize 1024
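
# A worked sizing example for the pool (a sketch following the rule of thumb
# above): with around 500 very common strings in the dataset, you would want
# a pool size of at least 1000, so the value of 1024 set above would be a
# reasonable fit.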