#
# bind 127.0.0.1
-# Close the connection after a client is idle for N seconds
+# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300
# Save the DB on disk:
save 300 10
save 60 10000
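+# Each 'save <seconds> <changes>' line above means: save the DB if at least
+# <changes> changes occurred within <seconds> seconds (e.g. 'save 300 10'
+# saves after 300 seconds if at least 10 changes occurred).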
+# The filename where the DB will be dumped
+dbfilename dump.rdb
+
+# By default, save/load the DB in/from the working directory
# Note that you must specify a directory, not a file name.
dir ./
# Note that if you use standard output for logging but daemonize, logs
# will be sent to /dev/null
logfile stdout
-# Set the number of databases.
+# Set the number of databases. The default database is DB 0, you can select
+# a different one on a per-connection basis using SELECT <dbid> where
+# dbid is a number between 0 and 'databases'-1
databases 16
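+# For example (illustrative session), a client can issue 'SELECT 1' to switch
+# to DB 1 and 'SELECT 0' to return to the default database.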
################################# REPLICATION #################################
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
-#requirepass foobared
+# requirepass foobared
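+# If a password is set, clients must authenticate with 'AUTH <password>'
+# before any other command is accepted, e.g. (illustrative): AUTH foobared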
+
+################################### LIMITS ####################################
+
+# Set the max number of connected clients at the same time. By default there
+# is no limit, and it's up to the number of file descriptors the Redis process
+# is able to open. The special value '0' means no limit.
+# Once the limit is reached, Redis will close all the new connections, sending
+# an error 'max number of clients reached'.
+
+# maxclients 128
+
+# Don't use more memory than the specified amount of bytes.
+# When the memory limit is reached, Redis will try to remove keys with an
+# EXPIRE set. It will start by freeing keys that are going to expire soon,
+# preserving keys with a longer time to live.
+# Redis will also try to remove objects from free lists if possible.
+#
+# If all this fails, Redis will start to reply with errors to commands
+# that will use more memory, like SET, LPUSH, and so on, and will continue
+# to reply to most read-only commands like GET.
+#
+# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
+# 'state' server or cache, not as a real DB. When Redis is used as a real
+# database the memory usage will grow over the weeks; it will be obvious if
+# it is going to use too much memory in the long run, and you'll have time
+# to upgrade. With maxmemory, once the limit is reached, you'll start to get
+# errors for write operations, and this may even lead to DB inconsistency.
+
+# maxmemory <bytes>
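+#
+# Illustrative example: to cap memory usage at roughly 100 megabytes
+# (the value is given in plain bytes), one could use:
+#
+# maxmemory 104857600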
############################### ADVANCED CONFIG ###############################
# Use object sharing. It can save a lot of memory if you have many common
# strings in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
+#
+# When object sharing is enabled (shareobjects yes) you can use
+# shareobjectspoolsize to control the size of the pool used in order to try
+# object sharing. A bigger pool size will lead to better sharing capabilities.
+# In general you want this value to be at least double the number of
+# very common strings you have in your dataset.
+#
+# WARNING: object sharing is experimental, so don't enable this feature
+# in production before Redis 1.0-stable. Still, please try this feature in
+# your development environment so that we can test it better.
shareobjects no
+shareobjectspoolsize 1024
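+# For instance (illustrative sizing): with roughly 2000 very common strings
+# in the dataset, the rule above suggests a pool of at least 4096:
+#
+# shareobjectspoolsize 4096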