X-Git-Url: https://git.saurik.com/redis.git/blobdiff_plain/6d4371d46939ef6b10b9205af6af56e1146cdf91..44b38ef43259e8805b01db01ad9a1c67479c6194:/redis.conf

diff --git a/redis.conf b/redis.conf
index e8c6cd82..fac5ba60 100644
--- a/redis.conf
+++ b/redis.conf
@@ -78,6 +78,52 @@ databases 16
 # requirepass foobared
 
+################################### LIMITS ####################################
+
+# Set the max number of connected clients at the same time. By default there
+# is no limit, and it is only bounded by the number of file descriptors the
+# Redis process is able to open. The special value '0' means no limit.
+# Once the limit is reached Redis will close all new connections, sending
+# the error 'max number of clients reached'.
+
+# maxclients 128
+
+# Don't use more memory than the specified amount of bytes.
+# When the memory limit is reached Redis will try to remove keys with an
+# EXPIRE set. It will try to start freeing keys that are going to expire
+# soon and preserve keys with a longer time to live.
+# Redis will also try to remove objects from free lists if possible.
+#
+# If all this fails, Redis will start to reply with errors to commands
+# that use more memory, like SET, LPUSH, and so on, and will continue
+# to reply to most read-only commands like GET.
+#
+# WARNING: maxmemory is mainly a good idea if you want to use Redis as a
+# 'state' server or cache, not as a real DB. When Redis is used as a real
+# database the memory usage will grow over the weeks; it will be obvious if
+# it is going to use too much memory in the long run, and you'll have time
+# to upgrade. With maxmemory, once the limit is reached you'll start to get
+# errors on write operations, and this may even lead to DB inconsistency.
+
+# maxmemory
+
+############################## APPEND ONLY MODE ###############################
+
+# By default Redis asynchronously dumps the dataset on disk. If you can live
+# with the idea that the latest records will be lost if something like a crash
+# happens, this is the preferred way to run Redis. If instead you care a lot
+# about your data and can't accept that even a single record may be lost, you
+# should enable the append only mode: when this mode is enabled Redis appends
+# every write operation it receives to the file appendonly.log. This file will
+# be read on startup in order to rebuild the full dataset in memory.
+#
+# Note that you can have both the async dumps and the append only file if you
+# like (you have to comment out the "save" statements above to disable the
+# dumps). Still, if append only mode is enabled, Redis will load the data from
+# the log file at startup, ignoring the dump.rdb file.
+
+# appendonly yes
+
 ############################### ADVANCED CONFIG ###############################
 
 # Glue small output buffers together in order to send small replies in a
@@ -89,4 +135,15 @@ glueoutputbuf yes
 # string in your dataset, but performs lookups against the shared objects
 # pool so it uses more CPU and can be a bit slower. Usually it's a good
 # idea.
+#
+# When object sharing is enabled (shareobjects yes) you can use
+# shareobjectspoolsize to control the size of the pool used in order to try
+# object sharing. A bigger pool size will lead to better sharing capabilities.
+# In general you want this value to be at least double the number of
+# very common strings you have in your dataset.
+#
+# WARNING: object sharing is experimental, don't enable this feature
+# in production before Redis 1.0-stable. Still, please try this feature in
+# your development environment so that we can test it better.
 shareobjects no
+shareobjectspoolsize 1024
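
For reference, below is a minimal sketch of how the directives introduced by this patch could be combined in a redis.conf. The values are illustrative assumptions only and are not recommendations taken from the patch itself; comments are kept on their own lines, as in the rest of the file.

# Hypothetical example combining the directives added by this patch.
# All values below are assumptions chosen for illustration.

# Refuse new connections beyond 128 simultaneous clients.
maxclients 128

# Memory limit expressed in bytes (here roughly 100 MB).
maxmemory 104857600

# Log every write operation to appendonly.log and rebuild the dataset from it
# on startup (the dump.rdb file is ignored at load time when this is enabled).
appendonly yes

# Object sharing is still experimental, so it stays disabled here; the pool
# size below only matters once shareobjects is turned on.
shareobjects no
shareobjectspoolsize 1024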