AppendOnlyFileHowto: Contents
  Append Only File HOWTO
    General Information
    Log rewriting
    Wait... but how does this work?
    How durable is the append only file?
    What should I do if my Append Only File gets corrupted?

AppendOnlyFileHowto

Append Only File HOWTO

General Information

The append only file is an alternative durability option for Redis. What does this mean? Let's start with a fact: snapshotting is not very durable. If your computer running Redis stops, your power line fails, or you accidentally kill -9 your instance, the latest data written to Redis will be lost.

What is the solution? To use the append only file as an alternative to snapshotting. How does it work?
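
Enabling it is a single directive in redis.conf; a minimal sketch (the file is called appendonly.aof by default):

  appendonly yes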

Log rewriting

As you can guess, the append log file gets bigger and bigger every time there is a new operation changing the dataset. Even if you always set the same key "mykey" to the values "1", "2", "3", ... up to 10000000000, in the end you'll have just a single key in the dataset, taking just a few bytes! But how big will the append log file be? Very, very big.
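
To make this concrete, here is a hypothetical sketch of what the log accumulates (the real file stores the commands in the Redis protocol, not in this simplified form):

  SET mykey 1
  SET mykey 2
  SET mykey 3
  ... billions of entries later ...
  SET mykey 10000000000

while a single command is enough to rebuild the final dataset:

  SET mykey 10000000000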

So Redis supports an interesting feature: it is able to rebuild the append log file in the background, without stopping the processing of client commands. The key is the command BGREWRITEAOF. This command basically uses the dataset in memory in order to rewrite the shortest sequence of commands able to rebuild the exact dataset that is currently in memory.

So from time to time, when the log gets too big, try this command. It's safe: if it fails you will not lose your old log (but you can make a backup copy anyway, given that currently 1.1 is still in beta!).
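
For example, a rewrite can be triggered from redis-cli (the reply shown is what the server prints and may vary slightly between versions):

  $ redis-cli BGREWRITEAOF
  Background append only file rewriting started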

Wait... but how does this work?

Basically it uses the same fork() copy-on-write trick that snapshotting already uses. This is how the algorithm works:

  Redis forks(), so now we have a child and a parent process.
  The child starts writing the new append log file into a temporary file.
  The parent accumulates all the new changes in an in-memory buffer (but at the same time it keeps writing the new changes to the old append only file, so if the rewriting fails we are safe).
  When the child is done rewriting the file, the parent gets a signal and appends the in-memory buffer at the end of the file generated by the child.
  Now Redis atomically renames the new file over the old one and starts appending new data to it.

How durable is the append only file?

Check redis.conf: you can configure how often Redis will fsync() data on disk. There are three options:

  appendfsync always: fsync() after every write to the append only file. Very, very slow, but very safe.
  appendfsync everysec: fsync() only one time every second. Fast enough, and you can lose at most one second of data if there is a disaster.
  appendfsync no: never fsync(), just let the operating system flush the data when it wants. Faster, but less durable.

The suggested (and default) policy is "everysec". It is both very fast and pretty safe. The "always" policy is very slow in practice: even if it was improved in Redis 2.0.0, there is no way to make fsync() faster than it is.
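
In redis.conf this is a one-line setting; a minimal sketch, with the two alternatives left commented out:

  # appendfsync always
  appendfsync everysec
  # appendfsync no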

What should I do if my Append Only File gets corrupted?

It is possible for the server to crash while writing the AOF file (this still should never lead to inconsistencies), corrupting the file in a way that is no longer loadable by Redis. When this happens you can fix the problem using the following procedure:
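
As a sketch of the usual sequence, assuming the default file name appendonly.aof: make a backup copy, then repair the file with the redis-check-aof utility shipped with Redis, and restart the server with the fixed file.

  $ cp appendonly.aof appendonly.aof.bak
  $ redis-check-aof --fix appendonly.aof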