efficiently store only the changes to that tarball from one day to the next.
For small files, bup's compression won't be as good as xdelta's, but for
anything over a few megabytes in size, bup's compression will actually
*work*, which is a big advantage over xdelta.
How does hashsplitting work? It's deceptively simple. We read through the
file one byte at a time, calculating a rolling checksum of the last 64
bytes. (Why 64? No reason. Literally. We picked it out of the air.
Probably some other number is better. Feel free to join the mailing list
and tell us which one and why.) (The rolling checksum idea is actually
stolen from rsync and xdelta, although we use it differently. And they use
Save us! ... oh boy, I sure hope he doesn't read this)
In any case, rollsum seems to do pretty well at its job.
You can find it in bupsplit.c. Basically, it converts the last 64 bytes
read into a 32-bit integer. What we then do is take the lowest 13
bits of the rollsum, and if they're all 1's, we consider that to be the end
of a chunk. This happens on average once every 2^13 = 8192 bytes, so the
average chunk size is 8192 bytes.
the same. All that matters to the hashsplitting algorithm is the 64-byte
"separator" sequence, and a single change can only affect, at most, one
separator sequence or the bytes between two separator sequences. And
because of rollsum, about one in 8192 possible 64-byte sequences is a
separator sequence. Like magic, the hashsplit chunking algorithm will chunk
your file the same way every time, even without knowing how it had chunked
it previously.
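
To make that concrete, here's a rough sketch of the chunking loop in
Python. It is *not* the code in bupsplit.c (the real rollsum is written
in C and differs in its details), but it has the same shape: a 64-byte
rolling window, an rsync-style pair of running sums, and a split
wherever the low 13 bits of the checksum are all 1's. The function
name and the CHAR_OFFSET constant are made up for this illustration.

    WINDOW_SIZE = 64      # the 64-byte rolling window discussed above
    SPLIT_BITS = 13       # low bits tested; average chunk = 2**13 = 8192
    SPLIT_MASK = (1 << SPLIT_BITS) - 1
    CHAR_OFFSET = 31      # arbitrary; keeps the sums moving on zero bytes

    def hashsplit_sketch(data):
        """Yield (offset, size) for each chunk of the bytes in 'data'."""
        window = bytearray(WINDOW_SIZE)  # last 64 bytes, as a ring buffer
        s1 = WINDOW_SIZE * CHAR_OFFSET
        s2 = CHAR_OFFSET * WINDOW_SIZE * (WINDOW_SIZE + 1) // 2
        pos = 0      # next slot in the ring buffer to overwrite
        start = 0    # offset where the current chunk began
        for i, new in enumerate(data):
            old = window[pos]            # byte falling out of the window
            window[pos] = new
            pos = (pos + 1) % WINDOW_SIZE
            s1 += new - old              # sum of the window bytes (+ offset)
            s2 += s1 - WINDOW_SIZE * (old + CHAR_OFFSET)  # weighted sum
            # The 32-bit integer mentioned above:
            digest = ((s1 & 0xffff) << 16) | (s2 & 0xffff)
            # End the chunk wherever the low 13 bits are all 1's.
            if digest & SPLIT_MASK == SPLIT_MASK:
                yield (start, i + 1 - start)
                start = i + 1
        if start < len(data):            # whatever is left is the last chunk
            yield (start, len(data) - start)

Because each boundary depends only on the 64 bytes right before it,
feeding in the same file (or a file with a few bytes changed somewhere
in the middle) reproduces all the boundaries outside the affected
chunks, which is exactly the property described above.
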
As an overhead percentage, 0.25% basically doesn't matter. 488 megs sounds
like a lot, but compared to the 200GB you have to store anyway, it's
irrelevant. What *is* relevant is that 488 megs is a lot of memory you have
to use in order to keep track of the list. Worse, if you back up an
almost-identical file tomorrow, you'll have *another* 488 meg blob to keep
track of, and it'll be almost but not quite the same as last time.
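
(For the record, here's the arithmetic behind those two numbers,
assuming one 20-byte SHA-1 per chunk and the 8192-byte average chunk
size from the hashsplitting discussion above:)

    avg_chunk = 8192                       # 2**13, the average chunk size
    sha1_bytes = 20                        # one SHA-1 id per chunk
    print(sha1_bytes / avg_chunk)          # 0.00244..., i.e. about 0.25%
    print(200e9 / avg_chunk * sha1_bytes)  # ~488 million bytes for 200GB
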
"frequently" and that git handles much more frequent changes than, say, svn
can handle. But that's not the same kind of "frequently" we're talking
about. Imagine you're backing up all the files on your disk, and one of
those files is a 100 GB database file with hundreds of daily users. Your
disk changes so frequently you can't even back up all the revisions even if
you were backing stuff up 24 hours a day. That's "frequently.")
repository. There's just one more thing we have to deal with:
filesystem metadata. Git repositories are really only intended to
store file contents with a small bit of extra information, like
symlink targets and executable bits, so we have to store the rest
some other way.
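
To see why, remember how little a git tree actually records per entry:
just a mode, a name, and an object id. The mode is where the executable
bit and the "this is a symlink" marker live, and that's about it; there
is no owner, group, timestamp, or any other permission bits. Roughly
(a hypothetical table, but these are the mode values git normally uses):

    # The only per-entry metadata in a git tree is a mode field.
    GIT_TREE_MODES = {
        0o100644: "regular file",
        0o100755: "executable file",
        0o120000: "symlink (the blob contains the target path)",
        0o040000: "subdirectory (another tree object)",
        0o160000: "gitlink (a submodule commit)",
    }
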
Bup stores more complete metadata in the VFS in a file named .bupm in
exist in the repository.
Determination of dirtiness is a little more complicated than it sounds. The
most dirtiness-relevant flag in the bupindex is IX_HASHVALID; if
this flag is reset, the file *definitely* is dirty and needs to be backed
up. But a file may be dirty even if IX_HASHVALID is set, and that's the
confusing part.
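
In code terms, the only simple part looks roughly like this (a
hypothetical helper, not bup's actual index code; IX_HASHVALID is the
real flag name, but the numeric value here is made up):

    IX_HASHVALID = 0x8000    # illustrative value only

    def definitely_dirty(flags):
        """A reset IX_HASHVALID means the file must be re-read and re-hashed."""
        return not (flags & IX_HASHVALID)

The catch is the other direction: definitely_dirty() returning False
does not mean the file is clean, which is the confusing part explained
next.
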