efficiently store only the changes to that tarball from one day to the next.
For small files, bup's compression won't be as good as xdelta's, but for
anything over a few megabytes in size, bup's compression will actually
*work*, which is a big advantage over xdelta.
How does hashsplitting work? It's deceptively simple. We read through the
file one byte at a time, calculating a rolling checksum of the last 64
bytes. (Why 64? No reason. Literally. We picked it out of the air.
Probably some other number is better. Feel free to join the mailing list
and tell us which one and why.) (The rolling checksum idea is actually
stolen from rsync and xdelta, although we use it differently. And they use
Save us! ... oh boy, I sure hope he doesn't read this)
In any case, rollsum seems to do pretty well at its job.
You can find it in bupsplit.c. Basically, it converts the last 64 bytes
read into a 32-bit integer. What we then do is take the lowest 13
bits of the rollsum, and if they're all 1's, we consider that to be the end
of a chunk. This happens on average once every 2^13 = 8192 bytes, so the
the same. All that matters to the hashsplitting algorithm is the 64-byte
"separator" sequence, and a single change can only affect, at most, one
separator sequence or the bytes between two separator sequences. And
because of rollsum, about one in 8192 possible 64-byte sequences is a
separator sequence. Like magic, the hashsplit chunking algorithm will chunk
your file the same way every time, even without knowing how it had chunked
it previously.
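To make that concrete, here's a toy hashsplitter. The rolling sum below is modeled on the rsync-style checksum described above, but the constants and details are illustrative only; bupsplit.c is the authoritative version.

```python
# A toy hashsplitter sketch -- see bupsplit.c for the real thing.
WINDOW = 64                   # rolling checksum window, per the text above
CHAR_OFFSET = 31              # bias added to every byte (rsync-style)
SPLIT_MASK = (1 << 13) - 1    # low 13 bits all ones => chunk boundary

def hashsplit(data):
    """Split data into chunks at content-defined boundaries."""
    s1 = WINDOW * CHAR_OFFSET
    s2 = WINDOW * (WINDOW - 1) * CHAR_OFFSET
    window = bytearray(WINDOW)    # circular buffer of the last 64 bytes
    chunks, start = [], 0
    for i, b in enumerate(data):
        drop = window[i % WINDOW]
        s1 = (s1 + b - drop) & 0xffff
        s2 = (s2 + s1 - WINDOW * (drop + CHAR_OFFSET)) & 0xffff
        window[i % WINDOW] = b
        if s2 & SPLIT_MASK == SPLIT_MASK:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])   # trailing partial chunk
    return chunks
```

Inserting a byte near the front of a big file perturbs at most a chunk or two; once the inserted byte slides out of the 64-byte window, the sums resynchronize and all later boundaries land in the same places.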
As an overhead percentage, 0.25% basically doesn't matter. 488 megs sounds
like a lot, but compared to the 200GB you have to store anyway, it's
irrelevant. What *is* relevant is that 488 megs is a lot of memory you have
to use in order to keep track of the list. Worse, if you back up an
almost-identical file tomorrow, you'll have *another* 488 meg blob to keep
track of, and it'll be almost but not quite the same as last time.
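The 488 megs fall out of simple arithmetic: one 20-byte SHA-1 per ~8-KiB chunk. A quick sanity check:

```python
# Where the numbers above come from: one 20-byte SHA-1 per ~8 KiB chunk.
total = 200 * 10**9               # 200 GB of data being backed up
chunk = 8192                      # average hashsplit chunk size
sha1_len = 20                     # bytes per SHA-1 hash
index_bytes = total // chunk * sha1_len
print(index_bytes)                # ~488 million bytes of hashes
print(100 * index_bytes / total)  # ~0.24% overhead
```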
"frequently" and that git handles much more frequent changes than, say, svn
can handle. But that's not the same kind of "frequently" we're talking
about. Imagine you're backing up all the files on your disk, and one of
those files is a 100 GB database file with hundreds of daily users. Your
disk changes so frequently you can't even back up all the revisions even if
you were backing stuff up 24 hours a day. That's "frequently.")
Imagine you had a midx file for your 200 packs. midx files are a lot like
idx files; they have a lookup table at the beginning that narrows down the
initial search, followed by a binary search. Then unlike idx files (which
have a fixed-size 256-entry lookup table) midx tables have a variably-sized
table that makes sure the entire binary search can be contained to a single
page of the midx file. Basically, the lookup table tells you which page to
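The two-level lookup idea can be sketched like this (this is not the actual midx file format, just the shape of the search): a fanout table keyed on the hash's leading bits narrows things to a small region, and a binary search finishes the job there.

```python
# Sketch of a fanout table plus binary search (not the real midx layout).
import bisect

def build_fanout(sorted_hashes, bits):
    """Cumulative count of hashes per `bits`-bit prefix (bits <= 8 here)."""
    fanout = [0] * (1 << bits)
    for h in sorted_hashes:
        fanout[h[0] >> (8 - bits)] += 1
    for p in range(1, len(fanout)):
        fanout[p] += fanout[p - 1]
    return fanout

def contains(sorted_hashes, fanout, bits, want):
    p = want[0] >> (8 - bits)
    lo = fanout[p - 1] if p else 0   # region start for this prefix
    hi = fanout[p]                   # region end (exclusive)
    i = bisect.bisect_left(sorted_hashes, want, lo, hi)
    return i < hi and sorted_hashes[i] == want
```

Make the table big enough and each region fits in one page, so a lookup touches the table plus a single data page.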
repository. There's just one more thing we have to deal with:
filesystem metadata. Git repositories are really only intended to
store file contents with a small bit of extra information, like
symlink targets and executable bits, so we have to store the rest
some other way.
Bup stores more complete metadata in the VFS in a file named .bupm in
tree information.
The nice thing about this design is that you can walk through each
file in a tree just by opening the tree and the .bupm contents, and
iterating through both at the same time.
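The parallel walk can be sketched like so (the record layout here is an assumption for illustration, not bup's exact .bupm encoding): the first metadata record describes the tree itself, and the rest pair off with the tree's entries in order.

```python
# Sketch only: assumed .bupm layout (first record = the tree itself,
# then one record per tree entry, in order).
def walk_tree(tree_entries, bupm_records):
    yield ".", bupm_records[0]    # the directory's own metadata
    for name, meta in zip(tree_entries, bupm_records[1:]):
        yield name, meta
```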
Since the contents of any .bupm file should match the state of the
filesystem when it was *indexed*, bup must record the detailed
metadata in the index. To do this, bup records four values in the
index: the atime, mtime, and ctime (as timespecs), and an integer
offset into a secondary "metadata store" which has the same name as
the index, but with ".meta" appended. This secondary store contains
the encoded Metadata object corresponding to each path in the index.

Currently, in order to decrease the storage required for the metadata
store, bup only writes unique values there, reusing offsets when
appropriate across the index. The effectiveness of this approach
relies on the expectation that there will be many duplicate metadata
records. Storing the full timestamps in the index is intended to make
that more likely, because it makes it unnecessary to record those
values in the secondary store. So bup clears them before encoding the
Metadata objects destined for the index, and timestamp differences
don't contribute to the uniqueness of the metadata.

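The unique-values-only store boils down to remembering the offset of the first copy of each encoded record (the class below is a hypothetical sketch, not bup's actual index code):

```python
# Sketch of a dedup-on-write metadata store (hypothetical, not bup's code).
class MetaStore:
    def __init__(self):
        self.data = bytearray()   # contents of the ".meta" file
        self.offsets = {}         # encoded record -> offset of first copy
    def store(self, encoded):
        """Return the offset for this record, appending it only if new."""
        ofs = self.offsets.get(encoded)
        if ofs is None:
            ofs = len(self.data)
            self.offsets[encoded] = ofs
            self.data += encoded
        return ofs
```

Because the timestamps are cleared before encoding, two files that differ only in mtime produce identical records and share a single offset.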
Bup supports recording and restoring hardlinks, and it does so by
tracking sets of paths that correspond to the same dev/inode pair when
indexing. This information is stored in an optional file with the
exist in the repository.
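The grouping itself is straightforward (a sketch, not bup's actual code): during indexing, any paths that share a (st_dev, st_ino) pair belong to one hardlink set.

```python
import os
from collections import defaultdict

# Sketch of hardlink detection during indexing: group paths by
# (st_dev, st_ino); only multiply-linked files are worth tracking.
def hardlink_sets(paths):
    sets = defaultdict(list)
    for p in paths:
        st = os.lstat(p)
        if st.st_nlink > 1:
            sets[(st.st_dev, st.st_ino)].append(p)
    return {k: v for k, v in sets.items() if len(v) > 1}
```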
Determination of dirtiness is a little more complicated than it sounds. The
most dirtiness-relevant flag in the bupindex is IX_HASHVALID; if
this flag is reset, the file *definitely* is dirty and needs to be backed
up. But a file may be dirty even if IX_HASHVALID is set, and that's the
confusing part.