How to make a block-level incremental backup using LVM?

Fernando Lozano fernando at lozano.eti.br
Fri Dec 14 14:54:12 UTC 2012


Hi Alan,

>> backups using dump, dd, and some LVM or ext utility? Maybe using
>> inotify? Why no open source backup tool seems to be doing this?
> Because it turns out to be a dumb way of trying to do it. It's also near
> impossible to get a consistent image. Plus it's becoming clear that
> "block device" as a concept is on the way out. Current SSDs provide one
> for compatibility.

I understand this -- my current backup scripts use Oracle PL/SQL 
statements so I can get a consistent image of the database data files. 
It's a pain in the ass to manage all the redo log files generated during 
the backup, which are needed for a proper restore.

But most commercial, high-end solutions seem to be going that way. Their 
approach may be best described as "logging", as you called it, but either 
way, walking the filesystem tree for every incremental backup is proving 
too expensive, and that's why I'm looking for an alternative to rsync.

>> Would any option allow me to restore an individual file? (I guess we can
>> live with restoring entire file systems, it's just a matter of
>> segregating a few file trees instead of having everything on the same
>> logical volume.)
> A block dump doesn't even guarantee you can restore the volume unless its
> an atomic snapshot of everything involved, including journals if they are
> on another device.

Commercial tools promise this ability. How do they get the block-to-file 
mapping needed for the restore? I was looking for a way to do that so I 
could do the same using LVM snapshots.

But an LVM snapshot presents the whole volume: if I back it up with dd 
or rsync, I get the equivalent of a full backup. How can I back up just 
the blocks that changed since the snapshot was taken, and later restore 
them (of course after restoring the full volume, or onto a mirror)?
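To make the question concrete, here is a sketch of what I can already do today (volume group and LV names are illustrative); the missing piece is a tool that extracts only the changed blocks instead of imaging the whole device:

```shell
# Take a snapshot of the live volume, reserving 10G for copy-on-write data
lvcreate --snapshot --size 10G --name datasnap /dev/vg0/data

# This gives a consistent image, but it is a FULL copy of the volume --
# there is no stock tool that dumps only the blocks that differ
dd if=/dev/vg0/datasnap of=/backup/data-full.img bs=4M

# Release the snapshot once the copy is done
lvremove -f /dev/vg0/datasnap
```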


> A block dump may also be useless if you get fs corruption as your copy
> will have the same corruption if it's not caught early and is gradually
> spreading through the fs.
In most cases this is no different from a file-level backup: it's 
useless to restore corrupted files, and I have to go back to the last 
non-corrupted copy in an older backup.

I really don't like an rsync-based backup because there's no way to 
verify the backup files the way I can with a tar backup. I try to keep 
both a tarball somewhere and an rsync "mirror" elsewhere. The problem is 
that both are taking too long to complete and even longer to restore. I 
have the same complaint about DRBD: the only reliable way to check that 
the copy is good is to compare it with the source.
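What I mean by checking a tar backup is something like this (paths illustrative): `tar --diff` can verify an archive against the live tree without any second copy, which an rsync mirror can't do on its own:

```shell
# Build a small tree and archive it
mkdir -p /tmp/demo && echo "hello" > /tmp/demo/file.txt
tar -cf /tmp/demo.tar -C /tmp demo

# Verify the archive against the filesystem; exit status 0 means every
# archived file still matches (contents, permissions, timestamps)
tar -df /tmp/demo.tar -C /tmp && echo "backup verified"
```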

There has to be a better way to back up and restore a few TB consisting 
of lots of small files. :-(

Thanks for the tip about ceph.


[]s, Fernando Lozano



More information about the users mailing list