Thanks Colin & Jacob.
With several hundred GB of data being archived, the local tarball option is probably not going to work for me.
Does "tarsnap -t -f" show the file modification date as reported by the filesystem, or the time at which tarsnap detected a change?
To provide more detail: a number of sectors on an SSD failed silently, so I needed to identify and restore the files that were corrupted by this event. The filesystem did not report any change in the modification dates of these files, so I couldn't rely on that to identify which files to restore. Hence my question about reporting on the files affected by block changes between archives, both to identify expected changes and to recover from events like this.
If tarsnap can't do this, perhaps I need to start capturing a hash of each file at backup time and comparing those hashes between archives.
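For what it's worth, a minimal sketch of that hash-manifest idea (this is just an illustration, not anything tarsnap provides; the demo below uses throwaway temp directories as stand-ins for the real backup tree and a persistent manifest location):

```shell
#!/bin/sh
# Sketch: take a per-file SHA-256 manifest at each backup run and diff
# manifests between runs to catch content changes that mtime misses.
# Demo only: BACKUP_DIR / MANIFEST_DIR are temp stand-ins; in practice
# BACKUP_DIR would be the tree fed to tarsnap and MANIFEST_DIR somewhere
# persistent.
set -eu

BACKUP_DIR=$(mktemp -d)
MANIFEST_DIR=$(mktemp -d)

printf 'original contents\n' > "$BACKUP_DIR/file1"
printf 'untouched\n'         > "$BACKUP_DIR/file2"

# Manifest 1: hash every file, sorted by path so manifests diff cleanly.
find "$BACKUP_DIR" -type f -exec sha256sum {} + | sort -k 2 \
    > "$MANIFEST_DIR/manifest.1.txt"

# Simulate silent corruption: change the bytes but put the mtime back,
# so a timestamp-based check would see nothing.
printf 'corrupted contents\n' > "$BACKUP_DIR/file1"
touch -t 200001010000 "$BACKUP_DIR/file1"

# Manifest 2, as taken at the next backup run.
find "$BACKUP_DIR" -type f -exec sha256sum {} + | sort -k 2 \
    > "$MANIFEST_DIR/manifest.2.txt"

# Lines beginning with < or > name files whose hash changed between runs.
CHANGED=$(diff "$MANIFEST_DIR/manifest.1.txt" "$MANIFEST_DIR/manifest.2.txt" \
    | grep '^[<>]' | awk '{print $3}' | sort -u)
echo "$CHANGED"
```

The diff flags file1 (whose bytes changed despite the unchanged mtime) and not file2, which is exactly the signal the filesystem's modification dates failed to give me.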
On 19/6/19 7:15 am, Jacob Larsen wrote:
I had the same issue a while back. I was told it was not easily fixed due to the layering in Tarsnap. I ended up making a regular tarball and feeding that to tarsnap. That way I had a local tarball that matched the actual data in the archive, and I could extract it and compare against it at the next backup. It's a fairly data-heavy process, but it gave me what I needed. It is scriptable, so your backup script can log the changed files on each run, but it has a pretty high cost in disk I/O, and you need to keep a copy of your data around between backups.