> On 8 Sep 2016, at 14:40, Scott Wheeler <email@example.com> wrote:
>> On Sep 8, 2016, at 11:43 AM, Daniel Neades <firstname.lastname@example.org> wrote:
>> The slow list and restore times also make testing Tarsnap backups *extremely* painful. For example, we back up dumps (4–5 GB in size) of one of our databases. We can restore a dump via SSH from a remote backup machine located on a different continent in a matter of minutes. Restoring the identical dump from Tarsnap takes hours.
>> I am not sure we’d have chosen Tarsnap had we realized how slow these essential and common operations would be.
> I realize this probably won't help if you're restoring single-file database dumps, but for complete restores (rather than hand-picking single files) involving a lot of files (about 70k in our case), using multiple tarsnap processes can speed things up dramatically. I wrote a little Ruby tool to do this for us years ago:
> Again though, if that can be done with a tiny Ruby wrapper, it should be done in the default client. It's the only thing that makes doing complete restores for a catastrophic case of complete data loss almost tenable for us with Tarsnap.
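For anyone curious about the technique described above, here is a minimal sketch (not the original tool) of how a parallel restore with multiple tarsnap processes might look in Ruby. It relies only on tarsnap's tar-style flags (-t to list an archive, -x to extract from it); the archive name "mybackup" and the worker count are hypothetical placeholders:

```ruby
# Split a list of paths into at most `workers` roughly equal chunks.
def chunk(paths, workers)
  return [] if paths.empty?
  paths.each_slice((paths.size / workers.to_f).ceil).to_a
end

# Restore an entire archive using one tarsnap process per chunk.
# tarsnap mimics tar, so -tf lists the archive and -xf extracts
# the named paths from it.
def parallel_restore(archive, workers: 4)
  files = `tarsnap -tf #{archive}`.split("\n")
  pids = chunk(files, workers).map do |group|
    spawn("tarsnap", "-xf", archive, *group)
  end
  pids.each { |pid| Process.wait(pid) }
end

# Example (hypothetical archive name):
# parallel_restore("mybackup", workers: 8)
```

Whether this helps depends on the bandwidth-delay product between you and the Tarsnap servers; each extra process opens its own connection, so throughput tends to scale with the worker count until the link saturates.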
That is helpful for people with lots of files (though not, as you surmised, for us); thank you for mentioning it here. It is a shame that people have to resort to these sorts of workarounds, though: being able to restore reasonably quickly from a backup ought to be a core capability of any backup solution.
Director, Araxis Ltd