Re: Unanticipated time and costs for full recovery
> On 13. Aug 2025, at 17:55, jacob@larsen.net wrote:
>
> On 2025-08-13 10:17, Tomaž Šolc via tarsnap-users wrote:
>
>> I'm not sure if these download rates are a technical limitation or downloads are throttled by country or average monthly spend or something, but maybe it's something that should be displayed as prominently on the website as the download and storage prices.
>
> I once wrote a Python script to run this restore process in parallel. It lists all the files, sorts them by size, and partitions them into a number of buckets of as nearly equal total size as possible. It then starts N restore processes, one per file list, to run the extraction in parallel. This adds some overhead, so it is likely a more expensive way to restore data, but it has the potential to be faster (though the worst case is not great). I am due for a recovery test, so I will probably get to evaluate this more closely soon.
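For what it's worth, the bucketing step Jacob describes can be sketched in a few lines of Python. This is only an illustration under assumptions, not his actual script: it assumes you already have (path, size) pairs from an archive listing (e.g. parsed out of `tarsnap -tv -f <archive>`), and the `restore_bucket` helper and its exact tarsnap invocation are hypothetical placeholders.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def partition_by_size(files, n_buckets):
    """Split (path, size) pairs into n_buckets lists of roughly equal total size.

    Greedy heuristic: sort largest-first, then always drop the next file
    into whichever bucket currently has the smallest total.
    """
    buckets = [[] for _ in range(n_buckets)]
    totals = [0] * n_buckets
    for path, size in sorted(files, key=lambda f: f[1], reverse=True):
        i = totals.index(min(totals))  # lightest bucket so far
        buckets[i].append(path)
        totals[i] += size
    return buckets

def restore_bucket(archive, paths):
    # Hypothetical invocation: one extract process per bucket.
    subprocess.run(["tarsnap", "-x", "-f", archive, *paths], check=True)

def parallel_restore(archive, files, n=4):
    # N concurrent tarsnap processes, one per bucket of paths.
    with ThreadPoolExecutor(max_workers=n) as pool:
        for bucket in partition_by_size(files, n):
            if bucket:
                pool.submit(restore_bucket, archive, bucket)
```

The greedy largest-first assignment is why the buckets come out close to equal: each big file goes to the currently lightest bucket before the small files fill in the gaps. As Jacob notes, each extra process repeats some per-process work, which is where the overhead (and extra cost) comes from.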
That’s what Redsnapper does too, but it still puzzles me: if I could do it in a 120-line Ruby script, why doesn’t the default tarsnap client do the same? I wrote redsnapper specifically for doing disaster recovery with Tarsnap on a large set of files, but we stopped using it for that shortly thereafter because of speed issues (now it’s just our “backup backup”, or used to pull a small set of files). It really seems like this is something Tarsnap needs to address.
-Scott