Re: accelerate archive recovery
On 06/03/11 01:34, Chris Webb wrote:
> Kevin Gilpin <kevin.gilpin@praxeon.com> writes:
>> Is there anything I can do to make the -x operation go faster? Tarsnap is
>> only using about 10% of my CPU, and I am restoring an EC2 instance, so it
>> should have a very fast pipe to tarsnap. It is taking a couple of hours to
>> restore 10GB; I need it to be faster than that because the archive size
>> will ultimately be up to 100GB and I can't wait 20 hours for a recovery.
>
> Interesting. Sounds like my guess yesterday that my performance problems
> were due to latency from the UK might be completely incorrect!
Tarsnap extract performance is currently latency-bound; the latency in question
is client->EC2->S3, and the EC2->S3 step is about 50 ms. I'm working towards
fixing this, but it's nontrivial.
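To see why that dominates: to a first approximation a single extract pays that
round trip for each block it fetches, so throughput is capped no matter how much
CPU or bandwidth the client has. A back-of-envelope sketch (the 64 kB average
block size here is an assumption, not a measured figure):

    # Rough throughput of a serial, one-block-per-round-trip fetch loop.
    awk 'BEGIN {
        block = 64 * 1024;   # assumed average block size, in bytes
        rtt   = 0.05;        # ~50 ms per round trip, as above
        rate  = block / rtt; # resulting throughput in bytes/second
        printf "%.1f MB/s, about %.1f hours per 10 GB\n",
               rate / 1e6, 10e9 / rate / 3600;
    }'

which lands in the same ballpark as the couple of hours per 10 GB reported above.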
The best workaround right now is to run parallel extracts: either split your
data between multiple archives, or use the --include and --exclude options when
extracting so that each 'tarsnap -x' handles a disjoint subset of the files.
Either way you should be able to use more of your bandwidth; a sketch of the
--include/--exclude approach follows.
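For example, something along these lines (the archive name and paths here are
placeholders, and you may need to adjust the patterns to your directory layout):

    # Run several extracts in parallel, each restoring a disjoint subset.
    tarsnap -x -f mybackup --include 'home/*' &
    tarsnap -x -f mybackup --include 'var/*' &
    # A third extract picks up everything the first two skipped.
    tarsnap -x -f mybackup --exclude 'home/*' --exclude 'var/*' &
    wait

Each process then has its own round trips in flight, so the total transfer rate
scales roughly with the number of extracts until you hit CPU or bandwidth limits.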
--
Colin Percival
Security Officer, FreeBSD | freebsd.org | The power to serve
Founder / author, Tarsnap | tarsnap.com | Online backups for the truly paranoid