Re: Question about mitigating large storage costs in case of ransomware
- To: Rob Hoelz <rob@hoelz.ro>, tarsnap-users@tarsnap.com
- Subject: Re: Question about mitigating large storage costs in case of ransomware
- From: Colin Percival <cperciva@tarsnap.com>
- Date: Wed, 12 Jul 2017 12:46:01 -0700
- In-reply-to: <20170712093639.440f9300@pyxis>
- References: <20170712093639.440f9300@pyxis>
On 07/12/17 06:36, Rob Hoelz wrote:
> I just started using tarsnap, and I was wondering if there exists an
> option (or the potential interest in developing an option) to put a cap
> on an archive's size. The reason I ask is that if another tarsnap user
> or I got bitten by ransomware that encrypted the /home directory, I
> would expect the archive size for an automated backup to balloon
> (seeing as it's kind of hard to deduplicate and compress random data!),
> and it would be nice to avoid burning through one's account balance to
> add insult to injury. Not that I intend to get bitten by ransomware,
> but you never know!
>
> The closest thing I could find is the --maxbw option; is there a
> corresponding option for archive size that I'm not seeing on the
> manpage?
I think --maxbw is what you're going to want. For one thing, ransomware
might compress your data before encrypting it (if nothing else, to reduce
the chance of running out of disk space in the process), so the archive
wouldn't necessarily look any larger; for another, since --maxbw essentially
sets a limit on the amount of *new* data uploaded while creating an archive,
it could stop a runaway backup long before the *total archive size* hit an
unreasonably high level.
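
As a minimal sketch of how that might look in an automated (e.g. cron-driven)
backup -- the archive name, path, and the roughly-2 GB byte count here are
placeholder assumptions, not recommendations:

    # Interrupt archival if more than ~2 GB of new data would be uploaded
    # for this run (--maxbw takes a byte count).
    tarsnap -c -f "home-$(date +%Y-%m-%d)" --maxbw 2000000000 /home

When the cap is reached, tarsnap interrupts the archival rather than
continuing to upload; the man page's notes on interrupting archival
describe what happens to the data stored up to that point, and
deduplication means a later run won't re-upload those blocks.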
--
Colin Percival
Security Officer Emeritus, FreeBSD | The power to serve
Founder, Tarsnap | www.tarsnap.com | Online backups for the truly paranoid