How to Harden Your Backup Strategy Against Ransomware
Published on: Monday, Apr 13, 2026 by Admin
Most teams think about ransomware as a production problem. The servers get hit, files get encrypted, and you restore from backup. Done.
But modern ransomware doesn’t work that way. Attackers have gotten smarter. They don’t just encrypt your live data and demand payment. They spend days or weeks inside your network first, quietly identifying and corrupting your backups before they pull the trigger. By the time you realize something’s wrong, your recovery path is already gone. That’s the part nobody talks about enough, and it’s exactly why “we have backups” is no longer a complete answer.
Why Ransomware Specifically Targets Backups
This isn’t speculation. It’s documented behavior across the major ransomware variants deployed against businesses today.
Attackers know that a working backup kills their leverage. If you can restore cleanly in an hour, the ransom demand is worthless. So the first thing a sophisticated attacker does after gaining access is look for backup agents, backup destinations, and scheduled backup tasks. Then they either delete them, encrypt them, or just wait long enough that every restore point contains the malware.
The implication is uncomfortable but important: your backup system is part of your attack surface. It needs to be hardened the same way you’d harden anything else that touches sensitive data.
Here’s how to actually do that.
The 3-2-1 Rule Still Matters, But It’s Not Enough on Its Own
You’ve probably heard of the 3-2-1 backup rule. Three copies of your data. Two different storage media types. One copy offsite. It’s a solid foundation and you should absolutely be following it.
But 3-2-1 was designed for hardware failure scenarios, not adversarial ones. If an attacker has compromised your backup agent and your backup destination is reachable from that same compromised machine, you’ve got three encrypted copies of nothing useful.
What 3-2-1 needs in a ransomware context is an additional layer: immutability and network isolation. Your offsite copy needs to be somewhere that a compromised server cannot reach and modify. That’s the part most teams skip.
Choosing where your backup data actually lives matters a lot here. If your backup bucket is mounted on the same network as your production systems, it’s not really offsite in any meaningful sense.
Immutable Backups: What They Are and How to Set Them Up
Immutability means a backup, once written, cannot be modified or deleted for a defined period. Not by your backup software, not by an admin, and not by malware running with root access.
Most S3-compatible storage providers support this through a feature called Object Lock. When enabled, objects are written in WORM mode, which stands for Write Once Read Many. Even if an attacker gets your storage credentials, they can’t delete or overwrite objects that are locked.
There are two modes worth knowing:
- Governance mode: Prevents deletion by most users, but privileged IAM users can override it. Useful for operational flexibility.
- Compliance mode: Nobody can delete or overwrite the object. Not even the root account. This is the one you want for ransomware protection.
Setting a 30-day compliance lock on your daily backups means an attacker can’t delete or alter any locked restore point until its window expires. To corrupt your entire backup history, they’d have to stay hidden in your network for at least 30 days, and with proper monitoring in place you’ll catch them long before then.
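If you’re on AWS (or an S3-compatible provider that mirrors this API) and using the boto3 SDK, a minimal sketch of that setup might look like the following. The bucket name and region are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="example-backup-bucket",  # placeholder name
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    ObjectLockEnabledForBucket=True,
)

# Apply a default 30-day compliance-mode lock to every object written,
# so the backup tool doesn't need to set retention per object.
s3.put_object_lock_configuration(
    Bucket="example-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```

With a default retention rule like this, new objects are locked automatically even if the writing tool knows nothing about Object Lock.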
One practical thing to check: make sure your backup tool actually uses Object Lock when writing to S3. Not all of them do by default. And if you’re bringing your own bucket, [confirm your storage provider supports Object Lock before assuming it does.](/blog/why-byob-bring-your-own-bucket-is-a-big-improvement-for-cloud-backups)
Separating Backup Credentials from Production Systems
This is where a lot of teams make a quiet but serious mistake. They use the same IAM user or service account for everything. The production app writes to S3, the backup agent writes to S3, and it all lives under one set of credentials. Clean and simple.
Until it isn’t.
If an attacker compromises your production server and finds those credentials, they now have write access to your backup bucket. They can delete your backups directly, without even touching the backup software.
Backup credentials should be separate, scoped, and ideally write-only from the production side.
Here’s the model that works:
- Create a dedicated IAM user for backups only.
- Give that user permission to write to your backup bucket, but not to list or delete objects.
- Store those credentials only on the backup agent, not in your application config or environment variables.
- Use a different credential with read access only for restores, and keep that one somewhere offline or in a secrets manager that isn’t accessible from production.
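To make the write-only piece concrete, here’s a hedged sketch using boto3 on AWS. The policy and bucket names are placeholders; the part that matters is the Action list, which grants s3:PutObject and nothing else:

```python
import json

import boto3

# Write-only policy for the dedicated backup user: it can add objects to
# the backup bucket but cannot list, read, or delete what's already there.
write_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "BackupWriteOnly",
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::example-backup-bucket/*",
    }],
}

boto3.client("iam").create_policy(
    PolicyName="backup-agent-write-only",  # placeholder name
    PolicyDocument=json.dumps(write_only_policy),
)
```

Attach this to the backup-only user, and issue the read-side credential under a separate, similarly narrow policy that lives nowhere near production.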
This won’t stop every attack, but it significantly limits what an attacker can do even if they fully compromise a server.
Air-Gapping Your Backups (Without Making Recovery Painful)
True air-gapping (physically disconnected storage with no network path) is the gold standard. It’s also operationally painful for most teams, and you can’t air-gap cloud storage in the traditional sense.
But you can get close enough to matter.
The practical equivalent for most teams is a separate cloud account with no trust relationship to your production account. Your production AWS account has no IAM role, no VPC peering, and no cross-account access to the backup account. Backups are pushed there through a scoped write-only credential, but there’s no path from the backup account back to production, and no path from a compromised production server into the backup account’s management console.
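In practice, the production-side agent then needs nothing but a single scoped key pair issued by the backup account. Assuming AWS and boto3, with the profile name and object layout as placeholders, the push looks roughly like this:

```python
import boto3

# The "backup-account" profile holds the write-only access key created in
# the separate backup account. Production has no role it can assume there
# and no other credentials for that account.
session = boto3.session.Session(profile_name="backup-account")

with open("db.dump.enc", "rb") as f:
    session.client("s3").put_object(
        Bucket="example-backup-bucket",        # placeholder
        Key="backups/2026-04-13/db.dump.enc",  # placeholder layout
        Body=f,
    )
```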
Combined with Object Lock, this architecture means an attacker would need to:
- Compromise your production server
- Find the backup credentials
- Somehow gain access to the separate backup account console
- Figure out how to override compliance-mode Object Lock
That’s not impossible, but it’s several layers harder, and most ransomware isn’t that targeted or sophisticated. It’s opportunistic.
Monitoring for Backup Tampering
You can build a great backup architecture and still get caught off guard if nobody’s watching it. Ransomware attacks that specifically target backups often start with small things: a backup job that quietly fails, a retention policy that gets modified, a storage bucket that starts shrinking instead of growing.
These are signals. You need to be watching for them.
At a minimum, set up alerts for:
- Any backup job that fails or produces an empty result
- Unexpected changes to your backup schedule or retention configuration
- Deletion events in your backup storage (S3 bucket-level logging makes this easy)
- Large drops in backup storage size that don’t correspond to scheduled retention cleanup
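As a sketch of that last signal, here’s a small boto3 script that could run on a schedule and flag a shrinking bucket. The bucket name, threshold, and state file are all placeholders to adapt:

```python
import boto3
from pathlib import Path

BUCKET = "example-backup-bucket"                    # placeholder
STATE_FILE = Path("/var/lib/backup-monitor/last_size")
DROP_THRESHOLD = 0.10                               # alert on a >10% drop

def bucket_total_bytes() -> int:
    """Sum the sizes of every object in the backup bucket."""
    s3 = boto3.client("s3")
    return sum(
        obj["Size"]
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
        for obj in page.get("Contents", [])
    )

current = bucket_total_bytes()
if STATE_FILE.exists():
    previous = int(STATE_FILE.read_text())
    if current < previous * (1 - DROP_THRESHOLD):
        # Wire this into your real alerting channel (email, Slack, PagerDuty).
        print(f"ALERT: backup storage dropped from {previous} to {current} bytes")
STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
STATE_FILE.write_text(str(current))
```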
Getting backup alerts set up properly is one of the highest-leverage things you can do. Not just “did the backup run” but “did it run correctly, does the output look right, and did anything in the backup configuration change since last time.”
If you’re using a centralized backup dashboard, you should be able to see all of this across all your servers in one place. If you’re managing backups server by server through cron jobs and shell scripts, you’re almost certainly missing some of these signals.
Testing Restores as a Security Practice, Not Just a Reliability Practice
Most teams test restores occasionally, usually after a scare or as part of a compliance audit. That’s better than never, but it misses something important.
Regular restore testing is also your best detection mechanism for backup corruption.
If an attacker has been silently corrupting your backups over the past three weeks, you won’t know until you try to restore. And if you only test restores once a quarter, that’s potentially three months of corrupted backups before you discover the problem.
A monthly restore test, ideally to an isolated environment, serves two purposes at once:
- It confirms your recovery process actually works when you need it.
- It gives you a signal if something has gone wrong with backup integrity, whether from an attack or a more mundane software bug.
The thing I’d push teams on specifically for ransomware scenarios: test restores from different points in your backup history, not just the most recent one. If an attacker has been inside your network for two weeks, your last 14 backups might all contain the malware. You need to know which restore point is actually clean before you’re under pressure to make that call.
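One way to automate that check, sketched here under the assumption that each backup run writes a manifest.json mapping object keys to SHA-256 digests (your tool’s layout will differ), is to verify several restore points by age. Run it from an isolated environment with the read-side credential, not from production:

```python
import hashlib
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # placeholder

def verify_restore_point(prefix: str) -> bool:
    """Re-download every object in one restore point and check its digest."""
    manifest = json.loads(
        s3.get_object(Bucket=BUCKET, Key=f"{prefix}/manifest.json")["Body"].read()
    )
    ok = True
    for key, expected_sha256 in manifest.items():
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        if hashlib.sha256(body).hexdigest() != expected_sha256:
            print(f"CORRUPT: {key} in {prefix}")
            ok = False
    return ok

# Spot-check restore points of different ages, not just yesterday's.
for prefix in ("backups/2026-04-12", "backups/2026-04-01", "backups/2026-03-13"):
    print(prefix, "OK" if verify_restore_point(prefix) else "FAILED")
```

Note that the manifests themselves need Object Lock protection too, or an attacker could simply rewrite them to match corrupted data.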
Encryption: Yours, Not Just Theirs
Ransomware encrypts your data so you can’t read it. The ironic defense is to encrypt your backups yourself first, with keys you control, so even if an attacker gets access to the backup storage, they can’t read the contents.
This matters for a few reasons:
- Credential compromise doesn’t mean data compromise. If your bucket credentials are stolen, the attacker gets a bunch of encrypted blobs they can’t do anything with.
- Storage provider breaches don’t expose your data. If you’re using a third-party S3-compatible provider and they have a security incident, your data is still safe.
- Compliance requirements often mandate this anyway. If you’re dealing with SOC 2 or HIPAA, encryption at rest with customer-managed keys is usually a requirement, not a nice-to-have.
The practical thing to verify: who holds the encryption keys? If your backup provider manages the keys on your behalf, that’s better than no encryption, but it’s not the same as you holding the keys yourself. A sophisticated attacker who compromises your backup provider account might be able to access the keys too. Customer-managed encryption with keys stored separately is the stronger option.
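As an illustration of encrypting before upload, here’s a minimal sketch using the Python cryptography package’s Fernet scheme as a stand-in for whatever your backup tool actually uses. In real use the key comes from a key store; generating it inline is purely for the sketch:

```python
from pathlib import Path

import boto3
from cryptography.fernet import Fernet

# In real use, load the key from a secrets manager kept apart from both
# production and the backup bucket; never store it alongside the data.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(Path("backup.tar.gz").read_bytes())

boto3.client("s3").put_object(
    Bucket="example-backup-bucket",   # placeholder
    Key="backups/backup.tar.gz.enc",
    Body=ciphertext,
)
```

The point of the design is that the storage side only ever sees ciphertext; losing bucket credentials, or even the provider having an incident, leaks nothing readable.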
A Checklist for Ransomware-Resistant Backups
To pull this all together, here’s what a hardened backup strategy actually looks like in practice:
Storage architecture:
- Backups stored in a separate cloud account with no trust relationship to production
- Object Lock enabled in compliance mode with a minimum 30-day retention lock
- Bucket logging enabled so all access and deletion events are recorded
Credential hygiene:
- Separate IAM credentials for backup writes and backup reads
- Write credentials scoped to put-only access, not delete or list
- Read credentials stored offline or in a secrets manager, not on production systems
Monitoring:
- Alerts for failed backup jobs within a defined window
- Alerts for any backup configuration changes
- Alerts for unexpected deletion events in backup storage
- Regular review of backup storage size trends
Testing:
- Monthly restore tests to an isolated environment
- Periodic tests of older restore points, not just the most recent backup
- Documented restore process that your whole team can follow, not just the person who set it up
Encryption:
- Customer-managed encryption keys stored separately from backup data
- Encryption applied before data leaves the source server
None of this is technically exotic. It’s all available through standard cloud storage features and any decent backup tool. The gap for most teams isn’t capability, it’s configuration. The defaults are rarely the secure option.
Conclusion
Ransomware targeting backups isn’t a theoretical risk anymore. It’s a documented, common attack pattern. And the cost of getting hit without a clean restore point is high enough that this deserves serious attention even if you’ve never had an incident.
The three things I’d focus on first if I were starting from scratch:
- Get Object Lock enabled on your backup storage. This is the single highest-impact change you can make. Immutable backups survive even credential compromise.
- Separate your backup credentials from your production systems. Limit the blast radius if a server gets hit.
- Test restores regularly, from multiple points in your history. You need to know your clean restore point before you’re under pressure to find it fast.
If you want to see how Snapbucket handles these scenarios, take a look at our features overview or check out the integrations page to see which S3-compatible storage providers we support. If you’re managing multiple servers and want centralized visibility into backup health across all of them, the hosted backup dashboard is a good place to start.
Backups only matter if they survive the same event that took down your production systems. Make sure yours will.