
S3-Compatible Backup Storage: Stop Paying AWS Prices When You Don't Have To

Published on: Saturday, Mar 21, 2026 By Admin


Most teams default to AWS S3 for backup storage because it’s familiar. You already have an AWS account. You know the interface. It works. So you point your backups at an S3 bucket and move on.

That’s fine until you actually look at your bill. S3 egress fees are brutal, especially when you’re restoring frequently or storing large snapshots across multiple servers. And here’s what nobody tells you upfront: you don’t have to use AWS S3 just because your backups use the S3 protocol. The S3 API is a standard now. Dozens of providers support it. And some of them will cut your storage costs by 60-80% without any meaningful tradeoff in reliability.

What “S3-Compatible” Actually Means

The S3 API started as Amazon’s proprietary interface for their object storage product. But over time it became the de facto standard for cloud object storage. Other providers adopted the same API so developers didn’t have to rewrite their integrations every time they switched vendors.

What this means practically: if your backup tool can talk to S3, it can talk to Backblaze B2, Cloudflare R2, Wasabi, MinIO, or any other provider that supports the S3 protocol. The requests look the same. The authentication works the same way. You change a URL and some credentials, and your backups flow to a completely different provider.
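To see how small the change really is, here’s a sketch using boto3, the Python SDK many backup tools build on. The endpoint and credentials below are placeholders, not real values:

```python
import boto3

# The default client talks to AWS (s3.amazonaws.com).
aws = boto3.client("s3")

# Same client, different provider: only the endpoint and credentials change.
# The endpoint here is a placeholder; every provider documents its own.
b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.<region>.backblazeb2.com",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# From here, every call (put_object, get_object, list_objects_v2, ...) is identical.
```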

There’s no technical lock-in here. The lock-in is psychological. Teams stick with AWS because switching feels risky, not because it actually is.

The Real Cost Breakdown You Should Know

Before picking a storage provider, you need to understand what you’re actually paying for. Object storage pricing has three main components:

  • Storage cost: Usually priced per GB per month
  • Request costs: Charged per API operation (PUT, GET, LIST, and so on)
  • Egress (data transfer out): Charged per GB when you download data

AWS S3 sits at roughly $0.023 per GB per month for standard storage, plus egress fees that can hit $0.09 per GB depending on your region and destination. For a team with 500GB of backups doing regular restores, those egress costs add up fast.

Compare that to:

  • Cloudflare R2: $0.015 per GB per month, zero egress fees
  • Backblaze B2: $0.006 per GB per month, free egress when paired with Cloudflare
  • Wasabi: $0.0068 per GB per month, no egress fees, but a 90-day minimum storage policy applies

The storage cost difference is significant. But the egress difference is where teams actually get surprised. If you’re restoring large servers or pulling backup data frequently, egress fees with AWS can eclipse your storage costs.
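To make that concrete, here’s the back-of-envelope math for the 500GB example, assuming one full restore per month and the list prices quoted above (request costs ignored, since they’re usually noise at this scale):

```python
gb = 500
restores_per_month = 1

# List prices quoted above, per GB. Egress for R2 and for B2 paired
# with Cloudflare is zero.
providers = {
    "AWS S3":        {"storage": 0.023, "egress": 0.09},
    "Cloudflare R2": {"storage": 0.015, "egress": 0.0},
    "Backblaze B2":  {"storage": 0.006, "egress": 0.0},
}

for name, p in providers.items():
    storage = gb * p["storage"]
    egress = gb * p["egress"] * restores_per_month
    print(f"{name}: ${storage:.2f} storage + ${egress:.2f} egress = ${storage + egress:.2f}/month")

# AWS S3:        $11.50 storage + $45.00 egress = $56.50/month
# Cloudflare R2: $7.50 storage + $0.00 egress = $7.50/month
# Backblaze B2:  $3.00 storage + $0.00 egress = $3.00/month
```

On AWS, a single full restore costs nearly four times the month’s storage. That’s the surprise.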

That said, cheaper isn’t always better. The right provider depends on what you’re actually doing with your backups.

How to Pick the Right Provider for Backup Storage

Not all backup use cases are the same. Here’s how to think through the decision:

You Restore Infrequently (True Disaster Recovery)

If your backups exist mostly as insurance and you rarely actually restore from them, egress fees matter less. You can optimize heavily on storage cost. Backblaze B2 at $0.006/GB is hard to beat here.

Just make sure you actually test restores periodically. Backup storage is worthless if the restore fails when you need it. “Rarely restore” shouldn’t mean “never verify.”

You Restore Often (Active Operations)

If you’re frequently restoring environments, running dev environments off snapshots, or doing any kind of active data retrieval, egress fees will kill you on AWS. R2 or Wasabi become much more attractive because egress is either free or dramatically cheaper.

You Have Compliance or Data Residency Requirements

Some teams can’t just pick whatever’s cheapest. GDPR, HIPAA, SOC 2, and other frameworks often have requirements about where data lives and how it’s encrypted at rest and in transit. In these cases, you need to check whether your provider has the certifications you need and whether they can store data in specific regions.

AWS S3 has the broadest compliance coverage. That’s partly why enterprises default to it despite the cost. But Wasabi and Backblaze B2 also hold meaningful compliance certifications. Check the current docs for each provider rather than trusting secondhand summaries.

You Already Have Multi-Cloud Infrastructure

If you’re running workloads across AWS, GCP, and Azure, it might make sense to keep backups in the same cloud to minimize cross-provider data transfer. But this is only worth optimizing if you’re at a scale where it actually shows up in your costs.

For most teams, this level of optimization is premature. Pick a dedicated storage provider and keep it simple.

Setting Up S3-Compatible Storage: What It Actually Looks Like

The setup process is roughly the same regardless of which provider you choose. Here’s the general flow:

  1. Create a storage bucket on your chosen provider (B2, R2, and Wasabi all call them “buckets”, so the terminology carries straight over)
  2. Generate access credentials - you’ll get an Access Key ID and a Secret Access Key, same structure as AWS
  3. Note the endpoint URL - this is where S3-compatible providers differ from AWS. Instead of s3.amazonaws.com, you’ll have a provider-specific URL
  4. Configure your backup tool with those credentials and the endpoint URL - a quick way to verify they work is sketched after this list
  5. Set bucket-level permissions - make sure your backup agent can write to the bucket but doesn’t have more permissions than it needs
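Before pointing your backup tool at the bucket, a thirty-second sanity check saves debugging later. A minimal sketch with boto3 and placeholder values: write a test object, then list the bucket back.

```python
import boto3

# Placeholder endpoint and credentials; substitute your provider's values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<provider-endpoint>",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# Write a tiny test object, then read the listing back. If both succeed,
# the endpoint, credentials, and write permissions are all wired up.
s3.put_object(Bucket="my-backups", Key="connectivity-test.txt", Body=b"hello")
response = s3.list_objects_v2(Bucket="my-backups", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```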

One thing to pay attention to: bucket naming and region settings. Some providers are global by default (R2, for example, accepts “auto” as its region in most S3 clients), while others require you to pick a specific region up front. If your backup tool is strict about region configuration, this can cause confusing errors.

Also worth checking: whether your provider supports object versioning and lifecycle policies. These matter a lot for backup retention. You want to be able to automatically expire old backups without manual cleanup.
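Where a provider supports S3-style lifecycle configuration (check their docs, because support varies), setting a retention rule might look like this sketch. The 30-day window and the daily/ prefix are arbitrary examples:

```python
import boto3

# Credentials resolved from the environment; endpoint is a placeholder.
s3 = boto3.client("s3", endpoint_url="https://<provider-endpoint>")

# Expire any object under the daily/ prefix 30 days after creation,
# so old snapshots clean themselves up without manual intervention.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-daily-snapshots",
                "Status": "Enabled",
                "Filter": {"Prefix": "daily/"},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```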

Encryption: What the Provider Does vs. What You Should Do

Every major S3-compatible provider offers server-side encryption at rest. This is table stakes. But “encrypted at rest” doesn’t mean much if someone gets your access credentials or if the provider itself is part of your threat model.

For backup data specifically, client-side encryption is worth considering. This means your data is encrypted before it leaves your servers. Even if someone gets into your storage bucket, they get encrypted blobs they can’t read.
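As a rough sketch of what that looks like in practice, here’s symmetric encryption with the Python cryptography library’s Fernet before upload. This is illustrative, not a complete key-management scheme:

```python
import boto3
from cryptography.fernet import Fernet

# In practice the key comes from your secrets manager and must be backed up
# separately from the data: lose the key, lose the backups.
key = Fernet.generate_key()
fernet = Fernet(key)

# Reads the whole file into memory - fine for a sketch, but large
# snapshots would need streaming encryption instead.
with open("backup-2026-03-21.tar.gz", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# The provider only ever sees an opaque encrypted blob.
s3 = boto3.client("s3", endpoint_url="https://<provider-endpoint>")
s3.put_object(Bucket="my-backups", Key="backup-2026-03-21.tar.gz.enc", Body=ciphertext)
```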

The tradeoff: client-side encryption adds complexity to your restore process. You need to manage encryption keys carefully. Lose the keys and you lose the data. This is a real operational concern, not a theoretical one.

Most teams land somewhere in the middle: server-side encryption from the provider, strict access controls on credentials, and limited bucket permissions for the backup agent. For teams handling sensitive data or operating in regulated industries, client-side encryption is worth the added operational overhead.

One practical note: store your access keys in secrets management, not in config files committed to your repo. This sounds obvious but it’s still a common mistake.
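Concretely, that means the backup process pulls credentials from the environment (populated by your secrets manager) and fails loudly when they’re missing. The variable names below are arbitrary:

```python
import os
import boto3

# os.environ[...] raises KeyError if the secret wasn't injected,
# so a half-configured agent fails fast instead of limping along.
access_key = os.environ["BACKUP_S3_ACCESS_KEY_ID"]
secret_key = os.environ["BACKUP_S3_SECRET_ACCESS_KEY"]

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["BACKUP_S3_ENDPOINT"],
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
)
```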

Backup Storage Isn’t the Same as Backup Management

There’s an important distinction that gets blurry in practice. Choosing where your backups live (storage) is a separate decision from how you schedule, monitor, and restore them (management).

You can have the perfect storage setup with great costs and strong encryption, but if you don’t have visibility into what’s actually being backed up, when, and whether those backups are actually restorable, you’ve only solved half the problem.

This is where a lot of DIY backup setups fall apart. Teams write a cron job, point it at B2 or Wasabi, and assume everything’s working. Then six months later they discover the cron job silently started failing three weeks ago because the server ran out of disk space during the snapshot process. Nobody noticed. The backups stopped.

Good backup management means:

  • Knowing which servers are being backed up and how often
  • Getting alerted when a backup fails or is delayed
  • Being able to verify that a backup is actually valid and restorable
  • Having a clear restore process that doesn’t require you to read documentation under pressure during an incident

This is the piece that Snapbucket’s centralized dashboard handles. The storage is yours. You bring your own bucket from whatever provider makes sense for your situation. But the visibility, scheduling, alerting, and restore workflow all live in one place.

Mixing Providers: When It Makes Sense and When It Doesn’t

Some teams run multiple storage providers. Production backups go to one provider for compliance reasons. Dev environment snapshots go to a cheaper provider. Long-term archival goes somewhere with even lower storage costs but slower retrieval.

This can work well if you’re intentional about it. But it adds operational complexity. You’ve got more credentials to manage, more billing accounts to monitor, more places where something can break.

My take: unless you have a specific reason to split storage (compliance requirements, significant cost difference between use cases, or different reliability requirements), start with one provider. Get that working well. Only add complexity when you have a clear reason.

The flexibility of S3-compatible storage means you can switch later without rebuilding your backup infrastructure. That’s the point. You’re not locked in. So start simple.

Common Mistakes When Switching Storage Providers

A few things I’ve seen trip teams up:

Not updating all the places credentials are stored. If you’ve got backups running on ten servers and you rotate your storage credentials, you need to update all ten. Missing one means that server’s backups silently stop working.

Forgetting about lifecycle policies. If you’re migrating from AWS S3 where you had lifecycle rules set up, those rules don’t follow you to the new provider. You need to recreate them. Otherwise you’ll end up accumulating snapshots indefinitely and wondering why your storage bill is climbing.

Testing with small files and assuming large restores work. Restoring a 5MB test file is not the same as restoring a 200GB server snapshot. Test at realistic sizes before you rely on a new setup in production.

Ignoring the 90-day minimum at Wasabi. Wasabi charges for 90 days of storage even if you delete a file before that. For backup workflows where you’re frequently rotating old snapshots, this can lead to unexpectedly high bills. Read the pricing terms before committing.

Using the root access key. Create a dedicated IAM user or access key with only the permissions your backup agent needs. Scoping credentials is basic hygiene and it significantly limits your exposure if a key is ever compromised.
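The exact mechanism varies by provider - B2 has bucket-scoped application keys, R2 has scoped API tokens. On AWS, attaching a minimal policy to a dedicated user might look like this sketch; the bucket and user names are placeholders:

```python
import json
import boto3

# Hypothetical scoped policy: the agent can write and list, nothing else.
# Add s3:GetObject only if the same credentials also perform restores.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-backups",
                "arn:aws:s3:::my-backups/*",
            ],
        }
    ],
}

# Attach it to a dedicated IAM user used only by the backup agent.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="backup-agent",
    PolicyName="backup-bucket-only",
    PolicyDocument=json.dumps(policy),
)
```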

Conclusion

S3-compatible storage has made it practical to separate your backup tool from your storage provider. That’s a good thing. It means you can optimize storage costs without rebuilding your backup workflow, and you can switch providers as your needs change.

A few things to take away:

  1. Evaluate providers based on your actual usage pattern. Storage cost, egress cost, and compliance coverage matter differently depending on how often you restore and what data you’re protecting.

  2. Don’t confuse storage with management. Having cheap, reliable storage doesn’t tell you whether your backups are actually running, healthy, or restorable. You need both.

  3. Start simple and stay flexible. One provider, clear credentials management, lifecycle policies configured, and a tested restore workflow. That’s the foundation.

If you want to see how Snapbucket handles the management layer while letting you bring your own storage, the integrations page shows which providers are supported out of the box. And if you want to get into specifics about how the agent works or how restores are handled, check out the features or reach out directly. Happy to talk through your setup.