I enabled S3 versioning on a bucket for compliance and protection, thinking it would just store a backup of each file. A month later, my S3 bill was 10 times higher than expected. Tens of thousands of old versions were sitting in the bucket, each one costing me money to store. I realized that versioning doesn’t delete old versions—every update creates a new version, and all old ones are preserved indefinitely. In this post, I’ll walk through exactly what causes this and how to fix it.
The Problem
You enabled S3 versioning to protect against accidental deletes or to maintain a history. The first month seemed fine, but by month two or three, your S3 storage bill is 5-10x higher than expected. A bucket that should hold 100GB is now storing 500GB+.
| Cost Issue | Description |
|---|---|
| Multiplicative Growth | File size × number of versions = total storage consumed. A 10MB file updated daily for 30 days = 300MB stored. |
| Delete Markers | Deleting a versioned object creates a “delete marker” but doesn’t free storage—old versions remain. |
| Unexpected Billing | Version storage doesn’t appear as a separate line item, and noncurrent versions are billed at full Standard storage rates until you transition or expire them. |
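To make the multiplication concrete, here is a back-of-envelope calculation (plain Python, illustrative numbers only):

```python
# Back-of-envelope: storage consumed by a versioned object that is
# overwritten repeatedly. Each overwrite keeps the old version, so
# storage grows linearly with the number of updates.
def versioned_storage_mb(file_size_mb: float, updates: int) -> float:
    """Total MB stored = file size x number of retained versions."""
    return file_size_mb * updates

# A 10 MB file updated once a day for 30 days:
total = versioned_storage_mb(10, 30)
print(total)  # 300.0 MB stored, even though the "current" file is 10 MB
```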
Why Does This Happen?
- Every overwrite creates a new version — When you upload a file with the same key, S3 doesn’t replace it; it creates a new version with a new version ID. The old version stays in the bucket and is billed.
- Delete operations create delete markers — When you delete an object in a versioned bucket, the deletion itself is a version (a delete marker). The actual object versions remain stored and billed.
- No automatic cleanup — S3 doesn’t delete old versions automatically. They accumulate until you explicitly remove them or configure a lifecycle rule.
- Lifecycle rules for non-current versions aren’t configured — Many users configure lifecycle rules for current versions but forget to add rules for non-current versions, which continue to accumulate.
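The behaviors above can be sketched with a toy model of a versioned bucket. This is plain Python mimicking the semantics, not the S3 API:

```python
# Toy model of S3 versioning semantics: overwrites append versions,
# deletes append a delete marker, and nothing frees storage on its own.
class VersionedBucket:
    def __init__(self):
        self.versions = {}   # key -> list of (version_id, body); body=None is a delete marker
        self._next_id = 0

    def put(self, key, body):
        # An overwrite does NOT replace the old body; it appends a new version.
        self._next_id += 1
        self.versions.setdefault(key, []).append((f"v{self._next_id}", body))

    def delete(self, key):
        # A delete only appends a delete marker; old versions remain stored.
        self._next_id += 1
        self.versions.setdefault(key, []).append((f"v{self._next_id}", None))

    def stored_bytes(self):
        return sum(len(body) for vs in self.versions.values()
                   for _, body in vs if body is not None)

b = VersionedBucket()
for _ in range(3):
    b.put("report.csv", b"x" * 100)   # three overwrites -> three stored versions
b.delete("report.csv")                # adds a marker; storage is unchanged
print(len(b.versions["report.csv"]), b.stored_bytes())  # 4 300
```

Three 100-byte uploads plus one delete leave four versions and 300 bytes billed, even though the object now appears deleted.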
The Fix
Step 1: Understand Your Current Versions
Check how many versions exist:
```bash
# List all versions (including delete markers)
aws s3api list-object-versions --bucket my-bucket --max-items 10

# Count versions per object (text output is tab-separated, so split to lines first)
aws s3api list-object-versions --bucket my-bucket \
  --query 'Versions[].Key' \
  --output text | tr '\t' '\n' | sort | uniq -c
```
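The same per-key count can be computed from the JSON output instead of a shell pipeline. The response below is a fabricated sample standing in for a live `list-object-versions` call:

```python
import json
from collections import Counter

# Equivalent of the sort/uniq pipeline, operating on JSON output.
# `raw` is a fabricated sample response, not real AWS data.
raw = json.dumps({
    "Versions": [
        {"Key": "logs/app.log", "VersionId": "a1"},
        {"Key": "logs/app.log", "VersionId": "a2"},
        {"Key": "data/report.csv", "VersionId": "b1"},
    ]
})

counts = Counter(v["Key"] for v in json.loads(raw).get("Versions", []))
for key, n in counts.most_common():
    print(f"{n:6d} {key}")
```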
Step 2: Create Lifecycle Rule for Non-Current Versions
Create a lifecycle configuration that expires old versions after a set period:
```bash
cat > lifecycle-config.json <<EOF
{
  "Rules": [
    {
      "ID": "delete-old-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionTransitions": [
        {
          "NoncurrentDays": 30,
          "StorageClass": "GLACIER"
        }
      ],
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 90
      }
    },
    {
      "ID": "delete-markers",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": {
        "ExpiredObjectDeleteMarker": true
      }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-config.json
```
This configuration:
- Transitions non-current versions to Glacier after 30 days (cheaper storage)
- Permanently deletes non-current versions after 90 days
- Removes delete markers once all of their noncurrent versions have been deleted (so-called expired object delete markers)
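To see what this schedule means for a single version, here is a small sketch of the timeline implied by the `NoncurrentDays` values in the JSON config (the constants mirror the example values above; adjust to yours):

```python
from datetime import date, timedelta

# Timeline a noncurrent version follows under the lifecycle rule above.
# These constants mirror the example NoncurrentDays values in the config.
NONCURRENT_TRANSITION_DAYS = 30   # move to Glacier
NONCURRENT_EXPIRATION_DAYS = 90   # delete permanently

def lifecycle_timeline(became_noncurrent: date):
    """Return (glacier_transition_date, expiration_date) for one version."""
    return (became_noncurrent + timedelta(days=NONCURRENT_TRANSITION_DAYS),
            became_noncurrent + timedelta(days=NONCURRENT_EXPIRATION_DAYS))

glacier, gone = lifecycle_timeline(date(2024, 1, 1))
print(glacier, gone)  # 2024-01-31 2024-03-31
```

Note the clock starts when a version becomes noncurrent (i.e., when it is overwritten), not when it was first uploaded.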
Step 3: Clean Up Existing Versions (Important!)
Lifecycle rules do evaluate versions that already exist, but they run roughly once a day and only act after the configured NoncurrentDays have elapsed for each version. For immediate relief, clean up old versions manually or with S3 Batch Operations:
```bash
# Option 1: Use the AWS CLI to delete old versions (example for one object).
# Build a delete-objects payload from every version except the newest
# (Versions[0] is the current version, so Versions[1:] are noncurrent).
aws s3api list-object-versions --bucket my-bucket --prefix myfile.txt \
  --query '{Objects: Versions[1:].{Key: Key, VersionId: VersionId}, Quiet: `true`}' \
  --output json > delete-versions.json

# Then delete those versions in one call
aws s3api delete-objects --bucket my-bucket --delete file://delete-versions.json
```
For bulk cleanup, use S3 Batch Operations through the console or SDKs (CLI is limited for this task).
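The core of any bulk-cleanup script, whatever tool runs it, is walking paginated `list-object-versions` responses and collecting every noncurrent version. Here is that logic in plain Python; `fake_pages` is a fabricated stand-in for a real paginator (e.g., boto3's), not live AWS data:

```python
# Sketch of bulk-cleanup selection logic over paginated responses.
# fake_pages stands in for a real paginator; data is fabricated.
fake_pages = [
    {"Versions": [
        {"Key": "a.txt", "VersionId": "v2", "IsLatest": True},
        {"Key": "a.txt", "VersionId": "v1", "IsLatest": False},
    ]},
    {"Versions": [
        {"Key": "b.txt", "VersionId": "v9", "IsLatest": False},
    ]},
]

def collect_noncurrent(pages):
    """Gather {Key, VersionId} pairs for every noncurrent version."""
    doomed = []
    for page in pages:
        for v in page.get("Versions", []):
            if not v["IsLatest"]:
                doomed.append({"Key": v["Key"], "VersionId": v["VersionId"]})
    return doomed

batch = collect_noncurrent(fake_pages)
print(len(batch))  # 2 noncurrent versions queued for deletion
```

A real run would feed `batch` (in chunks of up to 1,000, the delete-objects limit) to the deletion API; current versions (`IsLatest`) are deliberately skipped.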
Step 4: Verify the Lifecycle Configuration
Check the configuration is applied:
```bash
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
```
Step 5: Monitor Storage Over Time
After applying lifecycle rules, old versions will gradually transition and expire. Check progress:
```bash
# Spot-check remaining versions (for total bucket size, the CloudWatch
# BucketSizeBytes metric or S3 Storage Lens is more practical)
aws s3api list-object-versions --bucket my-bucket --max-items 1000
```
Storage costs will decrease over the next 30-90 days as the lifecycle rules execute.
How to Run This
- Replace `my-bucket` with your bucket name
- Adjust `NoncurrentDays` values based on your compliance requirements:
  - Use 30 days for operational backups
  - Use 90 days if you need 3-month history
  - Use 365 days if you need annual compliance
- Apply the lifecycle configuration immediately
- For existing versions, use S3 Batch Operations or manual cleanup for files with many versions
- Monitor the next billing cycle to see storage cost decrease
Is This Safe?
Versioning is essential for compliance and disaster recovery, and managing versions with lifecycle rules is safe when configured correctly. Always set a retention period at least as long as your regulatory requirements (e.g., if compliance requires 7 years, set expiration to 7 years, not forever). For critical files, keep versioning enabled; for transient data, disable it to save costs.
Key Takeaway
S3 versioning costs balloon because old versions accumulate indefinitely. Configure lifecycle rules to transition old versions to cheaper storage classes and expire them after your retention period. This reduces costs dramatically while maintaining compliance.
Have questions or ran into a different S3 issue? Connect with me on LinkedIn or X.