I created an S3 lifecycle rule to move objects to Glacier after 30 days to save on storage costs. Weeks passed, and the objects were still sitting in S3 Standard storage, costing me money every month. I checked the rule a dozen times—it looked correct in the console—but nothing happened. After exploring the S3 API and understanding how lifecycle rules actually work, I discovered the issue wasn’t the rule itself but how it was configured and when it applied. In this post, I’ll walk through exactly what causes this and how to fix it.
## The Problem
You’ve configured an S3 lifecycle rule to transition objects to Glacier after 30 days (or expire them after 90 days), but objects older than the transition days remain in Standard storage. The rule appears enabled in the console, yet nothing changes.
| Error Type | Description |
|---|---|
| Objects Not Transitioning | Objects remain in Standard storage past the transition date. No errors logged. |
| Rule Not Executing | Rule shows as enabled but lifecycle transitions never happen. |
| Partial Transitions | Some objects transition, others don’t, inconsistently. |
## Why Does This Happen?
- Lifecycle rule has a prefix filter that doesn’t match — The rule specifies `Prefix: "archive/"` but your objects live in the root or under a different prefix. Objects outside the filter are silently skipped; no error is logged.
- Pre-existing objects haven’t been picked up yet — Lifecycle rules do apply to objects uploaded before the rule was created (age is measured from each object’s creation time), but a new or updated rule can take up to 48 hours to start taking effect, so don’t expect old objects to move immediately.
- Bucket versioning is enabled and the rule targets the wrong version type — When versioning is enabled, `Transitions` applies only to current versions. A rule for current versions won’t touch non-current versions; those need their own `NoncurrentVersionTransitions` rule.
- Objects are too small for the transition — By default, S3 Lifecycle doesn’t transition objects smaller than 128 KB, because the Glacier storage classes add roughly 40 KB of per-object metadata overhead that can wipe out the savings on tiny objects.
- Lifecycle rule is disabled — The rule exists but has `Status: "Disabled"`. Check the status in both the console and the CLI.
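The prefix mismatch in the first bullet is easy to check mechanically: compare each rule’s filter against your actual object keys. Here’s a minimal Python sketch, assuming the JSON shapes returned by `get-bucket-lifecycle-configuration` and `list-objects-v2` (the sample config and keys below are illustrative, not from a real bucket):

```python
import json

# Illustrative sample of the JSON returned by
# `aws s3api get-bucket-lifecycle-configuration`
config = json.loads("""
{
  "Rules": [
    {"ID": "transition-to-glacier",
     "Status": "Enabled",
     "Filter": {"Prefix": "archive/"},
     "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]}
  ]
}
""")

# Object keys as returned by `aws s3api list-objects-v2` (illustrative)
keys = ["logs/2024/app.log", "archive/2023/report.pdf", "data/raw.csv"]

def unmatched_keys(config, keys):
    """Return keys that no enabled rule's Prefix filter covers."""
    prefixes = [
        rule.get("Filter", {}).get("Prefix", "")
        for rule in config["Rules"]
        if rule.get("Status") == "Enabled"
    ]
    return [k for k in keys if not any(k.startswith(p) for p in prefixes)]

print(unmatched_keys(config, keys))
# ['logs/2024/app.log', 'data/raw.csv'] -- silently skipped by the rule
```

Any key this prints will never transition under the current configuration, with no error anywhere to tell you so.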
## The Fix
### Step 1: Check Existing Lifecycle Configuration
Retrieve the current lifecycle configuration:
```bash
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
```
Look for the `Rules` array. Check that:

- `Status` is `"Enabled"` (not `"Disabled"`)
- `Filter.Prefix` matches the objects you want to transition
- `Transitions` specifies the correct storage class and days
- `Expiration` has the correct days (if expiring objects)
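That checklist can be automated with a few lines of Python. A hedged sketch, assuming you’ve captured the `get-bucket-lifecycle-configuration` output as JSON (the sample rule is illustrative):

```python
import json

def check_rule(rule):
    """Flag the common misconfigurations from the checklist above."""
    problems = []
    if rule.get("Status") != "Enabled":
        problems.append("rule is not Enabled")
    if "Transitions" not in rule and "Expiration" not in rule:
        problems.append("no Transitions or Expiration configured")
    for t in rule.get("Transitions", []):
        if "Days" not in t or "StorageClass" not in t:
            problems.append("transition is missing Days or StorageClass")
    return problems

# Hypothetical rule as it might appear in the Rules array
rule = json.loads(
    '{"ID": "bad-rule", "Status": "Disabled",'
    ' "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]}'
)
print(check_rule(rule))  # ['rule is not Enabled']
```

Run `check_rule` over every entry in `Rules`; an empty list for each rule means the basics are in place.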
### Step 2: Create or Update Lifecycle Configuration
If the configuration is missing or incorrect, create a new one:
```bash
cat > lifecycle-config.json <<EOF
{
  "Rules": [
    {
      "ID": "transition-to-glacier",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "data/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 90,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-config.json
```
### Step 3: For Non-Current Versions (if Versioning is Enabled)
If versioning is enabled, add a separate rule for non-current versions:
```bash
cat > lifecycle-config.json <<EOF
{
  "Rules": [
    {
      "ID": "current-version-transition",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "data/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        }
      ]
    },
    {
      "ID": "noncurrent-version-expiration",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "data/"
      },
      "NoncurrentVersionTransitions": [
        {
          "NoncurrentDays": 7,
          "StorageClass": "GLACIER"
        }
      ],
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
      }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle-config.json
```

Note that every rule needs its own `Filter` (an empty `{}` is allowed); the non-current-version rule here scopes to the same `data/` prefix as the first rule.
### Step 4: Understand Lifecycle Transition Timing
Lifecycle rules are evaluated once per day, typically around midnight UTC, and a new or changed rule can take up to 48 hours to take effect. If you create a rule at 10 AM, transitions won’t begin until the next daily run at the earliest. To verify an object’s current state:
```bash
# Check an object's current storage class
aws s3api head-object --bucket my-bucket --key data/test.txt

# Look for "StorageClass" in the output; if the field is absent,
# the object is still in STANDARD
```
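You can also estimate when a given object should transition: S3 adds the rule’s `Days` to the object’s creation time and rounds to the next midnight UTC. A small Python sketch of that arithmetic (the upload timestamp is illustrative, and the rounding here is an approximation of the documented behavior):

```python
from datetime import datetime, timedelta, timezone

def expected_transition_date(last_modified, transition_days):
    """Approximate the lifecycle transition date: the object's
    creation time rounded up to the next midnight UTC, plus the
    rule's Days value."""
    next_midnight = (last_modified + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
    return next_midnight + timedelta(days=transition_days)

# LastModified as reported by `head-object` (illustrative value)
uploaded = datetime(2024, 1, 10, 15, 30, tzinfo=timezone.utc)
print(expected_transition_date(uploaded, 30))  # 2024-02-10 00:00:00+00:00
```

If an object is well past this date and still in Standard, suspect the filter, the rule status, or the versioning issues above rather than timing.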
### Step 5: For Existing Objects, Use S3 Batch Operations
To move objects immediately instead of waiting for the daily lifecycle run to pick them up, use an S3 Batch Operations Copy job to rewrite them into the target storage class:

```bash
# List the objects under the prefix you want to move
aws s3api list-objects-v2 --bucket my-bucket --prefix data/

# Then create an S3 Batch Operations job to copy them into the target
# storage class (best done via the console or SDK, as the CLI is verbose)
```
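A Batch Operations job takes a CSV manifest of `bucket,key` rows with no header (for an unversioned bucket). A minimal sketch that builds one from a key listing — the bucket name and keys are illustrative:

```python
import csv
import io

def build_manifest(bucket, keys):
    """Build a headerless bucket,key CSV manifest for S3 Batch Operations."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    for key in keys:
        writer.writerow([bucket, key])
    return buf.getvalue()

# Keys would come from `list-objects-v2`; these are illustrative
manifest = build_manifest("my-bucket", ["data/a.txt", "data/b.txt"])
print(manifest)
```

Upload the resulting file to S3 and point the Batch Operations job at it as its manifest.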
## How to Run This
- Replace `my-bucket` with your bucket name
- Replace the `Prefix` with the actual prefix of your objects
- Adjust `Days` values to match your transition schedule (30 for Glacier and 90 for Deep Archive are examples)
- Apply the configuration and wait for the daily lifecycle job to run
- Check the object’s storage class the next day:

```bash
aws s3api head-object --bucket my-bucket --key data/test.txt
```
## Is This Safe?
Lifecycle rules are a standard, safe tool for cost optimization, but know the trade-offs before archiving: objects in Glacier Flexible Retrieval and Deep Archive must be restored before they can be read (retrievals take minutes to hours), and the Glacier classes carry minimum storage durations (90 days for Glacier Flexible Retrieval, 180 days for Deep Archive), so deleting or re-transitioning objects early incurs pro-rated charges. Always verify your transition timeline meets compliance requirements, and test with non-critical objects first.
## Key Takeaway
S3 lifecycle rules fail silently when the prefix doesn’t match, the rule is disabled, or versioning requires separate rule types. Configure once, wait for the daily lifecycle job, and your storage costs will drop automatically.
Have questions or ran into a different S3 issue? Connect with me on LinkedIn or X.