I spent three hours last week helping a team debug why their Lambda function was getting 403 Access Denied when trying to read from an S3 bucket it had always been able to read from. The IAM policy was correct. The bucket policy was correct. The Lambda execution role showed the right permissions in the IAM simulator. The actual culprit turned out to be a newly-applied SCP at the organization level that denied access to any KMS key without a specific tag — and the bucket’s encryption key didn’t have that tag.

S3 403s are notorious because there are at least eight different layers that can deny a request, and the error message never tells you which one. Here’s a systematic approach.

The Problem

An S3 operation fails with one of these errors:

  • AccessDenied: Access Denied (something in the chain denied the request, but the error doesn't say which layer)
  • AccessDenied: User is not authorized to perform s3:GetObject (the IAM policy attached to the caller doesn't allow the action)
  • AccessDenied: ...access point... (an access point policy is restricting the request)
  • KMS.AccessDeniedException: not authorized to perform kms:Decrypt (the object is encrypted and the caller cannot use the KMS key)
  • InvalidAccessKeyId / SignatureDoesNotMatch (the credentials themselves are wrong, common in cross-account scenarios)

The same principal might read Object A successfully and fail on Object B if they’re encrypted with different KMS keys — a classic symptom of KMS-layer denial masquerading as an S3 issue.
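
When a retry loop or error handler needs to make the same distinction, the error codes above can be mapped to a likely layer. A minimal sketch; the mapping and function name are mine, not an AWS API, and the mapping is deliberately not exhaustive:

```python
# Rough first-pass triage of S3/KMS error codes, mirroring the table above.
# Illustrative only: "AccessDenied" genuinely cannot be narrowed further
# from the error alone, which is the whole point of this article.

LIKELY_LAYER = {
    "AccessDenied": "IAM / bucket policy / SCP / VPC endpoint (error does not say which)",
    "KMS.AccessDeniedException": "KMS key policy (caller lacks kms:Decrypt on the key)",
    "InvalidAccessKeyId": "credentials (wrong or deactivated access key)",
    "SignatureDoesNotMatch": "credentials (secret key mismatch, often cross-account)",
}

def triage(error_code: str) -> str:
    """Map an S3/KMS error code to the layer most likely responsible."""
    return LIKELY_LAYER.get(error_code, "unknown - check CloudTrail for the denied action")
```

With boto3, the code to feed in comes from `exc.response["Error"]["Code"]` on a caught ClientError.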

Why Does This Happen?

  • Multiple authorization layers evaluated together: S3 requests are authorized against the IAM identity policy, bucket policy, ACLs, access point policies, VPC endpoint policy, SCPs, and (if applicable) the KMS key policy. Any one of these can deny, and the error doesn’t say which.
  • Bucket policy requires encryption headers: If the bucket policy includes a Condition that the request must include s3:x-amz-server-side-encryption, uploads without that header get 403’d even when everything else is correct.
  • KMS key policy doesn’t grant access to the caller: When an object is encrypted with a customer-managed KMS key, the caller needs both s3:GetObject and kms:Decrypt on the key. The KMS key policy is a separate resource policy — adding IAM permissions alone isn’t enough.
  • VPC endpoint policy restricting buckets: Many organizations configure S3 Gateway endpoints with policies that only allow access to specific buckets. A request from an EC2 instance using that endpoint will be denied even if the principal and bucket have correct permissions.
  • Block Public Access overriding bucket policy: Enabling “Block all public access” at the account or bucket level overrides any bucket policy that tries to grant public access. This is almost always what you want, but it surprises teams migrating legacy workloads.
  • S3 Object Ownership settings and ACLs: If ACLs are enabled and the object was uploaded by a different account, the bucket owner might not be able to read its own objects. The fix is usually to disable ACLs entirely with BucketOwnerEnforced.

The Fix

Step 1: Use the IAM Policy Simulator

Don’t guess — simulate the exact request. This tells you which policy layer denied it:

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/my-lambda-role \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::my-bucket/path/to/object.json \
  --output json

Look at EvalDecision. If it’s implicitDeny, the principal was never granted the permission. If it’s explicitDeny, something specifically blocks it — usually an SCP or a Deny statement in an attached policy. Note that the simulator only evaluates a resource policy (like the bucket policy) if you pass it in via --resource-policy.
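
Parsing the JSON response makes the verdict explicit. A sketch with a hand-written sample response shaped like the real simulate-principal-policy output (the sample values are illustrative, not captured from a live run):

```python
import json

# Illustrative response shaped like `aws iam simulate-principal-policy` output;
# a real run returns this structure under "EvaluationResults".
sample = json.loads("""
{
  "EvaluationResults": [
    {
      "EvalActionName": "s3:GetObject",
      "EvalResourceName": "arn:aws:s3:::my-bucket/path/to/object.json",
      "EvalDecision": "explicitDeny"
    }
  ]
}
""")

def summarize(results: dict) -> list[str]:
    """Turn each evaluation result into a one-line verdict."""
    lines = []
    for r in results["EvaluationResults"]:
        decision = r["EvalDecision"]
        if decision == "allowed":
            verdict = "allowed"
        elif decision == "explicitDeny":
            verdict = "explicit deny - look for a Deny statement (often an SCP)"
        else:  # implicitDeny
            verdict = "implicit deny - no policy grants this action"
        lines.append(f'{r["EvalActionName"]} on {r["EvalResourceName"]}: {verdict}')
    return lines
```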

Step 2: Check the Bucket Policy

aws s3api get-bucket-policy \
  --bucket my-bucket \
  --query Policy \
  --output text | python3 -m json.tool

Look for Effect: Deny statements first — those always win. Common problem conditions to check:

  • aws:SecureTransport: denies HTTP requests
  • s3:x-amz-server-side-encryption: denies unencrypted uploads
  • aws:SourceVpce: denies requests from outside a specific VPC endpoint
  • aws:PrincipalOrgID: denies requests from outside the organization
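
If the bucket policy is long, scanning it by eye is error-prone. A small stdlib-only sketch that flags Deny statements conditioned on the keys above (the function name and set are mine; it ignores NotAction and other subtleties):

```python
import json

# Condition keys from the list above that commonly hide behind a bare 403.
SUSPECT_KEYS = {
    "aws:SecureTransport",
    "s3:x-amz-server-side-encryption",
    "aws:SourceVpce",
    "aws:PrincipalOrgID",
}

def suspect_denies(policy_json: str) -> list[str]:
    """Return the Sids of Deny statements conditioned on the usual suspects."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement may be a bare object
        statements = [statements]
    hits = []
    for stmt in statements:
        if stmt.get("Effect") != "Deny":
            continue
        cond_keys = {
            key
            for operator in stmt.get("Condition", {}).values()
            for key in operator
        }
        if cond_keys & SUSPECT_KEYS:
            hits.append(stmt.get("Sid", "<no Sid>"))
    return hits
```

Pipe the output of get-bucket-policy into a file and pass its contents to this function to get a short list of statements worth reading closely.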

Step 3: Check for Block Public Access

aws s3api get-public-access-block \
  --bucket my-bucket \
  --query PublicAccessBlockConfiguration \
  --output json

If any flag is true and you intended public access, you have to decide — remove the block (rarely right) or use pre-signed URLs / CloudFront instead (usually right).

Step 4: Check the KMS Key Policy

Check what key encrypts the bucket by default:

aws s3api get-bucket-encryption \
  --bucket my-bucket \
  --query 'ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault' \
  --output json

If you see a KMSMasterKeyID, the caller must have kms:Decrypt on that key. Note that get-key-policy requires the key ID or key ARN, not an alias, so resolve the alias first:

aws kms describe-key \
  --key-id alias/my-bucket-key \
  --query KeyMetadata.KeyId \
  --output text

aws kms get-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default \
  --output text | python3 -m json.tool

The policy needs a statement allowing your principal to call kms:Decrypt and, for uploads, kms:GenerateDataKey.

If the role has IAM permissions but the key policy doesn’t grant it, add a statement to the key policy:

{
  "Sid": "AllowLambdaRole",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/my-lambda-role"
  },
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "*"
}
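
To verify a statement like the one above actually covers your role, a quick offline check can beat re-reading JSON. A simplified sketch (my own helper, not an AWS API; it ignores Condition blocks, NotAction, and grants, so treat a True as necessary but not sufficient):

```python
import json

def key_grants(policy_json: str, principal_arn: str, action: str) -> bool:
    """Check whether any Allow statement in a KMS key policy names the
    principal and covers the action. Simplified: ignores conditions,
    NotAction, and KMS grants."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {})
        aws_principals = principals.get("AWS", []) if isinstance(principals, dict) else principals
        if isinstance(aws_principals, str):
            aws_principals = [aws_principals]
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        principal_ok = "*" in aws_principals or principal_arn in aws_principals
        action_ok = any(a in (action, "kms:*", "*") for a in actions)
        if principal_ok and action_ok:
            return True
    return False
```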

Step 5: Check VPC Endpoint Policy

If the caller is inside a VPC and S3 traffic flows through an endpoint:

aws ec2 describe-vpc-endpoints \
  --filters "Name=service-name,Values=com.amazonaws.us-east-1.s3" \
  --query "VpcEndpoints[*].{Id:VpcEndpointId,Policy:PolicyDocument}" \
  --output json

If the policy restricts buckets by ARN, add yours:

aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-abc12345 \
  --policy-document file://endpoint-policy.json

Step 6: Check SCPs (For Organization Accounts)

SCPs can’t be inspected from a member account — you need to run this from the management account (or a delegated administrator for Organizations):

aws organizations list-policies-for-target \
  --target-id ou-abc1-def23456 \
  --filter SERVICE_CONTROL_POLICY \
  --output table

Then inspect each SCP for Deny statements on s3:* or kms:* actions. SCPs frequently restrict KMS usage to keys with specific tags or in specific regions.
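
The tag-conditioned KMS deny from the opening anecdote is a common enough pattern to check for mechanically. A sketch that flags Deny statements hitting kms:* actions with an aws:ResourceTag condition (the helper and its logic are mine, and deliberately narrow):

```python
import json

def kms_tag_denies(scp_json: str) -> list[str]:
    """Find Deny statements that target kms actions and condition on a
    resource tag - the pattern from the opening anecdote. Narrow by design:
    only looks for aws:ResourceTag/* condition keys."""
    scp = json.loads(scp_json)
    hits = []
    for stmt in scp.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        touches_kms = any(a == "*" or a.startswith("kms:") for a in actions)
        cond_keys = {
            key
            for operator in stmt.get("Condition", {}).values()
            for key in operator
        }
        tag_conditioned = any(k.startswith("aws:ResourceTag/") for k in cond_keys)
        if touches_kms and tag_conditioned:
            hits.append(stmt.get("Sid", "<no Sid>"))
    return hits
```

Run it over the content of each SCP returned by describe-policy; any hit is a candidate for the kind of denial that never shows up in the IAM simulator from a member account.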

Step 7: Enable Server Access Logging for Ongoing Diagnosis

If the problem is intermittent, turn on access logs to capture the exact auth failure reason:

aws s3api put-bucket-logging \
  --bucket my-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-log-bucket",
      "TargetPrefix": "access-logs/"
    }
  }'

Logs include the operation, HTTP status, and error code for every request.

Is This Safe?

Mostly yes. All the diagnostic commands are read-only. Modifying the KMS key policy is additive (adding a new Allow statement). VPC endpoint policy changes should be reviewed carefully — if the policy is shared with many workloads, a wrong change can break everything. Never remove Block Public Access on production buckets without first confirming no sensitive data is in them, because once public access is allowed, content could be accessed and cached externally before you notice.

Key Takeaway

S3 403 errors are a puzzle with too many pieces. Before editing any policy, run iam simulate-principal-policy — it tells you whether the denial is explicit (something specifically blocks it, usually an SCP or a bucket policy condition) or implicit (the permission was never granted). Remember that KMS is a completely separate authorization layer: a role with full s3:* access still can’t read encrypted objects without kms:Decrypt. When in doubt, check all the layers — IAM, bucket policy, Block Public Access, KMS, VPC endpoint, SCP — in that order.


Have questions or ran into a different S3 issue? Connect with me on LinkedIn or X.