Storage (critical)

Amazon S3

S3 is virtually unlimited object storage. Objects are stored in buckets, and bucket names must be globally unique. S3 provides 11 nines (99.999999999%) of durability by redundantly storing data across multiple AZs within a region.

Memory anchor

S3 is an infinite filing cabinet — any drawer (bucket), any folder, any file size. Each drawer has a lock (bucket policy), and you can set the papers to self-destruct after a set time (lifecycle policy).

Expected depth

Storage classes: Standard (frequent access), Intelligent-Tiering (auto-moves between tiers based on access patterns), Standard-IA (infrequent access, 30-day minimum), One Zone-IA (single AZ, cheaper, no HA), Glacier Instant Retrieval (millisecond access, 90-day minimum), Glacier Flexible Retrieval (minutes to hours), Glacier Deep Archive (12-hour retrieval, cheapest). Lifecycle policies automate transitions. S3 versioning protects against accidental deletes. MFA Delete requires MFA for permanent deletes. Server-side encryption: SSE-S3 (AWS-managed keys), SSE-KMS (customer-controlled keys), SSE-C (customer-provided keys).
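The lifecycle transitions above can be written as an S3 lifecycle configuration. A minimal sketch of one such rule, in the dict shape that boto3's `put_bucket_lifecycle_configuration` accepts (the rule ID, prefix, and the specific day thresholds are illustrative assumptions, not prescribed values):

```python
def log_lifecycle_rules(prefix: str = "logs/") -> dict:
    """Tier objects down over time: Standard -> Standard-IA at 30 days
    (the Standard-IA minimum) -> Glacier at 90 days -> delete at 730 days.
    Returns the LifecycleConfiguration dict boto3 expects."""
    return {
        "Rules": [
            {
                "ID": "tier-down-logs",          # illustrative rule name
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},     # apply only under this prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},      # permanent delete after 2 years
            }
        ]
    }
```

You would apply it with `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=log_lifecycle_rules())`; note the Standard-IA transition cannot be earlier than 30 days after object creation.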

Deep — senior internals

S3 was eventually consistent for overwrites and deletes until December 2020; since then, S3 provides strong read-after-write consistency for all operations at no extra cost. S3 Transfer Acceleration routes uploads through CloudFront edge locations to speed up long-distance transfers. Multipart upload is required for objects larger than 5 GB (the single-PUT limit) and recommended above 100 MB. S3 Replication: Cross-Region Replication (CRR) for DR and compliance; Same-Region Replication (SRR) for log aggregation and test environment seeding. S3 Object Lambda lets you transform data on read — apply image resizing or PII masking without storing multiple copies. S3 Select queries within objects using SQL expressions, reading only a subset of data.
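Multipart sizing follows from two S3 limits: a 5 MB minimum part size (except the last part) and a 10,000-part cap per upload, so the minimum viable part size grows with object size. A small sketch of that arithmetic (helper names are my own; boto3's `TransferConfig` does this for you in practice):

```python
import math

MIN_PART = 5 * 1024**2   # S3 minimum part size (all parts except the last)
MAX_PARTS = 10_000       # S3 cap on parts per multipart upload

def choose_part_size(object_size: int) -> int:
    """Smallest part size (bytes) that keeps the upload within 10,000 parts."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts needed to upload object_size at the given part size."""
    return math.ceil(object_size / part_size)
```

For a 10 MB object the 5 MB floor wins (two parts); for a 5 TB object (the S3 maximum object size) the part size must rise to roughly 525 MB to stay under the cap.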

🎤 Interview-ready answer

S3 is my default for binary/blob storage — images, logs, backups, static assets, ML datasets. I select storage classes by access pattern: Standard for hot data, Intelligent-Tiering for uncertain patterns, Glacier for archives. I enable versioning for production buckets, block public access by default, and use bucket policies + IAM for access control. For encryption, I use SSE-KMS when I need audit trails and key rotation control.

Common trap

Claiming S3 is eventually consistent — before December 2020 this was true for overwrites and deletes, but since then S3 provides strong read-after-write consistency for all operations at no extra cost. Candidates who memorized older material still design unnecessary workarounds (polling, delays) for a problem that no longer exists.