Pricing risks
EFS pricing on top of S3 pricing
S3 Files uses EFS infrastructure as the caching layer. You pay EFS rates (~$0.30/GB-month) for the hot cache plus S3 storage costs for the underlying objects, so active data is effectively billed twice.
Architect note: a 10 TB hot dataset could cost roughly $3,000/month in cache alone, versus about $230/month in S3 Standard.
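The arithmetic behind that note can be sketched as follows. The rates are illustrative assumptions (EFS Standard ~$0.30/GB-month, S3 Standard ~$0.023/GB-month); check current regional pricing before relying on them.

```python
# Back-of-the-envelope monthly cost comparison for a hot dataset.
# Rates below are assumptions for illustration, not quoted prices.
EFS_RATE = 0.30   # USD per GB-month (assumed EFS Standard rate)
S3_RATE = 0.023   # USD per GB-month (assumed S3 Standard rate)

def monthly_cost(dataset_gb: float, rate: float) -> float:
    """Monthly storage cost at a flat per-GB rate."""
    return dataset_gb * rate

hot_gb = 10 * 1024                            # 10 TB expressed in GB
cache_cost = monthly_cost(hot_gb, EFS_RATE)   # EFS cache layer
s3_cost = monthly_cost(hot_gb, S3_RATE)       # underlying S3 storage
print(f"cache: ${cache_cost:,.0f}/mo, S3: ${s3_cost:,.0f}/mo, "
      f"combined: ${cache_cost + s3_cost:,.0f}/mo")
```

The combined figure is the "double billing" the section describes: the S3 line item does not go away when the cache layer is added.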
32 KB minimum metering per I/O
Every read and write is metered at a minimum of 32 KB regardless of actual file size. If your app reads thousands of tiny files (configs, JSON blobs, small logs), you'll be billed 32 KB per operation, inflating costs dramatically.
Architect note: microservice configs or ML feature stores with small files = cost nightmare.
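A quick model makes the inflation concrete. This is a minimal sketch of the 32 KB floor described above; the workload numbers are made up for illustration.

```python
# Estimate metered I/O bytes under a 32 KB per-operation minimum
# (the metering rule described above; verify against current pricing docs).
MIN_METERED = 32 * 1024  # 32 KB floor per read/write operation

def metered_bytes(actual_size: int) -> int:
    """Bytes metered for a single I/O operation of the given actual size."""
    return max(actual_size, MIN_METERED)

# Hypothetical workload: 100,000 reads of 2 KB config files per day.
ops = 100_000
actual = ops * 2 * 1024                  # bytes actually read
billed = ops * metered_bytes(2 * 1024)   # bytes metered
print(f"inflation factor: {billed / actual:.0f}x")  # prints: inflation factor: 16x
```

For 2 KB files the metered volume is 16x the data actually read, which is why small-file workloads are singled out as a cost nightmare.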
Data transfer charges still apply
Cross-AZ traffic between your mount target and EC2 instances incurs standard data transfer fees. High-throughput workloads reading across AZs will see this add up fast.
Architect note: always place mount target and compute in the same AZ for cost control.
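To size this risk, a rough estimate helps. The $0.01/GB-each-direction rate is an assumption based on typical intra-region EC2 transfer pricing; confirm the current rate for your region.

```python
# Rough cross-AZ data transfer cost estimate.
CROSS_AZ_RATE = 0.01  # USD per GB in each direction (assumed rate)

def cross_az_cost(gb_transferred: float) -> float:
    """Monthly cross-AZ cost: traffic is billed out of one AZ and into the other."""
    return gb_transferred * CROSS_AZ_RATE * 2

# A workload streaming 5 TB/month across AZs:
print(f"${cross_az_cost(5 * 1024):,.2f}/month")
```

Small per-GB rates compound quickly at NFS throughput levels, which is why same-AZ placement is the default recommendation.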
No free tier — pricing mirrors EFS exactly
There is no free tier for S3 Files. Unlike plain S3, which offers free data transfer to EC2 within the same region, every byte served through the NFS layer is billed at EFS rates. Dev/test environments will rack up costs quickly.
Technical disadvantages
60-second sync lag — not real-time consistency
Writes land in the EFS cache first and sync back to S3 within ~60–70 seconds. If your EC2 crashes or the mount is lost before sync, data written in that window can be lost. Not suitable for critical transactional writes.
Architect note: never use this as a primary write store for anything you can't afford to lose in a 60s window.
VPC-only access — no public internet mounting
Mount targets live inside your VPC. On-prem systems, hybrid workloads, or external partners cannot mount directly. You still need S3 API access or a VPN/Direct Connect for outside-VPC access.
1,024-byte S3 key length limit bites deeply nested paths
Deep directory trees with long file names can exceed S3's 1,024-byte object key limit, since the full file path (UTF-8 encoded) becomes the object key. Monorepos, ML experiment logs, or nested config hierarchies may silently fail to create files.
Architect note: audit your path lengths before migrating workloads.
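A pre-migration audit can be sketched in a few lines. This assumes the mount path maps to the object key as prefix plus relative path, which is how the section describes it; the function names here are hypothetical.

```python
import os

S3_KEY_LIMIT = 1024  # bytes, measured on the UTF-8 encoded key

def key_too_long(key: str) -> bool:
    """True if the derived S3 object key exceeds the 1,024-byte limit."""
    return len(key.encode("utf-8")) > S3_KEY_LIMIT

def oversized_keys(root: str, prefix: str = ""):
    """Yield relative paths under root whose derived S3 key would be too long."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rel = os.path.relpath(os.path.join(dirpath, name), root)
            if key_too_long(prefix + rel.replace(os.sep, "/")):
                yield rel

# Usage: for p in oversized_keys("/mnt/data", prefix="team-a/"): print(p)
```

Note the limit is in bytes, not characters: non-ASCII file names hit the ceiling sooner than their character count suggests.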
Not a replacement for EFS in all scenarios
If all your data is hot (100% active at all times), EFS is still cheaper and offers lower latency. S3 Files wins when most data is cold/warm and only a fraction is actively accessed at a time.
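The trade-off above can be framed as a break-even point on the hot fraction. Under the assumed rates (illustrative, not quoted prices), S3 Files charges S3 rates on everything plus EFS rates on the hot slice, while plain EFS charges EFS rates on everything.

```python
# Hot-fraction break-even sketch under assumed rates.
EFS_RATE = 0.30   # USD per GB-month (assumed)
S3_RATE = 0.023   # USD per GB-month (assumed)

def s3_files_cost(total_gb: float, hot_fraction: float) -> float:
    """S3 on the whole dataset plus EFS cache on the hot fraction."""
    return total_gb * S3_RATE + total_gb * hot_fraction * EFS_RATE

def efs_cost(total_gb: float) -> float:
    """Plain EFS on the whole dataset."""
    return total_gb * EFS_RATE

# Break-even: S3_RATE + f * EFS_RATE = EFS_RATE  =>  f = 1 - S3_RATE / EFS_RATE
break_even = 1 - S3_RATE / EFS_RATE
print(f"S3 Files stays cheaper while under ~{break_even:.0%} of data is hot")
```

Under these rates the break-even hot fraction is high, so S3 Files loses only when nearly all data is active, consistent with the guidance above.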
IAM complexity — two roles required
You need a file system access role (for S3 Files to reach the S3 bucket) AND a compute resource role (for EC2/Lambda to mount). Getting either wrong is the most common setup failure and can silently deny all access.
Object versioning conflicts
If bucket versioning is enabled, deleting a file via NFS creates a delete marker in S3 rather than removing the object. Teams unaware of this will be surprised by storage costs from the accumulated noncurrent versions of "deleted" files.
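The bill from "deleted" data grows over time if no lifecycle rule expires noncurrent versions. A minimal sketch, assuming the S3 Standard rate above and a steady deletion volume (both numbers are made up for illustration):

```python
# Illustrative only: versioned objects "deleted" via NFS keep billing as
# noncurrent versions until a lifecycle rule expires them.
S3_RATE = 0.023  # USD per GB-month (assumed S3 Standard rate)

def phantom_storage_bill(deleted_gb_per_month: float, months: int) -> float:
    """Monthly bill for retained noncurrent versions after `months` of deletions."""
    return deleted_gb_per_month * months * S3_RATE

# Deleting 100 GB/month for a year with no lifecycle rule:
print(f"${phantom_storage_bill(100, 12):.2f}/month and still growing")
```

A noncurrent-version expiration lifecycle rule on the bucket caps this growth.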
Architect's verdict — use it or skip it?
Use S3 Files when...
Large cold datasets with occasional hot access.
ML training pipelines.
Legacy apps that need POSIX semantics.
Shared access across many compute resources.
You already pay for S3 and want to eliminate EFS duplication.
Avoid S3 Files when...
Thousands of tiny files per second (32 KB minimum metering).
Critical transactional writes (60-second sync risk).
100% hot data workloads, where plain EFS is cheaper.
On-prem or hybrid access is required.
Cost-sensitive dev/test environments.