From S3/CloudFront to Amplify
- Directory Index Limitations: CloudFront’s root object configuration only applies to the root domain, not subdirectories. A CloudFront Function workaround exists but requires ongoing maintenance.
- S3 Website Hosting: A basic static file server that requires public S3 bucket access and complex CloudFront CDN configuration, introducing security exposure and operational complexity.
- AWS Amplify: Supports per-directory `index.html` files with integrated SSL certificate management, custom HTTP response headers, and CloudFront. I chose this solution for its static hosting capabilities.
- Migration Strategy: I tested Amplify by deploying to a separate domain before switching production domains, to validate functionality and identify configuration issues.
| Feature | S3 + CloudFront | AWS Amplify |
|---|---|---|
| Directory index files | Root only (or CloudFront Function) | Built-in support |
| SSL certificates | Manual ACM setup | Automatic provisioning |
| Custom headers | CloudFront Function required | YAML configuration |
| Bucket access | Public or OAI complexity | Private (managed internally) |
| Terraform support | Full | Partial (workarounds needed) |
My initial website architecture used an S3 bucket as a CloudFront origin, which functioned correctly for a single-page site. When expanding to multiple pages organized in directories, I needed each directory to have its own `index.html` root object.
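The CloudFront Function workaround mentioned in the comparison above rewrites request URIs at the edge so that directory paths resolve to their `index.html`. A minimal sketch of what that looks like in Terraform (the resource and function names are hypothetical):

```hcl
# Hypothetical sketch of the CloudFront Function workaround:
# rewrite /docs/ or /docs to /docs/index.html at the edge.
resource "aws_cloudfront_function" "directory_index" {
  name    = "rewrite-directory-index" # hypothetical name
  runtime = "cloudfront-js-2.0"
  publish = true
  code    = <<-EOF
    function handler(event) {
      var request = event.request;
      var uri = request.uri;
      if (uri.endsWith('/')) {
        request.uri = uri + 'index.html';
      } else if (!uri.includes('.')) {
        request.uri = uri + '/index.html';
      }
      return request;
    }
  EOF
}
```

The function must additionally be attached to the distribution's cache behavior as a `viewer-request` association and kept in sync with any routing changes; this is the ongoing maintenance the table refers to.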
S3 website hosting would have required making the bucket publicly accessible, introducing security exposure even for static content. CloudFront requires manual configuration of the S3 website endpoint as the origin, adding complexity compared to direct S3 bucket origins, which the AWS console populates automatically. Support for custom response headers, redirects, and error pages is limited and can conflict with the CloudFront configuration.
AWS Amplify is the recommended solution for static website hosting on AWS. It offers support for directory-level `index.html` files, automatic configuration of Route53 DNS and SSL certificates, and customizable HTTP response headers based on URI patterns. While Amplify is primarily marketed for single-page applications (SPAs), it works equally well for static site generators like Astro. Amplify uses CloudFront internally without exposing configuration options. The default CloudFront configuration appears reasonably strict, making this limitation acceptable for my security requirements.
Before migrating the production site, I created a test deployment under a different domain to validate functionality. This allowed me to identify potential issues with domain configuration and content delivery before switching the main domain. Testing proved essential since configuring Amplify through Terraform presented significant challenges. I needed to delete and recreate the test site multiple times to resolve issues with automatic SSL certificate generation, redirects, and custom headers.
Terraform configuration
- State Management: Amplify automatically creates sub-resources like DNS verification records, only some of which are directly accessible through Terraform. These must be imported into Terraform state for consistent infrastructure management.
- Custom Headers: Implemented using the `templatefile` function to insert dynamic SRI hashes.
- Custom Domains: Managed through the `aws_amplify_domain_association` resource with additional Route53 configuration.
The Terraform configuration for the website consists of modular components that handle different aspects of the AWS infrastructure. The Amplify module includes resources for the Amplify app itself, branch configuration, domain association, and custom headers. Additional modules manage supporting infrastructure like S3 buckets for deployment artifacts and DNS configuration. Terraform state is stored remotely in an S3 bucket.
Importing auto-generated resources maintains the infrastructure-as-code approach while working within Amplify's architectural constraints. As noted above, Amplify does not expose its internal CloudFront configuration; the defaults appear adequate for basic security requirements, though a detailed security analysis remains pending.
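With Terraform 1.5 or later, adopting a record that Amplify created outside of Terraform can be expressed with an `import` block. The sketch below uses hypothetical zone IDs and record values, not the real ones from my configuration:

```hcl
# Hypothetical sketch: adopt the DNS verification record that Amplify
# created on its own into Terraform state (Terraform >= 1.5 import block).
import {
  to = aws_route53_record.amplify_cert_verification
  # Route53 record import ID format: ZONEID_recordname_type
  id = "Z0123456789ABC_example-verification.example.com_CNAME"
}

resource "aws_route53_record" "amplify_cert_verification" {
  zone_id = "Z0123456789ABC" # hypothetical hosted zone ID
  name    = "example-verification.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["_abc123.acm-validations.aws."] # placeholder validation target
}
```

After the import, Terraform manages the record like any other, which keeps subsequent plans free of drift.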
I implemented custom headers, including a Content Security Policy with SRI hashes, using Terraform’s `templatefile` function to insert dynamic values into a YAML configuration. This allows the security headers to be updated with each deployment while keeping the configuration declarative and version-controlled. As part of the build process, Astro exports the integrity hashes as a TypeScript file. Since `@kindspells/astro-shield` does not provide direct JSON export functionality, I added a postbuild script to the npm package configuration that automatically writes the hashes to a JSON file that Terraform can consume.
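The wiring between the postbuild JSON file and the header template can be sketched roughly as follows; the file paths and JSON key names are assumptions, not the exact ones from my repository:

```hcl
locals {
  # Hypothetical path: SRI hashes written by the Astro postbuild step.
  sri = jsondecode(file("${path.module}/../dist/sri-hashes.json"))

  # Render the Amplify custom-headers YAML, interpolating the current
  # hashes into the Content-Security-Policy header value.
  custom_headers = templatefile("${path.module}/customHeaders.yml.tftpl", {
    script_hashes = join(" ", local.sri.scripts)
    style_hashes  = join(" ", local.sri.styles)
  })
}
```

The rendered string is then pushed to Amplify with the AWS CLI, as described in the deployment process section.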
Challenges and workarounds
- Terraform Provider Limitations: AWS Amplify’s rapid evolution outpaces Terraform provider capabilities.
- Domain Provisioning Issues: Amplify domain association can enter the `PENDING_VERIFICATION` state indefinitely, requiring resource deletion and recreation.
- Custom Headers Bug: Terraform always detects header changes due to provider issue #34318, requiring an AWS CLI workaround.
- Deploying to Amplify: Terraform lacks deployment resource support, requiring AWS CLI `start-deployment` integration via `terraform_data`.
Working with AWS Amplify through Terraform presented several challenges, primarily stemming from Amplify’s rapid feature evolution outpacing the Terraform provider’s capabilities. Domain provisioning failed repeatedly during initial setup and testing phases. Multiple deletion and recreation cycles were required before successful domain association, suggesting race conditions or incomplete state synchronization within Amplify’s domain verification process.
The custom headers workaround using `ignore_changes` proved unreliable because it required manual intervention for legitimate updates. Switching to AWS CLI execution via `terraform_data` provided better automation while maintaining infrastructure-as-code principles. This approach worked better than alternatives like external deployment scripts or CI/CD pipeline integration, which would have introduced additional dependencies outside the Terraform workflow.
Manual deployment triggering through Terraform stays consistent with the infrastructure management approach while working around provider limitations. GitHub integration would add another tool to the process and require managing webhook configurations. The current AWS CLI approach keeps deployment control within the same tool that manages infrastructure state and configuration.
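The deployment trigger can be sketched as follows; the app resource, branch name, staging bucket, and `local.dist_hash` value are hypothetical stand-ins:

```hcl
# Hypothetical sketch: trigger an Amplify deployment from the S3 staging
# bucket whenever the content hash of the built site changes.
resource "terraform_data" "amplify_deploy" {
  triggers_replace = {
    dist_hash = local.dist_hash # e.g. a combined hash over all files in dist/
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws amplify start-deployment \
        --app-id ${aws_amplify_app.site.id} \
        --branch-name main \
        --source-url s3://example-deploy-bucket/site.zip
    EOT
  }
}
```

Because the `local-exec` provisioner only runs when `triggers_replace` changes, repeated `terraform apply` runs with unchanged content do not redeploy.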
Security considerations
- Content Security Policy (CSP): Implemented with SRI hashes for all scripts and stylesheets, with a `default-src 'none'` baseline policy.
- Resource Separation: Disabled inline resources to maintain security boundaries at minimal performance cost.
- Access Control: Configured S3 bucket policies and IAM roles with least privilege principles.
- SSL Management: Leveraged Amplify’s automated certificate provisioning while maintaining infrastructure control through Terraform.
The CSP approach was chosen over simpler alternatives because static sites still face injection risks through compromised CDNs or build tools. Available tooling for generating and managing CSP and SRI hashes requires improvement, but I automated as much of the process as possible to ensure consistent implementation without manual hash management.
Security benefits outweigh minimal performance costs. Future iterations may explore selective inlining for non-executable resources.
Access controls for supporting infrastructure follow least privilege principles, with S3 bucket policies and IAM roles configured to provide only the necessary permissions for deployment and operation. While IAM management through Terraform would be ideal, documentation for this approach is sparse and not commonly practiced. I plan to investigate comprehensive IAM management options for future infrastructure iterations.
SSL certificate management is handled through Amplify’s automated provisioning, which creates and renews certificates through AWS Certificate Manager. While this automation reduces manual maintenance, the certificates are not tracked in Terraform state. I manage the DNS verification record by importing it into Terraform after Amplify creates it, but the actual certificates remain outside Terraform management.
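The Terraform-managed side of the domain setup is roughly this shape; the domain and branch names are placeholders:

```hcl
# Hypothetical sketch: associate a custom domain with the Amplify app.
# Amplify provisions and renews the ACM certificate itself; only the
# association and sub-domain mappings live in Terraform state.
resource "aws_amplify_domain_association" "site" {
  app_id      = aws_amplify_app.site.id
  domain_name = "example.com"

  sub_domain {
    branch_name = "main"
    prefix      = "" # apex domain
  }

  sub_domain {
    branch_name = "main"
    prefix      = "www"
  }
}
```

When this resource gets stuck in `PENDING_VERIFICATION`, as described earlier, destroying and recreating it was the only fix I found.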
Deployment process
- Build Pipeline: `npm run build` generates optimized static assets with SRI hashes.
- Terraform Application: `npm run deploy` applies infrastructure changes, including updated assets and CSP headers.
- S3 Staging: Distribution files are synchronized to an S3 bucket before Amplify deployment due to Terraform provider limitations.
- Amplify Deployment: Terraform triggers deployment via the AWS CLI `start-deployment` command when distribution files change.
- Amplify Response Headers: Terraform triggers updates via the AWS CLI `update-app --custom-headers` command when SRI hashes or the template change.
The build process is handled by an npm script, which invokes Astro’s build command to generate optimized static assets in the dist/ folder. The postbuild npm script automatically calculates SRI hashes for all JavaScript and CSS files, which are recorded for later inclusion in the Content Security Policy headers.
An npm script applies the Terraform configuration to update infrastructure as needed and deploys updated assets and headers. The process first synchronizes the generated static files to an S3 bucket, which serves as a staging area for deployment. Terraform then triggers Amplify to deploy content from this S3 bucket to the live environment and updates response headers if necessary.
S3 staging exists because Terraform cannot directly deploy files to Amplify. Direct file upload to Amplify would require switching to GitHub integration or similar, which introduces additional complexity and external dependencies. The S3 approach maintains full control within the AWS ecosystem while working around provider limitations.
The AWS CLI workaround for deployment triggers emerged from Terraform’s lack of deployment resource support. Alternative approaches like GitHub webhooks or CodePipeline would introduce external dependencies or additional AWS services. This approach keeps the deployment process contained within the existing Terraform workflow.
Header updates require the AWS CLI due to the previously mentioned Terraform provider bug that always detects header changes, even when they are identical. A `terraform_data` resource tracks changes by hashing the rendered template, triggering updates only when SRI hashes or template content actually change. This approach proved more reliable than manually managing the Terraform `ignore_changes` lifecycle rule.
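A sketch of that trigger, assuming the rendered template is held in a hypothetical `local.custom_headers` value and written to disk for the CLI call:

```hcl
# Hypothetical sketch: write the rendered headers YAML to disk, then push
# it with the AWS CLI only when its hash changes (provider issue #34318
# makes the native custom-headers handling report spurious diffs).
resource "local_file" "custom_headers" {
  content  = local.custom_headers # rendered via templatefile elsewhere
  filename = "${path.module}/customHeaders.yml"
}

resource "terraform_data" "amplify_headers" {
  triggers_replace = {
    headers_hash = sha256(local.custom_headers)
  }

  provisioner "local-exec" {
    command = "aws amplify update-app --app-id ${aws_amplify_app.site.id} --custom-headers file://${local_file.custom_headers.filename}"
  }
}
```

The `file://` prefix makes the AWS CLI read the parameter value from the file, so the YAML never has to be shell-escaped inline.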
Monitoring and observability
- Amplify Console Metrics: Built-in deployment tracking, build success/failure visibility, and basic traffic analytics through the AWS Amplify console.
- Error Monitoring: Configurable Amplify Alerts for 4xx error rates, request count spikes, and deployment failures with SNS notifications.
- Certificate Monitoring: AWS Certificate Manager automatically handles renewal notifications and provides expiration alerts through CloudWatch events.
- Budget Alerts: Existing Terraform configuration includes cost monitoring to prevent unexpected billing charges.
The monitoring approach uses AWS-native tools to avoid introducing additional dependencies or complex observability stacks for a static website. Third-party monitoring solutions like Datadog or New Relic would introduce unnecessary complexity and cost for the limited monitoring requirements of a personal website with predictable traffic patterns.
Amplify and Budget alerts serve as an effective early warning system for infrastructure misconfigurations or unexpected traffic spikes that could result in billing surprises. This approach proved more practical than implementing detailed CloudWatch metrics for a low-traffic site where cost anomalies are often the first indicator of technical issues requiring attention.
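The cost-monitoring piece mentioned above can be as small as a single budget resource; the limit and e-mail address here are placeholder values:

```hcl
# Hypothetical sketch: monthly cost budget that e-mails a warning when
# actual spend crosses 80% of the limit.
resource "aws_budgets_budget" "site" {
  name         = "website-monthly" # hypothetical name
  budget_type  = "COST"
  limit_amount = "10.0"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["ops@example.com"] # placeholder address
  }
}
```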
Future improvements
- Branch Environments: Explore Amplify’s branch capabilities for potential development/production environment separation.
- Performance Monitoring: Implement CloudWatch RUM for client-side performance metrics and Core Web Vitals tracking.
- Advanced Alerting: Expand Amplify alerts to include deployment duration thresholds and geographic traffic pattern monitoring.
- Recovery Documentation: Document domain recovery procedures and infrastructure recreation processes from Git and Terraform state.
- Provider Monitoring: Track AWS provider updates to leverage improvements to Amplify support.
- Backup Strategy: Establish cross-region S3 replication for critical deployment artifacts and Terraform state files.
- Enhanced Observability: Implement structured logging and custom CloudWatch dashboards for monitoring deployment pipelines and infrastructure health.
While not currently leveraged, Amplify’s branch capabilities offer potential for separating development and production environments. Further exploration of this feature could enable more sophisticated deployment workflows with proper staging and testing before production releases.
Performance monitoring improvements would focus on implementing CloudWatch RUM to gather client-side performance data including Core Web Vitals metrics. This would provide insights into real user experience across different devices and network conditions, helping identify optimization opportunities for page load times and interactive elements.
Advanced alerting would build upon the current Amplify alerts configuration to include deployment duration monitoring and geographic traffic analysis. Recovery planning would focus on documentation rather than redundant infrastructure, including procedures for domain transfers, SSL certificate replacement, and complete infrastructure recreation.
Terraform’s AWS provider has a rapid release cadence. I will monitor its releases to notice when new features or fixes apply to my infrastructure and to ensure that I don’t fall behind.