Deploying Static Websites with Terraform

Using AWS Amplify for static site generator hosting

From S3/CloudFront to Amplify

| Feature               | S3 + CloudFront                     | AWS Amplify                  |
|-----------------------|-------------------------------------|------------------------------|
| Directory index files | Root only (or CloudFront Function)  | Built-in support             |
| SSL certificates      | Manual ACM setup                    | Automatic provisioning       |
| Custom headers        | CloudFront Function required        | YAML configuration           |
| Bucket access         | Public or OAI complexity            | Private (managed internally) |
| Terraform support     | Full                                | Partial (workarounds needed) |

My initial website architecture used an S3 bucket as a CloudFront origin, which worked correctly for a single-page site. When I expanded to multiple pages organized in directories, each directory needed to serve its own index.html, which CloudFront only does for the root object.

S3 static website hosting would have required making the bucket publicly accessible, an unnecessary security exposure even for static content. CloudFront also requires the S3 website endpoint to be configured manually as a custom origin, unlike direct S3 bucket origins, which the AWS console fills in automatically. Finally, support for custom response headers, redirects, and error pages is limited and can conflict with CloudFront configuration.

AWS Amplify is the solution AWS recommends for static website hosting. It supports directory-level index.html files, configures Route53 DNS and SSL certificates automatically, and allows customizable HTTP response headers based on URI patterns. While Amplify is primarily marketed for single-page applications (SPAs), it works equally well for static site generators like Astro. Amplify uses CloudFront internally without exposing its configuration options; the defaults appear reasonably strict, making this limitation acceptable for my security requirements.

Before migrating the production site, I created a test deployment under a different domain to validate functionality. This allowed me to identify potential issues with domain configuration and content delivery before switching the main domain. Testing proved essential since configuring Amplify through Terraform presented significant challenges. I needed to delete and recreate the test site multiple times to resolve issues with automatic SSL certificate generation, redirects, and custom headers.


Terraform configuration

The Terraform configuration for the website consists of modular components that handle different aspects of the AWS infrastructure. The Amplify module includes resources for the Amplify app itself, branch configuration, domain association, and custom headers. Additional modules manage supporting infrastructure like S3 buckets for deployment artifacts and DNS configuration. Terraform state is stored remotely in an S3 bucket.
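A minimal sketch of the Amplify module's core resources follows; the names, domain, and repository-less setup are illustrative assumptions, not my exact configuration:

```hcl
# Amplify app for a manually deployed static site (no Git repository attached).
resource "aws_amplify_app" "site" {
  name     = "example-site" # hypothetical name
  platform = "WEB"          # plain static hosting, not a compute platform
}

# The branch that deployments target.
resource "aws_amplify_branch" "main" {
  app_id      = aws_amplify_app.site.id
  branch_name = "main"
}

# Associates the custom domain; Amplify then provisions and renews the
# ACM certificate automatically.
resource "aws_amplify_domain_association" "site" {
  app_id      = aws_amplify_app.site.id
  domain_name = "example.com" # hypothetical domain

  sub_domain {
    branch_name = aws_amplify_branch.main.branch_name
    prefix      = "" # apex domain
  }
}
```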

Importing Amplify's auto-generated resources into Terraform maintains the infrastructure-as-code approach while working within Amplify's architectural constraints. Because the internal CloudFront distribution is not configurable, its defaults have to suffice; they appear adequate for basic security requirements, though a detailed security analysis remains pending.

I implemented custom headers, including Content Security Policy with SRI hashes, using Terraform’s templatefile function to insert dynamic values into a YAML configuration. This allows the security headers to be updated with each deployment while keeping the configuration declarative and version-controlled. As part of the build process, Astro exports the integrity hashes as a TypeScript file. Since @kindspells/astro-shield does not provide direct JSON export functionality, I added a postbuild script to the npm package configuration that automatically writes the hashes to a JSON file that Terraform can consume.
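In outline, the wiring looks like the following sketch; the file names, template variables, and JSON shape are assumptions, not the exact build output:

```hcl
locals {
  # SRI hashes written by the postbuild npm script (illustrative file name/shape).
  sri = jsondecode(file("${path.module}/sri-hashes.json"))

  # Render Amplify's customHttp.yml with the current script/style hashes
  # interpolated into the Content-Security-Policy value.
  custom_headers = templatefile("${path.module}/customHttp.yml.tftpl", {
    script_hashes = join(" ", local.sri.scripts)
    style_hashes  = join(" ", local.sri.styles)
  })
}

resource "aws_amplify_app" "site" {
  name           = "example-site"
  custom_headers = local.custom_headers # header YAML kept declarative
}
```

This keeps the hashes in version-controlled configuration while letting each build refresh them automatically.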


Challenges and workarounds

Working with AWS Amplify through Terraform presented several challenges, primarily stemming from Amplify’s rapid feature evolution outpacing the Terraform provider’s capabilities. Domain provisioning failed repeatedly during initial setup and testing phases. Multiple deletion and recreation cycles were required before successful domain association, suggesting race conditions or incomplete state synchronization within Amplify’s domain verification process.

The custom headers workaround using ignore_changes proved unreliable because it required manual intervention for legitimate updates. Switching to AWS CLI execution via terraform_data provides better automation while maintaining infrastructure-as-code principles. This approach worked better than alternatives like external deployment scripts or CI/CD pipeline integration, which would have introduced additional dependencies outside the Terraform workflow.
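A sketch of that terraform_data approach, assuming a var.amplify_app_id variable and illustrative template paths (the `aws amplify update-app --custom-headers` invocation reflects my understanding of the CLI, so verify against the current reference):

```hcl
# Rendered header YAML written to disk so the CLI can read it.
resource "local_file" "custom_headers" {
  filename = "${path.module}/customHttp.yml"
  content = templatefile("${path.module}/customHttp.yml.tftpl", {
    script_hashes = join(" ", jsondecode(file("${path.module}/sri-hashes.json")).scripts)
  })
}

# Apply the headers via the AWS CLI instead of the provider, re-running
# only when the rendered YAML actually changes.
resource "terraform_data" "apply_headers" {
  triggers_replace = sha256(local_file.custom_headers.content)

  provisioner "local-exec" {
    command = "aws amplify update-app --app-id ${var.amplify_app_id} --custom-headers file://${local_file.custom_headers.filename}"
  }
}
```

Hashing the rendered content, rather than the template file, means a change to the SRI hashes alone is enough to trigger a header update.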

Triggering deployments manually through Terraform stays consistent with the overall infrastructure management approach while working around provider limitations. GitHub integration would add another tool to the process and require managing webhook configurations. The AWS CLI approach keeps deployment control within the same tool that manages infrastructure state and configuration.


Security considerations

The CSP approach was chosen over simpler alternatives because static sites still face injection risks through compromised CDNs or build tools. Available tooling for generating and managing CSP and SRI hashes requires improvement, but I automated as much of the process as possible to ensure consistent implementation without manual hash management.

The security benefits outweigh the minimal performance costs. Future iterations may explore selective inlining for non-executable resources.

Access controls for supporting infrastructure follow least-privilege principles, with S3 bucket policies and IAM roles configured to grant only the permissions needed for deployment and operation. While managing IAM entirely through Terraform would be ideal, documentation for that approach is sparse and the practice uncommon. I plan to investigate comprehensive IAM management options for future infrastructure iterations.

SSL certificate management is handled through Amplify’s automated provisioning, which creates and renews certificates through AWS Certificate Manager. While this automation reduces manual maintenance, the certificates are not tracked in Terraform state. I manage the DNS verification record by importing it into Terraform after Amplify creates it, but the actual certificates remain outside Terraform management.
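The import looks roughly like this; the zone ID, record name, and variable names are placeholders for values Amplify generates, not real identifiers:

```hcl
# Route53 record matching the certificate-validation CNAME Amplify creates.
# Imported after creation with (ID format: <zone-id>_<record-name>_<type>):
#   terraform import aws_route53_record.amplify_cert_validation Z123EXAMPLE__abc123.example.com_CNAME
resource "aws_route53_record" "amplify_cert_validation" {
  zone_id = var.zone_id                  # hypothetical variable
  name    = var.validation_record_name   # e.g. "_abc123.example.com", from Amplify
  type    = "CNAME"
  ttl     = 300
  records = [var.validation_record_value] # ACM-provided target
}
```

Once imported, Terraform owns the DNS record even though the certificate itself stays under Amplify's management.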


Deployment process

The build process is handled by an npm script, which invokes Astro’s build command to generate optimized static assets in the dist/ folder. The postbuild npm script automatically calculates SRI hashes for all JavaScript and CSS files, which are recorded for later inclusion in the Content Security Policy headers.

An npm script applies the Terraform configuration to update infrastructure as needed and deploys updated assets and headers. The process first synchronizes the generated static files to an S3 bucket, which serves as a staging area for deployment. Terraform then triggers Amplify to deploy content from this S3 bucket to the live environment and updates response headers if necessary.

S3 staging exists because Terraform cannot directly deploy files to Amplify. Direct file upload to Amplify would require switching to GitHub integration or similar, which introduces additional complexity and external dependencies. The S3 approach maintains full control within the AWS ecosystem while working around provider limitations.

The AWS CLI workaround for deployment triggers emerged from Terraform’s lack of deployment resource support. Alternative approaches like GitHub webhooks or CodePipeline would introduce external dependencies or additional AWS services. This approach keeps the deployment process contained within the existing Terraform workflow.
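Both steps can live in a single terraform_data resource; this sketch assumes hypothetical variables for the bucket, app ID, and release identifier, and the `--source-url-type BUCKET_PREFIX` option requires a reasonably recent AWS CLI, so check the current `aws amplify start-deployment` reference:

```hcl
# Stage the built site in S3, then tell Amplify to deploy from it.
resource "terraform_data" "deploy" {
  # Redeploy when the build output changes; a content hash of dist/ would be
  # a more precise trigger than an externally supplied release ID.
  triggers_replace = var.release_id

  provisioner "local-exec" {
    command = <<-EOT
      aws s3 sync dist/ s3://${var.staging_bucket}/site/ --delete
      aws amplify start-deployment \
        --app-id ${var.amplify_app_id} \
        --branch-name main \
        --source-url s3://${var.staging_bucket}/site/ \
        --source-url-type BUCKET_PREFIX
    EOT
  }
}
```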

Header updates require the AWS CLI due to the Terraform provider bug mentioned earlier, which reports header changes on every plan even when the headers are identical. A terraform_data resource tracks changes by hashing the rendered template, triggering updates only when the SRI hashes or template content actually change. This proved more reliable than managing the ignore_changes lifecycle rule manually.


Monitoring and observability

The monitoring approach uses AWS-native tools to avoid introducing additional dependencies or complex observability stacks for a static website. Third-party monitoring solutions like Datadog or New Relic would introduce unnecessary complexity and cost for the limited monitoring requirements of a personal website with predictable traffic patterns.

Amplify alerts and AWS Budgets alerts serve as an effective early warning system for infrastructure misconfigurations or unexpected traffic spikes that could cause billing surprises. This proved more practical than detailed CloudWatch metrics for a low-traffic site, where cost anomalies are often the first indicator of technical issues requiring attention.
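A budget alert of this kind is a small Terraform resource; the amount, threshold, and address below are examples, not my actual values:

```hcl
# Monthly cost budget with an email alert at 80% of the limit.
resource "aws_budgets_budget" "site" {
  name         = "static-site-monthly"
  budget_type  = "COST"
  limit_amount = "10" # example limit, in USD
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["alerts@example.com"] # placeholder address
  }
}
```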


Future improvements

While not currently leveraged, Amplify’s branch capabilities offer potential for separating development and production environments. Further exploration of this feature could enable more sophisticated deployment workflows with proper staging and testing before production releases.

Performance monitoring improvements would focus on implementing CloudWatch RUM to gather client-side performance data including Core Web Vitals metrics. This would provide insights into real user experience across different devices and network conditions, helping identify optimization opportunities for page load times and interactive elements.

Advanced alerting would build upon the current Amplify alerts configuration to include deployment duration monitoring and geographic traffic analysis. Recovery planning would focus on documentation rather than redundant infrastructure, including procedures for domain transfers, SSL certificate replacement, and complete infrastructure recreation.

Terraform’s AWS provider releases frequently. I monitor its release notes to notice when new features or fixes apply to my infrastructure and to avoid falling behind.
