How to Write Your First Terraform Configuration
The fastest way to understand Terraform is to use it. This tutorial walks you through installing Terraform, writing your first configuration file, and provisioning a real AWS EC2 instance from scratch. By the end, you will have a running server in the cloud, created entirely from a text file on your machine.
No prior Terraform experience is needed. If you know what a server is and have used a terminal before, you have enough background to follow along. For broader context on what Terraform is and why it matters, see the Terraform for beginners guide. For how Terraform compares to other infrastructure tools, see Ansible vs Terraform.
Step 1: Install Terraform
Terraform is a single binary. Installation takes under a minute.
macOS
brew install terraform
Linux (Ubuntu/Debian)
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
Windows
Download the binary from terraform.io/downloads and add it to your PATH. Or use Chocolatey:
choco install terraform
Verify the installation
terraform -version
You should see output like Terraform v1.7.x. The exact version does not matter for this tutorial.
Step 2: Configure AWS credentials
Terraform needs permission to create resources in your AWS account. If you have not already configured the AWS CLI, do it now:
aws configure
Enter your AWS Access Key ID, Secret Access Key, default region (use eu-west-1 or us-east-1), and output format (json).
If you do not have an AWS account, create one at aws.amazon.com. The free tier includes 750 hours per month of t2.micro or t3.micro EC2 instances for the first 12 months -- more than enough for this tutorial. For a complete guide to the AWS free tier and core services, see AWS for beginners.
Security note: Never hardcode AWS credentials in your Terraform files. The aws configure command stores them securely in ~/.aws/credentials, and Terraform reads them automatically.
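If you prefer not to keep a credentials file -- in a CI pipeline, for example -- the AWS provider also reads credentials from environment variables. A minimal sketch (the values below are placeholders, not real keys):

```shell
# Alternative to ~/.aws/credentials: the AWS provider reads these
# environment variables automatically. Values here are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="eu-west-1"
```

Environment variables take precedence over the credentials file, which makes them convenient for temporarily switching accounts.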
Step 3: Create your project directory
Every Terraform project lives in its own directory. Create one now:
mkdir my-first-terraform
cd my-first-terraform
Terraform will look for .tf files in whatever directory you run it from. You can split your configuration across multiple files -- Terraform reads all .tf files in the directory and merges them.
Step 4: Write your first configuration file
Create a file called main.tf. This is where you define what infrastructure you want.
# Configure the Terraform settings
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.7.0"
}

# Configure the AWS provider
provider "aws" {
  region = "eu-west-1"
}

# Look up the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical (Ubuntu publisher)

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Create a security group that allows SSH access
resource "aws_security_group" "server_sg" {
  name        = "first-server-sg"
  description = "Allow SSH inbound traffic"

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "first-server-sg"
  }
}

# Create the EC2 instance
resource "aws_instance" "my_server" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.server_sg.id]

  tags = {
    Name        = "my-first-terraform-server"
    Environment = "learning"
    ManagedBy   = "terraform"
  }
}
Let us break down every section.
The terraform block
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.7.0"
}
This tells Terraform two things: which providers you need (the AWS provider from HashiCorp, version 5.x), and which version of Terraform itself is required. The ~> 5.0 syntax means "any version from 5.0 up to but not including 6.0."
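A few other constraint operators are worth recognising. The version numbers below are illustrative, not a recommendation:

```hcl
version = "~> 5.0"         # >= 5.0.0 and < 6.0.0 (pessimistic constraint)
version = "~> 5.1.0"       # >= 5.1.0 and < 5.2.0 (only patch releases may float)
version = ">= 4.0, < 6.0"  # an explicit range
version = "5.31.0"         # exactly this version
```

Pinning with `~>` is the common middle ground: you pick up bug fixes automatically without risking a breaking major-version upgrade.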
The provider block
provider "aws" {
  region = "eu-west-1"
}
This configures the AWS provider. You are telling Terraform to create resources in the eu-west-1 (Ireland) region. Terraform reads your AWS credentials from ~/.aws/credentials automatically.
The data source
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
A data source reads information about existing resources. This one looks up the latest Ubuntu 22.04 AMI (Amazon Machine Image) so you do not need to hardcode an AMI ID that changes over time.
The resources
Resources are the infrastructure Terraform will create. Your configuration defines two:
- aws_security_group -- A firewall rule that allows SSH (port 22) access to your server
- aws_instance -- The actual EC2 virtual server
Notice how the instance references the security group: vpc_security_group_ids = [aws_security_group.server_sg.id]. Terraform understands this dependency and creates the security group first, then the instance.
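References like this are how Terraform builds its dependency graph. When two resources must be ordered but share no attribute reference, depends_on makes the ordering explicit. A hypothetical sketch, not part of this tutorial's configuration:

```hcl
resource "aws_instance" "worker" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # No attribute of the security group is referenced above, so the
  # dependency must be declared explicitly to force creation order.
  depends_on = [aws_security_group.server_sg]
}
```

In practice, prefer implicit dependencies through attribute references; reach for depends_on only when no such reference exists.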
Step 5: Add variables
Create a file called variables.tf:
variable "instance_type" {
  description = "The EC2 instance type to use"
  type        = string
  default     = "t3.micro"
}
Variables make your configuration flexible. Instead of hardcoding t3.micro in the resource, you reference var.instance_type. You can override the default at apply time without changing any code:
terraform apply -var="instance_type=t3.small"
This is how the same configuration works for development (small, cheap instances) and production (large, powerful instances).
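Besides the -var flag, Terraform automatically loads values from a file named terraform.tfvars in the project directory, which is the usual place to keep per-environment values:

```hcl
# terraform.tfvars -- loaded automatically by plan and apply
instance_type = "t3.small"
```

A common pattern is one tfvars file per environment (dev.tfvars, prod.tfvars), passed explicitly with terraform apply -var-file="prod.tfvars".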
Step 6: Add outputs
Create a file called outputs.tf:
output "instance_id" {
  description = "The ID of the EC2 instance"
  value       = aws_instance.my_server.id
}

output "instance_public_ip" {
  description = "The public IP address of the EC2 instance"
  value       = aws_instance.my_server.public_ip
}

output "instance_public_dns" {
  description = "The public DNS name of the EC2 instance"
  value       = aws_instance.my_server.public_dns
}
Outputs display useful information after Terraform creates your infrastructure. After terraform apply finishes, you will see the instance ID, public IP, and DNS name printed to your terminal. You can also retrieve them later with terraform output.
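You can also query a single output by name. The -raw flag strips the surrounding quotes, which makes outputs easy to feed into other commands:

```shell
terraform output instance_public_ip        # prints the value quoted
terraform output -raw instance_public_ip   # prints the bare value

# For example, SSH straight to the new server (assumes a key pair is configured):
ssh ubuntu@$(terraform output -raw instance_public_ip)
```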
Step 7: The workflow -- init, plan, apply
You now have three files in your project:
my-first-terraform/
  main.tf        # Infrastructure definition
  variables.tf   # Input variables
  outputs.tf     # Output values
Time to run Terraform.
terraform init
terraform init
This command does three things:
- Downloads the AWS provider plugin specified in your terraform block
- Creates a .terraform directory to store the plugin
- Creates a .terraform.lock.hcl file to lock the provider version
You run init once per project (or whenever you add new providers). It is safe to run multiple times.
terraform plan
terraform plan
This is the dry run. Terraform reads your configuration, compares it to the current state (nothing, since this is a new project), and shows you exactly what it will create:
Plan: 2 to add, 0 to change, 0 to destroy.
Two resources: one security group and one EC2 instance. Review the plan carefully. This is your chance to catch mistakes before any real infrastructure is created.
Always run plan before apply. This rule has saved countless engineers from accidental deletions and misconfigurations.
terraform apply
terraform apply
Terraform shows the plan again and asks for confirmation:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
Type yes and press Enter. Terraform creates the security group, then the EC2 instance (because the instance depends on the security group). After a minute or two, you will see your outputs:
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
instance_id = "i-0abc123def456789"
instance_public_ip = "54.78.123.45"
instance_public_dns = "ec2-54-78-123-45.eu-west-1.compute.amazonaws.com"
That is it. You have a running server in AWS, created entirely from code.
Step 8: Understanding state
After running apply, you will notice a new file in your directory: terraform.tfstate. This is the state file.
What the state file does
The state file is a JSON record of every resource Terraform manages. It maps your configuration to real-world resources. When you change your main.tf and run apply again, Terraform compares the configuration to the state file to determine what needs to change.
For example, if you change the instance type from t3.micro to t3.small, Terraform reads the state, sees that the current instance is t3.micro, and plans an in-place update: the AWS provider stops the instance, changes its type, and starts it again. The instance is not destroyed and recreated, because changing instance_type does not force replacement. Attributes that do force replacement (such as the AMI) are marked in the plan output with "forces replacement".
Critical rules for state files
- Never edit the state file manually. Terraform manages it. Manual edits will cause inconsistencies that are painful to fix.
- Never commit it to Git. State files can contain sensitive data like database passwords and API keys. Add terraform.tfstate and terraform.tfstate.backup to your .gitignore.
- Use remote state for teams. When multiple people work on the same infrastructure, the state file must be stored centrally. The most common approach is an S3 bucket with DynamoDB locking:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "my-first-terraform/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Remote state is a day-two concern. For learning, local state is fine. But know that production Terraform always uses remote state.
Step 9: Make a change
The power of Terraform is not just creating infrastructure -- it is managing infrastructure over time. Let us make a change.
Open main.tf and add a new tag to your EC2 instance:
tags = {
  Name        = "my-first-terraform-server"
  Environment = "learning"
  ManagedBy   = "terraform"
  Project     = "first-config"
}
Now run the workflow:
terraform plan
Terraform shows that it will modify the instance in-place (adding a tag does not require replacement):
Plan: 0 to add, 1 to change, 0 to destroy.
terraform apply
The tag is added. The server keeps running. No downtime. This is infrastructure as code in action -- you change the code, review the plan, and apply. The workflow is the same whether you are adding a tag or redesigning an entire network architecture.
Step 10: Clean up with destroy
When you are done learning, destroy everything Terraform created:
terraform destroy
Terraform shows you everything it will delete and asks for confirmation. Type yes. Every resource is removed. No orphaned servers running up charges. No security groups lingering in your account. Clean.
Get into the habit of destroying resources after practice sessions. The free tier covers a lot, but forgetting to destroy a NAT gateway or a load balancer can lead to unexpected charges.
The complete command reference
These are the commands you will use most often:
| Command | What it does |
|---|---|
| terraform init | Downloads providers, initialises the project |
| terraform plan | Shows what will change (dry run) |
| terraform apply | Creates or updates infrastructure |
| terraform destroy | Removes all managed infrastructure |
| terraform fmt | Formats your .tf files to a consistent style |
| terraform validate | Checks for syntax errors |
| terraform output | Shows output values |
| terraform state list | Lists all resources in the state |
Run terraform fmt before every commit. Run terraform validate when you get syntax errors. Use terraform state list to see what Terraform is currently managing.
What to learn next
Your first configuration is a single EC2 instance. Production Terraform is more complex but uses the same concepts. Here is what to tackle next, in order:
Modules
Modules are reusable packages of Terraform configuration. Instead of copying your EC2 code every time, you create a module and reference it with different variables. The Terraform Registry has thousands of pre-built modules for common patterns like VPCs, EKS clusters, and RDS databases.
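For example, instead of hand-writing VPC resources, you can pull a community module from the registry. This sketch uses the popular terraform-aws-modules VPC module; check its registry page for the full list of inputs:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "learning-vpc"
  cidr = "10.0.0.0/16"

  azs            = ["eu-west-1a", "eu-west-1b"]
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}
```

The workflow is unchanged: terraform init downloads the module, then plan and apply work exactly as they did for your single instance.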
Remote state
Move your state file from your local machine to an S3 bucket with DynamoDB locking. This is required for any team environment and is considered a baseline best practice for production.
Multiple environments
Structure your Terraform code to support development, staging, and production environments. The simplest approach is separate directories with shared modules. More advanced approaches use workspaces or tools like Terragrunt.
CI/CD integration
Run Terraform in automated pipelines instead of from your laptop. A typical pattern: a developer opens a pull request, the CI pipeline runs terraform plan, the team reviews the plan, and on merge the pipeline runs terraform apply. This brings the same code review process to infrastructure that you use for application code.
Import existing resources
If your team already has infrastructure created manually through the console, terraform import lets you bring it under Terraform management. This is a common real-world scenario and an important skill for brownfield projects.
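The classic form is the CLI command, e.g. terraform import aws_instance.legacy i-1234567890abcdef0. Since Terraform 1.5 you can also declare imports in configuration, which makes them part of the plan and reviewable in code review. The resource name and instance ID below are hypothetical:

```hcl
import {
  to = aws_instance.legacy
  id = "i-1234567890abcdef0" # hypothetical instance ID
}

resource "aws_instance" "legacy" {
  # Attributes written to match the real instance, or generated with:
  #   terraform plan -generate-config-out=generated.tf
  instance_type = "t3.micro"
}
```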
Each of these topics builds directly on what you have learned here. The workflow remains the same: write configuration, run plan, review, apply. The complexity grows, but the mental model stays constant.
For a comprehensive introduction to Terraform concepts, see the Terraform for beginners guide. To understand how Terraform fits within the broader DevOps toolchain, see the DevOps tools guide. If you are charting your path into cloud engineering, the cloud engineer roadmap maps out the full journey.
Ola
Founder, CloudPros
Building the most hands-on DevOps bootcamp for the AI era. 16 weeks of real infrastructure, real projects, real career outcomes.
