Terraform for Beginners: Your First Infrastructure as Code
Terraform lets you define cloud infrastructure in code files, then create, update, and destroy that infrastructure with a single command. Instead of clicking through the AWS console to create a server, you write a configuration file that describes the server, and Terraform builds it for you. Change the file, run the command again, and Terraform updates the server to match.
That is Infrastructure as Code (IaC), and Terraform is the most widely used tool for doing it. If you are learning DevOps or cloud engineering, Terraform is one of the first tools you should pick up after understanding a cloud platform.
This guide covers everything you need to go from zero to deploying your first infrastructure with Terraform.
Why Terraform matters
Before Terraform, engineers created cloud resources manually. Click through the AWS console, configure a server, set up a database, create a load balancer. The problem: there is no record of what you did. No way to reproduce it. No way to review changes before making them. No way to roll back if something goes wrong.
Terraform solves all of these problems:
- Reproducibility -- Your infrastructure is defined in files. Run the same files in a new account and you get identical infrastructure. Every time.
- Version control -- Infrastructure files live in Git, just like application code. You can see who changed what, when, and why.
- Review before apply -- Terraform shows you exactly what it will change before it changes anything. No surprises.
- Collaboration -- Teams can review infrastructure changes through pull requests, just like code reviews.
- Multi-cloud -- Terraform works with AWS, Azure, GCP, Cloudflare, Datadog, and hundreds of other providers. One tool, one language, every platform.
This is why Terraform appears in the majority of DevOps job descriptions. It is the industry standard for infrastructure management, and it is a core part of the DevOps tools landscape.
Core concepts
Terraform has six concepts you need to understand before writing your first file. None of them are complicated.
Providers
A provider is a plugin that tells Terraform how to talk to a specific platform. The AWS provider knows how to create AWS resources. The Azure provider knows Azure resources. You declare which providers you need, and Terraform downloads them automatically.
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}
```
This tells Terraform: "I want to create AWS resources in the eu-west-1 region."
Resources
A resource is a single piece of infrastructure: a server, a database, a DNS record, a security group. Resources are the building blocks of everything you create.
```hcl
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  tags = {
    Name = "my-first-server"
  }
}
```
The syntax is `resource "TYPE" "NAME"`. The type (`aws_instance`) tells Terraform what to create. The name (`web_server`) is your internal label for referencing it elsewhere in your code.
Variables
Variables make your configuration flexible. Instead of hardcoding values, you define variables that can change between environments.
```hcl
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

resource "aws_instance" "web_server" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
}
```
Now you can deploy a t3.micro in development and a t3.large in production using the same code with different variable values.
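One common way to supply per-environment values is a `.tfvars` file; the filename and value here are just examples:

```hcl
# prod.tfvars -- variable values for the production environment
instance_type = "t3.large"
```

Pass it with `terraform apply -var-file="prod.tfvars"`, or override a single value inline with `-var="instance_type=t3.large"`.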
Outputs
Outputs display useful information after Terraform creates your infrastructure. Need the public IP of a server? Define an output.
```hcl
output "server_public_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.web_server.public_ip
}
```
After running `terraform apply`, Terraform prints this value. You can also reference outputs across different Terraform projects.
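Cross-project references work through the `terraform_remote_state` data source. A sketch, assuming a separate networking project stores its state in S3 and exposes a `subnet_id` output (the bucket name and output name are illustrative):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state" # assumed bucket name
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"
  # Read the other project's output
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```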
State
Terraform keeps a record of every resource it has created in a state file (`terraform.tfstate`). This file maps your configuration to real-world resources. When you change your configuration, Terraform compares the file to the state to determine what needs to change.
Critical rule: Never edit the state file manually. Never commit it to Git (it can contain secrets). In team environments, store it in a remote backend like an S3 bucket.
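A minimal `.gitignore` for a Terraform project keeps state and local caches out of version control:

```
# Local state files (may contain secrets)
terraform.tfstate
terraform.tfstate.backup

# Provider plugins and module cache
.terraform/
```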
Data sources
Data sources let you read information about existing resources that Terraform did not create. For example, looking up the latest Ubuntu AMI:
```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web_server" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"
}
```
Now your configuration always uses the latest Ubuntu image without you manually updating AMI IDs.
Your first Terraform project: an AWS EC2 instance
Let us walk through creating a real piece of infrastructure. You will deploy an EC2 instance (a virtual server) on AWS.
Prerequisites
- Install Terraform -- Download from terraform.io or use your package manager (`brew install terraform` on macOS).
- AWS account -- The free tier works. A `t3.micro` instance is covered by the 12-month free tier, so you should not incur charges.
- AWS CLI configured -- Run `aws configure` with your access key and secret key.
Step 1: Create a project directory
```shell
mkdir terraform-first-project
cd terraform-first-project
```
Step 2: Write the configuration
Create a file called `main.tf`:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.7.0"
}

provider "aws" {
  region = "eu-west-1"
}

# Look up the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Create a security group allowing SSH access
resource "aws_security_group" "web_sg" {
  name        = "web-server-sg"
  description = "Allow SSH inbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Create the EC2 instance
resource "aws_instance" "web_server" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name        = "terraform-first-server"
    Environment = "learning"
  }
}
```
Create a file called `variables.tf`:
```hcl
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}
```
Create a file called `outputs.tf`:
```hcl
output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.web_server.id
}

output "instance_public_ip" {
  description = "Public IP address of the EC2 instance"
  value       = aws_instance.web_server.public_ip
}
```
Step 3: Initialise Terraform
```shell
terraform init
```
This downloads the AWS provider plugin. You will see output confirming the provider was installed. Run this once per project (or when you add new providers).
Step 4: Preview the changes
```shell
terraform plan
```
Terraform shows exactly what it will create: one security group, one EC2 instance. Review this carefully. Nothing has been created yet. This is your chance to catch mistakes.
Step 5: Apply the changes
```shell
terraform apply
```
Terraform shows the plan again and asks for confirmation. Type `yes`. It creates the resources and displays your outputs -- including the public IP of your new server.
That is it. You have infrastructure defined in code, versioned, reviewable, and reproducible.
Step 6: Destroy when finished
```shell
terraform destroy
```
This removes everything Terraform created. No orphaned resources. No surprise charges. Clean.
The four essential commands
Terraform has dozens of commands, but you will use these four daily:
| Command | What it does |
|---|---|
| `terraform init` | Downloads providers and initialises the project |
| `terraform plan` | Shows what changes Terraform will make (dry run) |
| `terraform apply` | Creates or updates infrastructure to match your code |
| `terraform destroy` | Removes all infrastructure Terraform manages |
Other useful commands:
- `terraform fmt` -- Formats your files to a consistent style
- `terraform validate` -- Checks your configuration for syntax errors
- `terraform output` -- Displays output values without applying changes
- `terraform state list` -- Shows all resources Terraform is tracking
Best practices for beginners
1. Use remote state from the start
Local state files work for learning, but in any team environment, use a remote backend:
```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "project/terraform.tfstate"
    region = "eu-west-1"
  }
}
```
Remote state prevents conflicts when multiple people work on the same infrastructure.
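The S3 backend can also lock the state during a run so two people cannot apply at the same time. One common setup adds a DynamoDB table for locking (the bucket and table names here are examples):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "project/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks" # enables state locking
  }
}
```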
2. Never hardcode secrets
Do not put AWS keys, passwords, or tokens in your Terraform files. Use environment variables or a secrets manager:
```shell
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
```
Terraform reads these automatically.
3. Use meaningful resource names
```hcl
# Bad
resource "aws_instance" "this" { ... }

# Good
resource "aws_instance" "api_server" { ... }
```
You will reference these names throughout your configuration. Make them descriptive.
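The internal name is what every reference uses, so a descriptive one reads naturally elsewhere in the configuration -- for example, in an output:

```hcl
output "api_server_ip" {
  value = aws_instance.api_server.public_ip
}
```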
4. Separate environments with workspaces or directories
Do not use the same configuration for development and production without separation. The simplest approach for beginners is separate directories:
```
infrastructure/
  dev/
    main.tf
    variables.tf
  staging/
    main.tf
    variables.tf
  prod/
    main.tf
    variables.tf
```
5. Run plan before apply -- always
Even when you are confident about a change, always review the plan. Terraform will tell you if it is going to destroy something you did not expect.
Introduction to modules
Once you understand the basics, modules are the next concept to learn. A module is a reusable package of Terraform configuration.
Instead of copying your EC2 configuration every time you need a server, you create a module:
```
modules/
  web-server/
    main.tf
    variables.tf
    outputs.tf
```
Then use it:
```hcl
module "api_server" {
  source        = "./modules/web-server"
  instance_type = "t3.small"
  name          = "api-server"
}

module "worker_server" {
  source        = "./modules/web-server"
  instance_type = "t3.medium"
  name          = "worker-server"
}
```
Two servers, defined once, reused with different configurations. This is how production Terraform code is organised.
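Inside the module, those inputs are declared as ordinary variables. A sketch of what `modules/web-server/variables.tf` might contain (the exact variables are whatever the module author defines):

```hcl
variable "instance_type" {
  description = "EC2 instance type for this server"
  type        = string
  default     = "t3.micro"
}

variable "name" {
  description = "Name tag applied to the instance"
  type        = string
}
```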
The Terraform Registry also hosts thousands of pre-built modules from the community. Instead of writing VPC configuration from scratch, you can use the widely adopted `terraform-aws-modules/vpc/aws` module and configure it with variables.
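Using a registry module looks the same as using a local one, just with a registry `source` and a version pin. A sketch with the community VPC module (names and CIDR ranges are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "learning-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]
}
```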
Where Terraform fits in the DevOps toolchain
Terraform does not work in isolation. It is part of a larger workflow:
- Developer pushes code -- Application changes trigger a CI/CD pipeline
- Terraform provisions infrastructure -- Servers, databases, networking
- Docker packages the application -- Containers ensure consistency
- Kubernetes orchestrates containers -- Manages deployment and scaling
- Monitoring tracks everything -- Prometheus, Grafana, alerting
Understanding how these tools connect is covered in our complete DevOps tools guide. Terraform handles the infrastructure layer. Docker and Kubernetes handle the application layer. Together, they form the modern deployment pipeline.
If you are charting your path into cloud engineering, Terraform is a non-negotiable skill. See the complete cloud engineer roadmap for the full picture.
What to build next
After your first EC2 instance, try these progressively harder projects:
- VPC with public and private subnets -- Networking fundamentals in code
- Auto-scaling group behind a load balancer -- Production-style compute
- RDS database with security groups -- Managed databases in code
- S3 bucket with lifecycle policies -- Storage management
- Full three-tier application -- VPC + ALB + EC2 + RDS, all in Terraform
Each project builds on the previous one. By the time you complete all five, you have a portfolio that demonstrates real-world Terraform competency.
The best way to learn Terraform is to build with it. Pick a project, write the configuration, run `terraform plan`, review, apply, and iterate. Break things. Fix them. That cycle is how the knowledge becomes permanent.
Ola
Founder, CloudPros
Building the most hands-on DevOps bootcamp for the AI era. 16 weeks of real infrastructure, real projects, real career outcomes.
