Provision real cloud resources with Terraform — a VPC, an S3 bucket, and an EC2 instance — using the standard init/plan/apply workflow.
By the end of this post you'll have used Terraform to provision a small AWS infrastructure (VPC + S3 bucket + EC2 instance), modify it, and tear it down — all from text files in a Git repo. About 30 minutes. Stays in the AWS free tier.
You'll need: an AWS account, the AWS CLI configured (aws configure), and Terraform installed (brew install hashicorp/tap/terraform or download from terraform.io).
Without IaC, you click around in the AWS console to create a VPC, then an S3 bucket, then a security group. Six months later, no one remembers exactly what was clicked. Reproducing the setup in another environment is a guessing game.
With IaC, you write text files describing what you want. A tool reads those files, talks to the cloud provider's API, and creates exactly what's described. Six months later, the files are still the truth. You can version-control them, code-review them, replicate them per environment, destroy and recreate them.
Terraform is the most widely adopted IaC tool. It's declarative — you describe the desired end state, not the steps to get there — and supports virtually every cloud and SaaS that has an API (AWS, GCP, Azure, GitHub, Cloudflare, Datadog, etc.).
You'll use the same four commands forever:
terraform init — download the providers your config uses (e.g. the AWS provider plugin).
terraform plan — show what would change. Read this carefully.
terraform apply — actually make the changes.
terraform destroy — tear it all down.
Almost every Terraform interaction is one of those four. We'll use all of them.
mkdir tf-tutorial && cd tf-tutorial
Create main.tf:
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# Generate a random suffix so bucket names are unique
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "tutorial" {
  bucket = "tf-tutorial-${random_id.suffix.hex}"

  tags = {
    Project   = "tf-tutorial"
    ManagedBy = "Terraform"
  }
}
We'll need the random provider too — add it to the required_providers block:
random = {
  source  = "hashicorp/random"
  version = "~> 3.0"
}
So the complete terraform block becomes:
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}
terraform init
You should see:
Initializing provider plugins...
- Installing hashicorp/aws v5.x.x...
- Installing hashicorp/random v3.x.x...
Terraform has been successfully initialized!
This downloaded the provider plugins into .terraform/. Don't commit that directory — add .terraform/ to .gitignore.
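While you're at it, a minimal .gitignore for this project (the local state file, covered later, shouldn't be committed either):

```
# Provider plugins downloaded by terraform init
.terraform/

# Local state and its backup; may contain resource details and secrets
terraform.tfstate
terraform.tfstate.backup
```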
terraform plan
You should see:
Terraform will perform the following actions:

  # aws_s3_bucket.tutorial will be created
  + resource "aws_s3_bucket" "tutorial" {
      + bucket = (known after apply)
      + tags   = { ... }
      ...
    }

  # random_id.suffix will be created
  + resource "random_id" "suffix" {
      ...
    }

Plan: 2 to add, 0 to change, 0 to destroy.
The + means create. Always read the plan before applying. In real projects, this is what reviewers look at — it shows exactly what's going to change.
terraform apply
You'll see the plan again, then a prompt:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Type yes. After ~10 seconds:
random_id.suffix: Creating...
random_id.suffix: Creation complete after 0s
aws_s3_bucket.tutorial: Creating...
aws_s3_bucket.tutorial: Creation complete after 3s
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
You now have a real S3 bucket. Verify:
aws s3 ls | grep tf-tutorial
You should see the bucket name.
Edit main.tf and add at the bottom:
resource "aws_s3_bucket_versioning" "tutorial" {
  bucket = aws_s3_bucket.tutorial.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "tutorial" {
  bucket = aws_s3_bucket.tutorial.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
We're enabling versioning and server-side encryption on the bucket. The aws_s3_bucket.tutorial.id reference is how Terraform knows these resources depend on the bucket — they'll be created after it.
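Most dependencies are implicit like this, inferred from references. When a resource must wait for another it doesn't reference, Terraform also accepts an explicit depends_on. A sketch, using an aws_s3_object resource that is illustrative and not part of this tutorial's config:

```hcl
resource "aws_s3_object" "readme" {
  bucket  = aws_s3_bucket.tutorial.id # implicit dependency on the bucket
  key     = "README.txt"
  content = "Managed by Terraform"

  # Explicit dependency: wait for versioning to be enabled first,
  # even though nothing here references that resource.
  depends_on = [aws_s3_bucket_versioning.tutorial]
}
```

Prefer implicit references where possible; reach for depends_on only when there's a real ordering requirement Terraform can't see.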
Plan, then apply:
terraform plan
You'll see Plan: 2 to add, 0 to change, 0 to destroy. The existing bucket isn't recreated; only the two new resources are added.
terraform apply
Confirm:
aws s3api get-bucket-versioning --bucket tf-tutorial-<your-suffix>
aws s3api get-bucket-encryption --bucket tf-tutorial-<your-suffix>
The first should show "Status": "Enabled"; the second should show the AES256 rule.
You probably want to know the bucket name without grepping AWS. Add to main.tf:
output "bucket_name" {
  description = "The name of the created S3 bucket"
  value       = aws_s3_bucket.tutorial.bucket
}
Run terraform apply once more (no resource changes, but it records the output in state), then:
terraform output bucket_name
You should see your bucket name. Outputs are useful for: feeding values to scripts, displaying connection strings, sharing data between Terraform projects.
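As an example of that last use, another Terraform project can read this project's outputs through the terraform_remote_state data source. A sketch, assuming this project's state were stored in a hypothetical S3 backend (in a script, terraform output -raw bucket_name prints the bare value instead):

```hcl
data "terraform_remote_state" "storage" {
  backend = "s3"

  config = {
    bucket = "my-tf-state" # hypothetical state bucket
    key    = "tf-tutorial/terraform.tfstate"
    region = "us-east-1"
  }
}

# Use the other project's bucket_name output
resource "aws_s3_object" "example" {
  bucket  = data.terraform_remote_state.storage.outputs.bucket_name
  key     = "hello.txt"
  content = "written from another project"
}
```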
terraform destroy
You'll see a plan with - (delete) for each resource. Type yes to confirm. Everything Terraform created will be removed:
Destroy complete! Resources: 4 destroyed.
That's the part you'd never get from clicking around the console — clean teardown of everything you created, in one command.
.
├── main.tf
├── .terraform/ # provider plugins (don't commit)
├── .terraform.lock.hcl # provider version lockfile (commit this)
└── terraform.tfstate # state (don't commit; use remote backend)
The terraform.tfstate file is Terraform's record of what it created. For real projects, you store this in a remote backend (S3, Terraform Cloud, etc.) so the team shares one source of truth — never commit the local file.
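A remote backend is configured in the terraform block. A sketch for the S3 backend, assuming a pre-existing state bucket and DynamoDB lock table (both names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-company-tf-state" # pre-existing bucket
    key            = "tf-tutorial/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "tf-locks" # optional: state locking
  }
}
```

Changing the backend requires re-running terraform init, which offers to migrate existing state.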
The .terraform.lock.hcl pins exact provider versions. Commit this so everyone uses the same versions.
Editing infrastructure outside Terraform. Once Terraform manages a resource, changes made by hand in the AWS console will either be reverted on the next apply or cause errors. Stay in Terraform.
Storing tfstate locally for shared projects. State can contain secrets (DB passwords, API keys), and a local file can't be shared safely. Always use a remote backend in real projects.
Skipping terraform plan. Always plan before apply. The plan output is your last chance to catch mistakes (resources being destroyed when you didn't expect, surprising replacements).
Hardcoding values. Use variables for things that change between environments (region, instance size, names), e.g. var.region, var.environment. Today you might think "we're only ever in us-east-1"; tomorrow you're not.
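A sketch of what that looks like, with hypothetical variable names and defaults:

```hcl
variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

provider "aws" {
  region = var.region
}
```

Override per environment with terraform apply -var="region=eu-west-1", or keep per-environment values in .tfvars files.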
Loose IAM permissions. When Terraform creates IAM resources for itself, give it only what it needs. The temptation is to grant * permissions; resist it.
You've got the basic workflow. Terraform's surface area looks big, but the day-to-day is small: edit, plan, apply. Get comfortable with that loop and 90% of your IaC work feels routine.