Install Ansible, write your first playbook, and configure a remote server (nginx + a deploy user) without touching it manually. The basics that scale up.
By the end of this post you'll have written an Ansible playbook that configures a remote server — installs nginx, sets up a deploy user, and lays down a config file — all from your laptop, without SSHing in once. About 30 minutes.
You'll need: a Linux/macOS terminal, Python 3.9+, and SSH access to a remote Linux server (a $5/month VPS works perfectly).
Ansible is a tool for running commands on remote servers. You write a YAML file describing what you want a server to look like ("nginx installed, user deploy exists, this config file in place"), and Ansible SSHes in and makes it so.
The big idea: idempotency. Running the same playbook twice gives the same result as running it once. If nginx is already installed, Ansible notices and skips. If the config file already matches, no change. This makes Ansible safe to re-run — perfect for "always converge to this state" workflows.
No agent runs on the remote machine. Ansible connects over SSH and executes small Python modules on the target. As long as the target has SSH and Python (which virtually every Linux distribution does), Ansible can manage it.
# macOS
brew install ansible
# Ubuntu/Debian
sudo apt install ansible
# Or via pip
pip install ansible
Verify:
ansible --version
You should see something like ansible [core 2.x.x].
The inventory tells Ansible what hosts to manage. Create a project directory:
mkdir ansible-tutorial && cd ansible-tutorial
Create hosts.ini:
[web]
my-server.example.com ansible_user=ubuntu
[web:vars]
ansible_python_interpreter=/usr/bin/python3
Replace my-server.example.com with your actual server hostname (or IP), and ubuntu with the SSH user you log in as. Ansible will SSH to that user.
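Inventories scale past a single host: you can list several servers, group them, and attach per-host connection settings. A sketch (the hostnames, groups, and key path are placeholders, not part of this tutorial's setup):

```ini
[web]
web1.example.com ansible_user=ubuntu
web2.example.com ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/web2_key

[db]
db1.example.com ansible_user=admin

[web:vars]
ansible_python_interpreter=/usr/bin/python3
```

Ad-hoc commands and playbooks can then target `web`, `db`, or `all`.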
Test connectivity:
ansible -i hosts.ini web -m ping
You should see:
my-server.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
If that fails, fix SSH first — Ansible can't help if it can't SSH. Common issues: host not in ~/.ssh/known_hosts, wrong SSH key, wrong username.
Before writing playbooks, you can run one-off commands across your inventory:
ansible -i hosts.ini web -m shell -a "uptime"
You should see the server's uptime line. Ansible SSHed in, ran uptime, captured the output. This is the most basic Ansible interaction — useful for "what's the disk usage on all my servers?" type queries.
Create setup.yml:
---
- name: Configure web server
  hosts: web
  become: true

  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is started and enabled
      systemd:
        name: nginx
        state: started
        enabled: yes

    - name: Create deploy user
      user:
        name: deploy
        shell: /bin/bash
        groups: www-data
        append: yes

    - name: Lay down a custom index.html
      copy:
        content: |
          <h1>Hello from Ansible</h1>
          <p>This server was configured automatically.</p>
        dest: /var/www/html/index.html
        owner: www-data
        group: www-data
        mode: '0644'
Walk through what each piece does:
- hosts: web — apply this play to the web group from the inventory.
- become: true — run with sudo. Most config tasks need root.
- tasks: — the list of steps. Each task uses a module (the verb after the task name).
- apt: update_cache=yes — run apt-get update. The cache_valid_time: 3600 means skip if it ran in the last hour (idempotency in action).
- apt: name=nginx state=present — install nginx if it isn't already installed.
- systemd: name=nginx state=started enabled=yes — make sure nginx is running and starts at boot.
- user: name=deploy ... — create the deploy user. If it already exists, no change.
- copy: content=... — write a file. If the file already exists with the same content and permissions, no change.

Each task has a clear name. When something fails, the output shows the named task, so debugging is faster.
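One feature worth knowing early: handlers. A handler runs only when a task that notifies it reports "changed" — the standard pattern for restarting a service only when its config actually changed. A minimal sketch (the `mysite.conf` source file is hypothetical):

```yaml
- name: Configure web server
  hosts: web
  become: true

  tasks:
    - name: Deploy nginx site config
      copy:
        src: mysite.conf                      # hypothetical local file
        dest: /etc/nginx/conf.d/mysite.conf
      notify: Restart nginx                   # fires only if this task changed something

  handlers:
    - name: Restart nginx
      systemd:
        name: nginx
        state: restarted
```

If the config file is already in place and identical, the copy task reports "ok" and the handler never runs.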
Always run with --check first — that's a dry-run mode that shows what would change without making changes:
ansible-playbook -i hosts.ini setup.yml --check
You should see something like:
PLAY [Configure web server] ******************
TASK [Update apt cache] **********************
ok: [my-server.example.com]
TASK [Install nginx] **************************
changed: [my-server.example.com]
TASK [Ensure nginx is started] ****************
changed: [my-server.example.com]
...
PLAY RECAP ************************************
my-server.example.com : ok=6 changed=4 unreachable=0 failed=0
changed=4 means four tasks would make a change. Read the list and confirm it's what you want.
Now run for real:
ansible-playbook -i hosts.ini setup.yml
After it finishes, hit your server in a browser:
curl http://my-server.example.com
You should see:
<h1>Hello from Ansible</h1>
<p>This server was configured automatically.</p>
Run the playbook again:
ansible-playbook -i hosts.ini setup.yml
This time the recap should say changed=0 — nothing changed because nothing needs to. nginx is already installed, the user already exists, the file already has the right content.
That's the magic of idempotency. You can run this playbook every time you suspect drift, and it'll fix only what's wrong.
Change the index.html content in setup.yml:
      content: |
        <h1>Hello from Ansible v2</h1>
        <p>This server was reconfigured.</p>
Run again:
ansible-playbook -i hosts.ini setup.yml
Recap shows changed=1 — only the file changed. curl to verify.
Forgetting become: true. Most operational tasks need root. Without become, the play runs as the SSH user and fails on anything privileged. Set become: true once at the play level rather than sprinkling it per-task.
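Play-level become in practice — and, for the occasional step that needs a different identity, per-task become_user (the postgres example is purely illustrative):

```yaml
- hosts: web
  become: true                 # everything in this play runs via sudo

  tasks:
    - name: Install packages (runs as root)
      apt:
        name: nginx
        state: present

    - name: Run one step as another user
      become_user: postgres    # still escalated, but to postgres instead of root
      command: whoami
```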
Using shell: or command: for everything. Those modules aren't idempotent — they run every time, reporting "changed" even if nothing actually changed. Prefer module-specific verbs (apt, systemd, user, copy, template) which check current state.
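When you genuinely need command, you can still make it behave idempotently with the creates argument: the task is skipped if the named file already exists. A sketch (the installer path and flag file are hypothetical):

```yaml
- name: Run one-time installer
  command: /opt/myapp/install.sh
  args:
    creates: /opt/myapp/installed.flag   # skipped if this file already exists
```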
Skipping the dry-run. --check mode catches a lot of mistakes. The few seconds it takes is worth it.
Hardcoding values in tasks. Server names, paths, ports — put these in variables so you can reuse playbooks across environments. vars: block at the play level, or per-host vars in inventory.
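A minimal sketch of a play-level vars block, referenced with Jinja2 `{{ }}` syntax (the variable names are illustrative):

```yaml
- hosts: web
  become: true
  vars:
    app_user: deploy
    web_root: /var/www/html

  tasks:
    - name: Create app user
      user:
        name: "{{ app_user }}"
        shell: /bin/bash
```

Swap the values in one place and the same playbook serves staging and production.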
Long playbooks instead of roles. Once a playbook hits ~50 tasks, refactor into roles (organized directories of tasks/templates/vars). Easier to read, easier to reuse.
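A role is just a conventional directory layout (tasks/, templates/, vars/, handlers/); a play pulls roles in by name. A sketch (the role names are illustrative):

```yaml
# site.yml
- hosts: web
  become: true
  roles:
    - nginx          # Ansible loads roles/nginx/tasks/main.yml, etc.
    - deploy_user
```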
You've got the basics. The next levels: handlers that restart services on change, templates (the template module with Jinja2), roles for structure, and Ansible Vault for secrets.
Ansible's surface area is big but the daily-use surface is small: write a YAML file, run ansible-playbook, get expected state. Once you've done it once, every subsequent server feels like the first one.