

Running ansible-playbook against three servers is simple enough. Running it against 300, spanning staging, production, and DR, with different hardening requirements per tier, an audit trail for every change, and a team of fifteen who all have slightly different ideas about "configured correctly": that is where Ansible's design decisions start to matter.
Playbooks are Ansible's unit of reuse. They're YAML files, but the interesting decisions aren't in the syntax. They're in how you structure plays, where you put variables, when to use handlers instead of tasks, and how to keep a 400-line playbook readable after six months of additions from five different contributors.
What follows is a complete walkthrough: playbook anatomy from first principles, a production-grade CIS Linux hardening example with real code, and the patterns that cause silent failures in playbooks that look correct on paper. All code examples are tested against ansible-core 2.20.5 on Ubuntu 22.04 LTS.
At a glance: An Ansible playbook is a YAML file containing one or more plays, each mapping a group of hosts to a set of tasks. Playbooks are Ansible's primary automation primitive for configuration management, application deployment, and orchestration. Current stable release: ansible-core 2.20.5 (released April 21, 2026). env zero runs your playbooks through a governed workflow: RBAC, scheduled drift detection in --check mode, and a full audit trail on top of your existing playbook logic.
What an Ansible playbook actually is
The term "playbook" gets conflated with "Ansible automation" in general. It helps to be precise about the hierarchy.
A task calls a single module. A play maps a set of hosts to a sequence of tasks. A playbook is a YAML file containing one or more plays, executed top to bottom. Roles, handlers, variables, and templates are tools that make plays maintainable; they're components of a playbook, not separate things.
The distinction between a playbook and an ad-hoc command is intent. Ad-hoc commands are for one-off operations: check disk space, restart a misbehaving service, test SSH connectivity. Playbooks are for anything you'll run more than once, version-control, and review in a pull request.
For a broader introduction to Ansible (architecture, inventory structure, how agentless SSH-based push works), see The Essential Ansible Tutorial. This post picks up where that one ends.
If you're already running playbooks and want to add governance around them, env zero connects directly to your existing YAML without requiring changes to it.
The anatomy of a playbook
A minimal working playbook looks like this:
---
- name: Configure web servers
  hosts: webservers
  become: true
  vars:
    nginx_port: 80
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
Every element here has a reason.
The --- at the top marks the start of a YAML document. Technically optional, but a strong convention that signals to both humans and tools that this is a playbook. The name: on the play is also optional but worth the keystrokes: when a play has no name, ansible-playbook output shows a blank line, and at 2am reading through 300 lines of task output, blank play headers cost time.
Plays, hosts, and execution order
hosts: accepts an inventory group name, a hostname, a comma-separated list, or a pattern. all targets every host in the inventory. webservers:!db targets the webservers group, excluding any host that also belongs to the db group.
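A play header using a pattern looks the same as one using a plain group name. A sketch, assuming the webservers and db groups from this post's inventory (the update task is illustrative):

```yaml
# Target hosts in webservers that are NOT also in db
- name: Patch web-only hosts
  hosts: webservers:!db
  become: true
  tasks:
    - name: Apply pending security updates
      ansible.builtin.apt:
        upgrade: safe
        update_cache: true
```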
become: true at the play level applies privilege escalation to every task in that play. Placing it on individual tasks is cleaner when only a few need root, limiting the blast radius if a task goes wrong. At the play level, it's an all-or-nothing choice.
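Task-level escalation looks like this (a sketch: only the install task runs as root, while the local check runs as the connecting user):

```yaml
- name: Install nginx (requires root)
  ansible.builtin.apt:
    name: nginx
    state: present
  become: true

- name: Check nginx responds locally (no escalation needed)
  ansible.builtin.uri:
    url: http://127.0.0.1:80
    status_code: 200
```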
Plays within a playbook run in order, top to bottom. Tasks within a play also run in order. There's no parallelism within a single host's task sequence, but Ansible runs tasks across multiple hosts concurrently (controlled by the forks setting in ansible.cfg, which defaults to 5).
Tasks and FQCN modules
Since ansible-core 2.10, the official recommendation is to reference modules by their fully qualified collection name (FQCN): ansible.builtin.apt instead of apt, ansible.builtin.service instead of service. The short names still work, but FQCN removes ambiguity when multiple collections provide modules with the same short name, a real problem once you start mixing ansible-core builtins with community collections.
The ansible.builtin collection covers the essentials. Package management: apt, dnf, yum. File operations: copy, template, lineinfile, replace. Service management: service. Identity: user, group. One-off execution: command and shell (use sparingly, as they break idempotency). Readiness checks: wait_for, uri.
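As one example from that list, wait_for blocks until a port is reachable, which is useful between a service restart and the tasks that depend on it. A sketch:

```yaml
- name: Wait until nginx accepts connections before continuing
  ansible.builtin.wait_for:
    host: 127.0.0.1
    port: 80
    timeout: 30
```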
When you need platform-specific abstractions beyond the builtins, like community.general.ufw for Ubuntu firewall management, install the collection first:
ansible-galaxy collection install community.general
Variables and facts
Ansible's variable precedence order has 22 levels. The practical rule: use group_vars/ for environment-specific values (anything that changes between dev and prod), host_vars/ for per-host exceptions, and play-level vars: for values that belong with the playbook logic itself.
Facts (values gathered automatically about the target host) become available as variables once gather_facts: true runs (the default). ansible_distribution, ansible_os_family, and ansible_distribution_version are the ones you'll use most for conditionals that handle Ubuntu versus RHEL differences.
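A quick way to see what those facts contain on a given host is a debug task (sketch):

```yaml
- name: Show the facts most often used in conditionals
  ansible.builtin.debug:
    msg: "{{ ansible_distribution }} {{ ansible_distribution_version }} ({{ ansible_os_family }})"
```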
Related reading: Ansible Variables: A Practical Guide with Examples. It covers the full 22-level precedence order and common traps with variable scope and registration.
Handlers
A handler is a task that runs only when notified, and only once per play, at the end of the play, regardless of how many tasks sent the notification. The canonical use is service reloads.
tasks:
  - name: Set SSH MaxAuthTries
    ansible.builtin.lineinfile:
      path: /etc/ssh/sshd_config
      regexp: '^#?MaxAuthTries'
      line: 'MaxAuthTries 4'
    notify: Reload sshd

  - name: Disable root login over SSH
    ansible.builtin.lineinfile:
      path: /etc/ssh/sshd_config
      regexp: '^#?PermitRootLogin'
      line: 'PermitRootLogin no'
    notify: Reload sshd

handlers:
  - name: Reload sshd
    ansible.builtin.service:
      name: sshd
      state: reloaded
Both tasks notify the same handler. Ansible fires Reload sshd exactly once, at the end of the play: not twice, not after each task. This is intentional: sshd reloads once, atomically, after all config changes are applied.
One gotcha: if the play fails before all tasks complete, pending handlers don't run. The host ends up with config changes on disk but a service that hasn't reloaded. This is usually fine: fix the failure, re-run, and the handler fires on the next successful run. For cases where that's unacceptable, --force-handlers tells Ansible to run notified handlers even when the play fails. env zero logs both the failure and the subsequent successful run together in the environment's audit trail, so you can trace exactly which task failed and what the handler state was without digging through terminal history.
Conditionals
when: accepts any Jinja2 expression. OS branching is the most common use:
- name: Install auditd (Debian-based systems)
  ansible.builtin.apt:
    name: auditd
    state: present
  when: ansible_os_family == 'Debian'

- name: Install audit (RHEL-based systems)
  ansible.builtin.dnf:
    name: audit
    state: present
  when: ansible_os_family == 'RedHat'
when: evaluates after facts are gathered. If you've disabled gather_facts, any condition referencing facts fails with an undefined-variable error, or evaluates against stale cached values if fact caching is enabled. Keep gather_facts: true (the default) unless you have a specific reason to skip it and have audited every when: in the play.
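If gathering the full fact set is too slow for a large fleet, a middle ground is to collect only the minimal subset your conditionals need. A sketch using the setup module's gather_subset parameter:

```yaml
- name: Gather only minimal facts (distribution, os_family, and similar)
  hosts: all
  gather_facts: false
  tasks:
    - name: Collect a reduced fact subset explicitly
      ansible.builtin.setup:
        gather_subset:
          - '!all'
          - min
```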
Loops
loop: iterates over a list; the current item is available as {{ item }}. Before Ansible 2.5, the syntax was with_items:; loop: is the current standard and handles more data types cleanly.
- name: Install required packages
  ansible.builtin.apt:
    name: "{{ item }}"
    state: present
  loop:
    - ufw
    - auditd
    - libpam-pwquality
For structured data, loop over a list of dictionaries:
- name: Set SSH hardening options
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  loop:
    - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
    - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
    - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries {{ ssh_max_auth_tries }}' }
  notify: Reload sshd
The {{ ssh_max_auth_tries }} variable comes from group_vars, covered in the example below.
A real-world walkthrough: hardening a Linux fleet to CIS benchmark
CIS (Center for Internet Security) benchmarks define configuration standards for operating systems. They're referenced in SOC 2, ISO 27001, and FedRAMP audit requirements. Running a hardening playbook across a fleet and verifying idempotency is one of the most common Ansible use cases in enterprise, and one of the clearest examples of what Ansible does well.
This walkthrough hardens Ubuntu 22.04 servers to CIS Level 1: SSH configuration, firewall rules, and auditd. The full code is available in the env0-ansible-playbooks-tutorial repository.
Project structure
cis-hardening/
├── inventory.ini
├── group_vars/
│   ├── all.yml
│   ├── prod.yml
│   └── dev.yml
└── harden.yml
No roles in this example; we'll cover when to extract roles in the next section. A flat playbook is the right starting point when the logic fits in a single file and you're not sharing it across multiple projects.
Inventory
[webservers]
web01 ansible_host=10.0.1.10
web02 ansible_host=10.0.1.11
[dbservers]
db01 ansible_host=10.0.1.20
[prod:children]
webservers
dbservers
group_vars: different policy per environment
group_vars/all.yml sets defaults for every host:
# group_vars/all.yml
ssh_max_auth_tries: 4
ssh_login_grace_time: 60
auditd_max_log_file: 8
firewall_allowed_tcp_ports:
  - 22
  - 80
  - 443
group_vars/prod.yml tightens constraints for production:
# group_vars/prod.yml
ssh_max_auth_tries: 3
auditd_max_log_file: 32
Ansible merges these at runtime. Production hosts get ssh_max_auth_tries: 3; everything else gets 4. One variable file, no conditionals needed in the playbook itself.
The hardening playbook
---
- name: CIS Level 1 hardening - Ubuntu 22.04
  hosts: all
  become: true
  gather_facts: true

  handlers:
    - name: Reload sshd
      ansible.builtin.service:
        name: sshd
        state: reloaded

  tasks:
    # SSH hardening
    - name: Set SSH hardening directives
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        validate: /usr/sbin/sshd -t -f %s
      loop:
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries {{ ssh_max_auth_tries }}' }
        - { regexp: '^#?LoginGraceTime', line: 'LoginGraceTime {{ ssh_login_grace_time }}' }
        - { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
        - { regexp: '^#?AllowAgentForwarding', line: 'AllowAgentForwarding no' }
      notify: Reload sshd
      tags: [ssh]

    # Firewall
    - name: Install ufw
      ansible.builtin.apt:
        name: ufw
        state: present
        update_cache: true
      tags: [firewall]

    - name: Allow required TCP ports
      community.general.ufw:
        rule: allow
        port: "{{ item | string }}"
        proto: tcp
      loop: "{{ firewall_allowed_tcp_ports }}"
      tags: [firewall]

    - name: Enable ufw with default deny on incoming traffic
      community.general.ufw:
        state: enabled
        default: deny
        direction: incoming
      tags: [firewall]

    # auditd
    - name: Install auditd
      ansible.builtin.apt:
        name: auditd
        state: present
      tags: [auditd]

    - name: Configure auditd max log file size
      ansible.builtin.lineinfile:
        path: /etc/audit/auditd.conf
        regexp: '^max_log_file\s*='
        line: "max_log_file = {{ auditd_max_log_file }}"
      tags: [auditd]

    - name: Enable and start auditd
      ansible.builtin.service:
        name: auditd
        state: started
        enabled: true
      tags: [auditd]
A few things worth calling out.
The validate: parameter on the lineinfile task runs /usr/sbin/sshd -t -f %s against the file before writing it. If sshd reports a config syntax error, the task fails and the bad config never lands on disk. This is one of the most underused features in Ansible and one of the most valuable for SSH configuration specifically. A bad sshd_config that you can't roll back because you just locked yourself out of the server is a serious incident. The validate parameter prevents it entirely.
The tags on each section ([ssh], [firewall], [auditd]) let you run a specific section during development without re-running the whole playbook. More on this in the next section.
First run vs. tenth run
On the first run, every task that changes state reports changed. sshd reloads. ufw enables. auditd installs. The run takes 30 to 45 seconds per host depending on package cache state.
On the tenth run, every task reports ok. The sshd handler never fires because no config lines changed. The run completes in under 10 seconds. That's idempotency: the playbook describes desired state, and Ansible verifies rather than reapplies.
This is what makes scheduled playbook runs viable at scale. You can run the hardening playbook against your entire fleet every night. If a server drifts from the CIS baseline between runs, the next run catches and corrects it.
Running and controlling playbook execution
The ansible-playbook command
ansible-playbook harden.yml -i inventory.ini
The flags that matter in production:
-i specifies the inventory file. Without it, Ansible falls back to /etc/ansible/hosts, which is never the right choice for project-specific playbooks. Always pass it explicitly.
-u sets the remote username. Usually defined in inventory or ansible.cfg, but useful for one-off overrides without editing files.
-e passes extra variables that override everything else in the precedence chain. ansible-playbook harden.yml -e "ssh_max_auth_tries=2" overrides whatever is in group_vars. Useful for testing; dangerous to rely on in CI pipelines where the variable origin should be traceable.
--vault-password-file points to a file containing the Vault decryption password. In CI, this is typically a path where your CI system injects a secret, not a file checked into the repo.
Tags: --tags and --skip-tags
With the hardening playbook above:
# Run only SSH and auditd tasks
ansible-playbook harden.yml -i inventory.ini --tags "ssh,auditd"
# Run everything except firewall changes
ansible-playbook harden.yml -i inventory.ini --skip-tags firewall
Tags are most useful during development (test only the section you're modifying) and for maintenance (apply a targeted fix without re-running everything). Don't over-tag: if every task gets its own tag, the playbook becomes as hard to navigate as one with no tags. Tag logical sections: ssh, firewall, auditd, packages.
Check mode and diff: the two flags to run before every production change
ansible-playbook harden.yml -i inventory.ini --check --diff
--check is a dry run. Tasks report what they would do without doing it. Not every module supports check mode cleanly (command and shell tasks are skipped in check mode by default, since Ansible can't predict what they would change), but lineinfile, apt, service, and template all handle it correctly.
--diff shows a unified diff for every file that would change. Combined with --check, you get a preview of exactly which lines in which files will be modified before committing.
Running --check --diff before every production apply catches more drift than most teams expect. It's the closest Ansible gets to a plan-before-apply workflow. env zero enforces this automatically: every environment can be configured to run playbooks in --check mode on a schedule, so any host that drifts from the hardening baseline surfaces without anyone having to remember to run the command.
Limiting scope with --limit
# Target one host only
ansible-playbook harden.yml -i inventory.ini --limit web01
# Target a group
ansible-playbook harden.yml -i inventory.ini --limit webservers
--limit overrides the hosts: directive in the play without modifying the playbook itself. It's the escape hatch for testing a change on one host before rolling it to the group.
Organizing playbooks at scale
When to extract a role
A role is a reusable unit of tasks, handlers, templates, files, and variables with a standard directory structure:
roles/
└── ssh_hardening/
    ├── tasks/
    │   └── main.yml
    ├── handlers/
    │   └── main.yml
    ├── defaults/
    │   └── main.yml
    └── templates/
        └── sshd_config.j2
Extract to a role when: the same logic is needed in more than one playbook, the task list grows long enough that a single flat file becomes hard to scan, or you want to publish to Ansible Galaxy for reuse across teams.
Don't extract a role just to feel more organized. A 60-task playbook that belongs to one project doesn't benefit from role extraction; it just adds indirection. The test is simple: can you state in one sentence why the role boundary exists? "It's our SSH hardening logic and we use it in three playbooks" is a good answer. "It felt cleaner" is not.
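Once extracted, the role is invoked from a play rather than run directly; the calling playbook shrinks to a few lines. A sketch, assuming the ssh_hardening role layout above:

```yaml
---
- name: Harden SSH across the fleet
  hosts: all
  become: true
  roles:
    - ssh_hardening
```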
import_tasks vs. include_tasks
This is the gotcha that trips up most engineers when they first split playbooks into multiple files.
import_tasks: is static. Ansible reads the file at parse time, before any tasks run. Tags applied to the import statement propagate into all tasks inside the included file, so --tags ssh on an import correctly applies to every task within it.
include_tasks: is dynamic. Ansible reads the file at runtime, when execution reaches that point. Tags applied to the include_tasks: statement do not propagate into the tasks inside the file. The advantage is that you can use a variable in the filename: include_tasks: "{{ ansible_os_family }}.yml", which is impossible with import_tasks.
Use import_tasks by default. Switch to include_tasks only when you need a dynamic filename or conditional file inclusion at runtime.
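Side by side, the two look like this (a sketch; ssh.yml and the per-OS task files are hypothetical):

```yaml
tasks:
  # Static: parsed up front, so tags propagate into every task in ssh.yml
  - name: Pull in SSH hardening tasks
    ansible.builtin.import_tasks: ssh.yml
    tags: [ssh]

  # Dynamic: filename resolved at runtime, so a variable is allowed
  - name: Pull in OS-specific tasks
    ansible.builtin.include_tasks: "{{ ansible_os_family }}.yml"
```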
group_vars and host_vars precedence
Variables in group_vars/all.yml apply to every host. Variables in group_vars/<groupname>.yml apply to hosts in that group. Variables in host_vars/<hostname>.yml apply to one host. Host vars beat group vars; more specific groups beat less specific groups.
In practice: set broad defaults in all.yml, tighten them in environment-specific group files (prod.yml, staging.yml), and override for one-off exceptions in host_vars/. Most of the time, group_vars is enough. host_vars is for genuine exceptions, not for organization. env zero's environment model maps to this structure directly: each env zero environment corresponds to an inventory group, with its own variable overrides and RBAC boundaries.
Related reading: Ansible vs. Terraform: When to Choose One or Use Both Together. The two tools overlap in surprising ways and this post covers how teams typically use them together.
Debugging and troubleshooting Ansible playbooks
Verbose modes
ansible-playbook harden.yml -i inventory.ini -v # Task results
ansible-playbook harden.yml -i inventory.ini -vv # Input and output data
ansible-playbook harden.yml -i inventory.ini -vvv # SSH connection details
ansible-playbook harden.yml -i inventory.ini -vvvv # Ansible internals
Start with -v for most failures: it shows the full error message and the return value from the module. Jump to -vvv when the failure looks like a connection issue. -vvvv is for debugging the SSH or WinRM layer itself.
Common failures
Jinja2 undefined variable. "{{ some_var }}" evaluates at runtime. By default Ansible raises an undefined-variable error, but only when execution reaches the task that references the variable, so the failure can surface deep into a long run, and only on the hosts where the variable is missing. Keep error_on_undefined_vars at its default of true in ansible.cfg rather than relaxing it, and use the default() filter to make fallbacks explicit: "{{ ssh_max_auth_tries | default(4) }}".
Handler not running when expected. A handler fires only if the task that notified it reported changed. If the task reports ok (the file already contains the target value), the notify is skipped. During development, if you're expecting a handler to fire and it isn't, manually change the target state before re-running. Note that --force-handlers only covers the separate case where a play failed after handlers were notified; it does not fire handlers for tasks that reported ok.
False positives from command and shell tasks. command: and shell: always report changed, even when they do nothing. This pollutes your change tracking and breaks idempotency checks. Replace these with purpose-built modules wherever possible. When you genuinely must use command:, add changed_when: false or a proper condition: changed_when: result.rc == 0 and 'already installed' not in result.stdout.
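Putting that together, a command task that behaves honestly in change tracking registers its result and derives changed from the output instead of defaulting to true. A sketch (the script path and output string are hypothetical):

```yaml
- name: Install the agent via its vendor script
  ansible.builtin.command: /usr/local/bin/agent-install --quiet
  register: result
  changed_when: "'already installed' not in result.stdout"
  failed_when: result.rc != 0
```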
Block and rescue for structured error handling. For situations where a task might fail in an expected way and you want to recover gracefully, block: with rescue: and always: gives you try/catch/finally semantics:
- block:
    - name: Apply configuration
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/app/app.conf
  rescue:
    - name: Restore backup config
      ansible.builtin.copy:
        src: /etc/app/app.conf.bak
        dest: /etc/app/app.conf
        remote_src: true  # the backup lives on the managed host, not the controller
  always:
    - name: Verify service is running
      ansible.builtin.service:
        name: app
        state: started
Linting with ansible-lint
pip install ansible-lint
ansible-lint harden.yml
ansible-lint catches patterns that aren't syntax errors but are wrong in practice: using command: where a module exists, missing name: on tasks, short module names instead of FQCN, deprecated syntax. Most failing checks on new playbooks fall into two categories: module names that aren't FQCN, and command: tasks that should be module calls.
Run it in your CI pipeline before merging any playbook change. A clean lint run takes seconds. Discovering the same issues after a failed production deploy costs considerably more. When env zero runs a playbook, the full verbose output is stored in the run's audit log, so when something fails in production you're not relying on whoever happened to have a terminal open at the time.
Best practices for production playbooks
Keep tasks idempotent by design. Before writing a command: task, check whether a module exists for the operation. Before using lineinfile, ask whether a Jinja2 template would be more maintainable as the config grows. The goal is a playbook where a tenth run produces zero changed results. When something does change, it means state actually drifted, not that the playbook is reapplying things unnecessarily.
Store secrets in Ansible Vault. Passwords, API keys, and private keys don't belong in group_vars or host_vars in plaintext. ansible-vault encrypt_string lets you encrypt individual values and store the encrypted form directly in a vars file. The vault password itself belongs in a secrets manager, not in the repository. Pass it to ansible-playbook with --vault-password-file pointing to a CI-injected secret path. env zero handles this as an environment secret: you store the vault password once in env zero, and it injects it at runtime without exposing it in run logs or environment variables visible to other users. See the Ansible Vault guide for the full workflow.
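The encrypted value drops into a vars file as a tagged block. A sketch of what group_vars/prod.yml looks like after ansible-vault encrypt_string (the ciphertext below is a placeholder, not real output):

```yaml
# group_vars/prod.yml
db_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          <ciphertext produced by ansible-vault encrypt_string goes here>
```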
Run --check --diff before every production apply. Treat it as a required step, not an optional one. The diff output is also a useful artifact for change management: pipe it to a file and attach it to the change request.
Pin collection versions. If your playbook uses community.general, define a requirements.yml and install with ansible-galaxy collection install -r requirements.yml. Unpinned community collections can introduce breaking changes between releases without warning.
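A minimal requirements.yml looks like this (the version range is illustrative; pin to whatever you've actually tested):

```yaml
# requirements.yml
collections:
  - name: community.general
    version: ">=8.0.0,<9.0.0"
```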
Test in CI before merging. A minimal pipeline: ansible-lint, then ansible-playbook --syntax-check, then a molecule test against a Docker container or local VM. This catches the most common failure classes without requiring a staging environment for every change.
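As a sketch, the lint and syntax-check steps in a GitHub Actions workflow (filenames and action versions are illustrative; a molecule stage would follow the same pattern):

```yaml
# .github/workflows/ansible-ci.yml
name: ansible-ci
on: [pull_request]
jobs:
  lint-and-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install tooling
        run: pip install ansible-core ansible-lint
      - name: Lint
        run: ansible-lint harden.yml
      - name: Syntax check
        run: ansible-playbook harden.yml --syntax-check -i inventory.ini
```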
Running Ansible playbooks at scale with env zero
A well-structured playbook handles the automation: what each host should look like, in what order, with which values per environment. What it doesn't handle is everything that matters when Ansible moves from a personal project to a shared team practice.
Who can run which playbook against which environment? Most teams solve this informally at first: don't run the production playbook without a change ticket. That norm breaks down at ten engineers and definitely breaks at fifty.
When did the last production run happen, who triggered it, and what changed? ansible-playbook prints to stdout. Without something capturing and storing that output, the audit trail disappears the moment the terminal closes.
What if a server drifts from the hardening baseline between scheduled runs? Without automated verification, drift is invisible until it surfaces during an audit or causes an incident.
env zero addresses all three. Playbooks run through an env zero environment, which means RBAC governs who can trigger which playbook against which target, with configurable approval workflows for production. Every run is logged: the triggering user, the playbook version from Git, the host list, and the full task output. And env zero runs playbooks in --check mode on a schedule, flagging any host that drifts from its expected state automatically.
The output from a scheduled check run appears alongside the last applied run in the env zero dashboard. Teams can see whether their fleet is still in compliance without running anything manually.
Pismo used env zero to go from 2 months to 2 days for global infrastructure delivery. Automation Anywhere took multi-region rollouts from a full day to minutes. The pattern in both cases: the automation was already working. What changed was the governance layer around it.
See env zero IaC onboarding for how teams bring existing Ansible playbooks under env zero governance, and env zero integrations for the full list of supported IaC frameworks.
Related reading: 8 Terraform Drift Detection Tools Enterprise Teams Actually Use in 2026. env zero's scheduled drift detection works across Terraform and Ansible environments.
Try it with env zero
env zero connects to your Git repo, pulls your existing playbooks, and wraps them in RBAC, scheduling, drift detection, and audit logging, without requiring changes to the playbooks themselves.
Start a free trial or book a demo.
References
- Ansible playbooks: official documentation
- ansible-core 2.20.5 release notes (latest stable as of April 2026)
- ansible-core built-in module index
- Ansible variables precedence: official reference
- Ansible Vault guide
- ansible-lint documentation
- CIS Benchmarks: configuration standards for Linux, Windows, and cloud
- env0 Ansible playbooks tutorial repository
Frequently asked questions
What is the difference between an Ansible playbook and a role?
A playbook is the top-level file you execute with ansible-playbook. A role is a reusable unit of tasks, handlers, templates, and variables; it's called from a playbook, not run directly. Roles are worth extracting when the same automation logic is needed across more than one playbook. For single-use automation, keeping everything in a flat playbook is simpler and easier to read.
Why is my handler not running?
Handlers fire only when the task that notifies them reports changed. If the task reports ok (the target is already in the desired state), the notify is skipped and the handler never runs. Verify by changing the target state manually and re-running. If your play fails before completing all tasks, pending handlers also don't run. Use --force-handlers to run them anyway, or fix the failure and re-run normally.
What is the difference between import_tasks and include_tasks?
import_tasks is processed at parse time (static). Tags applied to the import propagate into the included tasks, so --tags ssh on an import works correctly. include_tasks is processed at runtime (dynamic). Tags on the include statement do not propagate into the included tasks. Use import_tasks by default; switch to include_tasks only when you need a variable in the filename or conditional file selection at runtime.
How do I run only specific parts of a playbook?
Use tags. Add tags: [ssh] to tasks or task groups, then run ansible-playbook harden.yml --tags ssh. For the inverse: --skip-tags firewall runs everything except firewall tasks. Tags applied to an import_tasks statement propagate into all tasks inside the included file; tags on include_tasks do not.
Is Ansible idempotent by default?
Most ansible.builtin modules are idempotent: apt with state: present won't reinstall a package that's already there, lineinfile won't change a line that already matches. command and shell are the exceptions: they always report changed and always re-execute. Replace them with purpose-built modules wherever possible. When you can't, use changed_when: false or a proper changed condition to prevent false positives.
Can I run Ansible playbooks against Windows hosts?
Yes. Windows hosts use WinRM instead of SSH. Modules targeting Windows use the ansible.windows collection rather than ansible.builtin. The playbook structure is identical; the differences are in the connection method and module selection. Set ansible_connection: winrm and ansible_winrm_transport: ntlm (or kerberos for domain-joined hosts) in your inventory or host_vars.
How do I store secrets safely in an Ansible playbook?
Use Ansible Vault. ansible-vault encrypt_string 'mypassword' --name 'db_password' produces an encrypted value you can paste directly into a vars file. The vault password itself belongs in a secrets manager, not in the repository. Pass it to ansible-playbook with --vault-password-file pointing to a path where your CI system injects it at runtime.
Does ansible-core 2.20 support Python 3.12 and 3.13?
Yes. ansible-core 2.20 supports Python 3.10, 3.11, 3.12, and 3.13 on both the control node and managed nodes. Python 2 support was dropped in ansible-core 2.17. If your managed nodes are still on Python 2 (RHEL 7, Ubuntu 16.04), pin to ansible-core 2.16 or earlier, or update the nodes.
How does env zero work with Ansible playbooks?
env zero treats your existing playbooks as the automation layer and adds governance on top. You connect your Git repo, define an environment (inventory + variables), and env zero provides RBAC to control who can trigger which playbook against which target, approval workflows for production, a full audit log of every run and its task output, and scheduled --check runs that surface drift automatically. No changes to the YAML required.
What is ansible-lint and should I use it?
ansible-lint is a static analysis tool for Ansible playbooks and roles. It catches patterns that aren't syntax errors but break idempotency or best practice: command where a module exists, missing name on tasks, short module names instead of FQCN, deprecated syntax. Run it in CI before merging playbook changes. A clean ansible-lint run is a stronger quality signal than a passing syntax check.
