
Most teams reach for Ansible in CI/CD pipelines the same way: a shell step that runs ansible-playbook, SSH keys stored as secrets, and a growing list of manual steps nobody documented. It works until it doesn't: the failure happens at the worst possible moment, and the pipeline turns into a debugging session.
This guide shows how to run Ansible playbooks in GitHub Actions and GitLab CI properly: managing inventory, vault secrets, SSH keys, and environment-specific variables without putting credentials in plain text or hardcoding environment differences into the pipeline.
At a glance
ansible-core works in any CI/CD environment that supports Python. Current version: ansible-core 2.20.5 (April 2025). Requires Python 3.10+ on the runner, SSH access from the runner to managed nodes, and a strategy for passing vault passwords securely. env zero adds policy enforcement, RBAC, and deployment history on top of CI-triggered runs.
New to Ansible? This guide assumes you have working playbooks. If you're starting from scratch, Ansible Tutorial for Beginners: From Install to First Deployment covers setup and your first deployment in under 20 minutes.
What you need before wiring Ansible into CI
Before any CI configuration, three things need to be in place:
A working inventory. The runner needs to reach your managed nodes. In most CI environments that means SSH from a disposable runner to fixed infrastructure. Dynamic inventory (AWS EC2, GCP, Azure) works but requires the relevant collection and credentials. Static inventory files work fine; just don't hardcode IPs that change between environments.
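For the static case, a minimal inventory file might look like this (hostnames and the deploy user here are placeholders, not values from this guide):

```ini
# inventory/production.ini (illustrative hosts)
[webservers]
web1.example.com
web2.example.com

[webservers:vars]
ansible_user=deploy
ansible_ssh_private_key_file=~/.ssh/deploy_key
```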
SSH key pair with the right distribution. The runner authenticates as a deploy user. Generate a dedicated keypair for CI. Add the public key to managed nodes' ~/.ssh/authorized_keys. Store the private key as a CI secret, never in the repository.
A vault password strategy. If your playbooks use Ansible Vault, the CI runner needs the vault password. Options: store it as a CI secret and pass via --vault-password-file, or use a client script that fetches it from AWS Secrets Manager, HashiCorp Vault, or your secrets manager of choice. The latter is better for production.
Related reading: Protecting Secrets with Ansible Vault covers how vault IDs, password files, and external secrets managers work together.
Running Ansible in GitHub Actions
Basic workflow
Install ansible-core in the runner, configure SSH, and run the playbook. The full workflow for a production deployment:
name: Deploy with Ansible

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install ansible-core
        run: pip install ansible-core==2.20.5

      - name: Install required collections
        run: ansible-galaxy collection install -r requirements.yml

      - name: Configure SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.ANSIBLE_SSH_PRIVATE_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key
          ssh-keyscan -H ${{ secrets.DEPLOY_HOST }} >> ~/.ssh/known_hosts

      - name: Write vault password file
        run: printf '%s' "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > /tmp/vault_pass.txt

      - name: Run playbook
        run: |
          ansible-playbook site.yml \
            -i inventory/production.ini \
            --private-key ~/.ssh/deploy_key \
            --vault-password-file /tmp/vault_pass.txt \
            -e "env=production"
        env:
          ANSIBLE_HOST_KEY_CHECKING: "False"
          ANSIBLE_STDOUT_CALLBACK: yaml

      - name: Clean up secrets
        if: always()
        run: rm -f ~/.ssh/deploy_key /tmp/vault_pass.txt
The if: always() on the cleanup step ensures SSH keys and vault passwords are removed even when the playbook fails.
Limiting scope with tags and host patterns
Production pipelines rarely run the full playbook on every push. Use tags and host patterns to scope what runs:
- name: Run deployment tasks only
  run: |
    ansible-playbook site.yml \
      -i inventory/production.ini \
      --private-key ~/.ssh/deploy_key \
      --vault-password-file /tmp/vault_pass.txt \
      --tags "deploy,restart" \
      --limit "webservers"
Environment-specific deployments
Separate staging and production deployments using GitHub environments with their own secrets:
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - name: Run staging deployment
        run: |
          ansible-playbook site.yml \
            -i inventory/staging.ini \
            --private-key ~/.ssh/deploy_key \
            --vault-password-file /tmp/vault_pass.txt \
            -e "env=staging"

  deploy-production:
    runs-on: ubuntu-latest
    environment: production
    needs: deploy-staging
    steps:
      - name: Run production deployment
        run: |
          ansible-playbook site.yml \
            -i inventory/production.ini \
            --private-key ~/.ssh/deploy_key \
            --vault-password-file /tmp/vault_pass.txt \
            -e "env=production"
The needs: deploy-staging ensures staging completes successfully before production runs. GitHub environment protection rules add manual approval gates if required.
Caching collections and pip packages
Installing ansible-core and collections on every run adds 30–90 seconds. Cache them:
- name: Cache pip packages
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-ansible-2.20.5-${{ hashFiles('requirements.txt') }}

- name: Cache Ansible collections
  uses: actions/cache@v4
  with:
    path: ~/.ansible/collections
    key: ansible-collections-${{ hashFiles('requirements.yml') }}
Running Ansible in GitLab CI
Basic pipeline
GitLab CI uses .gitlab-ci.yml. The pattern is the same: install, configure SSH, run playbook:
stages:
  - deploy

variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"
  ANSIBLE_STDOUT_CALLBACK: yaml
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

.ansible-base:
  image: python:3.12-slim
  before_script:
    - pip install ansible-core==2.20.5 --cache-dir $PIP_CACHE_DIR
    - ansible-galaxy collection install -r requirements.yml
    - mkdir -p ~/.ssh
    - echo "$ANSIBLE_SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/deploy_key
    - chmod 600 ~/.ssh/deploy_key
    - printf '%s' "$ANSIBLE_VAULT_PASSWORD" > /tmp/vault_pass.txt
  after_script:
    - rm -f ~/.ssh/deploy_key /tmp/vault_pass.txt

deploy-staging:
  extends: .ansible-base
  stage: deploy
  environment: staging
  script:
    - ansible-playbook site.yml
      -i inventory/staging.ini
      --private-key ~/.ssh/deploy_key
      --vault-password-file /tmp/vault_pass.txt
      -e "env=staging"
  only:
    - develop

deploy-production:
  extends: .ansible-base
  stage: deploy
  environment: production
  script:
    - ansible-playbook site.yml
      -i inventory/production.ini
      --private-key ~/.ssh/deploy_key
      --vault-password-file /tmp/vault_pass.txt
      -e "env=production"
  only:
    - main
  when: manual
The when: manual on the production job creates a one-click deploy gate in the GitLab UI. The pipeline runs automatically up to that point, then waits for a human to approve before touching production.
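The only: keyword used above is GitLab's legacy job-control syntax; current GitLab documentation recommends rules: for new pipelines. The same branch-plus-manual gate for the production job, sketched with rules::

```yaml
deploy-production:
  extends: .ansible-base
  stage: deploy
  environment: production
  rules:
    # run only for the default branch, and wait for a manual click before deploying
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual
```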
Using GitLab's SSH key integration
An alternative to writing the key to disk is loading it into ssh-agent in before_script; the agent holds the key in memory for the duration of the job:
deploy-production:
  stage: deploy
  image: python:3.12-slim
  before_script:
    - eval $(ssh-agent -s)
    - echo "$ANSIBLE_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - ssh-keyscan $DEPLOY_HOST >> ~/.ssh/known_hosts
    - printf '%s' "$ANSIBLE_VAULT_PASSWORD" > /tmp/vault_pass.txt
  script:
    - pip install ansible-core==2.20.5 -q
    - ansible-playbook site.yml -i inventory/production.ini
      --vault-password-file /tmp/vault_pass.txt
Dynamic inventory in CI
Static inventory files break when infrastructure scales or changes frequently. Dynamic inventory (pulling the target list from AWS, GCP, or Azure at runtime) works well in CI but requires the right collection and credentials.
For AWS EC2 with GitHub Actions:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1

- name: Install EC2 inventory plugin
  run: ansible-galaxy collection install amazon.aws

- name: Run playbook with dynamic inventory
  run: |
    ansible-playbook site.yml \
      -i inventory/aws_ec2.yml \
      --vault-password-file /tmp/vault_pass.txt \
      --limit "tag_env_production"
The aws_ec2.yml inventory file defines filters to target specific instances by tag, plus keyed_groups so matching hosts land in groups like tag_env_production that --limit can reference:

plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  tag:env: production
  instance-state-name: running
keyed_groups:
  - prefix: tag
    key: tags
hostnames:
  - private-ip-address
compose:
  ansible_host: private_ip_address
Fetching vault passwords from a secrets manager
Storing the vault password directly as a CI secret is acceptable for smaller teams. For production environments with strict compliance requirements, fetch it from a secrets manager at runtime instead:
#!/bin/bash
# vault-client.sh: fetches the vault password from AWS Secrets Manager
aws secretsmanager get-secret-value \
  --secret-id ansible/vault-password \
  --query SecretString \
  --output text
Reference this script in the playbook run:
ansible-playbook site.yml \
  -i inventory/production.ini \
  --private-key ~/.ssh/deploy_key \
  --vault-id prod@./scripts/vault-client.sh
The script runs at playbook execution time, fetching the password from Secrets Manager rather than reading it from a file. Make sure the script is committed with the executable bit set (chmod +x); Ansible invokes it directly. The runner needs IAM permissions scoped to that secret, not broad secrets access.
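In IAM terms, that scoping is a policy on the runner's role allowing GetSecretValue on that one secret only. A sketch (account ID and region are placeholders; the trailing wildcard covers the random suffix Secrets Manager appends to ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:ansible/vault-password-*"
    }
  ]
}
```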
Common failures and how to fix them
SSH host key verification failure
Runners are ephemeral and don't have ~/.ssh/known_hosts pre-populated. Options: run ssh-keyscan before the playbook (shown above), or set ANSIBLE_HOST_KEY_CHECKING=False for non-production environments. Don't disable host key checking in production; use ssh-keyscan instead.
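For production, one way to keep checking enabled without trusting whatever ssh-keyscan returns at run time is to capture the host key once from a trusted machine and store it as a CI secret. A sketch as a GitHub Actions step (the KNOWN_HOSTS secret name is an assumption):

```yaml
# KNOWN_HOSTS holds output captured once via `ssh-keyscan -H <host>` on a trusted machine,
# so the pipeline pins the expected key instead of trusting first contact.
- name: Install pinned known_hosts
  run: |
    mkdir -p ~/.ssh
    printf '%s\n' "${{ secrets.KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
```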
Vault password file missing or empty
The most common cause: the secret picks up a trailing newline when written to disk, because echo appends one. ansible-vault then reads a password ending in a newline, which doesn't match the actual password. Fix: write the file with printf '%s' (or echo -n), which emits the string verbatim:
printf '%s' "$ANSIBLE_VAULT_PASSWORD" > /tmp/vault_pass.txt
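The one-byte difference is easy to verify locally:

```shell
# echo appends a trailing newline; printf '%s' writes the string verbatim
echo "s3cret" > /tmp/with_newline
printf '%s' "s3cret" > /tmp/without_newline
wc -c < /tmp/with_newline      # 7 bytes: 6 characters plus '\n'
wc -c < /tmp/without_newline   # 6 bytes
```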
Python version mismatch between runner and managed nodes
ansible-core 2.17+ requires Python 3.10+ on the control node (the runner). Managed nodes need Python 3.8+. If the runner uses an older Python, install 3.12 explicitly via actions/setup-python, or use a python:3.12-slim Docker image in GitLab CI. Verify with python3 --version as a pipeline step if you're debugging mysterious failures.
Collection not found
Community collections aren't bundled with ansible-core. If your playbooks use community.general, amazon.aws, or any third-party collection, add a requirements.yml and install it explicitly. Don't install the full ansible package (which bundles everything) just to avoid writing a requirements file. It's slower and installs collections you don't use.
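A requirements.yml for the collections this guide uses might look like the following (the version pins are illustrative; pin whatever you've actually tested against):

```yaml
# requirements.yml: declare collections explicitly so CI installs are reproducible
collections:
  - name: amazon.aws
    version: ">=9.0.0"
  - name: community.general
    version: ">=10.0.0"
```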
Running Ansible with env zero
CI/CD pipelines for Ansible have a structural gap: the pipeline knows who triggered a run, but it doesn't know who approved it, what policy it ran under, or why the inventory was what it was. When something goes wrong at 2am, the audit trail is whatever you thought to log at the time.
env zero runs Ansible natively. Point it at a repository with playbooks and a template configuration, and it handles execution. The difference from a CI pipeline: env zero adds an approval layer before execution, enforces OPA policies against the deployment (which hosts, which playbook, which variables), stores a full audit trail, and surfaces drift between runs. RBAC controls who can trigger which environment. Cost estimates appear before deployment for infrastructure-provisioning playbooks.
For teams already using env zero for Terraform or OpenTofu, adding Ansible runs to the same platform means one governance layer covers all IaC tooling, not a separate CI pipeline with separate secret management, separate approval flows, and separate audit logs.
env zero Ansible documentation covers template setup, variable configuration, and policy enforcement.
Start a free trial or book a demo to see Ansible runs alongside your other IaC tooling.
References
- Ansible documentation (official docs for ansible-core 2.20)
- GitHub Actions documentation
- GitLab CI/CD documentation
- Ansible Vault guide (managing encrypted secrets)
- amazon.aws collection (dynamic EC2 inventory)
Frequently asked questions
Does Ansible work with GitHub Actions self-hosted runners?
Yes. Self-hosted runners are often preferable for Ansible because they can be placed in the same network as managed nodes, eliminating the need to expose SSH ports to GitHub's shared runner IPs. Install ansible-core on the runner image and the workflow YAML is identical.
How do I pass different inventory files for different environments?
Use separate inventory files per environment (inventory/staging.ini, inventory/production.ini) and reference the correct one with -i. For GitHub Actions, use environment-specific secrets and jobs with environment: set. For GitLab, use environment-specific variables scoped to the environment in project settings.
Can I run Ansible in CI without SSH access to managed nodes?
Not directly. Ansible's default transport is SSH. For cloud infrastructure provisioning (not configuration management), consider using Ansible's cloud modules with API credentials instead of SSH. For managed nodes that aren't SSH-accessible from CI runners, a bastion host or VPN is the standard solution.
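With a bastion, the standard Ansible approach is SSH ProxyJump via ansible_ssh_common_args; a sketch in group variables (bastion hostname and user are placeholders):

```yaml
# group_vars/all.yml: route SSH from the CI runner through a bastion host
ansible_ssh_common_args: '-o ProxyJump=deploy@bastion.example.com'
```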
How do I speed up Ansible CI runs?
Cache pip packages and Ansible collections between runs (shown above). Use --tags to run only changed tasks. Disable fact gathering with gather_facts: false for plays that don't need system facts. Use forks in ansible.cfg to parallelise across hosts: forks = 20.
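Those options live in ansible.cfg at the repository root. A sketch combining the settings above (the fact-caching entries are an addition; tune values to your fleet):

```ini
# ansible.cfg: CI-oriented performance settings
[defaults]
# run tasks on up to 20 hosts in parallel
forks = 20
# reuse cached facts instead of regathering on every play
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts

[ssh_connection]
# fewer SSH round-trips per task
pipelining = True
```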
What is the difference between running Ansible in CI and using env zero?
CI pipelines execute playbooks. env zero executes playbooks and adds governance: approval gates before runs, OPA policy enforcement on what can run, RBAC on who can trigger which environment, audit trails, drift detection between runs, and cost estimation. For teams that need to demonstrate compliance or run Ansible across multiple teams with different permissions, env zero closes the gap that CI pipelines leave open.
How do I handle Ansible Vault passwords in GitHub Actions securely?
Store the vault password as a GitHub Actions secret (secrets.ANSIBLE_VAULT_PASSWORD). Write it to a temp file at runtime using printf '%s' to avoid trailing newline issues. Delete the file in an if: always() cleanup step. For higher security requirements, use a client script that fetches the password from AWS Secrets Manager or HashiCorp Vault at runtime rather than storing it as a CI secret.

