In the evolving domain of Infrastructure as Code (IaC), the choice of tooling plays a pivotal role in orchestrating and automating tasks efficiently, as opposed to manually configuring resources. While there are IaC-specific tools available, the versatility of CI/CD tools like Jenkins opens up new paradigms in managing infrastructure. 

This blog post delves into the rationale behind considering Jenkins with Terraform for IaC, showcasing its versatility in handling both continuous integration/continuous delivery (CI/CD) and infrastructure code deployments.

The heart of this tutorial is a practical example: we leverage Terraform to provision an Azure VM, then use Ansible to orchestrate the setup of a monitoring stack comprising Prometheus and Grafana running on Docker. This hands-on approach aims to demystify orchestrating Terraform deployments with Jenkins in a real-world scenario.

Through this lens, we'll dissect why a CI/CD tool might be preferred over a dedicated IaC tool in certain environments, assessing the pros and cons of choosing Jenkins for IaC management.

Requirements: GitHub and Azure accounts

TL;DR: You can find the GitHub repo here

What is Jenkins?

Jenkins is an open-source automation server used for continuous integration and continuous deployment (CI/CD). It's a hub for reliable building, testing, and deployment of code, with a plethora of plugins, including the Jenkins Terraform plugin, to extend its functionality.

Jenkins, initially developed as Hudson by Kohsuke Kawaguchi in 2004, emerged out of a need for a continuous integration (CI) tool that could improve software development processes. After Oracle acquired Sun Microsystems in 2010, a dispute over the project's governance led the community to fork Hudson in 2011 and rename it Jenkins. 

Since then, Jenkins has grown exponentially, becoming one of the most popular open-source automation servers used for reliable building, testing, and deploying code. Its extensible nature, through a vast array of plugins, and strong community support, has solidified its place in the DevOps toolchain, bridging the gap between development and operational teams.

Using Jenkins for IaC Management

Why would someone want to use a CI/CD tool like Jenkins in place of an IaC-specific tool? The idea stems from Jenkins' ability to automate and structure deployment workflows beyond continuous integration. Let’s dig deeper.

Why Consider Jenkins for IaC

From an architect's perspective, Jenkins might be considered for IaC management in scenarios where there's already heavy reliance on Jenkins for CI/CD, and the organization has developed competency in Jenkins pipeline scripting. 

The decision to use Jenkins for IaC might also be influenced by budget constraints, as Jenkins is an open-source tool, and the cost of specialized IaC platforms can be prohibitive for smaller organizations.

However, when an organization's size and complexity grow, the hidden costs of maintaining and securing Jenkins, along with creating homebrewed code for scalability, can outweigh the initial savings.

Dedicated IaC Tools vs. Jenkins

Dedicated IaC tools such as env0 are built specifically for infrastructure automation. They provide out-of-the-box functionality for state management, secrets management, policy as code, and more, which are essential for IaC but are not native to Jenkins. These tools are designed to operate at scale, with less overhead for setup and maintenance.

Choosing Between the Two

An architect or a team lead must consider the trade-offs. If the organization's priority is to have a unified tool for CI/CD and IaC with a strong in-house Jenkins skillset, Jenkins could be a viable option. 

However, for larger organizations, governance, security, cost management, and scalability considerations could make dedicated IaC tools the more valuable long-term option. 

In the end, the decision hinges on the organization's current tooling, expertise, and the complexity of the infrastructure they manage. 

Here is a table to quickly summarize the pros and cons of this decision.

Jenkins Pros and Cons

| Aspect | Pros | Cons |
| --- | --- | --- |
| Integration | Extensive plugin ecosystem for adaptability. | Requires plugins for IaC features, adding complexity and maintenance overhead. |
| Community Support | Large community and extensive documentation. | Community support varies by plugin, which can lead to challenges in troubleshooting. |
| Custom Workflows | Basic governance through plugins and scripting. | Lacks advanced governance features such as RBAC or policy as code. |
| Security | Basic security features are available and can be extended with plugins. | Setting comprehensive security guardrails is complex and error-prone. |
| Scalability | Can handle scale with the right setup and infrastructure. | Cumbersome and resource-intensive to manage as organizational complexity grows. |
| State Management | Can manage state with additional plugins and custom storage solutions. | Does not handle state management natively, and custom setups add risk and complexity. |
| IaC Features | Can be extended to support various IaC tasks with the right plugins and scripts. | Lacks built-in IaC-specific features like drift detection and automatic planning. |
| Cost | Open-source with no initial cost, which can be cost-effective for smaller teams. | Hidden costs of maintenance, scaling, and potential downtime can become significant. |
| Learning Curve | If the team is already proficient with Jenkins, the learning curve is minimized. | Steep learning curve for IaC setup; requires in-depth knowledge of Jenkins and associated plugins. |

Setting Up the Jenkins Job for Managing IaC

Setting up Jenkins for managing Terraform involves a series of steps. In the next few sections, you will learn how to go about:

  • Creating a Jenkins server in a Docker container
  • Initializing and setting up Jenkins
  • Creating Azure credentials for Terraform to access Azure
  • Jenkins pipeline script (Jenkinsfile)
  • Terraform Configuration
  • Ansible Configuration

The goal is to create a VM in Azure that hosts the Prometheus and Grafana monitoring stack. This VM will run Docker and Docker Compose to stand up these services.

Jenkins Server Installation Process

First off, let's take a look at how to install Jenkins in a Docker container.

Create the Jenkins Docker Container

Below is the Dockerfile for the Jenkins container. Notice how we install Terraform along with Ansible inside our container. For our demo and simplicity, we are using the Jenkins Server as a Jenkins node worker as well.

Building a scalable Jenkins architecture demands careful consideration and significant technical expertise, which can be a daunting challenge for teams whose primary goal is to implement an IaC solution efficiently. In contrast, a dedicated IaC platform like env0 removes the burden of managing the intricacies of the tool itself. env0 simplifies the process with a guided setup that abstracts the underlying complexities, allowing teams to focus on infrastructure management rather than tool configuration.

In a production Jenkins environment, you should install the Terraform binary along with Ansible and any other necessary binaries on the Jenkins nodes. Notice that we don't make use of the Terraform plugin in our demo.

FROM jenkins/jenkins:lts

# Define arguments for Terraform and Ansible versions
ARG TF_VERSION=1.5.5
ARG ANSIBLE_VERSION=8.5.0

USER root

# Install necessary tools like wget and unzip before downloading Terraform
RUN apt-get update && \
    apt-get install -y wget unzip python3-venv && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean

# Use the TF_VERSION argument to download and install the specified version of Terraform
RUN wget https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip && \
    unzip terraform_${TF_VERSION}_linux_amd64.zip && \
    mv terraform /usr/local/bin && \
    rm terraform_${TF_VERSION}_linux_amd64.zip

# Create a virtual environment for Python and activate it
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Use the ANSIBLE_VERSION argument to install the specified version of Ansible within the virtual environment
RUN pip install --upgrade pip cffi && \
    pip install ansible==${ANSIBLE_VERSION} && \
    pip install mitogen ansible-lint jmespath && \
    pip install --upgrade pywinrm

# Drop back to the regular jenkins user - good practice
USER jenkins

Build the Docker Image

Now we can build our Jenkins Docker image (with Terraform and Ansible baked in) using the following command:

docker build -t samgabrail/jenkins-terraform-docker:latest .

Next, let's run the Docker container with:

docker run --name jenkins-terraform -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 samgabrail/jenkins-terraform-docker:latest

Configure Jenkins

Once the container is running, we can access the Jenkins UI at http://localhost:8080.

Notice that a password has been written to the log and inside our container at the following location: /var/jenkins_home/secrets/initialAdminPassword

You can access this password inside our container by running this command:

docker exec -it jenkins-terraform cat /var/jenkins_home/secrets/initialAdminPassword

Then, install the suggested plugins:

Then go ahead and create the first admin user.

Next, keep the Jenkins URL as is, which should be http://localhost:8080, then save and finish the setup.

Create a Jenkins Job

From the main Jenkins page, click on the New Item button. Then give the pipeline a name and select the Pipeline project type.

Now, fill out some details for this Pipeline job. You can add Poll SCM to configure Jenkins to poll GitHub regularly. The schedule follows cron syntax; for example, to poll every two minutes we would use: [.code]H/2 * * * *[.code]. Under "Pipeline", choose Pipeline script from SCM, Git for SCM, our Repository URL, and the main branch.

Finally, add the Jenkinsfile path as Jenkins/Jenkinsfile and click save.
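As an aside, if you prefer keeping the polling schedule in code rather than in the job UI, declarative pipelines support a [.code]triggers[.code] block. This is a sketch only — the demo configures polling via the UI:

```groovy
pipeline {
    agent any
    triggers {
        // Poll the repository every two minutes, same as the "Poll SCM" UI setting
        pollSCM('H/2 * * * *')
    }
    // ... parameters, environment, and stages follow as usual
}
```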

Run the Jenkins Build

Let's go ahead and run our first build. Click the “▷ Build Now” button.

You should get an error in Stage View as shown below:

This is because we still haven't added the necessary credentials for Terraform to access Azure. Let's do that next.

Create Azure Credentials

Follow the guide Azure Provider: Authenticating using a Service Principal with a Client Secret to create an application and service principal. You will need the following for Terraform to access Azure:

  • [.code]client_id[.code]
  • [.code]client_secret[.code]
  • [.code]tenant_id[.code]
  • [.code]subscription_id[.code]

Now, let's add these credentials inside of Jenkins.

First, go to the "Dashboard > ⚙ Manage Jenkins > Credentials":

Go into the "System" store under the global domain and create five new credentials: four of type "Secret text" for Azure, plus one of type "SSH Username with private key" that Ansible will use to access our VM via SSH. This is what you should end up with:

Re-Run the Jenkins Pipeline

Now, let's go back to our pipeline and re-run it. Notice that this time you get "Build with Parameters". Click on it, keep the ACTION as "apply", and start the build. Your build should succeed this time. It will go through the five stages we defined in our Jenkinsfile:

  1. Terraform (run Terraform to provision the VM in Azure)
  2. Delay of 2 minutes (wait for the ssh-agent to come up in the VM)
  3. Ansible (run ansible to configure the VM with Docker and start the monitoring services)
  4. Output URLs
  5. Archive URLs

RBAC Discussion

When we consider the Jenkins pipeline that I've demonstrated it's crucial to understand that this setup operates under a controlled demo environment. In real-world applications, especially within an organizational context, the need for robust Role-Based Access Control (RBAC) becomes significantly more important.

Why RBAC Matters

RBAC is central to maintaining security and operational integrity. It determines who has permission to execute, modify, or approve changes in the pipeline, which is critical in preventing unauthorized modifications and ensuring that infrastructure changes are peer-reviewed. This is not just about security; it's about stability and reliability. Without stringent RBAC, you risk having too many cooks in the kitchen, which can lead to configuration drift, security vulnerabilities, and operational chaos.

Jenkins and RBAC

In Jenkins, implementing RBAC can be somewhat manual and often necessitates additional plugins. For instance, the Jenkins pipeline as configured for the demo does not inherently provide a detailed RBAC system. It can be tailored to do so, but this requires a deep dive into Jenkins' access control mechanisms and perhaps a reliance on the Role Strategy Plugin or similar to ensure that only authorized personnel can execute critical pipeline stages.

env0 and RBAC

On the other hand, env0 offers a far more comprehensive and out-of-the-box RBAC solution. With env0, you can easily define who can trigger deployments, who can approve them, and who can manage the infrastructure. This granular level of control extends across all organizational layers, from team to project to environment, and integrates smoothly with SSO providers for streamlined user and group management.

In an environment where infrastructure as code (IaC) is no longer just a convenience but a necessity, env0’s RBAC system offers a more secure and controlled workflow, ensuring that every change is accounted for and authorized. This mitigates the risk of errors or breaches, which can have significant implications in a production environment.

Final Output

You can find the public URL for Grafana and Prometheus either by going into the Jenkins console logs or by checking the Jenkins artifacts for the urls.txt file. Here is the console output:

Here is the content of the urls.txt file:

Grafana URL: http://172.190.218.146:3000
Prometheus URL: http://172.190.218.146:9090

Jenkins Pipeline Configuration

Let's dive into the Jenkinsfile – essentially our pipeline script – that orchestrates a monitoring setup using Terraform and Ansible. This is the conductor of our DevOps orchestra, tying everything together.

The Basics

pipeline {
    agent any
    parameters {
        choice(name: 'ACTION', choices: ['apply', 'destroy'], description: 'What action should Terraform take?')
    }

Here, we kick off the Jenkins pipeline using any available Jenkins agent. We also set up a choice parameter for our Terraform actions: we can either [.code]apply[.code] to set things up or [.code]destroy[.code] to tear them down.

Environment Variables

environment {
        ARM_CLIENT_ID = credentials('azure-client-id')
        ARM_CLIENT_SECRET = credentials('azure-client-secret')
        ARM_SUBSCRIPTION_ID = credentials('azure-subscription-id')
        ARM_TENANT_ID = credentials('azure-tenant-id')
    }

We're loading some Azure credentials into environment variables. This is to allow Terraform to access Azure securely. Recall how we stored these credentials in the Jenkins server's secure store.

Terraform Stage

stages {
    stage('Terraform') {
        steps {
            script {
                dir('Terraform') {
                    sh 'terraform init'
                    sh 'terraform validate'
                    sh "terraform ${params.ACTION} -auto-approve"
                    if (params.ACTION == 'apply') {
                        def ip_address = sh(script: 'terraform output -raw public_ip', returnStdout: true).trim()
                        writeFile file: '../Ansible/inventory', text: "monitoring-server ansible_host=${ip_address}"
                    }
                }
            }
        }
    }

In this stage, we run the core Terraform commands. We initialize with [.code]terraform init[.code], validate with [.code]terraform validate[.code], and then apply or destroy based on our parameter. If we're applying, we also fetch the public IP for our Ansible inventory. Neat, right?

Delay Stage

stage('Delay') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        script {
            echo 'Waiting for SSH agent...'
            sleep 120 // waits for 120 seconds before continuing
        }
    }

I’ve added a delay stage here to allow the ssh-agent to start inside the VM; otherwise, Ansible would try to access the VM too early and fail.

Ansible Stage

If we selected [.code]apply[.code] in our Terraform stage, this stage will run our Ansible playbook to configure the server. We do this securely using SSH credentials that we stored earlier in the Jenkins secure store. This means we allow Ansible to SSH into our Azure VM using the private key stored in Jenkins.
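The Ansible stage itself isn't shown above, so here's a minimal sketch of what it could look like. The credentials ID [.code]ansible-ssh-key[.code] is a hypothetical placeholder — use whatever ID you gave the SSH private key in Jenkins — and the [.code]sshagent[.code] step comes from the SSH Agent plugin:

```groovy
stage('Ansible') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        // 'ansible-ssh-key' is a hypothetical credentials ID for the
        // "SSH Username with private key" entry created earlier
        sshagent(credentials: ['ansible-ssh-key']) {
            dir('Ansible') {
                sh 'ansible-playbook -i inventory -u azureuser appPlaybook.yaml'
            }
        }
    }
}
```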

Output URLs Stage

stage('Output URLs') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        script {
            def ip_address = sh(script: 'terraform -chdir=Terraform output -raw public_ip', returnStdout: true).trim()
            def grafana_url = "http://${ip_address}:3000"
            def prometheus_url = "http://${ip_address}:9090"
            echo "Grafana URL: ${grafana_url}"
            echo "Prometheus URL: ${prometheus_url}"
            writeFile file: 'urls.txt', text: "Grafana URL: ${grafana_url}\nPrometheus URL: ${prometheus_url}"
        }
    }
}

This stage outputs the Grafana and Prometheus URLs in the console logs. It also writes them to a text file so we can store them as an artifact.

Archive URLs Stage

stage('Archive URLs') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        archiveArtifacts artifacts: 'urls.txt', onlyIfSuccessful: true
    }
}

Finally, we archive the URLs text file as an artifact to reference later.

Terraform Configuration Files

Now that we have Jenkins running, we can configure Terraform. All the Terraform configuration files are found in the Terraform folder in our repo. Below are the contents.

.
├── id_rsa.pub
├── main.tf
└── networking.tf

Main.tf

The Basics

The main.tf Terraform file has all the Terraform code to create resources on Azure, like a resource group and a Linux VM. It's broken down into different sections: terraform, provider, resource, and output.

The Terraform Block

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "3.77.0"
    }
  }
}

Here, we specify what providers are required. For this script, we are using the AzureRM provider and locking it down to version 3.77.0.

The Provider Block

provider "azurerm" {
  features {}
}

This block initializes the Azure provider. The [.code]features {}[.code] block is required even though we leave it empty.

The Resource Group

resource "azurerm_resource_group" "rg" {
  name     = "MonitoringResources"
  location = "East US"
}

This part is creating an Azure Resource Group called [.code]MonitoringResources[.code] in the [.code]East US[.code] location.

The Linux Virtual Machine

This is the meat of the script!

resource "azurerm_linux_virtual_machine" "vm" {
  name                = "MonitoringVM"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  size                = "Standard_B2s"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }

  computer_name  = "monitoringvm"
  admin_username = "azureuser"

  // Add a public key to the same folder as the main.tf script (Ansible uses the matching private key, stored in Jenkins, to SSH into the VM)
  admin_ssh_key {
    username   = "azureuser"
    public_key = file("id_rsa.pub")
  }

  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  disable_password_authentication = true
}

Here, we're spinning up a Linux VM with the name "MonitoringVM". The VM will reside in the same resource group and location as specified earlier. We're setting it to a Standard_B2s size, which is a decent balance of CPU and memory.

Notice the [.code]admin_ssh_key[.code] block? It uses the public key from the file id_rsa.pub. This is super important for secure SSH access which is needed for Ansible.

The Output Block

output "public_ip" {
  value = azurerm_public_ip.pip.ip_address
}

This output block just spits out the public IP of the VM once it's up. We will need this to update our Ansible inventory file.

Networking.tf

Alright, let's dive into the Terraform code in the networking.tf file! This Terraform script is all about laying down the networking groundwork for our Azure setup. It defines how our virtual network, subnet, and security rules come together.

Virtual Network (azurerm_virtual_network)

resource "azurerm_virtual_network" "vnet" {
  name                = "MonitoringVNet"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.0.0.0/16"]
}

We're setting up a Virtual Network (VNet) named "MonitoringVNet". This VNet is where our resources like VMs will reside. The [.code]address_space[.code] is set to 10.0.0.0/16, giving us a nice, roomy network to play with.

Subnet (azurerm_subnet)

resource "azurerm_subnet" "subnet" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

Within that VNet, we're carving out a subnet. The [.code]address_prefixes[.code] is 10.0.1.0/24, so all our resources within this subnet will have an IP in this range.

Network Interface (azurerm_network_interface)

resource "azurerm_network_interface" "nic" {
  name                = "MonitoringNIC"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip.id
  }
}

Here, we're creating a network interface card (NIC) named "MonitoringNIC". This NIC is what connects our VM to the subnet. We're dynamically assigning a private IP address here.

Public IP (azurerm_public_ip)

resource "azurerm_public_ip" "pip" {
  name                = "MonitoringPublicIP"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Dynamic"
}

We're also setting up a public IP address with dynamic allocation, which we'll use to access our monitoring VM. Note that with [.code]Dynamic[.code] allocation, Azure assigns the actual address only once the IP is attached to a running resource.

Network Security Group (azurerm_network_security_group)

resource "azurerm_network_security_group" "nsg" {
  name                = "MonitoringNSG"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

Let's not forget about security! We set up a Network Security Group (NSG) and defined rules for inbound traffic. We've set up rules for Grafana, Prometheus, and SSH, specifying which ports should be open.

resource "azurerm_network_security_rule" "rule" {
  for_each = {
    grafana    = { priority = 1001, port = 3000 }
    prometheus = { priority = 1002, port = 9090 }
    ssh        = { priority = 1003, port = 22 }
  }

  name                        = "${upper(each.key)}Rule"
  priority                    = each.value.priority
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = each.value.port
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg.name
}

NSG Association 

Finally, we're associating the NSG with our subnet. This means the rules we defined in the NSG will apply to all resources in this subnet.

resource "azurerm_subnet_network_security_group_association" "subnet_nsg_association" {
  subnet_id                 = azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

id_rsa.pub

This file is used for SSH access. It's the public key matching the private SSH key that Ansible will use to SSH into the VM for configuration management. Terraform injects this public key into the Azure VM.

ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCb7fcDZfIG+SxuP5UsZaoHPdh9MNxtEL5xRI71hzMS5h4
SsZiPGEP4shLcF9YxSncdOJpyOJ6OgumNSFWj2pCd/kqg9wQzk/E1o+FRMbWX5gX8xMzPig8mmK
kW5szhnP+yYYYuGUqvTAKX4ua1mQwL6PipWKYJ1huJhgpGHrvSQ6kuywJ23hw4klcaiZKXVYtvT
i8pqZHhE5Kx1237a/6GRwnbGLEp0UR2Q/KPf6yRgZIrCdD+AtOznSBsBhf5vqcfnnwEIC/DOnqc
OTahBVtFhOKuPSv3bUikAD4Vw7SIRteMltUVkd/O341fx+diKOBY7a8M6pn81HEZEmGsr7rT 
sam@SamMac.local
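If you'd rather generate your own key pair than reuse this sample one, something like the following works (the `.pub` half sits next to main.tf, and the private half goes into the Jenkins SSH credential):

```shell
# Generate a fresh RSA key pair with no passphrase in the current directory
ssh-keygen -t rsa -b 4096 -f ./id_rsa -N "" -q
```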

Terraform State

It's important to note that the Terraform state file (terraform.tfstate) gets stored in the workspace folder on the running Jenkins worker node.

~/workspace/MonitoringStack/Terraform$ ls -lah
total 72K
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 31 20:20 .
drwxr-xr-x 8 jenkins jenkins 4.0K Oct 31 18:31 ..
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 31 18:23 .terraform
-rw-r--r-- 1 jenkins jenkins 1.2K Oct 31 18:15 .terraform.lock.hcl
-rw-r--r-- 1 jenkins jenkins  397 Oct 31 18:15 id_rsa.pub
-rw-r--r-- 1 jenkins jenkins 1.2K Oct 31 18:15 main.tf
-rw-r--r-- 1 jenkins jenkins 2.3K Oct 31 18:15 networking.tf
-rw-r--r-- 1 jenkins jenkins  22K Oct 31 20:20 terraform.tfstate
-rw-r--r-- 1 jenkins jenkins  20K Oct 31 18:24 terraform.tfstate.backup

~/workspace/MonitoringStack/Terraform$ pwd
/var/jenkins_home/workspace/MonitoringStack/Terraform

This is not desirable in a production environment, though. You’d need to store the state file in a secure remote location accessible only to your team, such as a private S3 bucket or an Azure Blob Storage container with encryption at rest. Or you could opt for an IaC tool such as env0 that takes care of storing and managing state files securely.
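For illustration, here is a hedged sketch of what an Azure Storage backend block could look like in main.tf. All the names below are hypothetical placeholders for resources you would create yourself:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"       # hypothetical
    storage_account_name = "tfstatedemo01"    # hypothetical; must be globally unique
    container_name       = "tfstate"
    key                  = "monitoring.terraform.tfstate"
  }
}
```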

Ansible Configuration Files

How about we take a look at the Ansible configuration files? They are located inside the Ansible directory and the contents are:

.
├── ansible.cfg
└── appPlaybook.yaml

ansible.cfg

This file contains configuration settings that influence Ansible's behavior. These settings are grouped into sections, and the [defaults] section is what we're focusing on here.

# Make sure this directory is not world-writable for the below to take effect
[defaults]
host_key_checking = False

By setting [.code]host_key_checking = False[.code], we're telling Ansible not to check the SSH host key when connecting to remote machines. Normally, SSH checks the host key to enhance security, but this can get annoying in environments where host keys are expected to change, or where we aren't super concerned about man-in-the-middle attacks.

Remember, though, disabling this check can make our setup less secure. It's like saying, "Yeah, I trust you," without asking for an ID. We are fine with this here in the context of our demo.
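As an aside, the same effect can be achieved with an environment variable, which is handy in a pipeline where you may not control the config file or its permissions:

```shell
# Equivalent to host_key_checking = False in ansible.cfg
export ANSIBLE_HOST_KEY_CHECKING=False
```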

appPlaybook.yaml

Let's get down to breaking apart our appPlaybook.yaml Ansible playbook. It's designed to set up a monitoring stack with Prometheus and Grafana on our Azure VM.

The Overview

- hosts: all
  become_user: root
  become: true
  tasks:

We're targeting all hosts ([.code]hosts: all[.code]). Also, we're elevating our permissions to root with [.code]become: true[.code] and [.code]become_user: root[.code]. Now, let's jump into the tasks.

Installing pip3 and unzip

- name: Install pip3 and unzip
  apt:
    update_cache: yes
    pkg:
    - python3-pip
    - unzip

We're kicking things off by installing pip3 and unzip. We also update the cache.

Adding Docker GPG apt Key

- name: Add Docker GPG apt Key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

Before installing Docker, we're adding its GPG key for package verification. Standard security practices.

Adding Docker Repository

- name: Add Docker Repository
  apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu jammy stable
    state: present

Next, we're adding Docker's apt repository to our sources list. This allows us to install Docker directly from its official source.

Installing docker-ce

- name: Update apt and install docker-ce
  apt:
    name: docker-ce
    state: latest
    update_cache: true

Time to install Docker! We're installing the latest version of docker-ce.

Installing Docker Python module

- name: Install Docker module for Python
  pip:
    name: docker

We're also installing the Docker Python module so we can interface with Docker using Python scripts.

Creating and Setting up Docker Compose

- name: Create Docker Compose directory
  file:
    path: /opt/docker-compose
    state: directory
- name: Copy Docker Compose file
  copy:
    content: |
      version: '3'
      services:
        prometheus:
          image: prom/prometheus
          ports:
            - "9090:9090"
        grafana:
          image: grafana/grafana
          environment:
            - GF_SECURITY_ADMIN_PASSWORD=admin
          ports:
            - "3000:3000"
    dest: /opt/docker-compose/docker-compose.yml

We're creating a directory for our Docker Compose file and then copying the Compose config into it. The config sets up Prometheus on port 9090 and Grafana on port 3000.

Running Docker Compose

- name: Run Docker Compose
  command: docker compose up -d
  args:
    chdir: /opt/docker-compose

Finally, the grand finale! We're running [.code]docker compose up -d[.code] in the directory where our Docker Compose file resides, bringing up our monitoring stack.

Conclusion

In this exploration of using Jenkins to manage Terraform, we've witnessed the flexibility of Jenkins in the realm of CI/CD automation, where it stands tall as a popular choice. Through a Jenkins pipeline, we seamlessly orchestrated the provisioning of an Azure VM and its configuration to host our monitoring stack. 

As seen in our hands-on example, it adapts well to managing IaC with Terraform and Ansible but does fall short in some key ways.

Using Jenkins to manage Terraform or your IaC is tempting given the pros mentioned in this article; however, it's essential to note that Jenkins isn’t purpose-built for IaC management. 

Adopting a purpose-built IaC platform such as env0 offers the flexibility you get with Jenkins plus a feature-rich IaC platform out of the box, making IaC management a breeze. 

Check out env0 - A Terraform Cloud Alternative, an article I wrote on using Terraform with Ansible on env0, to see how env0 can run a workflow similar to the one in this demo with added features.

Explore the vast array of features env0 offers for IaC management. Below is a small sample:

  • Automated deployment workflows: env0 automates your Terraform, Terragrunt, AWS CloudFormation, and other Infrastructure as Code tools.
  • Governance and policy enforcement: Enforce Infrastructure as Code best practices and governance with approval workflows, full and granular RBAC, and multi-layer Infrastructure as Code variable management.
  • Multi-cloud infrastructure support: env0 supports multi-cloud infrastructure, enabling you to manage cloud deployments and IaC alongside existing application development pipelines.
  • Cost optimization: env0 provides cost management features that help you optimize your cloud deployments.
  • Team collaboration: env0 enhances team collaboration by providing end-to-end IaC visibility, audit logs, and exportable IaC run logs to your logging platform of choice.

Note: Future releases of Terraform come under the BUSL license, while everything up to version 1.5.x remains open source. OpenTofu is a free variant of Terraform that builds upon its existing principles and offerings. Originating from Terraform version 1.5.6, it stands as a robust alternative to HashiCorp's Terraform.

To learn more about Terraform, check out this Terraform tutorial.

Using Jenkins for IaC Management

Why would someone want to use a CI/CD tool like Jenkins in place of an IaC-specific tool? The appeal of Jenkins for IaC management stems from its capability to automate and structure deployment workflows, not just continuous integration builds. Let's dig deeper.

Why Consider Jenkins for IaC

From an architect's perspective, Jenkins might be considered for IaC management in scenarios where there's already heavy reliance on Jenkins for CI/CD, and the organization has developed competency in Jenkins pipeline scripting. 

The decision to use Jenkins for IaC might also be influenced by budget constraints, as Jenkins is an open-source tool, and the cost of specialized IaC platforms can be prohibitive for smaller organizations.

However, when an organization's size and complexity grow, the hidden costs of maintaining and securing Jenkins, along with creating homebrewed code for scalability, can outweigh the initial savings.

Dedicated IaC Tools vs. Jenkins

Dedicated IaC tools such as env0 are built specifically for infrastructure automation. They provide out-of-the-box functionality for state management, secrets management, policy as code, and more, which are essential for IaC but are not native to Jenkins. These tools are designed to operate at scale, with less overhead for setup and maintenance.

Choosing Between the Two

An architect or a team lead must consider the trade-offs. If the organization's priority is to have a unified tool for CI/CD and IaC with a strong in-house Jenkins skillset, Jenkins could be a viable option. 

However, for larger organizations, governance, security, cost management, and scalability considerations could make dedicated IaC tools the more valuable long-term option. 

In the end, the decision hinges on the organization's current tooling, expertise, and the complexity of the infrastructure they manage. 

Here is a table to quickly summarize the pros and cons of this decision.

Jenkins Pros and Cons

| | Pros | Cons |
| --- | --- | --- |
| Integration | Extensive plugin ecosystem for adaptability. | Requires plugins for IaC features, adding complexity and maintenance overhead. |
| Community Support | Large community and extensive documentation. | Community support varies by plugin, which can lead to challenges in troubleshooting. |
| Governance | Basic governance through plugins and scripting. | Lacks advanced governance features such as RBAC or policy as code. |
| Security | Basic security features are available and can be extended with plugins. | Setting comprehensive security guardrails is complex and error-prone. |
| Scalability | Can handle scale with the right setup and infrastructure. | Cumbersome and resource-intensive to manage as organizational complexity grows. |
| State Management | Can manage state with additional plugins and custom storage solutions. | Does not handle state management natively, and custom setups add risk and complexity. |
| IaC Features | Can be extended to support various IaC tasks with the right plugins and scripts. | Lacks built-in IaC-specific features like drift detection and automatic planning. |
| Cost | Open-source with no initial cost, which can be cost-effective for smaller teams. | Hidden costs of maintenance, scaling, and potential downtime can become significant. |
| Learning Curve | If the team is already proficient with Jenkins, the learning curve is minimized. | Steep learning curve for IaC setup; requires in-depth knowledge of Jenkins and associated plugins. |

Setting Up the Jenkins Job for Managing IaC

Setting up Jenkins for managing Terraform involves a series of steps. In the next few sections, you will learn how to go about:

  • Creating a Jenkins server in a Docker container
  • Initializing and setting up Jenkins
  • Creating Azure credentials for Terraform to access Azure
  • Writing the Jenkins pipeline script (Jenkinsfile)
  • Writing the Terraform configuration
  • Writing the Ansible configuration

The goal is to create a VM in Azure that hosts the Prometheus and Grafana monitoring stack. This VM will run Docker and Docker Compose to stand up these services.

Jenkins Server Installation Process

First off, let's take a look at how to install Jenkins in a Docker container.

Create the Jenkins Docker Container

Below is the Dockerfile for the Jenkins container. Notice how we install Terraform along with Ansible inside our container. For simplicity in this demo, the Jenkins server also acts as a Jenkins worker node.

Building a scalable Jenkins architecture demands careful consideration and significant technical expertise, which can be a daunting challenge for teams whose primary goal is to implement an IaC solution efficiently. In contrast, a dedicated IaC platform like env0 removes the burden of managing the intricacies of the tool itself. env0 simplifies the process with a guided setup that abstracts the underlying complexities, allowing teams to focus on infrastructure management rather than tool configuration.

In a production Jenkins environment, you should install the Terraform binary along with Ansible and any other necessary binaries on the Jenkins nodes. Notice that we don't make use of the Terraform plugin in our demo.

FROM jenkins/jenkins:lts

# Define arguments for Terraform and Ansible versions
ARG TF_VERSION=1.5.5
ARG ANSIBLE_VERSION=8.5.0

USER root

# Install necessary tools like wget and unzip before downloading Terraform
RUN apt-get update && \
    apt-get install -y wget unzip python3-venv && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get clean

# Use the TF_VERSION argument to download and install the specified version of Terraform
RUN wget https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip && \
    unzip terraform_${TF_VERSION}_linux_amd64.zip && \
    mv terraform /usr/local/bin && \
    rm terraform_${TF_VERSION}_linux_amd64.zip

# Create a virtual environment for Python and activate it
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Use the ANSIBLE_VERSION argument to install the specified version of Ansible within the virtual environment
RUN pip install --upgrade pip cffi && \
    pip install ansible==${ANSIBLE_VERSION} && \
    pip install mitogen ansible-lint jmespath && \
    pip install --upgrade pywinrm

# Drop back to the regular jenkins user - good practice
USER jenkins

Build the Docker Image

Now we can build our custom Jenkins image using the following command (run from the directory containing the Dockerfile):

docker build -t samgabrail/jenkins-terraform-docker .

Next, let's run the Docker container with:

docker run --name jenkins-terraform -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 samgabrail/jenkins-terraform-docker:latest

Configure Jenkins

Once the container is running, we can access the Jenkins UI at http://localhost:8080.

Notice that a password has been written to the log and inside our container at the following location: /var/jenkins_home/secrets/initialAdminPassword

You can access this password inside our container by running this command:

docker exec -it jenkins-terraform cat /var/jenkins_home/secrets/initialAdminPassword

Then, install the suggested plugins:

Next, create the first admin user.

Finally, keep the Jenkins URL as is, which should be http://localhost:8080, then save and finish the setup.

Create a Jenkins Job

From the main Jenkins page, click on the New Item button. Then give the pipeline a name and select the Pipeline project type.

Now, fill out some details for this Pipeline job. You can enable Poll SCM to configure Jenkins to poll GitHub regularly. The schedule follows cron syntax; Jenkins' `H` token hashes the job name to spread polling load over the interval. For example, to poll every two minutes, use `H/2 * * * *`. Under "Pipeline", choose Pipeline script from SCM, Git for SCM, our Repository URL, and the main branch.

Finally, add the Jenkinsfile path as Jenkins/Jenkinsfile and click save.

Run the Jenkins Build

Let's go ahead and run our first build. Click the “▷ Build Now” button.

You should get an error in Stage View as shown below:

This is because we still haven't added the necessary credentials for Terraform to access Azure. Let's do that next.

Create Azure Credentials

Follow the guide Azure Provider: Authenticating using a Service Principal with a Client Secret to create an application and service principal. You will need the following for Terraform to access Azure:

  • `client_id`
  • `client_secret`
  • `tenant_id`
  • `subscription_id`

Now, let's add these credentials inside of Jenkins.

First, go to the "Dashboard > ⚙ Manage Jenkins > Credentials":

Go into the "System" store under the global domain and create five new credentials: four of type "Secret text" for the Azure values, and one of type "SSH Username with private key" so Ansible can access our VM over SSH. This is what you should end up with:

Re-Run the Jenkins Pipeline

Now, let's go back to our pipeline and re-run it. Notice that this time you get a "Build with Parameters" option. Click on it and choose the "apply" action. Your build should succeed this time. It will go through the five stages we defined in our Jenkinsfile:

  1. Terraform (run Terraform to provision the VM in Azure)
  2. Delay of 2 minutes (wait for the VM's SSH service to come up)
  3. Ansible (run ansible to configure the VM with Docker and start the monitoring services)
  4. Output URLs
  5. Archive URLs

RBAC Discussion

When we consider the Jenkins pipeline that I've demonstrated, it's crucial to understand that this setup operates in a controlled demo environment. In real-world applications, especially within an organizational context, the need for robust Role-Based Access Control (RBAC) becomes significantly more important.

Why RBAC Matters

RBAC is central to maintaining security and operational integrity. It determines who has permission to execute, modify, or approve changes in the pipeline, which is critical in preventing unauthorized modifications and ensuring that infrastructure changes are peer-reviewed. This is not just about security; it's about stability and reliability. Without stringent RBAC, you risk having too many cooks in the kitchen, which can lead to configuration drift, security vulnerabilities, and operational chaos.

Jenkins and RBAC

In Jenkins, implementing RBAC can be somewhat manual and often necessitates additional plugins. For instance, the Jenkins pipeline as configured for the demo does not inherently provide a detailed RBAC system. It can be tailored to do so, but this requires a deep dive into Jenkins' access control mechanisms and perhaps a reliance on the Role Strategy Plugin or similar to ensure that only authorized personnel can execute critical pipeline stages.

env0 and RBAC

On the other hand, env0 offers a far more comprehensive and out-of-the-box RBAC solution. With env0, you can easily define who can trigger deployments, who can approve them, and who can manage the infrastructure. This granular level of control extends across all organizational layers, from team to project to environment, and integrates smoothly with SSO providers for streamlined user and group management.

In an environment where infrastructure as code (IaC) is no longer just a convenience but a necessity, env0’s RBAC system offers a more secure and controlled workflow, ensuring that every change is accounted for and authorized. This mitigates the risk of errors or breaches, which can have significant implications in a production environment.

Final Output

You can find the public URL for Grafana and Prometheus either by going into the Jenkins console logs or by checking the Jenkins artifacts for the urls.txt file. Here is the console output:

Here is the content of the urls.txt file:

Grafana URL: http://172.190.218.146:3000
Prometheus URL: http://172.190.218.146:9090

Jenkins Pipeline Configuration

Let's dive into the Jenkinsfile – essentially our pipeline script – that orchestrates a monitoring setup using Terraform and Ansible. This is the conductor of our DevOps orchestra, tying everything together.

The Basics

pipeline {
    agent any
    parameters {
        choice(name: 'ACTION', choices: ['apply', 'destroy'], description: 'What action should Terraform take?')
    }

Here, we kick off the Jenkins pipeline using any available Jenkins agent. We also set up a choice parameter for our Terraform actions. We can either apply to set things up or destroy to tear them down.

Environment Variables

environment {
        ARM_CLIENT_ID = credentials('azure-client-id')
        ARM_CLIENT_SECRET = credentials('azure-client-secret')
        ARM_SUBSCRIPTION_ID = credentials('azure-subscription-id')
        ARM_TENANT_ID = credentials('azure-tenant-id')
    }

We're loading some Azure credentials into environment variables. This is to allow Terraform to access Azure securely. Recall how we stored these credentials in the Jenkins server's secure store.

Terraform Stage

stages {
    stage('Terraform') {
        steps {
            script {
                dir('Terraform') {
                    sh 'terraform init'
                    sh 'terraform validate'
                    sh "terraform ${params.ACTION} -auto-approve"
                    if (params.ACTION == 'apply') {
                        def ip_address = sh(script: 'terraform output -raw public_ip', returnStdout: true).trim()
                        writeFile file: '../Ansible/inventory', text: "monitoring-server ansible_host=${ip_address}"
                    }
                }
            }
        }
    }

In this stage, we run the core Terraform commands inside the Terraform directory. We initialize with `terraform init`, validate with `terraform validate`, and then apply or destroy based on our parameter. If we're applying, we also fetch the VM's public IP and write it into the Ansible inventory. Neat, right?

Delay Stage

stage('Delay') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        script {
            echo 'Waiting for SSH agent...'
            sleep 120 // waits for 120 seconds before continuing
        }
    }

I've added a delay stage here to give the SSH service inside the VM time to start; otherwise, Ansible would try to connect to the VM and fail.
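A fixed sleep works for a demo, but a more robust approach is to poll the SSH port until it accepts connections. Here is a minimal sketch in Python (the function name and usage are my own, not from the repo); a Jenkinsfile could invoke something like this via an `sh` step or a small helper script:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 180.0) -> bool:
    """Poll host:port until a TCP connection succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a TCP connection with a short per-attempt timeout
            with socket.create_connection((host, port), timeout=5):
                return True  # port is accepting connections
        except OSError:
            time.sleep(2)  # not up yet; back off briefly and retry
    return False

# Example: wait up to 3 minutes for SSH on the freshly provisioned VM
# ready = wait_for_port(ip_address, 22)
```

This fails fast when the VM never comes up, instead of always burning a fixed two minutes.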

Ansible Stage

If we selected [.code]apply[.code] in our Terraform stage, this stage will run our Ansible playbook to configure the server. We do this securely using SSH credentials that we stored earlier in the Jenkins secure store. This means we allow Ansible to SSH into our Azure VM using the private key stored in Jenkins.
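The Ansible stage itself isn't shown above, so here is a sketch of what it could look like, assuming the SSH Agent plugin is installed and the private key is stored under a credential ID such as 'ansible-ssh-key' (both the credential ID and the exact commands are illustrative, not taken from the repo):

```groovy
stage('Ansible') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        script {
            dir('Ansible') {
                // Load the private key from the Jenkins credential store,
                // then run the playbook against the inventory file written
                // by the Terraform stage
                sshagent(credentials: ['ansible-ssh-key']) {
                    sh 'ansible-playbook -i inventory -u azureuser appPlaybook.yaml'
                }
            }
        }
    }
}
```

The `azureuser` login matches the `admin_username` set in the Terraform configuration.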

Output URLs Stage

stage('Output URLs') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        script {
            def ip_address = sh(script: 'terraform -chdir=Terraform output -raw public_ip', returnStdout: true).trim()
            def grafana_url = "http://${ip_address}:3000"
            def prometheus_url = "http://${ip_address}:9090"
            echo "Grafana URL: ${grafana_url}"
            echo "Prometheus URL: ${prometheus_url}"
            writeFile file: 'urls.txt', text: "Grafana URL: ${grafana_url}\nPrometheus URL: ${prometheus_url}"
        }
    }
}

This stage outputs the Grafana and Prometheus URLs in the console logs. It also writes them to a text file so we can store them as an artifact.

Archive URLs Stage

stage('Archive URLs') {
    when {
        expression { params.ACTION == 'apply' }
    }
    steps {
        archiveArtifacts artifacts: 'urls.txt', onlyIfSuccessful: true
    }
}

Finally, we archive the URLs text file as an artifact to reference later.

Terraform Configuration Files

Now that we have Jenkins running, we can configure Terraform. All the Terraform configuration files are found in the Terraform folder in our repo. Below are the contents.

.
├── id_rsa.pub
├── main.tf
└── networking.tf

Main.tf

The Basics

The main.tf Terraform file has all the Terraform code to create resources on Azure, like a resource group and a Linux VM. It's broken down into different sections: terraform, provider, resource, and output.

The Terraform Block

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "3.77.0"
    }
  }
}

Here, we specify what providers are required. For this script, we are using the AzureRM provider and locking it down to version 3.77.0.

The Provider Block

provider "azurerm" {
  features {}
}

This block initializes the Azure provider. The empty `features {}` block is required, even though we don't need to put anything in it.

The Resource Group

resource "azurerm_resource_group" "rg" {
  name     = "MonitoringResources"
  location = "East US"
}

This part creates an Azure Resource Group called `MonitoringResources` in the `East US` location.

The Linux Virtual Machine

This is the meat of the script!

resource "azurerm_linux_virtual_machine" "vm" {
  name                = "MonitoringVM"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  size                = "Standard_B2s"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }

  computer_name  = "monitoringvm"
  admin_username = "azureuser"

  // Add a public key to the same folder as the main.tf script
  // (Ansible uses the matching private key, stored in Jenkins, to access the VM)
  admin_ssh_key {
    username   = "azureuser"
    public_key = file("id_rsa.pub")
  }

  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  disable_password_authentication = true
}

Here, we're spinning up a Linux VM with the name "MonitoringVM". The VM will reside in the same resource group and location as specified earlier. We're setting it to a Standard_B2s size, which is a decent balance of CPU and memory.

Notice the `admin_ssh_key` block? It uses the public key from the file id_rsa.pub. This is super important for the secure SSH access that Ansible needs.

The Output Block

output "public_ip" {
  value = azurerm_public_ip.pip.ip_address
}

This output block just spits out the public IP of the VM once it's up. We will need this to update our Ansible inventory file.

Networking.tf

Alright, let's dive into the Terraform code in the networking.tf file! This Terraform script is all about laying down the networking groundwork for our Azure setup. It defines how our virtual network, subnet, and security rules come together.

Virtual Network (azurerm_virtual_network)

resource "azurerm_virtual_network" "vnet" {
  name                = "MonitoringVNet"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.0.0.0/16"]
}

We're setting up a Virtual Network (VNet) named "MonitoringVNet". This VNet is where our resources like VMs will reside. The `address_space` is set to 10.0.0.0/16, giving us a nice, roomy network to play with.

Subnet (azurerm_subnet)

resource "azurerm_subnet" "subnet" {
  name                 = "default"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

Within that VNet, we're carving out a subnet. The `address_prefixes` is set to 10.0.1.0/24, so all our resources within this subnet will have an IP in this range.
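To make the addressing concrete, Python's standard `ipaddress` module can confirm that the subnet nests inside the VNet and show how many addresses each range holds (a quick sanity check, not part of the repo):

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")    # the VNet address space
subnet = ipaddress.ip_network("10.0.1.0/24")  # the subnet carved out of it

print(subnet.subnet_of(vnet))   # True: the subnet fits inside the VNet
print(vnet.num_addresses)       # 65536 addresses in the /16
print(subnet.num_addresses)     # 256 addresses in the /24
```

Plenty of room to add more subnets later without touching the VNet.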

Network Interface (azurerm_network_interface)

resource "azurerm_network_interface" "nic" {
  name                = "MonitoringNIC"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.pip.id
  }
}

Here, we're creating a network interface card (NIC) named "MonitoringNIC". This NIC is what connects our VM to the subnet. We're dynamically assigning a private IP address here.

Public IP (azurerm_public_ip)

resource "azurerm_public_ip" "pip" {
  name                = "MonitoringPublicIP"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  allocation_method   = "Dynamic"
}

We're also setting up a public IP address with dynamic allocation. We'll use this to access our monitoring VM.

Network Security Group (azurerm_network_security_group)

resource "azurerm_network_security_group" "nsg" {
  name                = "MonitoringNSG"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

Let's not forget about security! Next, we create a Network Security Group (NSG) and define inbound rules for Grafana, Prometheus, and SSH, specifying which ports should be open.

resource "azurerm_network_security_rule" "rule" {
  for_each = {
    grafana    = { priority = 1001, port = 3000 }
    prometheus = { priority = 1002, port = 9090 }
    ssh        = { priority = 1003, port = 22 }
  }

  name                        = "${upper(each.key)}Rule"
  priority                    = each.value.priority
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = each.value.port
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.rg.name
  network_security_group_name = azurerm_network_security_group.nsg.name
}
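The `for_each` above expands into three separate rule resources. Conceptually, the expansion is just iterating a map, which we can mimic in Python to see the resulting rule names and priorities:

```python
# Mirror of the Terraform for_each map: one NSG rule per service
rules = {
    "grafana":    {"priority": 1001, "port": 3000},
    "prometheus": {"priority": 1002, "port": 9090},
    "ssh":        {"priority": 1003, "port": 22},
}

expanded = [
    {"name": f"{key.upper()}Rule", **value}  # mirrors "${upper(each.key)}Rule"
    for key, value in rules.items()
]

for rule in expanded:
    print(rule["name"], rule["priority"], rule["port"])
# GRAFANARule 1001 3000
# PROMETHEUSRule 1002 9090
# SSHRule 1003 22
```

Adding another exposed service is then just one more entry in the map.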

NSG Association 

Finally, we're associating the NSG with our subnet. This means the rules we defined in the NSG will apply to all resources in this subnet.

resource "azurerm_subnet_network_security_group_association" "subnet_nsg_association" {
  subnet_id                 = azurerm_subnet.subnet.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

id_rsa.pub

This file is used for SSH access: it's the public key matching the private SSH key that Ansible uses to SSH into the VM for configuration management. Terraform injects this public key into the Azure VM. You can generate such a key pair with `ssh-keygen -t rsa -f id_rsa`, committing only the public half; the private key belongs in the Jenkins credential store, never in the repo.

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb7fcDZfIG+SxuP5UsZaoHPdh9MNxtEL5xRI71hzMS5h4SsZiPGEP4shLcF9YxSncdOJpyOJ6OgumNSFWj2pCd/kqg9wQzk/E1o+FRMbWX5gX8xMzPig8mmKkW5szhnP+yYYYuGUqvTAKX4ua1mQwL6PipWKYJ1huJhgpGHrvSQ6kuywJ23hw4klcaiZKXVYtvTi8pqZHhE5Kx1237a/6GRwnbGLEp0UR2Q/KPf6yRgZIrCdD+AtOznSBsBhf5vqcfnnwEIC/DOnqcOTahBVtFhOKuPSv3bUikAD4Vw7SIRteMltUVkd/O341fx+diKOBY7a8M6pn81HEZEmGsr7rT sam@SamMac.local

Terraform State

It's important to note that the Terraform state file (terraform.tfstate) gets stored in the workspace folder in the running Jenkins worker node.

~/workspace/MonitoringStack/Terraform$ ls -lah
total 72K
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 31 20:20 .
drwxr-xr-x 8 jenkins jenkins 4.0K Oct 31 18:31 ..
drwxr-xr-x 3 jenkins jenkins 4.0K Oct 31 18:23 .terraform
-rw-r--r-- 1 jenkins jenkins 1.2K Oct 31 18:15 .terraform.lock.hcl
-rw-r--r-- 1 jenkins jenkins  397 Oct 31 18:15 id_rsa.pub
-rw-r--r-- 1 jenkins jenkins 1.2K Oct 31 18:15 main.tf
-rw-r--r-- 1 jenkins jenkins 2.3K Oct 31 18:15 networking.tf
-rw-r--r-- 1 jenkins jenkins  22K Oct 31 20:20 terraform.tfstate
-rw-r--r-- 1 jenkins jenkins  20K Oct 31 18:24 terraform.tfstate.backup

~/workspace/MonitoringStack/Terraform$ pwd
/var/jenkins_home/workspace/MonitoringStack/Terraform

This is not desirable in a production environment, though. You’d need to store the state file in a secure remote location accessible only to your team. This can be a private S3 bucket or Azure blob that encrypts the content. Or you could opt to use an IaC tool such as env0 that takes care of storing and managing state files securely.
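For reference, moving the state to an Azure Storage backend is a small change to the `terraform` block. This is a sketch with hypothetical resource names; the resource group, storage account, and container must exist before `terraform init` is run:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"        # hypothetical, pre-created
    storage_account_name = "tfstatestorage001" # hypothetical, must be globally unique
    container_name       = "tfstate"
    key                  = "monitoring.terraform.tfstate"
  }
}
```

With this in place, the state no longer lives on the Jenkins worker's disk, and Azure Blob Storage handles encryption at rest and locking.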

Ansible Configuration Files

How about we take a look at the Ansible configuration files? They are located inside the Ansible directory and the contents are:

.
├── ansible.cfg
├── appPlaybook.yaml

ansible.cfg

The file contains configuration settings that influence Ansible's behavior. These settings are grouped into sections, and the [defaults] section is what we're focusing on here.

# Make sure that this directory is not world-wide writable for the below to take effect
[defaults]
host_key_checking = False

By setting `host_key_checking = False`, we're telling Ansible not to check the SSH host key when connecting to remote machines. Normally, SSH checks the host key to enhance security, but this can get annoying in environments where host keys are expected to change, or where we aren't super concerned about man-in-the-middle attacks.

Remember, though, disabling this check can make our setup less secure. It's like saying, "Yeah, I trust you," without asking for an ID. We are fine with this here in the context of our demo.

appPlaybook.yaml

Let's get down to breaking apart our appPlaybook.yaml Ansible playbook. It's designed to set up a monitoring stack with Prometheus and Grafana on our Azure VM.

The Overview

- hosts: all
  become_user: root
  become: true
  tasks:

We're targeting all hosts (`hosts: all`). Also, we're elevating our permissions to root with `become: true` and `become_user: root`. Now, let's jump into the tasks.

Installing pip3 and unzip

- name: Install pip3 and unzip
  apt:
    update_cache: yes
    pkg:
    - python3-pip
    - unzip

We're kicking things off by installing pip3 and unzip. We also update the cache.

Adding Docker GPG apt Key

- name: Add Docker GPG apt Key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

Before installing Docker, we're adding its GPG key for package verification. Standard security practices.

Adding Docker Repository

- name: Add Docker Repository
  apt_repository:
    repo: deb https://download.docker.com/linux/ubuntu jammy stable
    state: present

Next, we're adding Docker's apt repository to our sources list. This allows us to install Docker directly from its official source.
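One caveat: the `apt_key` module is deprecated in newer Ansible releases because the underlying `apt-key` tool is deprecated on Debian/Ubuntu. A more future-proof variant (a sketch, not what the repo uses) downloads the key to a keyring file and pins the repository to it with `signed-by`:

```yaml
- name: Ensure the apt keyrings directory exists
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"

- name: Download the Docker GPG key to a dedicated keyring file
  get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"

- name: Add the Docker repository pinned to that keyring
  apt_repository:
    repo: deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu jammy stable
    state: present
```

The demo's `apt_key` approach still works on Ubuntu 22.04, so either form is fine here.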

Installing docker-ce

- name: Update apt and install docker-ce
  apt:
    name: docker-ce
    state: latest
    update_cache: true

Time to install Docker! We're installing the latest version of docker-ce. If the `docker compose` command isn't available after this step, install the docker-compose-plugin package as well; the later `docker compose up` task depends on it.

Installing Docker Python module

- name: Install Docker module for Python
  pip:
    name: docker

We're also installing the Docker Python module so we can interface with Docker using Python scripts.

Creating and Setting up Docker Compose

- name: Create Docker Compose directory
  file:
    path: /opt/docker-compose
    state: directory
- name: Copy Docker Compose file
  copy:
    content: |
      version: '3'
      services:
        prometheus:
          image: prom/prometheus
          ports:
            - "9090:9090"
        grafana:
          image: grafana/grafana
          environment:
            - GF_SECURITY_ADMIN_PASSWORD=admin
          ports:
            - "3000:3000"
    dest: /opt/docker-compose/docker-compose.yml

We're creating a directory for our Docker Compose file and then copying the Compose config into it. The config sets up Prometheus on port 9090 and Grafana on port 3000.

Running Docker Compose

- name: Run Docker Compose
  command: docker compose up -d
  args:
    chdir: /opt/docker-compose

The grand finale! We're running `docker compose up -d` in the directory where our Docker Compose file resides, bringing up our monitoring stack.

Conclusion

In this exploration of using Jenkins to manage Terraform, we've witnessed the flexibility of Jenkins in the realm of continuous development automation. It stands tall as a popular choice for CI/CD. Through a Jenkins pipeline, we seamlessly orchestrated the provisioning of an Azure VM and its configuration to host our monitoring stack. 

As seen in our hands-on example, it adapts well to managing IaC with Terraform and Ansible but does fall short in some key ways.

Using Jenkins to manage Terraform or your IaC is tempting given the pros mentioned in this article; however, it's essential to note that Jenkins isn't purpose-built for IaC management. 

Adopting a purpose-built IaC platform such as env0 offers the flexibility you get with Jenkins plus a feature-rich IaC platform out of the box, making IaC management a breeze. 

Check out env0 - A Terraform Cloud Alternative, an article I wrote on using Terraform with Ansible in env0, to see how env0 can run a workflow similar to the one in this demo with added features.

Explore the vast array of features env0 offers for IaC management. Below is a small sample:

  • Automated deployment workflows: env0 automates your Terraform, Terragrunt, AWS CloudFormation, and other Infrastructure as Code tools.
  • Governance and policy enforcement: Enforce Infrastructure as Code best practices and governance with approval workflows, full and granular RBAC, and multi-layer Infrastructure as Code variable management.
  • Multi-cloud infrastructure support: env0 supports multi-cloud infrastructure, enabling you to manage cloud deployments and IaC alongside existing application development pipelines.
  • Cost optimization: env0 provides cost management features that help you optimize your cloud deployments.
  • Team collaboration: env0 enhances team collaboration by providing end-to-end IaC visibility, audit logs, and exportable IaC run logs to your logging platform of choice.

Note: Future releases of Terraform will come under the BUSL license, while everything developed before version 1.5.x remains open-source. OpenTofu is a free variant of Terraform that builds upon its existing principles and offerings. Originating from Terraform version 1.5.6, it stands as a robust alternative to HashiCorp's Terraform.

To learn more about Terraform, check out this Terraform tutorial.
