

Last updated: April 2026
If you've been managing cloud infrastructure by clicking through the Azure portal, you already know where that leads: a configuration nobody can reproduce, a sprawl of resources with unclear ownership, and a support ticket whenever someone needs a new environment. Infrastructure as Code (IaC) exists to solve that problem, and this Terraform tutorial is the fastest path to getting it working.
This guide walks through everything you need to get started with Terraform on Azure, from installation to remote state management, including how OpenTofu fits into the picture in 2026. By the end, you'll have a working configuration, a clear mental model of the Terraform workflow, and a foundation to build on.
At a glance: Terraform is an open source IaC tool from HashiCorp that lets you define, provision, and manage cloud infrastructure using declarative configuration files. Current stable version: v1.14.8 (released March 25, 2026). OpenTofu is the community-maintained open source fork of Terraform, currently at v1.11.6 (released April 8, 2026). Both tools share the same HCL syntax and Azure provider, making the workflow in this tutorial applicable to either.
What you'll need:
- An Azure account (free tier works for this tutorial)
- The Azure CLI installed
- A terminal (macOS/Linux) or PowerShell (Windows)
- No prior Terraform experience required
What is Terraform?
Terraform lets you describe the infrastructure you want (virtual machines, networks, databases, DNS records) in configuration files, then applies those files to make reality match the description. Add a resource to the config, run terraform apply, and it appears. Remove it, apply again, and it's gone. The tool figures out what needs to change; you don't have to.
This model, called declarative IaC, is different from writing shell scripts that call az create in sequence. Scripts describe how to create infrastructure. Terraform configurations describe what should exist. The distinction matters at scale: a configuration is idempotent by design, meaning you can apply it repeatedly and get the same result.
Terraform works with virtually every cloud provider through a plugin system called providers. The Azure provider (azurerm) gives Terraform API access to create and manage Azure resources. AWS, GCP, Kubernetes, GitHub, and hundreds of other platforms have equivalent providers.
Terraform vs OpenTofu: understanding the fork
In 2023, HashiCorp changed Terraform's license from the Mozilla Public License to the Business Source License (BUSL), restricting certain commercial uses. The OpenTofu project forked Terraform at that point and continues development under the original Mozilla Public License.
For most teams getting started today, the practical differences are small. The HCL syntax is identical. The Azure provider works the same way. The CLI commands are the same, with tofu replacing terraform in your terminal. OpenTofu 1.11 introduced provider-defined functions and ephemeral resources, features that are ahead of what HashiCorp shipped in the equivalent Terraform release.
When to pick which: if your organization has an existing Terraform footprint and active HashiCorp support contracts, staying on Terraform makes sense. If you're starting fresh and want a permissively licensed tool with an active open source community, OpenTofu is worth evaluating. The migration path between the two is straightforward, because the state format is compatible and the provider ecosystem is shared.
Related reading: OpenTofu vs. Terraform: A Practical Guide for Enterprise Infrastructure Teams. Once you know both tools, this post covers what the choice actually means for teams already running production infrastructure.
Terraform vs other IaC tools
| Tool | Approach | Cloud support | Language |
|---|---|---|---|
| Terraform / OpenTofu | Declarative | Multi-cloud | HCL |
| Pulumi | Declarative | Multi-cloud | Python, TypeScript, Go, C# |
| AWS CloudFormation | Declarative | AWS only | JSON / YAML |
| Azure Bicep | Declarative | Azure only | Bicep DSL |
| Ansible | Procedural | Multi-cloud | YAML |
Terraform's strength is breadth: one tool, one workflow, every cloud. That's the reason it became the default for platform engineering teams managing infrastructure across providers. For a broader comparison of IaC tools, including newer entrants, see 14 Best IaC Tools for Cloud Automation in 2026.
Related reading: Terraform Cloud: Benefits, Key Features, and Examples. If you're evaluating where to run Terraform at the team level, this post covers HCP Terraform's capabilities alongside the alternatives.
Installing Terraform and OpenTofu
Terraform v1.14.8 is the latest stable release as of March 25, 2026.
On macOS with Homebrew:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
On Linux (Ubuntu/Debian):
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
On Windows with Chocolatey:
choco install terraform
Verify the installation:
terraform version
# Terraform v1.14.8
OpenTofu v1.11.6 is the latest stable release as of April 8, 2026.
The OpenTofu project provides installers for all major platforms at opentofu.org/docs/intro/install. On macOS:
brew install opentofu
On Windows with winget:
winget install --id=OpenTofu.OpenTofu
Verify:
tofu version
# OpenTofu v1.11.6
From here on, every command in this tutorial works with either terraform or tofu. Substitute the one you installed.
Setting up Azure for Terraform
Terraform talks to Azure through the azurerm provider, which uses the Azure Resource Manager API. Before it can do that, it needs credentials.
The recommended approach for local development is authenticating through the Azure CLI. Install the CLI, log in, and the provider picks up your session automatically:
az login
For CI/CD pipelines and shared team environments, a service principal is the right approach. It gives Terraform a dedicated identity with scoped permissions, separate from any individual's user account. Create one with the Azure CLI:
az ad sp create-for-rbac \
  --name "terraform-tutorial-sp" \
  --role Contributor \
  --scopes /subscriptions/YOUR_SUBSCRIPTION_ID
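On success, the command prints JSON shaped like this (keys as the Azure CLI currently emits them; the values below are placeholders, not real output):

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "terraform-tutorial-sp",
  "password": "<generated-client-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```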
The output includes four values: appId (client ID), password (client secret), tenant, and the subscription ID you passed in. Export them as environment variables before running Terraform:
export ARM_CLIENT_ID="YOUR_APP_ID"
export ARM_CLIENT_SECRET="YOUR_PASSWORD"
export ARM_SUBSCRIPTION_ID="YOUR_SUBSCRIPTION_ID"
export ARM_TENANT_ID="YOUR_TENANT_ID"
Keep these values out of your code. They belong in environment variables or a secrets manager, never in a committed .tf file.
Related reading: How to Use Terraform Providers. Covers provider versioning, authentication patterns for multiple clouds, and how to configure providers for different environments.
Your first Terraform configuration
Terraform configurations live in .tf files. A minimal project has at least two: one that declares the provider and one that defines your resources. For anything beyond a quick test, separating variables and outputs into their own files keeps things manageable.
Start with this project structure:
terraform-azure-demo/
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
Add a .gitignore before your first commit:
# Terraform working directory (provider binaries - large, not portable)
.terraform/
# State files - never commit these
*.tfstate
*.tfstate.backup
# Variable files may contain secrets
*.tfvars
One thing to get right: .terraform.lock.hcl is intentionally absent from this .gitignore. Unlike the .terraform/ directory, the lock file is a small text file that records exact provider checksums for your team and CI runners. Commit it. Ignoring it defeats version pinning.
Related reading: Terraform Files and Folder Structure. Once your project grows past a few resources, file organization matters more than you'd expect.
HCL syntax: blocks, arguments, and expressions
HashiCorp Configuration Language (HCL) is the language Terraform configurations are written in. The syntax has three core constructs.
A block is a container with a type, optional labels, and a body:
resource "azurerm_resource_group" "main" {
  # block body: arguments go here
}
The first label ("azurerm_resource_group") is the resource type. The second ("main") is a local name you choose, used to reference this resource elsewhere in the config.
An argument assigns a value to a name within a block:
name = var.resource_group_name
location = "East US"
An expression produces a value. The simplest expressions are literals ("East US", true, 42). More useful ones reference other resources or variables: azurerm_resource_group.main.location reads the location attribute of the resource group after it's created.
That's the core grammar. Everything else in HCL builds on these three concepts.
The provider block
In main.tf, declare the provider:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }

  required_version = ">= 1.14"
}

provider "azurerm" {
  features {}
}
The required_providers block pins the provider version using a version constraint. The ~> 4.0 constraint allows any 4.x release but blocks a major version upgrade from happening silently. This is a best practice worth following from the start.
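For reference, these are the constraint operators you'll see most often (alternatives, one per line, not a single valid block):

```hcl
version = "~> 4.0"          # any 4.x release: >= 4.0.0, < 5.0.0
version = "~> 4.9.0"        # patch releases only: >= 4.9.0, < 4.10.0
version = ">= 4.0, < 4.20"  # explicit range
version = "4.9.0"           # exact pin
```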
Defining resources
Add a resource group and a storage account to main.tf:
resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_storage_account" "main" {
  name                     = var.storage_account_name
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
Each resource block follows the same shape: the resource keyword, the resource type, a local name you use to reference it elsewhere, and a block of arguments. The storage account references azurerm_resource_group.main.name and .location. These are resource attributes, resolved at apply time from the resource group Terraform just created. Terraform uses these references to build a dependency graph, so it knows to create the resource group before the storage account.
Both account_tier and account_replication_type are required by the azurerm provider. Valid tiers are Standard and Premium; valid replication types include LRS, GRS, ZRS, and GZRS. LRS (locally redundant storage) is appropriate for non-critical workloads and the cheapest option.
Variables and the HCL type system
In variables.tf:
variable "resource_group_name" {
  type        = string
  description = "Name of the Azure resource group"
}

variable "location" {
  type    = string
  default = "East US"
}

variable "storage_account_name" {
  type        = string
  description = "Globally unique storage account name (3-24 lowercase alphanumeric chars)"
}
In terraform.tfvars:
resource_group_name = "tf-tutorial-rg"
location = "East US"
storage_account_name = "tftutorialdemo2026"
Storage account names must be globally unique across all of Azure, not just within your subscription. If tftutorialdemo2026 is already taken, append a short suffix (your initials or a random string). The apply will fail immediately with a clear error if the name isn't available.
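One way to sidestep collisions entirely is to generate the suffix inside the configuration with the hashicorp/random provider (a sketch; you'd also add the provider to your required_providers block):

```hcl
resource "random_string" "suffix" {
  length  = 6
  upper   = false
  special = false
}

# Then, in the existing azurerm_storage_account.main block:
#   name = "tftutorial${random_string.suffix.result}"
```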
The .tfvars file is where you supply actual values. Since you added *.tfvars to .gitignore, you can safely put real resource names here without risking an accidental commit. For a deeper look at how Terraform resolves variable values across files, environment variables, and command-line flags, see the Terraform Variables guide.
Outputs
In outputs.tf:
output "resource_group_id" {
  value = azurerm_resource_group.main.id
}

output "storage_account_primary_endpoint" {
  value = azurerm_storage_account.main.primary_blob_endpoint
}
Outputs surface values from your configuration after apply, useful for passing data to other systems or confirming what was created. primary_blob_endpoint is an exported attribute of azurerm_storage_account, resolved only after the resource exists.
If an output contains a secret value (a connection string, a primary access key), mark it sensitive = true:
output "storage_primary_connection_string" {
  value     = azurerm_storage_account.main.primary_connection_string
  sensitive = true
}
Terraform redacts sensitive outputs from terminal output and plan diffs. The value is still written to state in plaintext, which is another reason the state file needs access controls. Marking outputs sensitive doesn't protect the state file; it just prevents accidental exposure in logs and CI output.
Local values
Local values let you assign a name to an expression and reuse it across the configuration without repeating yourself. Add a locals block to main.tf:
locals {
  common_tags = {
    environment = "tutorial"
    managed_by  = "terraform"
  }
}
Then reference locals with the local.<name> syntax:
resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
  tags     = local.common_tags
}
Locals are evaluated once and cached. They're most useful for computed values you'd otherwise repeat across multiple resources, like a naming prefix or a shared tag map. They can't be overridden from outside the configuration the way variables can, which makes them reliable for internal logic.
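Locals also pair well with built-in functions. For instance, merge() layers resource-specific tags on top of the shared map (an illustrative fragment, not part of the tutorial config):

```hcl
# Inside any resource block that accepts tags:
tags = merge(local.common_tags, {
  purpose = "tutorial-storage" # resource-specific addition
})
```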
Data sources
Data sources let you read information about existing infrastructure without managing it. If a resource was created outside Terraform (a pre-existing virtual network, a shared resource group, or the current Azure subscription), a data source queries it and makes its attributes available to the rest of your config.
data "azurerm_subscription" "current" {}

data "azurerm_resource_group" "existing" {
  name = "shared-infra-rg"
}

resource "azurerm_storage_account" "main" {
  name                     = var.storage_account_name
  resource_group_name      = data.azurerm_resource_group.existing.name
  location                 = data.azurerm_resource_group.existing.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
The data "azurerm_subscription" "current" block requires no arguments; it queries the subscription Terraform is authenticated to. Reference its attributes with data.azurerm_subscription.current.subscription_id. The data "azurerm_resource_group" block reads an existing group by name and exposes its location, tags, and other attributes.
Data sources are how Terraform configurations stay modular. Rather than hardcoding values from resources you don't own, you read them at plan time and let Terraform resolve the dependency.
The Terraform workflow
Four commands cover the full lifecycle for most day-to-day work.
terraform init
terraform init
# or: tofu init
Run this once in a new project directory. It downloads the provider plugins specified in your configuration, sets up the local .terraform directory, and initializes the backend for state storage. Nothing gets created in Azure at this step.
The Terraform init guide covers the flags and behaviors in depth, including how init handles provider version upgrades and what to do when it fails in CI/CD.
terraform fmt
terraform fmt -recursive
# or: tofu fmt -recursive
Formats all .tf files in the current directory (and subdirectories with -recursive) to Terraform's canonical style. Run this before committing. It's non-destructive and idempotent: running it twice produces the same result. Most teams add it as a pre-commit hook or CI check so formatting drift never reaches code review.
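For example, with the pre-commit framework and the community pre-commit-terraform hooks, a minimal config might look like this (the rev pin is illustrative; use the project's latest release):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1 # illustrative pin
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
```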
terraform validate
terraform validate
Validates the configuration files for syntax errors and basic logical consistency. It catches problems like missing required arguments or referencing a variable that doesn't exist, before you waste time on a plan that will fail partway through.
terraform plan
terraform plan
Reads your configuration, queries the current state of your infrastructure, and produces a diff: what will be created, changed, or destroyed. No changes happen yet. Review the plan output carefully, because it's the last point where you can catch a resource deletion you didn't intend.
A common pattern in CI/CD is to save the plan as an artifact:
terraform plan -out=tfplan
Then apply that exact plan in a subsequent step, so what was reviewed is what gets applied. If you're running Terraform in a team environment, env0 automates this: it runs plan on every pull request and posts the diff as a comment directly in the PR, so infrastructure review happens alongside code review rather than in a separate process.
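As a sketch, the save-and-carry-forward pattern in a GitHub Actions job might look like this (action versions are assumptions to verify against the marketplace):

```yaml
- uses: hashicorp/setup-terraform@v3
- run: terraform init -input=false
- run: terraform plan -out=tfplan -input=false
# the saved plan becomes an artifact the apply job downloads and applies verbatim
- uses: actions/upload-artifact@v4
  with:
    name: tfplan
    path: tfplan
```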
terraform apply
terraform apply
# or apply a saved plan:
terraform apply tfplan
Applies the changes described in the plan. Without a saved plan file, Terraform runs a fresh plan and prompts for confirmation before proceeding. After a successful apply, the terminal prints the output values you defined.
terraform output
terraform output
terraform output storage_account_primary_endpoint
Queries output values from the state file without running a plan or apply. Useful when you need to retrieve a value after the fact: passing an endpoint to another tool, or checking what was deployed without re-reading the terminal history.
terraform console
terraform console
Opens an interactive REPL for evaluating HCL expressions against your current state and variables. Type any expression and get the result immediately:
> var.location
"East US"
> azurerm_resource_group.main.id
"/subscriptions/.../resourceGroups/tf-tutorial-rg"
> format("storage-%s", var.resource_group_name)
"storage-tf-tutorial-rg"
This is the fastest way to test string functions, check what a resource attribute resolves to, or debug a complex expression before committing it to a config file. Exit with Ctrl+C or exit.
terraform destroy
terraform destroy
Destroys all resources managed by the current configuration. It produces a destroy plan first and requires explicit confirmation. Use with care in any environment that holds data you care about.
Managing Terraform state
What state is and why it matters
Terraform keeps a record of every resource it manages in a state file, terraform.tfstate. This file is how Terraform knows that the azurerm_resource_group.main block in your config corresponds to an actual resource group in Azure with a specific ID. Without state, every plan would look like a fresh deployment.
State is also how Terraform detects drift. If someone deletes a resource directly in Azure (bypassing Terraform), the next terraform plan will show it as something that needs to be recreated. The Terraform Refresh Command post covers how to reconcile state with reality explicitly.
One security point worth understanding: state files store resource attributes in plaintext, including sensitive values like database passwords, private keys, and connection strings. That's why *.tfstate belongs in .gitignore and state should be stored in a backend with access controls, not on a shared file system. Treat the state file with the same care as the secrets it may contain.
For a complete breakdown of how state works, including how to recover from state corruption, see Terraform State File Explained.
Remote state with Azure Blob Storage
By default, Terraform stores state in a local terraform.tfstate file. That works for solo development, but it breaks down as soon as two people try to apply the same configuration concurrently, or when the state file lives on a laptop that isn't backed up.
Remote state solves both problems. Store state in Azure Blob Storage and every team member reads and writes the same source of truth. Create a storage account dedicated to state (separate from the one in your tutorial config), then add a backend block to main.tf:
terraform {
  backend "azurerm" {
    resource_group_name  = "tf-state-rg"
    storage_account_name = "tfstatedemo2026"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
Run terraform init again after adding this block (terraform init -migrate-state makes the intent explicit). Terraform detects the backend change and prompts you to migrate existing local state to the remote backend.
For production environments, consider authenticating the backend using Microsoft Entra ID rather than storage access keys. The Azure backend configuration docs cover the five available authentication methods.
State locking
When using a remote backend, Terraform acquires a lock before modifying state and releases it when the operation completes. This prevents two concurrent apply runs from corrupting the state file. Azure Blob Storage implements locking through blob leases, with no extra configuration required.
If a Terraform run is interrupted mid-apply, the lock may not be released automatically. To manually clear it:
terraform force-unlock LOCK_ID
The lock ID appears in the error message when a subsequent operation finds the lock still in place. At scale, manually hunting for lock IDs across many workspaces is the kind of toil that adds up fast. env0 surfaces lock status and active deployments in a central dashboard, so the team can see what's running and release stuck locks without touching the CLI.
Remote state plus locking solves the mechanics, but it doesn't surface what's happening across environments. If you need visibility into drift across multiple workspaces, the Ultimate Guide to Terraform Drift Detection covers detection, prevention, and remediation patterns.
Common pitfalls and how to fix them
Hardcoded credentials
The ARM_CLIENT_SECRET in your provider configuration should never be a literal string in main.tf. Use environment variables or a secrets manager (Azure Key Vault, HashiCorp Vault). If credentials land in git history, rotate them immediately, regardless of whether the repository is private. env0 removes this problem entirely at the team level: credentials are stored once in the platform and injected per environment at run time, so developers never handle cloud credentials directly.
Provider version drift
Skipping the version constraint in required_providers means Terraform downloads the latest provider on every new workspace. Provider upgrades occasionally include breaking changes. Pin to a minor version (~> 4.0) and upgrade deliberately, testing in a non-production workspace first.
Conflicting state from concurrent applies
Without a remote backend and state locking, two applies running simultaneously produce a corrupted state file. Move to remote state before anyone other than you runs terraform apply. This is a day-one problem in team environments.
Misreading the plan output
Terraform marks a resource for replacement (destroy + create) whenever a property that can't be updated in-place changes, such as a storage account name. The plan shows this as -/+ rather than ~. Read the plan carefully on anything involving databases or storage; the visual difference between an update and a replacement is easy to miss when you're moving quickly. This is one of the strongest arguments for approval workflows: env0 requires a human sign-off on every plan before apply runs, which makes it much harder for an accidental replacement to reach production unreviewed.
Debugging provider errors
When a plan or apply fails with a vague provider error, the default output often doesn't show enough context. Set TF_LOG to get verbose output:
TF_LOG=DEBUG terraform plan
The debug log includes the full API request and response, which usually makes the actual error obvious: an invalid field value, a missing permission, or a rate limit. Pipe it to a file if the output is too long for the terminal:
TF_LOG=DEBUG terraform plan 2> debug.log
Valid log levels in order of verbosity: ERROR, WARN, INFO, DEBUG, TRACE. Start with DEBUG; use TRACE only if the provider-level logs aren't enough.
Importing existing resources
If Azure resources were created outside of Terraform, through the portal or a script, Terraform doesn't know about them until you import them. Run terraform import to bring an existing resource under management:
terraform import azurerm_resource_group.main \
  /subscriptions/YOUR_SUB_ID/resourceGroups/tf-tutorial-rg
After importing, reconcile the configuration to match the resource's actual state, then run a plan to confirm no unexpected changes are queued.
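On Terraform 1.5+ and OpenTofu, the same import can also be expressed declaratively with an import block, so it flows through the normal plan/apply review:

```hcl
import {
  to = azurerm_resource_group.main
  id = "/subscriptions/YOUR_SUB_ID/resourceGroups/tf-tutorial-rg"
}
```

Running terraform plan -generate-config-out=generated.tf alongside an import block can additionally draft the matching resource configuration for you.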
Azure-managed attribute drift
Azure quietly modifies certain resource attributes after creation: policy assignments add tags, Microsoft Defender for Cloud updates settings, Microsoft Entra ID populates fields your config never set. Every subsequent terraform plan will show a diff you didn't cause and can't eliminate by changing your config.
The fix is the lifecycle block with ignore_changes:
resource "azurerm_resource_group" "main" {
  name     = var.resource_group_name
  location = var.location
  tags     = local.common_tags

  lifecycle {
    ignore_changes = [tags]
  }
}
This tells Terraform to stop tracking changes to tags on this resource. The config still sets the initial tags on creation; it just stops flagging Azure-managed modifications as drift. Use it surgically: setting ignore_changes = all suppresses every drift signal on the resource, which defeats the purpose of having state.
Managing multiple environments
Most teams need at least three environments: development, staging, and production. Two patterns cover this.
Terraform workspaces use a single configuration with separate state files per workspace. Create and switch between them with:
terraform workspace new dev
terraform workspace select dev
terraform workspace list
This works well when environments are structurally identical and differ only in variable values. The downside: there's no structural isolation between workspaces, so a misconfigured terraform destroy in the wrong workspace can hit production.
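Inside the configuration, the current workspace name is available as terraform.workspace, which is how a single config yields distinct per-environment names (an illustrative variation on the tutorial's resource group):

```hcl
resource "azurerm_resource_group" "main" {
  # "myapp-dev-rg" in the dev workspace, "myapp-prod-rg" in prod
  name     = "myapp-${terraform.workspace}-rg"
  location = var.location
}
```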
Separate root modules give each environment its own configuration, state, and variable files. The most common implementation uses per-environment .tfvars files alongside a shared configuration (if these files hold no secrets, carve them out of the *.tfvars rule in your .gitignore so they can be committed):
terraform-azure-demo/
├── main.tf
├── variables.tf
├── outputs.tf
├── dev.tfvars
└── prod.tfvars
Where dev.tfvars might look like:
resource_group_name = "myapp-dev-rg"
location = "East US"
storage_account_name = "myappdevstorage"
And prod.tfvars:
resource_group_name = "myapp-prod-rg"
location = "East US 2"
storage_account_name = "myappprodstorage"
Apply against a specific environment by passing the var file explicitly:
terraform plan -var-file=dev.tfvars
terraform apply -var-file=prod.tfvars
Each environment still uses a separate backend key so state files don't overlap. The blast radius of any mistake is scoped to one environment: a terraform destroy -var-file=dev.tfvars can't reach production.
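One way to wire up those separate keys is partial backend configuration: omit key from the backend block and supply it at init time (a sketch based on the backend configured earlier):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tf-state-rg"
    storage_account_name = "tfstatedemo2026"
    container_name       = "tfstate"
    # key intentionally omitted; supplied per environment at init
  }
}
```

Then initialize each environment explicitly, for example terraform init -backend-config="key=dev.terraform.tfstate".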
The tradeoffs run deeper than they appear at first. The Terraform Files and Folder Structure post covers both approaches with concrete examples. The Terragrunt tutorial is worth reading if you're managing many environments, since Terragrunt handles the orchestration layer that plain Terraform leaves to you.
Managing Terraform at scale with env0
Here's the pattern we see most often: a team adopts Terraform, gets the remote backend working, writes a few CI/CD scripts to run plan and apply, and ships it. Six months later they have a tangle of bash wrappers, undocumented environment variables, a Slack channel where someone posts "is anyone running a plan right now?", and two engineers who understand how it all fits together. When one of them leaves, it becomes a problem.
The local Terraform workflow in this tutorial is the right starting point. It doesn't scale to a team.
The gaps are predictable. Remote state tells you what Terraform last applied, not who ran it, why, or what it changed. There's no audit trail by default. Access control is all-or-nothing: developers either have the credentials to apply or they don't, which means either production runs go through a bottleneck or you accept the risk of anyone applying anything. And drift (resources modified directly in Azure) accumulates silently until the next apply surfaces a conflict you can't explain.
env0 is built to close these gaps without requiring teams to build and maintain their own platform tooling. Every Terraform and OpenTofu deployment runs centrally, with Role-Based Access Control (RBAC) per environment so the right people can apply to dev freely and production requires an explicit approval. Composable Workflows replace the bash wrappers with a proper pipeline. Drift detection runs continuously, not on a schedule, so changes made directly in Azure surface before they cause an incident rather than during one. The full audit trail (who triggered what, what the plan showed, who approved it) is there when you need it.
For teams using Azure, env0's automated drift detection integrates directly with the same remote state backend you configured in this tutorial. No re-architecture required.
If you want to see what the workflow looks like before committing to a platform migration, How to Use Terraform Locally with the env0 Platform walks through the setup.
Related reading: Terragrunt Tutorial: Examples and Use Cases. When your Terraform codebase grows to dozens of root modules, Terragrunt handles the orchestration layer that plain Terraform leaves to you.
What's next
The configuration in this tutorial creates a resource group and a storage account. That's the right starting point, but Terraform's real value shows up as your infrastructure grows.
From here, the most valuable next steps are:
- Terraform Modules Guide: learn how to package reusable infrastructure components
- Terraform Functions Guide: conditional logic, string manipulation, and data transformation in HCL
- The Four Stages of Terraform Automation: how infrastructure automation matures from local runs to governed, team-scale deployments
Try it with env0
Running Terraform locally works fine to learn the tool. Running it at team scale, with remote state, drift detection, approval gates, and audit logs, is a different problem.
Start a free trial or book a demo to see how env0 handles Terraform and OpenTofu deployments across environments.
References
- Terraform documentation: official HashiCorp docs
- OpenTofu documentation: official OpenTofu docs
- azurerm provider documentation: full Azure provider reference
- azurerm_storage_account exported attributes: confirms primary_blob_endpoint
- Azure Remote Backend configuration: HashiCorp docs
- Terraform GitHub Releases: v1.14.8, March 25, 2026
- OpenTofu GitHub Releases: v1.11.6, April 8, 2026
- HashiCorp Business Source License announcement
Frequently asked questions
What is Terraform used for?
Terraform provisions and manages cloud infrastructure using declarative configuration files. You describe the resources you want (virtual machines, networks, databases, DNS records) and Terraform handles creating, updating, and deleting them to match your configuration. It works across Azure, AWS, GCP, and hundreds of other providers.
What's the difference between Terraform and OpenTofu?
OpenTofu is a community-maintained fork of Terraform, created after HashiCorp changed Terraform's license in 2023. Both tools use the same HCL syntax, the same provider ecosystem, and the same core commands (init, plan, apply). OpenTofu is licensed under the Mozilla Public License 2.0. As of April 2026, OpenTofu 1.11 ships features like provider-defined functions and ephemeral resources that aren't yet in the equivalent Terraform release.
Does this tutorial work with OpenTofu?
Yes. Replace terraform with tofu in every command. The HCL configuration files, the Azure provider setup, and the remote backend configuration are identical for both tools.
Do I need to use Azure for this tutorial?
No. The Terraform workflow (init, validate, plan, apply, destroy) is the same regardless of provider. The provider block and resource types change, but everything else carries over. HashiCorp maintains getting-started tutorials for AWS, GCP, and other providers.
What happens if someone modifies Azure resources outside of Terraform?
Terraform only knows about changes it made itself. If a resource is modified directly in the Azure portal, the next terraform plan will detect the drift and show a diff. If a resource is deleted outside of Terraform, the plan will show it as needing to be recreated. This is why remote state plus continuous drift monitoring matters for teams: the longer drift goes undetected, the more diverged your configuration becomes from reality.
Can I use Terraform with multiple Azure subscriptions?
Yes. Define multiple provider blocks with different aliases, each authenticating to a different subscription. Reference the specific provider in each resource block with provider = azurerm.ALIAS. This is the standard approach for managing resources across environments or business units with separate Azure subscriptions.
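A sketch of that pattern (subscription IDs are placeholders):

```hcl
provider "azurerm" {
  features {}
  subscription_id = "PRIMARY_SUBSCRIPTION_ID"
}

provider "azurerm" {
  alias           = "secondary"
  features {}
  subscription_id = "SECONDARY_SUBSCRIPTION_ID"
}

resource "azurerm_resource_group" "secondary" {
  # explicitly routed to the aliased provider, i.e. the second subscription
  provider = azurerm.secondary
  name     = "secondary-sub-rg"
  location = "East US"
}
```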
How is Terraform different from Azure Bicep or ARM templates?
Bicep and ARM templates are Azure-specific. Terraform is provider-agnostic, meaning the same tool and workflow manage resources across Azure, AWS, GCP, and others. Terraform's HCL syntax is generally more readable than ARM JSON. Bicep is closer in readability to HCL, but still locked to Azure. For teams already managing multi-cloud infrastructure, Terraform's breadth is the deciding factor.
What is the Terraform Registry?
The Terraform Registry is the public index of providers and modules for Terraform and OpenTofu. When you write source = "hashicorp/azurerm" in a required_providers block, that string is a Registry address: hashicorp is the namespace and azurerm is the provider name. Running terraform init fetches the provider binary from the Registry automatically.
The Registry also hosts community and verified modules: reusable configurations for common patterns like Azure Kubernetes Service clusters, virtual networks, and storage accounts. Before writing a complex resource from scratch, it's worth checking whether a well-maintained module already exists.
Is Terraform free to use?
Terraform (CLI) is free to use under the Business Source License, which is source-available rather than OSI-approved open source. OpenTofu is free and open source under the Mozilla Public License. Both are free for local use and self-managed CI/CD. HCP Terraform (HashiCorp's hosted platform) has a free tier with limits and paid plans for teams. env0 also has a free trial for teams wanting a managed platform.
Does Terraform work with Ansible?
Yes. Terraform and Ansible complement each other at different layers: Terraform provisions cloud infrastructure, Ansible configures what runs on it. The standard pattern is to run Ansible as a post-apply step, consuming Terraform outputs (such as IP addresses and SSH keys) to build the Ansible inventory. See Ansible vs Terraform: when to choose one, use both, or consider OpenTofu for a full walkthrough including a working env0 Custom Flow example.
