In almost all cases where infrastructure as code is used, it is orchestrated through a pipeline. This makes sense, as one of the greatest benefits of treating your infrastructure as code is being able to embrace a DevOps approach and leverage the techniques and processes that come with it.

One of the most popular tooling choices for infrastructure as code is HashiCorp’s Terraform. It offers a number of benefits, including an easy-to-understand declarative language, visibility of changes before execution, efficient deployments using graph theory, modules, and state management. If you’ve seen my other articles or know me, you’ll know I’m a huge advocate of Terraform.

The Terraform feature in focus for this article is state management, specifically for Terraform open source. No matter how well you manage your code and pipelines, there will inevitably come a time when something is broken and you need to manually manipulate your Terraform state to fix the problem. Realistically, this is usually not a big deal, especially once you’ve been working with Terraform for a while and are familiar with Terraform state and how to manage and interact with it. Where it becomes tricky is when you need to run a terraform init locally in order to manipulate state. If you do need to manually intervene with state, I always recommend doing so using the Terraform CLI tools rather than directly editing the JSON state file. To use the tools locally you’ll need the backend and provider configurations, the variables, and the same provider and Terraform versions as your pipeline.

ℹ  Tip
Terraform state itself is not the topic of this article, but if you'd like to learn more you can read about it here.

It’s very common in Terraform pipelines to pass in or dynamically configure things like variables and backend configurations. This is very convenient and can even offer security benefits in pipelines; however, it introduces challenges when you need to run some commands locally to fix an issue. Say your code requires many variables: if you’re passing their values as command-line arguments or environment variables, how will you gather them all to run an init locally? You’ll also have to get your backend configuration if it’s being generated dynamically, and ensure you’re using the same provider and Terraform versions as your pipeline to prevent compatibility issues.

The solution to these challenges is rather simple. You either need to make sure all of your configuration is checked into git, or, if you are generating some inputs and configuration dynamically, that they are stored as a pipeline artefact. This way you can run your init either by cloning the repo or by downloading the pipeline artefact containing the full configuration. In either case you will need the credentials required for your backend and provider. These should never be stored in git or in artefacts.

⚠  Warning
Be careful when it comes to secrets. Make sure you never commit secrets to git or disk. You should always retrieve secrets from a service such as Hashicorp Vault, Azure KeyVault or something similar.

When it comes to variables, all you need to do is make sure you are storing them all in .tfvars files. If you do have variables that are dynamically retrieved or generated from another source, append them to the tfvars file instead of passing them in through the command line. Obviously, don’t do this for sensitive values. In that case you should ideally be retrieving those sensitive values from a trusted, secure source using a Terraform data object. If that’s not possible for whatever reason and the variable must be passed in at the command line or via an environment variable, then there is really only one solution: you’ll need to retrieve and input those values manually.
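As a sketch, assuming a non-sensitive value called image_tag retrieved earlier in the pipeline (both the name and the value here are illustrative assumptions), appending it to the tfvars file might look like this:

```shell
# Hypothetical pipeline step: IMAGE_TAG stands in for a non-sensitive
# value retrieved earlier in the pipeline, e.g. from a registry query.
IMAGE_TAG="1.4.2"

# Append the value to the tfvars file instead of passing it with -var
# on the command line, so a local run picks it up too.
echo "image_tag = \"${IMAGE_TAG}\"" >> prod.auto.tfvars
```

Because the tfvars file now holds the value, anyone who downloads the repo or the pipeline artefact has everything needed to run the same plan locally.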

Another common scenario is generating the provider and backend configuration in a pipeline. This can be convenient for generating the names of state files or providing credentials for a provider. It becomes a problem, though, when you need to run a local operation. Your provider credentials should never be stored in git; instead, you’ll need to retrieve these and supply them as environment variables. Other parts of the provider configuration, such as version constraints, should be described in a providers.tf file and committed to git. For example, with the azurerm provider, providers.tf would look like this:

provider "azurerm" {
  tenant_id       = "00000000-0000-0000-0000-000000000000" # Optional, this can also be specified with an environment variable but isn't a sensitive value.
  subscription_id = "00000000-0000-0000-0000-000000000000" # Optional, this can also be specified with an environment variable but isn't a sensitive value.
  features {}
}

This is different to the required providers block which specifies the providers and versions required for a Terraform module:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.62.0"
    }
  }
}

With your provider configured you can complete the configuration with the following environment variables if you are using a service principal and secret:

  • ARM_CLIENT_ID
  • ARM_CLIENT_SECRET

You may need to use different environment variables depending on the authentication method you’re using. More information on what to use for each can be found in the azurerm provider documentation here.
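As a minimal sketch, supplying these in a bash shell looks like the following (the values are placeholders, not real credentials):

```shell
# Placeholder values for illustration only; retrieve the real values
# from a secret store such as HashiCorp Vault or Azure KeyVault.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="placeholder-client-secret"

# The azurerm provider picks these up automatically, so a plain
# `terraform init` or `terraform plan` can now authenticate.
```

Nothing about the credentials needs to change in your committed code; the provider reads them straight from the environment.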

ℹ  Tip
Providing credentials as environment variables is great because it means the credentials aren't stored on disk. This breaks down, though, if your shell stores your command history. Be sure to clear the command from history after you've set the environment variable.
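One way to sidestep the history problem entirely, assuming a bash shell, is to read the secret from a prompt with echo disabled, so the value never appears on a command line at all:

```shell
# Prompt for the secret with echo disabled (-s); the value never appears
# on the command line, so it can't end up in shell history.
read -rs -p "Client secret: " ARM_CLIENT_SECRET
export ARM_CLIENT_SECRET
```

The variable still only lives in the environment of the current shell session, and there is nothing sensitive to scrub from history afterwards.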

When it comes to backend configuration you have some more flexibility, as you can use partial configuration to supply the consistent or static parts of your configuration, and then complete the configuration with any credentials using environment variables. Continuing our Azure example, if you’re using an azurerm backend, your partial configuration might look something like this:

terraform {
  backend "azurerm" {
    storage_account_name = "myawesomestorageaccount"
    container_name       = "terraformstate"
    key                  = "myapp.tfstate"
  }
}

We can then provide the authentication details like the SAS token or storage account key as an environment variable to keep things secure. This also makes it easy to find where the state for a particular configuration is stored, and makes it easy to perform local operations without compromising security.
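A minimal local sketch, assuming the partial backend above and a storage account key pulled from your secret store (the value here is a placeholder):

```shell
# Supply the storage account key as an environment variable
# (ARM_SAS_TOKEN works similarly for a SAS token); it completes the
# partial backend configuration without the credential touching git.
export ARM_ACCESS_KEY="placeholder-storage-account-key"

# With the credential in place, the committed partial configuration
# is enough to initialise and inspect state locally:
# terraform init
# terraform state list
```

The init and state commands are shown commented as they require the real backend to exist; locally you would run them directly once the key is set.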

Alternatively, if you don’t want to follow the “commit to git” solution outlined above, you can update the configuration at the start of the pipeline and publish it as a pipeline artefact. This way you can download the artefact with the configuration you need, add the credentials as environment variables in the same way outlined above, and then you’re good to go. The process for this depends on your DevOps tooling but is typically very simple to do.
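The exact mechanism varies by tool, but the collection step might be as simple as archiving the generated files before handing them to your pipeline’s artefact task. The file names here are illustrative; backend.hcl is a partial backend configuration file that can later be passed to terraform init with -backend-config:

```shell
# Stand-ins for files generated earlier in the pipeline, so this sketch
# is self-contained; in a real pipeline these would already exist.
echo 'key = "myapp.tfstate"' > backend.hcl
echo 'image_tag = "1.4.2"'  > prod.auto.tfvars

# Bundle the full configuration into one archive to publish as the
# pipeline artefact, ready to download for local operations.
tar -czf terraform-config.tar.gz backend.hcl prod.auto.tfvars
```

Remember the warning above: the archive should contain configuration only, never secrets.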

With these options for managing variables and provider and backend configurations, we are in a great position to enable local Terraform operations while maintaining security. We also get the added bonus of more readable and portable code.

ℹ  Tip
If you're working with Terraform Cloud or Enterprise and need to manipulate state take a look at these great articles from Brendan Thompson.