Managing multiple GCP environments with Terraform: from local backend to Terraform Cloud

An example of how you can manage multiple GCP environments with Terraform Cloud


If you are working on a small project, chances are that you do not want to stand up a complicated CI/CD pipeline. That said, you probably still need to put in place a simple workflow that allows you to deploy your infrastructure resources to at least two environments: development and production. For this purpose, I like to use Terraform Cloud, and here I’ll show you how.

GCP Project Objective and Terraform Setup

We will create a simple GCP project that includes three GCS buckets: landing, raw and curated. These buckets will be the foundation of a hypothetical Data Lake.


As this is a small project, I like to create only Dev and Prod environments, since a QA environment would unnecessarily increase complexity. There are two philosophies when it comes to structuring a Terraform project that has to manage multiple environments:

  • create separate folders, one per environment: each folder holds that environment’s configuration files. This makes it possible to configure a different backend per environment and gives you the flexibility, if needed, to create environments that are not exact copies of each other.
(Figure: separate folders per environment, with separate config and state files)
  • create workspaces, one per environment: what is a workspace, to begin with? Workspaces allow you to keep multiple states in the same backend, tied to the same configuration. This lets you deploy multiple instances of the same infrastructure.
(Figure: one configuration, multiple workspaces, with state files stored locally in the terraform.tfstate.d folder)

Because this is a small project, even though the recommended approach is separate folders, I like the workspaces option better, as it lets me work on the same configuration files for both environments. You may wonder how we can name resources so that they carry a reference to the environment they belong to. We will do it dynamically, using the terraform.workspace value, which holds the name of the current workspace.
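To make this concrete, here is a minimal sketch of how bucket names can embed the workspace name. The variable names, the random_id suffix length and the EU location are my own assumptions for illustration, not the exact code from the repo (which splits this across modules):

```hcl
# Illustration only: three Data Lake buckets whose names embed the
# current workspace plus a random suffix for global uniqueness.
variable "zones" {
  type    = list(string)
  default = ["landing", "raw", "curated"]
}

resource "random_id" "suffix" {
  byte_length = 2
}

resource "google_storage_bucket" "zone" {
  for_each = toset(var.zones)

  # e.g. "landing-dev-a1b2" when the DEV workspace is selected
  name     = "${each.key}-${lower(terraform.workspace)}-${random_id.suffix.hex}"
  location = "EU"
}
```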

You can look at the GitLab repo I created for this project to better understand the overall structure. As you can see, I am also using two local modules: one for the creation of the project and another for the creation of the GCS buckets. Before we start deploying, here is the initial resource hierarchy in GCP:

(Figure: initial resource hierarchy)

There’s the organization marco at the top, and then I created a medium folder that will host the two projects. You can also see a TF Admin project: it hosts the service account that Terraform uses to create projects and resources, link billing accounts, and so on.
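As a point of reference, a provider configuration that authenticates with this admin service account could look like the following sketch; the key file path and region are assumptions, not values from the repo:

```hcl
# Hypothetical provider setup: authenticate as the TF Admin project's
# service account. The key file path is an assumption for illustration.
provider "google" {
  credentials = file("keys/tf-admin-sa.json")
  region      = "europe-west1"
}
```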

Let’s deploy some resources

When initializing a Terraform project, a default workspace is created automatically.

So, let’s create the DEV workspace by using the terraform workspace new DEV command:

After the workspace is created, Terraform automatically activates it and creates the terraform.tfstate.d/DEV folder where the state files will be saved. We can now run the plan command: terraform plan -out="tf-medium-dev.plan"

And then deploy these 9 resources by running terraform apply tf-medium-dev.plan:

The Terraform output gives us the list of created resources but let’s go and check from the console as well:

And there they are. As you can see, each bucket name has the dev suffix, thanks to the terraform.workspace value, as well as a random string to make sure the bucket name is globally unique.

We can follow the same steps to create the production environment and its GCS buckets: we just have to run the same terraform workspace new, plan and apply commands in sequence. Once the deployment is complete, if we look at the updated resource hierarchy in the console, we can see both of our projects.


Let the migration begin

When we’re ready to start collaborating with friends and colleagues, keeping the state local does not make much sense anymore. We need a remote backend where we can keep our Terraform state files. This ensures we won’t step on each other’s toes and avoids the risk of inconsistent states. When it comes to migrating to a remote backend, we have a couple of options: Terraform Cloud or a GCS bucket. For this project, we will migrate to Terraform Cloud. After we create a Terraform Cloud account and set up an organization, we are ready to start the migration.

From the terminal, we run the terraform login command to log in to our Terraform Cloud account. We will have to create a token and then copy-paste it into the terminal window where we executed the command. The next step is to create a backend.tf file where we will specify a couple of things (sketched right after the list):

  • the organization: the Terraform Cloud organization our local backend will be migrated to.
  • the workspaces name prefix: all the local state instances (DEV and PROD in our case) will be migrated to separate Terraform Cloud workspaces, created with the same local names. Optionally, we can add a prefix; here we’ll set it to tf-medium-. Hence our two Terraform Cloud workspaces will be tf-medium-DEV and tf-medium-PROD.
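A minimal backend.tf along those lines could look like this; the organization name is a placeholder:

```hcl
# Sketch of backend.tf: the local DEV and PROD workspaces will be
# migrated to Terraform Cloud as tf-medium-DEV and tf-medium-PROD.
terraform {
  backend "remote" {
    organization = "my-org" # placeholder for your Terraform Cloud organization

    workspaces {
      prefix = "tf-medium-"
    }
  }
}
```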

At this point, we are ready to start the migration. To do that, we simply execute the terraform init command again and then type yes when asked for confirmation:

After the workspace migration is complete, we can see the two workspaces in Terraform Cloud:

Awesome! Our migration is not over yet, though. When using Terraform Cloud workspaces, the values we used to keep in terraform.tfvars now come from a variable list maintained directly in each cloud workspace, through the web console.

While plain text variables are easy to maintain, there are a couple of things we need to pay attention to:

  • GCP credentials: we can’t use the service account JSON file directly anymore. To authenticate against GCP, we need to maintain a GOOGLE_CREDENTIALS variable with the content of the service account key file downloaded from the GCP console. Before we copy the file’s content, we need to remove all the newline characters (instructions in the GitLab repo). We will also mark this variable as sensitive (write-only), which gives us some level of security.
  • terraform.workspace: can we still use this value to name our resources? Unfortunately not. We now have two truly independent cloud workspaces, each tied to a configuration that only has the default workspace. That means that no matter which cloud workspace we use, our resource names would always be created with the default suffix. So, how do we name our resources depending on the environment where we create them? We simply update our configuration files to fall back to a var.workspace variable whenever terraform.workspace equals default (see the sketch below).
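Here is one way to express that fallback; the variable and local names are my own choice for illustration:

```hcl
# Assumed sketch: on Terraform Cloud, terraform.workspace is always
# "default", so we fall back to a var.workspace variable maintained in
# each cloud workspace's variable list.
variable "workspace" {
  type        = string
  description = "Environment name to use when running on Terraform Cloud"
  default     = "dev"
}

locals {
  environment = terraform.workspace == "default" ? var.workspace : terraform.workspace
}
```

Resource names can then reference local.environment instead of terraform.workspace directly.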

Once we have maintained the variables on Terraform Cloud, updated our local variable declaration file (remember, we’ve replaced the service account file with the GOOGLE_CREDENTIALS variable), and updated any config files where the workspace variable has to be used, we can say our migration is over.

To run a test from the CLI, let’s select the DEV workspace and run the terraform plan command. If we did everything correctly, no resources should be updated!

Our test was successful! If you noticed, the plan is actually executed on Terraform Cloud, and what you see in the terminal is just a stream of the log generated by the remote execution of the plan command. If we switch to the PROD workspace and run the plan command again, we get the same result: our infrastructure is up to date.

Let’s add some automation

A drawback of using local workspaces to manage different environments is the risk of planning and applying a configuration to the wrong environment. In the test we just ran, to make sure the migration was successful, we had to manually select the DEV workspace first and then the PROD one. We all know that manual steps can lead to errors. To eliminate this risk, we can create a simple workflow that lets us always work in the local DEV workspace and update our GCP resources as we commit changes to our GitLab repo. More precisely, here is what we want:

To achieve that, we have to link our Terraform Cloud workspaces to the right VCS repo branch. While we can establish the connection between the PROD workspace and the repo’s master branch right away, we can connect the DEV workspace only after creating a DEV feature branch.

To run a quick test, let’s make a small change to the landing bucket’s lifecycle policy: change the retention from 7 to 14 days. When we commit the change, here is what we see:

The plan command is executed automatically and, as expected, there’s only one change to be applied. When we click the confirm and apply button, the infrastructure in the DEV project is updated. Workspace settings can be changed so that changes to the DEV environment are applied automatically, without the need to confirm.
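For reference, the committed change is a one-line edit to the lifecycle rule. This is a simplified sketch of the relevant block, using the naming scheme assumed earlier, not the exact module code:

```hcl
# Simplified sketch of the landing bucket with its updated lifecycle
# rule: objects are now deleted after 14 days instead of 7.
resource "google_storage_bucket" "landing" {
  name     = "landing-${local.environment}-${random_id.suffix.hex}"
  location = "EU"

  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 14 # retention in days, previously 7
    }
  }
}
```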

Once we merge the changes to the master branch, the plan command is automatically triggered for the PROD environment. After we confirm and apply, the changes reach the production environment as well.

Final Situation

Conclusion

In this post, we started by looking at how we can use local workspaces to manage multiple environments. We then went through all the steps needed to migrate the local workspaces to Terraform Cloud. Finally, we looked at how to link Terraform Cloud workspaces to a GitLab repo in order to create a very simple CD pipeline.

Of course, there are other things we could have done, such as creating a CI/CD pipeline in GitLab with more steps than simple resource deployment, or generating temporary credentials using HashiCorp Vault. Who knows, maybe I’ll cover these in future articles.

If you made it this far, I really want to thank you! This was my first article, and maybe a little too long :) Of course, this is only one way of managing multiple environments using Terraform. Don’t forget to run the terraform destroy command from Terraform Cloud…

Written by

Cloud Solutions Architect, Google Cloud Certified, Terraform Certified and Angular enthusiast
