Introduction
Terraform, Docker and CI/CD are not just buzzwords; they are must-have tools, used in some combination or another in most modern cloud ecosystems. They are often made to look really difficult, but in reality you just need to take one step at a time, slowly string them together and keep improving.
With most languages we create artefacts and store them in an artefact repository such as Nexus or JFrog Artifactory. The same applies to Docker images, which can be stored in private or public image registries. These artefact repositories version the artefacts and keep them well structured.
When you turn to Terraform, however, I found the same thing missing. Why? The answer you get most often is that it is "just configuration", or something along those lines. So what can be done to get around this problem of not versioning our infrastructure?
Question – Can I version my infrastructure? I can, if I can version my Terraform code. But how?
Answer – This blog is my attempt at finding the answer to that question, and it involves all the fancy tech. We will use Terraform, Docker, GitLab CI/CD and Docker Hub to create an artefact that represents a version of our infrastructure. This brings a few advantages:
- Portability
- Repeatable, Versioned code
- Consistently versioned environments
There may well be other, better approaches, and I would be really grateful if you could share them in the comments section – they would be a great addition to the blog.
Note: This blog assumes you know a bit about Terraform, Docker and CI/CD.
This blog post is divided into the following sections.
All the code is available on gitlab.com as well!
What do we intend to do?
Let’s first define what we want to do here. My intention is to put together a pipeline that does the following.
Phase | Tasks |
---|---|
Build | – Store the terraform code inside a docker image – Build docker image |
Test | – Run some tests |
Merge to Master | – Push the image to a docker registry |
Deploy | – Manual step – Pull the image from the docker registry – Provision cloud resources by starting a docker container based on the image built in the build pipeline |
The pipeline shows each of these as an individual step, which may not be ideal in the real world, but it keeps the basic aspects of a CI/CD pipeline easy to see.
Now, all this may sound a bit over the top, but it is easier than you think – there is nothing fancy here. You can also use other tools or a different flow: AWS CodeBuild and CodeDeploy, Jenkins, CircleCI, GitHub Actions or any other CI tool. I picked GitLab, but you could use any of them.
Terraform Script
Our Terraform code is pretty simple: it provisions a single S3 bucket. You can make your configuration as complex as you want; the pipeline remains pretty much the same.
resource "aws_s3_bucket" "cloudwalker-test-bucket" {
  bucket = "cloudwalker-test-bucket"
  acl    = "private"

  tags = {
    Name        = "cloudwalker-test-bucket-for-blog"
    Environment = "Blog"
  }
}

# Expose the ARN of the bucket once it has been created
output "bucket-name" {
  value = aws_s3_bucket.cloudwalker-test-bucket.arn
}
Docker Image – Dockerfile
The Dockerfile is very simple and is based on the Alpine image – you can choose whichever base you want. It contains the following:
- Terraform binaries
- Terraform script
- Shell scripts to execute the code (a minimal sketch of these follows the Dockerfile below)
Pretty simple stuff!
# Step 0 - Base image (the blog uses Alpine; pick whichever base you prefer)
FROM alpine

# Step 1 - Install packages into the docker image
RUN apk update && apk upgrade && apk add --no-cache python3 py-pip git terraform

# Step 2 - Install the AWS command line utility
RUN pip install awscli

# Step 3 - Create a directory and copy the terraform code and scripts
RUN mkdir /infra
COPY terraform /infra/terraform
COPY scripts/*.sh /infra/
RUN chmod +x /infra/*.sh

# Step 4 - Set the working directory
WORKDIR /infra
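The shell scripts copied in above are test_infra.sh and apply_infra.sh, which the pipeline later runs inside the container. They are not listed in full here; a minimal sketch of what they could look like, assuming test_infra.sh simply wraps terraform plan (as the test stage does) and apply_infra.sh wraps terraform apply:

#!/bin/sh
# test_infra.sh - sketch: initialise terraform and run a plan against the copied code
set -e
cd /infra/terraform
terraform init -input=false
terraform plan -input=false

#!/bin/sh
# apply_infra.sh - sketch: initialise terraform and apply the configuration without prompting
set -e
cd /infra/terraform
terraform init -input=false
terraform apply -input=false -auto-approve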
You can also build this image locally – just remember that you need an internet connection.
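For a quick local check, something along these lines should work from the repository root (the image name matches the one used later in the pipeline, and the AWS variables are assumed to be exported in your shell):

# Build the image locally, using the same name as the pipeline
docker build -t docker_tf:latest .

# Run the test script inside the container, passing AWS credentials as environment variables
docker run \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  docker_tf:latest sh test_infra.sh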
Now that we have the Dockerfile, let's turn our attention to CI/CD.
CI/CD using Gitlab
This is where the next stage of our magic happens, and it needs some explanation. Let's first look at the code and then go over it step by step. In a nutshell, what we want to achieve with this pipeline is:
- Trigger a build phase automatically when a PR is created, followed by a test phase that tests the docker image.
- If the code is merged to master, push the image to the docker registry.
- Once approved, the pipeline should magically deploy the resources configured in terraform.
# Step - 1 - Define variables we need to run this pipeline
variables:
  AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
  DOCKER_REGISTRY_LOGIN_ID: $DOCKER_REGISTRY_LOGIN_ID
  DOCKER_REGISTRY_LOGIN_PASSWORD: $DOCKER_REGISTRY_LOGIN_PASSWORD
  DOCKER_IMAGE_NAME: "docker_tf"
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY

# Step - 2 - List of stages for jobs, and their order of execution
stages:
  - build-image
  - test-image
  - push-docker-image
  - deploy

# Step - 3 - Build docker image
build-job:
  services:
    - docker:dind # Allow docker daemon to run inside docker
  image:
    name: docker:20.10.7 # Image used for this step
  stage: build-image
  script:
    - echo "Build docker image for terraform..."
    - docker build -t $DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA -t $DOCKER_IMAGE_NAME:latest .
    - docker images
    - mkdir image
    - docker save $DOCKER_IMAGE_NAME:latest > image/$DOCKER_IMAGE_NAME.tar
    - echo "Docker image successfully built."
  artifacts:
    paths:
      - image
  only: # Only run this step for merge requests and on the main branch
    - merge_requests
    - main

# Step - 4 - Test stage - Keeping it simple: run terraform plan inside the docker container.
# This step can be made quite fancy, like executing in a different environment.
test-image-job:
  services:
    - docker:dind # Allow docker daemon to run inside docker
  image:
    name: docker:20.10.7 # Image used for this step
  stage: test-image
  script:
    - docker load -i image/$DOCKER_IMAGE_NAME.tar
    - docker run -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID $DOCKER_IMAGE_NAME:latest sh test_infra.sh
  artifacts:
    paths:
      - image
  only: # Only run this step for merge requests and on the main branch
    - merge_requests
    - main

# Step - 5 - Push image to docker registry
push-docker-image-job:
  services:
    - docker:dind # Allow docker daemon to run inside docker
  image:
    name: docker:20.10.7 # Image used for this step
  stage: push-docker-image
  script:
    - docker load -i image/$DOCKER_IMAGE_NAME.tar
    - echo "$DOCKER_REGISTRY_LOGIN_PASSWORD" | docker login --username $DOCKER_REGISTRY_LOGIN_ID --password-stdin
    - docker tag $DOCKER_IMAGE_NAME:latest $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:latest
    - docker tag $DOCKER_IMAGE_NAME:latest $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA
    - docker push $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:latest
    - docker push $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA
  only:
    - main # Only run this step for the main branch

# Step - 6 - Deploy terraform code using the docker image
deploy-infra-job:
  services:
    - docker:dind # Allow docker daemon to run inside docker
  image:
    name: docker:20.10.7 # Image used for this step
  stage: deploy
  script:
    - echo "$DOCKER_REGISTRY_LOGIN_PASSWORD" | docker login --username $DOCKER_REGISTRY_LOGIN_ID --password-stdin
    - docker pull $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:latest
    - docker run -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID $DOCKER_REGISTRY_LOGIN_ID/$DOCKER_IMAGE_NAME:latest sh apply_infra.sh
  when: manual
  only:
    - main # Only run this step for the main branch
I have divided the code into the following steps, which match the comments in the code, just to make it a bit easier to understand. In a real-world scenario we would want to package most of these commands as scripts, but for simplicity they are all typed out in the YAML file (a sketch of that packaging follows the table below).
Step | Description |
---|---|
Step – 1 – Define Variables | All variables required for running the CI/CD pipeline are defined in this step. They are picked up from the variables stored in GitLab CI. |
Step – 2 – Define stages | All stages in the CI/CD pipeline in order of execution are defined in this step |
Step – 3 – Build docker image | This stage builds the docker image |
Step – 4 – Test docker image | This stage allows us to test the docker image which has been created. This step can be made quite fancy like executing in a different environment. |
Step – 5 – Push docker image | This stage pushes the docker image to a docker registry – Docker Hub in the case of this blog. |
Step – 6 – Deploy | This is a manual step and can be executed only on approval. |
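On the point of packaging the commands as scripts: here is a rough sketch of how the build-job commands could be pulled into a script. The file name build_image.sh is hypothetical and not part of the repository; the commands themselves mirror the build-job script section above.

#!/bin/sh
# build_image.sh (hypothetical) - wraps the build-job commands from the pipeline
set -e
echo "Build docker image for terraform..."
docker build -t "$DOCKER_IMAGE_NAME:$CI_COMMIT_SHORT_SHA" -t "$DOCKER_IMAGE_NAME:latest" .
docker images
mkdir -p image
docker save "$DOCKER_IMAGE_NAME:latest" > "image/$DOCKER_IMAGE_NAME.tar"
echo "Docker image successfully built."

The script section of build-job would then shrink to a single line that calls this script.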
Define variables
The following variables are used
Variable | Description |
---|---|
AWS_DEFAULT_REGION | Standard environment variable for accessing AWS cloud. |
AWS_ACCESS_KEY_ID | Standard environment variable for accessing AWS cloud |
AWS_SECRET_ACCESS_KEY | Standard environment variable for accessing AWS cloud |
DOCKER_REGISTRY_LOGIN_ID | Docker registry username |
DOCKER_REGISTRY_LOGIN_PASSWORD | Docker registry password |
DOCKER_IMAGE_NAME | Docker image name |
If you are following along on your own account, you will need your own values for these variables.
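If you want to sanity-check the AWS credentials before putting them into GitLab, one quick way (not part of the pipeline) is to call STS from inside the built image, using the awscli installed in the Dockerfile:

# Ask AWS STS which identity the credentials belong to
docker run \
  -e AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  docker_tf:latest aws sts get-caller-identity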
Here is the screenshot of the variables defined in Gitlab CI.
Demo
The code is ready and all checked in. It's time to run our little pipeline – hopefully, it should all work.
Pipeline execution when PR is created
Pipeline execution when code merged to master
Deployment of resources
Let’s click on deploy
Here is the summary of the pipelines that we executed
Docker Image on Docker Hub
Resources provisioned
Finally, the S3 bucket which was provisioned.
I hope you find this blog useful in learning Terraform and CI/CD. If you like it, please do share it!