Terraform + AWS CodeBuild (2/2)
In this multipart post I’m going over the setup of a continuous integration pipeline on AWS CodeBuild, using Terraform to automate all the infrastructure configuration and creation. This part is all about the Terraform juice!
ECR Repository
Continuing the work on main.tf
…
One easy thing to get out of the way is an ECR repository to store Docker images:
resource "aws_ecr_repository" "acme_registry" {
  name                 = "acme_server"
  image_tag_mutability = "MUTABLE"
}
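If you also want the registry URL printed after an apply (purely optional, and not part of the original files), an output can be declared, e.g. in an outputs.tf:

```hcl
# Optional: expose the ECR repository URL after `terraform apply`,
# handy for a manual `docker push` or for wiring into other tooling.
output "acme_registry_url" {
  value = aws_ecr_repository.acme_registry.repository_url
}
```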
IAM Role
Next, declare a custom role with limited permissions to be assumed by the CodeBuild service when performing AWS operations on our behalf:
resource "aws_iam_role" "acme_builder_role" {
  name = "ACME_Builder_CodeBuild"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "codebuild.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
And the role policy containing the necessary permissions. The logs actions are needed so that CodeBuild can stream build logs to CloudWatch; the ecr actions are required to interact with ECR. Note the "Resource" field specifying a concrete repository ARN, obtained from the output of the aws_ecr_repository declaration above. The exception is ecr:GetAuthorizationToken, which does not support resource-level permissions and must be granted on "*".
resource "aws_iam_role_policy" "acme_builder_policy" {
  role = aws_iam_role.acme_builder_role.name

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "${aws_ecr_repository.acme_registry.arn}"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
POLICY
}
CodeBuild
Finally, configure a CodeBuild project as shown below.
The interesting parts:
- service_role references the role defined in acme_builder_role
- the environment block configures the environment in which builds run: machine type, Docker image (we're building a Docker image inside a Docker container), env vars, etc. privileged_mode made me lose a considerable amount of time: it needs to be enabled if one wants to run Docker commands inside the build image
- in the buildspec.yaml, we used references to variables such as DOCKERHUB_USERNAME: these need to be configured here (the values themselves coming from Terraform variables)
- lastly, the source of the project
resource "aws_codebuild_project" "acme_server" {
  name          = "acme_server"
  build_timeout = "5"
  service_role  = aws_iam_role.acme_builder_role.arn

  artifacts {
    type = "NO_ARTIFACTS"
  }

  cache {
    type  = "LOCAL"
    modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/standard:5.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true

    environment_variable {
      name  = "ECR_REGISTRY"
      value = aws_ecr_repository.acme_registry.repository_url
    }

    environment_variable {
      name  = "AWS_REGION"
      value = var.aws_region
    }

    environment_variable {
      name  = "DOCKERHUB_USERNAME"
      value = var.dockerhub_user
    }

    environment_variable {
      name  = "DOCKERHUB_PASSWORD"
      value = var.dockerhub_password
    }
  }

  source {
    type            = "GITHUB"
    location        = "https://github.com/s1moe2/terraform-codebuild.git"
    git_clone_depth = 1
  }
}
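A caveat worth flagging: DOCKERHUB_PASSWORD as declared above ends up stored in plaintext in the project configuration. CodeBuild can instead resolve an environment variable from SSM Parameter Store by setting its type accordingly. A sketch, assuming a parameter named /acme/dockerhub_password exists and the build role is allowed to read it (ssm:GetParameters):

```hcl
# Hypothetical alternative: resolve the secret from SSM Parameter
# Store at build time instead of storing it in plaintext.
# The value here is the parameter name, not the secret itself.
environment_variable {
  name  = "DOCKERHUB_PASSWORD"
  value = "/acme/dockerhub_password"
  type  = "PARAMETER_STORE"
}
```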
The idea for our pipeline is to have it run on every commit that reaches the master branch. CodeBuild reacts to events received via webhook (from GitHub, in this case) and allows us to define filters on those events, which is how the pipeline triggers can be controlled.
The next block replicates the CodeBuild webhook filter configuration that would otherwise be done in the UI. The first filter means "on every push", the second "to the master branch".
resource "aws_codebuild_webhook" "acme_server_build_webhook" {
  project_name = aws_codebuild_project.acme_server.name
  build_type   = "BUILD"

  filter_group {
    filter {
      type    = "EVENT"
      pattern = "PUSH"
    }

    filter {
      type    = "HEAD_REF"
      pattern = "^refs/heads/master$"
    }
  }
}
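As a side note, multiple filter_group blocks are OR'ed together, while filters inside a group are AND'ed. If, for example, you also wanted builds for pull requests targeting master, a second group could be added to the webhook resource (a sketch, not part of the original pipeline):

```hcl
# Hypothetical variation: also trigger builds when pull requests
# against master are opened or updated. BASE_REF filters on the
# PR's target branch.
filter_group {
  filter {
    type    = "EVENT"
    pattern = "PULL_REQUEST_CREATED,PULL_REQUEST_UPDATED"
  }

  filter {
    type    = "BASE_REF"
    pattern = "^refs/heads/master$"
  }
}
```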
Lastly, CodeBuild needs GitHub credentials to be able to access the code repository:
resource "aws_codebuild_source_credential" "acme_credentials" {
  auth_type   = "PERSONAL_ACCESS_TOKEN"
  server_type = "GITHUB"
  token       = var.github_pat
}
Variables
Throughout the definition of the infrastructure, multiple references to var.xxx
were used. These are variables that can be passed into Terraform's environment and they need to be declared as well. In the previous part we created a variables.tf
file for this purpose:
variable "aws_region" {
  type = string
}

variable "aws_access_key_id" {
  type = string
}

variable "aws_secret_key" {
  type = string
}

variable "github_pat" {
  type = string
}

variable "dockerhub_user" {
  type = string
}

variable "dockerhub_password" {
  type = string
}
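One refinement worth considering (not part of the original setup): since Terraform 0.14, variables holding secrets can be marked sensitive, which redacts their values from plan and apply output. For instance:

```hcl
# Marking secret-holding variables as sensitive keeps their
# values out of Terraform's plan/apply output.
variable "dockerhub_password" {
  type      = string
  sensitive = true
}

variable "github_pat" {
  type      = string
  sensitive = true
}
```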
As for the values, those can live in the actual shell environment or, as I prefer, in a file similar to the typical .env.
There is already a .tfvars
file that holds the variables and their values:
github_pat         = "token"
aws_access_key_id  = "keyid"
aws_secret_key     = "secret"
aws_region         = "eu-west-1"
dockerhub_password = "pwd"
dockerhub_user     = "usr"
Run it!
Now that everything is neatly configured, it is time for Terraform to shine:
terraform apply -var-file=.tfvars
A plan is presented for confirmation and, if everything goes well, we end up with all the infrastructure in place and ready to use.
Try committing something and watch CodeBuild in action in the console.
Thank you Gerardo Lima for taking the time to review this :)
2022-04-19