Terraform and Amazon ECS – Part 1

Preface

Amazon’s Elastic Container Service (ECS) is a platform for deploying Docker containers into the AWS Cloud. It provides a rich set of tools for integrating your containerized application with other AWS services as well as features for managing and monitoring your application. Read more about Amazon ECS.

I’ll be deploying a small, five-container application into an ECS cluster using the Fargate task type. Fargate essentially provides a serverless experience for containers and alleviates the need for manual maintenance of the underlying compute nodes in the cluster. Learn more about AWS Fargate.

This application is composed of three containers which serve web traffic:

  • A React UI
  • A To-Do List API
  • An Amazon AWS Catalog API

There are also two containers which should not be accessible to the general public but rather should only be reachable internally. These are:

  • A MySQL Database
  • A Single Consul Host

The MySQL database will serve as the persistence store for the To-Do List API, and the Consul host will act as a key-value store, providing configuration data to the To-Do List application.

I will be using Terraform to provision all of the AWS resources required to create the infrastructure, define the ECS services and tasks, and set up service discovery.

Audience

This tutorial is designed to help you provision a basic environment in which to deploy microservices. You should have some knowledge of AWS and its costs, be comfortable using the command line, and understand the basics of microservices and microservice architecture.

You should also be familiar with Terraform: how variables are used, how to define resources, and how to define modules and dependencies.

Prerequisites

Source

All of the source code for this project can be found on my GitHub. Feel free to clone or fork the project and make whatever changes are necessary to suit your needs.

Code Conventions

I use the following structure for my Terraform projects:

root.tf -- The entrypoint for Terraform
vars.tf -- Variable definitions for root
outputs.tf -- Outputs of root
terraform.tfvars -- Variable values / configuration of root
<component>/ -- High level component. For example, vpc
<component>/main.tf -- Resource definitions for this component
<component>/vars.tf -- Input vars for the component
<component>/output.tf -- Output values for the component

Step 0: Identify High Level Components

The first step to defining our AWS infrastructure is to identify which AWS resources will be necessary to run this application.

First, we will need a VPC in which to build our architecture. You can use an existing VPC if you would like, but this tutorial will proceed as though we are starting from a clean slate.

Since we have services which need to be accessible to the internet and services which should not be, we will also need to create both public and private subnets. Our HTTP applications will live in the public subnets; the database and Consul host will live in the private subnets.

We will need to route traffic to our containers, which are ephemeral in nature, so we cannot rely on their public IPs for serving requests. Instead, we will use an Application Load Balancer (ALB) to provide a single entry point into the applications, routing traffic to the appropriate container based on the HTTP path. An ALB requires a Multi-AZ deployment, so we will need to ensure we have public and private subnets in both availability zones.

To allow applications to communicate with one another, we will use AWS Cloud Map for service discovery. There is not much manual setup here, but we do need to define the Route53 Hosted Zone for our private DNS space.
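
As a preview, the namespace itself is a single resource, and creating it also creates the backing Route53 private hosted zone. A minimal sketch (the namespace name ecs.local is a placeholder of mine, and the VPC reference assumes the VPC we will create in Step 2):

# Sketch: a Cloud Map private DNS namespace. Creating this resource
# automatically creates the Route53 private hosted zone behind it.
resource "aws_service_discovery_private_dns_namespace" "main" {
  name        = "ecs.local"     # placeholder name for the private DNS space
  description = "Service discovery namespace for the ECS services"
  vpc         = aws_vpc.main.id # the VPC created in Step 2
}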

Lastly, we will need an ECS cluster and the task and service definitions. Since we are using the Fargate deployment type, there is no need to manually provision any EC2 instances or Auto-Scaling Groups.

So, here is a quick summary of the high level components:

  • A VPC spanning two AZs, each with one public and one private subnet
  • An Application Load Balancer to route traffic to the containers
  • A Route53 Hosted Zone to establish our private DNS space for service discovery
  • An ECS cluster with task and service definitions for our containers

Step 1: Create root.tf

Before creating any resources, we need to create our root Terraform files:

touch root.tf
touch outputs.tf
touch vars.tf
touch terraform.tfvars

We can ignore most of these files for now, but to get started we will need to let Terraform know that we are using the AWS provider. Add the following to root.tf

provider "aws" {
   region = var.aws_region
}

… the variable declaration to vars.tf

variable "aws_region" {}

… and set the value in terraform.tfvars

aws_region = "us-west-2"
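
It is also good practice to pin the Terraform and provider versions so the configuration does not break under future releases. A minimal sketch to add alongside the provider block in root.tf (the version constraints here are illustrative, not what this project requires):

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # illustrative constraint; adjust to your provider version
    }
  }
}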

Step 2: Creating the VPC

Amazon Virtual Private Cloud (VPC) allows users to create logically separated cloud environments within a single account. The VPC I will create here will contain all of the components necessary to run our ECS cluster and allow the services to communicate as needed.

Here is a breakdown of the requirements for this VPC

  • Must span two availability zones
  • Must contain public subnets for internet facing services
  • Must contain private subnets for “back end” services
  • Containers launched into the private subnets should be able to reach the internet (to download dependencies for example)

Let's start to Terraform this and see how it goes. First, create a folder for the VPC component and create the vars, main, and outputs files.

mkdir vpc
touch vpc/vars.tf
touch vpc/outputs.tf
touch vpc/main.tf

First, we will need to define the necessary high-level resources: aws_vpc and aws_subnet. We will specify our desired AZs and CIDR settings via variables. Let’s also define a variable for tags just in case we want to add tags later in the project. The following should be placed in vars.tf:

variable "vpc_cidr" {}
variable "vpc_subnet_01_az" {}
variable "vpc_subnet_02_az" {}
variable "vpc_subnet_pub01_cidr" {}
variable "vpc_subnet_pub02_cidr" {}
variable "vpc_subnet_pvt01_cidr" {}
variable "vpc_subnet_pvt02_cidr" {}
variable "tags" {
  default = {}
}
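
These bare declarations work, but if you prefer self-documenting modules, the same variables can optionally carry types and descriptions. A sketch, equivalent in behavior, shown here for two of them:

variable "vpc_cidr" {
  description = "CIDR block for the VPC, e.g. 10.0.0.0/16"
  type        = string
}

variable "tags" {
  description = "Tags to apply to all VPC resources"
  type        = map(string)
  default     = {}
}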

Now that we have the variables in place, let's go ahead and create the VPC resource. Since we will be using service discovery for our services, we need to make sure enable_dns_support and enable_dns_hostnames are set to true. Note that the CIDR and tags are specified by variables.

resource "aws_vpc" "main" {
   cidr_block           = var.vpc_cidr
   tags                 = var.tags
   enable_dns_support   = true
   enable_dns_hostnames = true
}

Now that we have a VPC, we have a home for our four subnets. We will use a naming convention to indicate which are public and which are private. The AZs and CIDR settings for each subnet are specified by variables. Note that for the public subnets, we set map_public_ip_on_launch to true.

resource "aws_subnet" "subnet_pub01" {
   vpc_id                  = aws_vpc.main.id
   availability_zone       = var.vpc_subnet_01_az
   cidr_block              = var.vpc_subnet_pub01_cidr
   tags                    = var.tags
   map_public_ip_on_launch = true
 }
 resource "aws_subnet" "subnet_pub02" {
   vpc_id                  = aws_vpc.main.id
   availability_zone       = var.vpc_subnet_02_az
   cidr_block              = var.vpc_subnet_pub02_cidr
   tags                    = var.tags
   map_public_ip_on_launch = true
 }
resource "aws_subnet" "subnet_pvt01" {
   vpc_id            = aws_vpc.main.id
   availability_zone = var.vpc_subnet_01_az
   cidr_block        = var.vpc_subnet_pvt01_cidr
   tags              = var.tags
 }
 resource "aws_subnet" "subnet_pvt02" {
   vpc_id            = aws_vpc.main.id
   availability_zone = var.vpc_subnet_02_az
   cidr_block        = var.vpc_subnet_pvt02_cidr
   tags              = var.tags
 }

In order for our public subnets to be reachable from the internet, we need to define an internet gateway, a routing table with a route to the internet gateway, and associate that routing table with both public subnets.

resource "aws_internet_gateway" "inet_gateway" {
   vpc_id = aws_vpc.main.id
   tags   = var.tags
 }
 resource "aws_route_table" "rt_pub" {
   vpc_id = aws_vpc.main.id
   tags   = var.tags
   route {
     cidr_block = "0.0.0.0/0"
     gateway_id = aws_internet_gateway.inet_gateway.id
   }
 }
 resource "aws_route_table_association" "rta_subnet_pub01" {
   subnet_id      = aws_subnet.subnet_pub01.id
   route_table_id = aws_route_table.rt_pub.id
 }
 resource "aws_route_table_association" "rta_subnet_pub02" {
   subnet_id      = aws_subnet.subnet_pub02.id
   route_table_id = aws_route_table.rt_pub.id
 }

The next step is to build a path for containers in the private subnets to reach the internet. To do so, we define a NAT Gateway in one of the public subnets to perform network address translation for the private instances, associate an Elastic IP with the gateway, and add a route from the private subnets to the NAT Gateway. Note that for simplicity (and cost) we use a single NAT Gateway and share one private route table between both private subnets; for high availability you would typically run one NAT Gateway per AZ.

resource "aws_eip" "ng01_eip" {
   vpc  = true
   tags = var.tags
 }
 resource "aws_nat_gateway" "nat_gateway_01" {
   allocation_id = aws_eip.ng01_eip.id
   subnet_id     = aws_subnet.subnet_pub01.id
   tags          = var.tags
   depends_on = [
     aws_eip.ng01_eip,
     aws_internet_gateway.inet_gateway
   ]
 }
 resource "aws_route_table" "rt_pvt01" {
   vpc_id = aws_vpc.main.id
   tags   = var.tags
   route {
     cidr_block     = "0.0.0.0/0"
     nat_gateway_id = aws_nat_gateway.nat_gateway_01.id
   }
 }
 resource "aws_route_table_association" "rta_subnet_pvt01" {
   subnet_id      = aws_subnet.subnet_pvt01.id
   route_table_id = aws_route_table.rt_pvt01.id
 }
 resource "aws_route_table_association" "rta_subnet_pvt02" {
   subnet_id      = aws_subnet.subnet_pvt02.id
   route_table_id = aws_route_table.rt_pvt01.id
 }
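
We created vpc/outputs.tf earlier but have not yet populated it. Although this part does not consume the outputs, later components (the ALB, service discovery, and the ECS services) will need the VPC and subnet IDs. Here is a sketch of what vpc/outputs.tf might expose; the output names are my own convention:

output "vpc_id" {
  value = aws_vpc.main.id
}

output "subnet_pub01_id" {
  value = aws_subnet.subnet_pub01.id
}

output "subnet_pub02_id" {
  value = aws_subnet.subnet_pub02.id
}

output "subnet_pvt01_id" {
  value = aws_subnet.subnet_pvt01.id
}

output "subnet_pvt02_id" {
  value = aws_subnet.subnet_pvt02.id
}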

Let’s now add the VPC to our root.tf file.

module "vpc" {
   source                = "./vpc"
   vpc_cidr              = var.vpc_cidr
   vpc_subnet_01_az      = var.vpc_subnet_01_az
   vpc_subnet_02_az      = var.vpc_subnet_02_az
   vpc_subnet_pub01_cidr = var.vpc_subnet_pub01_cidr
   vpc_subnet_pvt01_cidr = var.vpc_subnet_pvt01_cidr
   vpc_subnet_pub02_cidr = var.vpc_subnet_pub02_cidr
   vpc_subnet_pvt02_cidr = var.vpc_subnet_pvt02_cidr
 }

Note that we are using variables for the configuration settings here, just as we do within the module itself. This lets us keep all of our configuration in terraform.tfvars and pass it down from the root.
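
If you also want these values visible at the root (for example, via terraform output), root's outputs.tf can re-export them. A sketch, assuming the module outputs defined above:

output "vpc_id" {
  value = module.vpc.vpc_id
}

output "public_subnet_ids" {
  value = [module.vpc.subnet_pub01_id, module.vpc.subnet_pub02_id]
}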

So, we need to add a variable declaration for each of the above to vars.tf:

variable "vpc_cidr" {}
variable "vpc_subnet_01_az" {}
variable "vpc_subnet_02_az" {}
variable "vpc_subnet_pub01_cidr" {}
variable "vpc_subnet_pvt01_cidr" {}
variable "vpc_subnet_pub02_cidr" {}
variable "vpc_subnet_pvt02_cidr" {}

Now, add values for the variables in terraform.tfvars:

vpc_cidr              = "10.0.0.0/16"
vpc_subnet_01_az      = "us-west-2a"
vpc_subnet_02_az      = "us-west-2b"
vpc_subnet_pub01_cidr = "10.0.0.0/24"
vpc_subnet_pub02_cidr = "10.0.1.0/24"
vpc_subnet_pvt01_cidr = "10.0.2.0/24"
vpc_subnet_pvt02_cidr = "10.0.3.0/24"
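
The AZ names are hardcoded here for simplicity. If you would rather derive them at plan time, Terraform's aws_availability_zones data source can supply them; a sketch (wiring the values through the module variables is left out):

# Look up the AZs available in the configured region
data "aws_availability_zones" "available" {
  state = "available"
}

# e.g. data.aws_availability_zones.available.names[0] and names[1]
# could replace the hardcoded vpc_subnet_01_az / vpc_subnet_02_az values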

If you run terraform init followed by terraform apply now, you should have the base infrastructure ready for the rest of the components!

Step 3: Application Load Balancer (ALB)
