Terraform and Amazon ECS – Part 2

If you have not already done so, read Terraform and Amazon ECS – Part 1 to get started.

I left off having created all of the resources for the VPC which will house the rest of the infrastructure for this application. So, the next step is to add some components to the VPC that will support the application more directly.

Step 3: Application Load Balancer (ALB)

Amazon provides three different types of load balancers: the Application Load Balancer (ALB), the Network Load Balancer (NLB), and the Classic Load Balancer (the original ELB). New applications should avoid the Classic Load Balancer and instead choose either the ALB or the NLB based on the type of traffic they will serve. Since this is an HTTP application, we will use an ALB.

The application load balancer will provide a mechanism for routing requests to the container services running in our ECS Cluster. There are a few concepts to become familiar with to understand exactly how this works.

First, there is the load balancer itself. Think of the load balancer as an entrypoint into our application. It will have a public DNS name and will be accessible from the internet.

Second, a load balancer can have different listeners. A listener can be thought of as a process which accepts requests of a certain type on a specific port and which will perform some sort of action upon processing the request.

Third are target groups. Target groups define a location and protocol for routing requests. For example, a target group for a REST service might specify to route the traffic using the HTTP protocol to port 8080. Individual containers which run this service would then register with this target group and the load balancer would distribute traffic to the registered containers.

Lastly are listener rules. Listener rules are the glue which binds a listener to a target group. Rules can match on the HTTP path, headers, or other parts of the request. For example, if we wanted to route traffic from /api/v1 to a v1 service and /api/v2 to a v2 service, we would define two listener rules binding those paths to v1 and v2 target groups. The v1 instances would register as targets with the v1 target group, and the v2 instances with the v2 target group. This is great because it allows each service to scale independently of the others.

We already know that we will be running three web-facing services:

  • A React UI
  • A To-Do API
  • An AWS Catalog

So, let’s create our load balancer and all of the other necessary components. We will place these components into a new component space:

mkdir alb
touch alb/main.tf
touch alb/vars.tf
touch alb/outputs.tf

First up, the load balancer. There are only a handful of things that we will need to specify in order to define our load balancer configuration. Since this is a public load balancer, we will set the internal configuration setting to “false” and set the type to “application”. An ALB needs to span at least two availability zones and must be given a set of subnet IDs in which to operate at creation time. It should also be given a set of security groups which will allow and restrict traffic, as well as a unique name. So, let’s create variables for the values which we will specify at creation time. In vars.tf, add the following (note – I’ve added vpc_id here because we will need it later):

variable "name" {}
variable "vpc_id" {}
variable "subnet_ids" {}
variable "security_group_ids" {}
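
As an optional refinement (my own suggestion, not required for this build), Terraform 0.12+ supports type constraints on variables, which catch mistakes like passing a single string where a list is expected at plan time rather than at apply time. A typed version of these variables might look like:

variable "name" {
  description = "Unique name used as a prefix for ALB resources"
  type        = string
}

variable "vpc_id" {
  description = "ID of the VPC the target groups belong to"
  type        = string
}

variable "subnet_ids" {
  description = "Public subnet IDs the ALB spans (at least two AZs)"
  type        = list(string)
}

variable "security_group_ids" {
  description = "Security groups attached to the ALB"
  type        = list(string)
}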

Now, let’s define the load balancer resource in main.tf and use our variables for configuration.

resource "aws_lb" "main" {
  name_prefix        = substr(var.name, 0, 6)
  internal           = false
  load_balancer_type = "application"
  subnets            = var.subnet_ids
  security_groups    = var.security_group_ids
}

Next, we will define the target groups. Each target group will get a name prefix to identify what service it is working with, will use the HTTP protocol on port 80, and will use an IP address target type when routing traffic to the containers. The individual settings for the health_check block depend on the service.

resource "aws_alb_target_group" "ui" {
  name_prefix = "ui"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id
  health_check {
    path = "/"
  }
  depends_on = [aws_lb.main]
}

resource "aws_alb_target_group" "aws" {
  name_prefix = "aws"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id
  health_check {
    path = "/actuator/health"
  }
  depends_on = [aws_lb.main]
}

resource "aws_alb_target_group" "todo" {
  name_prefix = "todo"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = var.vpc_id
  health_check {
    path = "/api/todoitems"
  }
  depends_on = [aws_lb.main]
}

Next, we will create a listener. The listener is going to serve HTTP traffic on port 80. Listeners require a default action at creation time. Our default action will forward requests to our UI target group. Note that we use the ARN of the load balancer to tie this listener to the load balancer we created earlier.

resource "aws_lb_listener" "http_listener" {
  load_balancer_arn = aws_lb.main.arn
  port              = "80"
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.ui.arn
  }
  depends_on = [
    aws_lb.main,
    aws_alb_target_group.ui
  ]
}

Next, we can add rules for routing to our To-Do application and the AWS catalog. Those services have endpoints of /api/todoitems and /api/aws respectively. The To-Do items API also serves CRUD requests at /api/todoitems/<id>. So, let’s define some rules which will service these requests.

resource "aws_lb_listener_rule" "aws" {
  listener_arn = aws_lb_listener.http_listener.arn
  priority     = 101
  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.aws.arn
  }
  condition {
    path_pattern {
      values = ["/api/aws/*"]
    }
  }
  depends_on = [aws_alb_target_group.aws]
}

resource "aws_lb_listener_rule" "todo" {
  listener_arn = aws_lb_listener.http_listener.arn
  priority     = 102
  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.todo.arn
  }
  condition {
    path_pattern {
      values = ["/api/todoitems", "/api/todoitems/*"]
    }
  }
  depends_on = [aws_alb_target_group.todo]
}

Notice that each rule has three basic components: a priority, an action, and a condition. The priority setting is used to resolve conflicts between conditions. Rules are evaluated in priority order, from the lowest value to the highest (a lower number means a higher priority). Once a condition is matched, the action specified by the rule is taken; if no rule matches, the listener’s default action applies.
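
To see why priorities matter, consider a hypothetical rule (not part of this build) that matches a more specific path than the todo rule. Giving it a lower priority number ensures it is evaluated before the broader /api/todoitems/* pattern:

resource "aws_lb_listener_rule" "todo_admin" {
  listener_arn = aws_lb_listener.http_listener.arn
  # Priority 50 is evaluated before the todo rule at 102, so requests
  # to /api/todoitems/admin/* match here instead of the broader rule.
  priority = 50
  action {
    type             = "forward"
    # In practice this would point at a separate admin target group;
    # the todo group is reused here only to keep the sketch self-contained.
    target_group_arn = aws_alb_target_group.todo.arn
  }
  condition {
    path_pattern {
      values = ["/api/todoitems/admin/*"]
    }
  }
}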

Lastly, we need to define a security group that will allow HTTP traffic to reach our load balancer. We will define this security group in the vpc/main.tf file and supply the security group IDs to the ALB module at runtime. In vpc/main.tf, add the following security group definition:

resource "aws_security_group" "allow_http" {
  name        = "allow_http"
  description = "Allow HTTP inbound traffic"
  vpc_id      = aws_vpc.main.id
  tags        = var.tags
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [var.vpc_subnet_pub01_cidr, var.vpc_subnet_pub02_cidr]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  depends_on = [
    aws_subnet.subnet_pub01,
    aws_subnet.subnet_pub02
  ]
}

This security group is named “allow_http” and allows incoming TCP requests on port 80 from any location. It also allows TCP traffic between any components in the public subnets on any port. This allows our load balancer to talk to containers on any port necessary. It also allows all egress traffic.

The last steps are to stitch this all together! First, we need to add the ID of the security group created in vpc/main.tf to that module’s outputs.tf:

output "sg_allow_http_id" {
  value = aws_security_group.allow_http.id
}

Next, let’s update root.tf to use our alb module.

module "alb" {
  source             = "./alb"
  name               = var.name
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = [module.vpc.subnet_pub01_id, module.vpc.subnet_pub02_id]
  security_group_ids = [module.vpc.sg_allow_http_id]
}

The last bit here is to add a name variable to our root module. Open the root module’s vars.tf and add a definition for the name variable. We will use this same name for our cluster later.

variable "name" {}

Lastly, open terraform.tfvars and set a value for the name variable. I used my domain name, but you can use whatever you’d like.

name       = "jsoncampos"

And that’s it! We should now be able to run terraform apply to provision the ALB we will use to route traffic into our ECS cluster!
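
One small addition worth considering: alb/outputs.tf is still empty, and exposing the load balancer’s DNS name makes it easy to find the application’s public endpoint after an apply. A minimal sketch (the output names here are my own choice):

# In alb/outputs.tf
output "alb_dns_name" {
  value = aws_lb.main.dns_name
}

# In the root module, re-export the value so it appears in apply output
output "alb_dns_name" {
  value = module.alb.alb_dns_name
}

With these in place, terraform output alb_dns_name prints the public hostname of the load balancer, which we can then open in a browser or point a DNS record at.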

Step 4: Discovery
