This is the final post of this series. If you have not yet done so, go back and read the previous posts.
This post wraps up the series by creating the ECS service definitions that manage our tasks and by registering the necessary back-end services with AWS Cloud Map.
Step 6: ECS Service Definitions
The service definitions are a bit more involved than some of the earlier components. They tie together our ECS cluster, the load balancer target groups, and the service discovery resources we created in the earlier steps.
We will continue to work in the ecs module folder for the service definitions. First, though, we need to go back and add some outputs to the alb module so we can reference the target groups used by the services. Add the following to alb/outputs.tf:
output "target_group_ui_arn" { value = aws_alb_target_group.ui.arn } output "target_group_aws_arn" { value = aws_alb_target_group.aws.arn } output "target_group_todo_arn" { value = aws_alb_target_group.todo.arn }
These target groups map requests arriving at the ALB to the services running in ECS. They will be provided as input to the ECS module, so we need to define variables for each of them in vars.tf. While we are at it, we can add a variable for the DNS namespace ID created in the last post. Add the following to ecs/vars.tf:
variable "service_discovery_ns_id" {} variable "target_group_ui_arn" {} variable "target_group_aws_arn" {} variable "target_group_todo_arn" {}
The last thing before defining our services is to define some security groups that allow the ALB to communicate with the containers and the containers to reach one another. We already created the allow_http security group, which covers the first requirement, but we still need groups that allow containers in the public subnets to connect to the mysql and consul containers in the private subnets. MySQL connections use port 3306 and Consul uses 8500. Let's add the following groups to vpc/main.tf and the output variables to vpc/outputs.tf:
// vpc/main.tf

// Allows connections to port 3306 from resources in the public subnets
resource "aws_security_group" "allow_mysql" {
  name        = "allow_mysql"
  description = "Allow incoming connections on port 3306"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [var.vpc_subnet_pub01_cidr, var.vpc_subnet_pub02_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  depends_on = [aws_subnet.subnet_pub01, aws_subnet.subnet_pub02]
}

// Allows connections to port 8500 from resources in the public subnets
resource "aws_security_group" "allow_consul" {
  name        = "allow_consul"
  description = "Allow incoming connections on port 8500"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 8500
    to_port     = 8500
    protocol    = "tcp"
    cidr_blocks = [var.vpc_subnet_pub01_cidr, var.vpc_subnet_pub02_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  depends_on = [aws_subnet.subnet_pub01, aws_subnet.subnet_pub02]
}

// vpc/outputs.tf
output "sg_allow_mysql_id" {
  value = aws_security_group.allow_mysql.id
}

output "sg_allow_consul_id" {
  value = aws_security_group.allow_consul.id
}
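A side note on the ingress rules: allowing the full public subnet CIDR ranges works, but a tighter alternative is to allow traffic only from the security group the front-end tasks run under. A sketch of what the mysql rule could look like instead, assuming the public services use the allow_http group:

// Alternative ingress for allow_mysql: accept connections only from
// resources in the allow_http security group rather than whole subnets.
ingress {
  from_port       = 3306
  to_port         = 3306
  protocol        = "tcp"
  security_groups = [aws_security_group.allow_http.id]
}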
Now we have everything necessary to define the services that will be registered with the load balancer and receive traffic. So, let's create the service definitions for the AWS, To-Do, and UI services. These services use the Fargate launch type. We could place them in a private subnet since the ALB receives the traffic, but in this case I'll place them in a public subnet and give them a public IP as well. We will also launch only a single task for each service, since this is just a demonstration and won't need to deal with load issues.
resource "aws_ecs_service" "ui" { name = "ui" launch_type = "FARGATE" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.ui.arn desired_count = 1 depends_on = [aws_ecs_cluster.main, aws_ecs_task_definition.ui] load_balancer { target_group_arn = var.target_group_ui_arn container_name = "ui" container_port = 80 } network_configuration { subnets = [var.subnet_pub01_id, var.subnet_pub02_id] security_groups = [var.sg_allow_http_id] assign_public_ip = true } }
As you can see, the cluster, task definition, load balancer, security groups, and VPC are all being referenced here… we must be getting close to the end! Let's go ahead and add the other two public-facing services as well:
resource "aws_ecs_service" "aws" { name = "aws" launch_type = "FARGATE" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.aws.arn desired_count = 1 depends_on = [aws_ecs_cluster.main, aws_ecs_task_definition.aws] load_balancer { target_group_arn = var.target_group_aws_arn container_name = "aws" container_port = 8080 } network_configuration { subnets = [var.subnet_pub01_id, var.subnet_pub02_id] security_groups = [var.sg_allow_http_id] assign_public_ip = true } } resource "aws_ecs_service" "todo" { name = "todo" launch_type = "FARGATE" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.todo.arn desired_count = 1 depends_on = [aws_ecs_cluster.main, aws_ecs_task_definition.todo] load_balancer { target_group_arn = var.target_group_todo_arn container_name = "todo" container_port = 80 } network_configuration { subnets = [var.subnet_pub01_id, var.subnet_pub02_id] security_groups = [var.sg_allow_http_id] assign_public_ip = true } }
Now we need to create the services for our private resources. These are very similar, but they specify the private subnets in the network_configuration.subnets field, are not assigned a public IP, and use the allow_mysql and allow_consul security groups. They also add a service_registries block to register their tasks as instances with AWS Cloud Map, which allows our front-end services to find our back-end services via DNS. For example, registering "mysql" lets the front-end services reach our MySQL service at mysql.svc.jsoncampos.local. First, we need to register the name of each service with AWS Cloud Map. To do so, we use the confusingly named aws_service_discovery_service resource:
resource "aws_service_discovery_service" "mysql" { name = "mysql" dns_config { namespace_id = var.service_discovery_ns_iddns_records {
ttl = 60
type = "A"
}
routing_policy = "MULTIVALUE"
} health_check_custom_config { failure_threshold = 1 } } resource "aws_service_discovery_service" "consul" { name = "consul" dns_config { namespace_id = var.service_discovery_ns_iddns_records {
ttl = 60
type = "A" }
routing_policy = "MULTIVALUE"
} health_check_custom_config { failure_threshold = 1 } }
We can now use these discovery service definitions when we create our ECS services. Yes, I know… the word "service" is a bit overloaded here. I hope you are following!
resource "aws_ecs_service" "mysql" { name = "mysql" launch_type = "FARGATE" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.mysql.arn desired_count = 1 depends_on = [aws_ecs_cluster.main, aws_ecs_task_definition.mysql] network_configuration { subnets = [var.subnet_pvt01_id, var.subnet_pvt02_id] security_groups = [var.sg_allow_mysql_id] } service_registries { registry_arn = aws_service_discovery_service.mysql.arn } } resource "aws_ecs_service" "consul" { name = "consul" launch_type = "FARGATE" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.consul.arn desired_count = 1 depends_on = [aws_ecs_cluster.main, aws_ecs_task_definition.consul] network_configuration { subnets = [var.subnet_pvt01_id, var.subnet_pvt02_id] security_groups = [var.sg_allow_consul_id] } service_registries { registry_arn = aws_service_discovery_service.consul.arn } }
Notice the addition of the previously defined discovery services in the service_registries section. This tells ECS to include the tasks' addresses in the discovery service's A records. When tasks stop and start, they are automatically registered and deregistered with the service registry, which makes deployments seamless!
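With the discovery entries in place, a front-end task can reach the database by its Cloud Map name instead of a hard-coded IP. As a purely illustrative example (the environment variable name is hypothetical, and the DNS name assumes the svc.jsoncampos.local namespace from the last post), the todo task's container definition could pass the host like this:

// Hypothetical fragment of the container definition inside
// aws_ecs_task_definition.todo — the app resolves MySQL via Cloud Map DNS.
environment = [
  {
    name  = "DB_HOST"                    // illustrative variable name
    value = "mysql.svc.jsoncampos.local" // <service>.<namespace>
  }
]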
Step 7: Bringing It All Together
We are nearly finished… FINALLY! All that's left is to call our ECS module from root.tf and add a variable definition at the root for the cluster name.
// vars.tf (root vars file)
variable "name" {}

// terraform.tfvars
name = "jsoncampos-demo"

// root.tf
module "ecs" {
  source = "./ecs"

  name                    = var.name
  service_discovery_ns_id = module.discovery.service_discovery_ns_id
  target_group_ui_arn     = module.alb.target_group_ui_arn
  target_group_aws_arn    = module.alb.target_group_aws_arn
  target_group_todo_arn   = module.alb.target_group_todo_arn
  subnet_pub01_id         = module.vpc.subnet_pub01_id
  subnet_pvt01_id         = module.vpc.subnet_pvt01_id
  subnet_pub02_id         = module.vpc.subnet_pub02_id
  subnet_pvt02_id         = module.vpc.subnet_pvt02_id
  sg_allow_http_id        = module.vpc.sg_allow_http_id
  sg_allow_mysql_id       = module.vpc.sg_allow_mysql_id
  sg_allow_consul_id      = module.vpc.sg_allow_consul_id
}
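Once you run terraform apply, the application is reached through the ALB's DNS name. If the alb module does not already expose it, you can surface it as a root output; a minimal sketch, assuming the load balancer resource in that module is named aws_alb.main (adjust to match your own naming):

// alb/outputs.tf
output "alb_dns_name" {
  value = aws_alb.main.dns_name
}

// root.tf
output "app_url" {
  value = "http://${module.alb.alb_dns_name}"
}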
We should now have a fully functioning ECS application environment and a demo application with public and private components.
Conclusion
I hope you found at least something in this series helpful. This was a learning experience for me, and I hope it primes you for developing a more robust ECS cluster. There is plenty I left out that could be improved, but this is a solid starting point.
Thank you for reading and happy Terraforming!