Run a Docker Environment With AWS ECS

Created May 10, 2018 12:21:05 AM PDT (Revised Jun 05, 2018 9:41:57 PM PDT)

Create an Elastic Container Repository, upload an image, then manage various AWS components to host a Docker instance

There are lots of moving parts to hosting and managing Docker instances via AWS Elastic Container Services. Let's walk through it!



You must have set up:



To summarize the steps of the process, here's what we'll cover:

1) Configure an internal-facing application load balancer with an internal VPC/subnet configuration

2) Create an Elastic Container Repository (ECR) for your image

3) Build a Docker image, tag it for the repository and push it to ECR

4) Write a task definition that configures your Docker container(s)

5) Orchestrate a target group for the load balancer, complete with the targets, path rule and listener



First things first. You can set these in your runtime so these variables stay defined throughout the examples. It should go without saying, but sometimes it needs to be said - the assignments below are placeholders, so replace the values with values relevant to your actual account...

# Your AWS account number
AWS_ACCT_NUM="123456789012"

# Region you are doing this in
AWS_REGION="us-west-2"

# The ECR base repository, containing your account num and region
ECR_REPOSITORY="${AWS_ACCT_NUM}.dkr.ecr.${AWS_REGION}.amazonaws.com"

# The profile used to log in (see your ~/.aws/credentials file)
ECS_CRED_PROFILE="default"

# VPC instance ID (whatever VPC your ECS-optimized EC2 instance is attached to)
ECS_INSTANCE_VPC_ID="vpc-3ac0fb5f"

# Subnets on the VPC (choose 2 or more)
ECS_SUBNETS="subnet-11aa22bb subnet-33cc44dd"

# The cluster's name
ECS_CLUSTER_NAME="my-example-cluster"

# Your app load balancer name
ELBV2_LOAD_BALANCER_NAME="my-example-load-balancer"

# You will be able to set the ARN after the load balancer is created
ELBV2_LOAD_BALANCER_ARN=""

Log in to your ECR repository. This login should carry you through the remaining steps if they're done within a reasonable time, but if it expires, you'll need to run the command again. This is just one of many creative ways to do it:

$ aws ecr get-login \
    --no-include-email \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} \
  | awk '{printf $6}' \
  | docker login -u AWS --password-stdin ${ECR_REPOSITORY} > /dev/null


The command below will create your repository so you can upload your image. Call it "my-sample-image" or pick a different name.

$ aws ecr create-repository \
    --repository-name "my-sample-image" \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE}

- OR - You can namespace the repository name to essentially group repositories, like a subdirectory

$ aws ecr create-repository \
    --repository-name "my-projects/my-sample-image" \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE}



If you need to build your image (Dockerfile), run:

$ docker build \
    --tag ${ECR_REPOSITORY}/my-projects/my-sample-image:latest /directory/of/your/Dockerfile

Otherwise, just tag your existing image to match the repository name you created in ECR

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
some-image-name     latest              1cadc7378907        8 days ago          377MB
$ docker tag 1c ${ECR_REPOSITORY}/my-projects/my-sample-image:latest

A few things to note on the above:

  • You should include the base URL. I believe Docker otherwise interprets this as a push to the official Docker Hub library. Basically, you would see something in the next step that looks like
    • "The push refers to repository []"
  • Besides the ECR base repository being in the image name, this follows the earlier example where I namespaced the project repository. Whatever you ended up calling your image/repository, this should match.
  • Did you notice I didn't use the full image ID hash? A shortened hash works - it can be as few characters as needed, so long as it uniquely matches one ID. If I had another image ID of, say, 1cbxyz123, I would have had to identify the above hash as "1ca".
  • Finally, I could also have excluded the tag "latest," which is applied by default when no tag is specified. You can use other tags at will, such as v0.1 or whatever has significance for your project.
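To illustrate the unique-prefix rule outside of Docker, here's a plain-shell sketch. The image IDs and the matches helper are hypothetical - a prefix is usable only when it matches exactly one ID:

```shell
# Hypothetical list of local image IDs
ids="1cadc7378907
1cbxyz123456
702ecc77a89d"

# Count how many IDs start with a given prefix
matches() { echo "$ids" | grep -c "^$1"; }

matches "1c"   # 2 -> ambiguous; Docker would reject this prefix
matches "1ca"  # 1 -> unique; resolves to 1cadc7378907
```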



Now that we created the repo and tagged the image locally, let's push it on up.

$ docker push ${ECR_REPOSITORY}/my-projects/my-sample-image:latest
The push refers to repository []
f7fc81371cdd: Pushed
c9bc48198222: Pushed
702ecc77a89d: Pushed
bb63d1a07b86: Pushed
8ee3253f625b: Pushed
39526f91715e: Pushed
18b9fb6c3de0: Pushed
d0c587c939d1: Pushed
441fed3d9760: Pushed
3cf5aaea17c8: Pushed
b4f1480e8823: Pushed
57884c707380: Pushing [========>                                          ]  34.54MB/209.4MB
0a5cdf6060bc: Pushed
d626a8ad97a1: Pushing [==========================>                        ]  29.28MB/55.29MB

(Depending on the size of your image and your internet connection, this can take a while)



This next step is a little messy, but bear with me here. We're going to create a task definition, which essentially orchestrates the containers' environments. This is a very similar concept to Docker Compose, the key differences being that task definitions are JSON documents rather than YAML, and that they are built for each environment rather than templated with environment variables. I might explain this somewhere else another time, but for now, let's register the task definition.

First, create a file called my-sample-task-definition.json. Here is a basic example of what goes into that file (the "image" value below is a placeholder - use the full ECR image URI you tagged and pushed above):

In this example, the file is located at /usr/share/ecs-task-definitions/my-sample-task-definition.json

{
    "containerDefinitions": [
        {
            "portMappings": [
                {
                    "hostPort": 0,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ],
            "mountPoints": [],
            "memory": 2048,
            "volumesFrom": [],
            "name": "my-sample-container",
            "image": "<your-ecr-repository>/my-projects/my-sample-image:latest"
        }
    ],
    "volumes": []
}



Once your file is saved, register it. If you don't mind the ugliness, you can also put that JSON string into the argument directly (omitting the tabs and newlines); I imagine that would be a good way to automate this and use variables in a script. But for this tutorial, we'll just refer to the file:

$ aws ecs register-task-definition \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} \
    --cli-input-json file:///usr/share/ecs-task-definitions/my-sample-task-definition.json \
    --family "my-sample-task-family"



Now it's time to create the target group for the app load balancer. The naming scheme of some of these entities can be a little confusing, because the "load balancer" is more like a "proxy." We're going to set up a listener and a path-based rule affiliated with the target group. Per the flow of this tutorial, the target group will contain a single target, but it can contain more than one target, and in that regard the target group is effectively the "load balancer": it directs traffic to a healthy target in the mix. If one container goes down (fails a health check), the application load balancer itself is not aware of this; the target group is. The load balancer will continue to route traffic, based on the path rule, to the target group, and the target group will pick a target based on the health of each of its targets.



Here are some helpful diagrams of this:

ALB Components


Sample App



So, it's important to understand the order of these next commands. Creating a target group allows us to create a path listener and rule. The listener and rule require the ARN of the new target group, so we need to parse the JSON response from the CLI command. My approach to this was:

$ ARN_FROM_CREATING_TARGET_GROUP=$(aws elbv2 create-target-group \
    --name "my-example-target-group" \
    --protocol HTTP \
    --port 80 \
    --vpc-id ${ECS_INSTANCE_VPC_ID} \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} | grep TargetGroupArn | awk '{ print $2 }' | tr -d '",')




Before the first pipe, the response is a multi-line JSON output indicating the success of the newly created target group. The grep TargetGroupArn isolates the line of that JSON where the new ARN appears. In the next segment, the awk command gives us the second column, where the JSON value is. Finally, that bit is trimmed of quotes and commas in the last piped segment. The entire command is wrapped in a variable so the bare ARN can be used in the next sequence of commands.
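You can sanity-check that grep/awk/tr chain without touching AWS by running it against a canned line standing in for the CLI response:

```shell
# A stand-in for the line of JSON the CLI returns for the new target group
SAMPLE_LINE='    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-example-target-group/123abc456def987",'

# Same extraction chain as above: isolate the value column, strip quotes/commas
ARN=$(echo "$SAMPLE_LINE" | grep TargetGroupArn | awk '{ print $2 }' | tr -d '",')
echo "$ARN"
# arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-example-target-group/123abc456def987
```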



Naturally, without the piped parsing, the command...

$ aws elbv2 create-target-group \
    --name "my-example-target-group" \
    --protocol HTTP \
    --port 80 \
    --vpc-id ${ECS_INSTANCE_VPC_ID} \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE}


... would look like:

{
    "TargetGroups": [
        {
            "TargetGroupName": "my-example-target-group",
            "Protocol": "HTTP",
            "Port": 80,
            "VpcId": "vpc-3ac0fb5f",
            "TargetType": "instance",
            "HealthyThresholdCount": 1,
            "Matcher": {
                "HttpCode": "200"
            },
            "UnhealthyThresholdCount": 0,
            "HealthCheckPath": "/",
            "HealthCheckProtocol": "HTTP",
            "HealthCheckPort": "traffic-port",
            "HealthCheckIntervalSeconds": 30,
            "HealthCheckTimeoutSeconds": 5,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-example-target-group/123abc456def987"
        }
    ]
}



So, the piped parsing is necessary so that ARN_FROM_CREATING_TARGET_GROUP gets set to (per the example above):

arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-example-target-group/123abc456def987

Perhaps there is a better way to extract that - PLEASE hit me up (reply to this blog) and provide a cleaner solution if you come across one =)
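For what it's worth, one cleaner route is the AWS CLI's built-in --query (JMESPath) and --output text flags, which return the bare ARN with no piping at all. A sketch, not tested against a live account:

```shell
# Let the CLI extract the ARN itself via a JMESPath --query,
# instead of piping through grep/awk/tr
ARN_FROM_CREATING_TARGET_GROUP=$(aws elbv2 create-target-group \
    --name "my-example-target-group" \
    --protocol HTTP \
    --port 80 \
    --vpc-id ${ECS_INSTANCE_VPC_ID} \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} \
    --query 'TargetGroups[0].TargetGroupArn' \
    --output text)
```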



Essentially the same principle applies when creating the load balancer's listener. We want to wrap that command in a variable so we can pass its ARN into the command that creates the "rule." Notice that in this command, we use the previous ARN-wrapping variable for the listener's default actions, then wrap all of that into yet another variable called ARN_FROM_CREATING_LISTENER.

$ ARN_FROM_CREATING_LISTENER=$(aws elbv2 create-listener \
    --load-balancer-arn ${ELBV2_LOAD_BALANCER_ARN} \
    --default-actions "Type=forward,TargetGroupArn=${ARN_FROM_CREATING_TARGET_GROUP}" \
    --protocol HTTP \
    --port 80 \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} | grep ListenerArn | awk '{ print $2 }' | tr -d '",')



K, so this is fun. We also need to set a priority number in case we have more than one path rule on the load balancer. A quick (hacky) way to accomplish that is:

# List the existing priority numbers and grab the highest
GET_HIGHEST_PRIORITY_NUM=$(aws elbv2 describe-rules \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} \
    --listener-arn "${ARN_FROM_CREATING_LISTENER}" | grep "Priority" | awk '{ print $2 }' | tr -d '",' | sort -rn | head -1)

# Set the desired priority number to 0 if it's currently not set or set as "default"
if [ -z "${GET_HIGHEST_PRIORITY_NUM}" ] || [ "${GET_HIGHEST_PRIORITY_NUM}" == "default" ]; then
    GET_HIGHEST_PRIORITY_NUM=0
fi

# Set the priority number +1 for the next incoming rule
PRIORITY=$((GET_HIGHEST_PRIORITY_NUM + 1))

$ aws elbv2 create-rule \
    --listener-arn "${ARN_FROM_CREATING_LISTENER}" \
    --priority ${PRIORITY} \
    --actions "Type=forward,TargetGroupArn=${ARN_FROM_CREATING_TARGET_GROUP}" \
    --conditions "Field=path-pattern,Values=/my-project-uri*" \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE}
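If you want to sanity-check the default-or-increment priority logic without any AWS calls, it can be wrapped in a small hypothetical helper and exercised locally:

```shell
# Given the highest existing priority (possibly empty or "default"),
# compute the priority for the next rule
next_priority() {
    highest="$1"
    if [ -z "$highest" ] || [ "$highest" = "default" ]; then
        highest=0
    fi
    echo $((highest + 1))
}

next_priority ""        # 1 (no rules exist yet)
next_priority "default" # 1 (only the default rule exists)
next_priority "7"       # 8
```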



Finally, the last step - creating the service to manage it all:

$ aws ecs create-service \
    --cluster ${ECS_CLUSTER_NAME} \
    --service-name "my-example-service" \
    --task-definition "my-sample-task-family" \
    --desired-count 1 \
    --role "ecsServiceRole" \
    --region ${AWS_REGION} \
    --profile ${ECS_CRED_PROFILE} \
    --load-balancers "targetGroupArn=${ARN_FROM_CREATING_TARGET_GROUP},containerName=my-sample-container,containerPort=80"


You're all set! Log into the AWS console and take a peek at your service and tasks to see the components working. I've also created a very basic demo on GitHub, which should evolve a little over the next few weeks.