Simple log handling in AWS with Docker apps

One of the things we advocate for our customers when they start using AWS for their infrastructure needs is to keep things as simple as possible.

What we mean is to focus on what is absolutely needed (usually hosting one to five web backends and frontends) and rely on AWS products for everything around that. One particular place where we don’t reach for custom tooling right away is logs.

There are plenty of commercial solutions to handle logs, but why go outside of AWS when it comes with a very capable solution from the start?

Here is an example of how to use CloudWatch Logs for Docker-based application hosting.

Docker and log drivers

Docker comes with a variety of log drivers and they are very easy to configure. We only need to specify the driver and a few options when starting a container.

Here is an example we have extracted from a launch template:

docker run --dns 10.11.0.2 --restart always -p "8080:8080" \
--name=$SERVICE_NAME \
--log-driver=awslogs \
--log-opt awslogs-region=eu-central-1 \
--log-opt awslogs-group=infra/production/hello-world \
--log-opt awslogs-stream=$SERVICE_NAME-$COLOR-$DOCKER_TAG \
somewhere/a_container:$DOCKER_TAG

For such an example to work we also need a CloudWatch Logs group named infra/production/hello-world (the awslogs driver does not create it by default).
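If the group does not exist yet, it can be created with the AWS CLI. Here is a minimal sketch; the retention value is just an example, pick whatever suits your needs:

aws logs create-log-group \
  --region eu-central-1 \
  --log-group-name infra/production/hello-world

# Optional: cap how long log events are kept (in days).
aws logs put-retention-policy \
  --region eu-central-1 \
  --log-group-name infra/production/hello-world \
  --retention-in-days 30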

Aside from that, the important part is the awslogs-stream option. It lets us set the name of the log stream the container writes to, and by building that name from the service name, color and tag we regroup within a single log stream all the logs related to the release of a specific service.
In our case we also use a color parameter, which is related to our use of blue/green deployments.
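To make that concrete, here is what the stream name would expand to with some illustrative values (not taken from our setup, just an example): with SERVICE_NAME=hello-world, COLOR=blue and DOCKER_TAG=1.4.2 the container logs end up in a stream named hello-world-blue-1.4.2 inside the infra/production/hello-world log group.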

A word on IAM and Instance roles

For this example to work the instances should be associated with an IAM role that has the rights to create CloudWatch log streams and push events to them:

  • logs:CreateLogStream
  • logs:PutLogEvents

Without those permissions the logs will simply not appear in the log group.
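For reference, here is a minimal sketch of how such a policy could be attached with the AWS CLI. The role and policy names below are made up for the example, and you may want to scope the resource more tightly than we do here:

# Attach an inline policy to the instance role (names are just examples).
aws iam put-role-policy \
  --role-name hello-world-instance-role \
  --policy-name cloudwatch-logs-push \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:eu-central-1:*:log-group:infra/production/hello-world:*"
    }]
  }'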

Usage

Once the setup is done the stream of logs from the Docker container will appear in CloudWatch. You can simply browse there and pick the logs related to the release. And because we use the release ID within the log stream’s name we can easily generate URLs to those logs from our custom monitoring dashboard. This can be very handy.
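The same streams can also be read from the command line, which is handy for quick checks from a terminal. A small sketch, again with an illustrative stream name:

aws logs get-log-events \
  --region eu-central-1 \
  --log-group-name infra/production/hello-world \
  --log-stream-name hello-world-blue-1.4.2 \
  --limit 50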

Conclusion

We don’t think it’s worth describing every step of setting up an ASG with such log handling. The topic is relatively easy to understand if you have a good grasp of how to run Docker containers.
If you have read our other posts about setting up an ASG with a launch template and using ECR as the source for your containers, it should be fairly straightforward to include this part in such a setup.

As usual: don’t hesitate to contact us if you have questions or remarks on this article.

Have fun!

Need help?

We specialise in helping small and medium teams transform the way they build, manage and maintain their Internet-based services.

With more than 10 years of experience running Ruby-based web and network applications, and 6 years running products serving between 2,000 and 100,000 users daily, we bring skills and insights to your teams.

Whether you have a small team looking for insights to quickly get into the right gear to support massive usage growth, or a medium-sized one trying to tackle growing pains between software engineers and infrastructure: we can help.

We are based in France, EU and especially happy to respond to customers from Denmark, Estonia, Finland, France, Italy, Netherlands, Norway, Spain, and Sweden.

We can provide training, general consulting on infrastructure and design, and software engineering, remotely or in-house depending on location and length of contract.

Contact us to talk about what we can do: sales@imfiny.com.