Exception handling with Sentry in AWS
While we can handle logs and metrics mostly through AWS CloudWatch, it’s not that straightforward when it comes to application exceptions.
Many teams choose commercial solutions such as NewRelic, AirBrake or AppSignal. Yet those solutions are either costly or only support a handful of programming languages.
One alternative is to use Sentry and run its “on-premise” version, which is open source. The big advantage of Sentry is that you can use it with most common programming languages.
Sentry is relatively straightforward to run with Docker and not too difficult to set up on AWS when using Terraform. Here is one take on this topic.
A review of the needs
Sentry is a Python web application written with Django that relies primarily on Redis and PostgreSQL; advanced setups can also require RabbitMQ.
To put it simply, here are the bricks you need to set up:
- A temporary EC2 instance to run the setup task
- An EC2 instance to run the web, background and cron workers (3 containers)
- An RDS PostgreSQL instance (version 9.5)
- An ElastiCache Redis instance
- A public ELB
As with everything AWS, you will need to prepare:
- an instance profile
- an IAM Role
- security groups:
- one for the RDS and Elasticache instances allowing only private subnets to access them
- one for the EC2 instances running the application, allowing only the load balancer to access them on port 8080 and the whole VPC on port 22 (bastion access)
- one for the ELB allowing everyone on the internet to access it on port 443
Finally, you will need an SSL certificate, set up through AWS Certificate Manager.
Of course, well before that we also need an AWS account, a working Terraform setup, and a VPC with a minimum of two public and two private subnets in it.
Docker image for Sentry
If one follows the “official” documentation from Sentry “on-premise”, things don’t go according to plan, at least when we tried in mid-January 2019. Instead one needs to use a different Dockerfile from the official repository and mix it with the files we prepared.
Most of the configuration is done through environment variables, so we don’t have to worry too much about that and can just prepare a Docker image. We have uploaded our build to this repository on Docker Hub; feel free to use it. We will refer to it in the rest of the article.
The general idea is that you can follow Sentry’s documentation (linked above), but the following Terraform files will give you a good basis to set up the AWS resources that are needed.
If you are familiar with AWS and Terraform this shouldn’t be much of an issue for you.
Initial preparation locally
There is one step that needs to be done before we start: generating a secret for Sentry.
To do that, run the following on your local machine or any handy workstation terminal:
#> docker pull mcansky/sentry:latest
#> docker run mcansky/sentry:latest config generate-secret-key
Preserve the result safely and paste it in the file below in place of “<big secret>”.
The setup with Terraform
Now, to go further we should have:
- a VPC (referred to as aws_vpc.app_vpc)
- a set of public and private subnets (a minimal sketch of the VPC and subnets follows this list)
- the address of the DNS server on your VPC. It is usually easy to find: it’s the second address of your CIDR block (10.11.0.2 in our case). Put that in the locals block at the head. This is related to the container networking setup; without it, the containers would not be able to use the AWS internal DNS for name resolution.
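If you don’t have those resources yet, here is a minimal sketch of what they could look like (the resource names match the references used later in this article, the CIDRs come from the locals block shown further down, and the internet gateway and route tables are left out):
data "aws_availability_zones" "available" {}

resource "aws_vpc" "app_vpc" {
  cidr_block           = "${local.vpc_cidr_block}"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# three private subnets, one per availability zone
resource "aws_subnet" "priv-subnets" {
  count             = 3
  vpc_id            = "${aws_vpc.app_vpc.id}"
  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
  cidr_block        = "${element(list(local.private_subnet_0_cidr, local.private_subnet_1_cidr, local.private_subnet_2_cidr), count.index)}"
}

# three public subnets for the load balancer
resource "aws_subnet" "pub-subnets" {
  count                   = 3
  vpc_id                  = "${aws_vpc.app_vpc.id}"
  availability_zone       = "${element(data.aws_availability_zones.available.names, count.index)}"
  cidr_block              = "${element(list(local.public_subnet_0_cidr, local.public_subnet_1_cidr, local.public_subnet_2_cidr), count.index)}"
  map_public_ip_on_launch = true
}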
Note that the first part regroups several bits you should change: the locals blocks and a variable block.
The rest is standard Terraform, but you should look out for the references to private and public subnets and possibly adjust those references according to your needs.
/*
Fill the information below
*/
locals {
dns_address = "10.11.0.2"
vpc_cidr_block = "10.11.0.0/16"
private_subnet_0_cidr = "10.11.0.0/24"
private_subnet_1_cidr = "10.11.1.0/24"
private_subnet_2_cidr = "10.11.2.0/24"
public_subnet_0_cidr = "10.11.3.0/24"
public_subnet_1_cidr = "10.11.4.0/24"
public_subnet_2_cidr = "10.11.5.0/24"
aws_region_name = "eu-central-1"
aws_account_id = "<id>"
aws_ssl_cert_id = "<cert_id>"
}
variable "ssl_certs" {
default {
sentry_cert = "${format("arn:aws:acm:%s:%s:certificate/%s", local.aws_region_name, local.aws_account_id, local.aws_ssl_cert_id)}"
}
}
variable "sentry" {
default = {
secret_key = "<big secret>"
email_host = "smtp.example.org"
email_port = "2525"
email_user = "username"
email_password = "password"
email_use_tls = "True"
}
}
/* template
You should adapt the following to your existing Terraform templates, but all the bits needed for Sentry to work are in there.
*/
resource "aws_iam_role" "ec2-instance-service" {
name = "ec2-instance-service"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_instance_profile" "ec2-instance-service" {
name = "ec2-instance-service"
role = "${aws_iam_role.ec2-instance-service.name}"
}
resource "aws_iam_role_policy_attachment" "ec2-instance-ecr" {
role = "${aws_iam_role.ec2-instance-service.name}"
policy_arn = "${aws_iam_policy.ec2_release_read.arn}"
}
resource "aws_iam_policy" "ec2_release_read" {
name = "ec2_release_read"
path = "/"
description = "Policy to allow ec2 instance to get s3 files"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject"
],
"Resource":[
"arn:aws:s3:::*"
],
"Effect": "Allow"
}
]
}
POLICY
}
resource "aws_security_group" "instances-service" {
vpc_id = "${aws_vpc.app_vpc.id}"
name = "instances service"
description = "services sec group"
tags {
name = "instances services sec group"
}
egress {
protocol = "-1"
from_port = "0"
to_port = "0"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
protocol = "tcp"
from_port = "8080"
to_port = "8080"
cidr_blocks = ["${local.vpc_cidrblock}"]
}
egress {
protocol = "tcp"
from_port = "22"
to_port = "22"
cidr_blocks = ["${local.vpc_cidrblock}"]
}
ingress {
protocol = "tcp"
from_port = "22"
to_port = "22"
cidr_blocks = ["${local.vpc_cidrblock}"]
}
}
resource "aws_security_group" "rds-db-service" {
vpc_id = "${aws_vpc.app_vpc.id}"
name = "rds-db-sec-group"
description = "rds db rules sec group"
tags {
Name = "DBs sec group"
Region = "${local.aws_region_name}"
Environment = "all"
Service = "sentry"
}
egress {
protocol = "-1"
from_port = "0"
to_port = "0"
cidr_blocks = [
"${private_subnet_0_cidr}",
"${private_subnet_1_cidr}",
"${private_subnet_2_cidr}"
]
}
ingress {
protocol = "tcp"
from_port = "6379"
to_port = "6379"
cidr_blocks = [
"${private_subnet_0_cidr}",
"${private_subnet_1_cidr}",
"${private_subnet_2_cidr}"
]
}
ingress {
protocol = "tcp"
from_port = "5432"
to_port = "5432"
cidr_blocks = [
"${private_subnet_0_cidr}",
"${private_subnet_1_cidr}",
"${private_subnet_2_cidr}"
]
}
}
resource "aws_db_subnet_group" "dbs" {
name = "services"
subnet_ids = [
"${private_subnet_0_cidr}",
"${private_subnet_1_cidr}",
"${private_subnet_2_cidr}"
]
tags = {
Name = "services dbs"
}
}
resource "aws_db_instance" "sentry-postgresql" {
allocated_storage = 20
storage_type = "gp2"
engine = "postgres"
engine_version = "9.5.15"
instance_class = "db.t2.micro"
name = "sentry"
username = "postgres"
password = "postgres-sentry"
db_subnet_group_name = "${aws_db_subnet_group.dbs.name}"
vpc_security_group_ids = [
"${aws_security_group.rds-db-service.id}"
]
}
resource "aws_elasticache_subnet_group" "caches" {
name = "services"
subnet_ids = [
"${aws_subnet.priv-subnets.0.id}",
"${aws_subnet.priv-subnets.1.id}",
"${aws_subnet.priv-subnets.2.id}"
]
}
resource "aws_elasticache_cluster" "sentry-redis" {
cluster_id = "cluster-redis"
engine = "redis"
node_type = "cache.t2.small"
num_cache_nodes = 1
parameter_group_name = "default.redis3.2"
engine_version = "3.2.10"
subnet_group_name = "${aws_elasticache_subnet_group.caches.name}"
security_group_ids = [
"${aws_security_group.rds-db-service.id}"
]
}
data "template_file" "user_data_sentry_blue" {
template = <<EOF
#!/bin/bash -xe
INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id`
SERVICE_NAME=sentry
ENVIRONMENT=production
apt-get update -y
apt-get install -y bridge-utils awscli docker.io jq
DOCKER_TAG="latest"
docker pull mcansky/sentry:$DOCKER_TAG
echo "SENTRY_DB_PASSWORD=postgres" >> /etc/config.env
echo "SENTRY_DB_USER=postgres" >> /etc/config.env
echo "SENTRY_DB_NAME=sentry" >> /etc/config.env
echo "SENTRY_POSTGRES_HOST=${aws_db_instance.sentry-postgresql.address}" >> /etc/config.env
echo "SENTRY_POSTGRES_PORT=${aws_db_instance.sentry-postgresql.port}" >> /etc/config.env
echo "SENTRY_REDIS_HOST=${aws_elasticache_cluster.sentry-redis.cache_nodes.0.address}" >> /etc/config.env
echo "SENTRY_REDIS_PORT=${aws_elasticache_cluster.sentry-redis.cache_nodes.0.port}" >> /etc/config.env
echo "SENTRY_SECRET_KEY=${var.sentry["secret_key"]}" >> /etc/config.env
echo "SENTRY_EMAIL_HOST=${var.sentry["email_host"]}" >> /etc/config.env
echo "SENTRY_EMAIL_PORT=${var.sentry["email_port"]}" >> /etc/config.env
echo "SENTRY_EMAIL_USER=${var.sentry["email_user"]}" >> /etc/config.env
echo "SENTRY_EMAIL_PASSWORD=${var.sentry["email_password"]}" >> /etc/config.env
# run web instance
docker run -d --name sentry-web-01 --dns ${local.dns_address} --restart always -e SENTRY_EMAIL_USE_TLS=true -p "8080:9000" --env-file /etc/config.env mcansky/sentry:$DOCKER_TAG run web
# run worker instance
docker run -d --name sentry-worker-01 --dns ${local.dns_address} --restart always -e SENTRY_EMAIL_USE_TLS=true --env-file /etc/config.env mcansky/sentry:$DOCKER_TAG run worker
# run cron instance
docker run -d --name sentry-cron --dns ${local.dns_address} --restart always -e SENTRY_EMAIL_USE_TLS=true --env-file /etc/config.env mcansky/sentry:$DOCKER_TAG run cron
EOF
}
resource "aws_launch_template" "sentry-blue" {
name = "sentry-blue"
disable_api_termination = true
iam_instance_profile {
name = "${aws_iam_instance_profile.ec2-instance-service.name}"
}
image_id = "${data.aws_ami.ubuntu-1804.id}"
instance_initiated_shutdown_behavior = "terminate"
instance_type = "t2.small"
key_name = "${var.ssh['key_name']}"
monitoring {
enabled = true
}
vpc_security_group_ids = ["${aws_security_group.instances-service.id}"]
user_data = "${base64encode(data.template_file.user_data_sentry_blue.rendered)}"
}
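/*
The launch template above references data.aws_ami.ubuntu-1804 and var.ssh,
neither of which is defined in this article. Below is one possible sketch:
the AMI filter assumes you want Canonical's latest Ubuntu 18.04 image, and
the key pair name is a placeholder you should replace with your own.
*/
data "aws_ami" "ubuntu-1804" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# name of an existing EC2 key pair used for SSH access through the bastion
variable "ssh" {
  default = {
    key_name = "<your key pair name>"
  }
}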
resource "aws_security_group" "public-sentry-elb" {
vpc_id = "${aws_vpc.app_vpc.id}"
name = "sentry-public"
description = "sentry sec group"
tags {
name = "sentry public elb"
}
egress {
protocol = "-1"
from_port = "0"
to_port = "0"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
protocol = "tcp"
from_port = "443"
to_port = "443"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "sentry" {
name = "sentry-elb"
internal = false
security_groups = ["${aws_security_group.public-sentry-elb.id}"]
subnets = [
"${aws_subnet.pub-subnets.0.id}",
"${aws_subnet.pub-subnets.1.id}",
"${aws_subnet.pub-subnets.2.id}"
]
listener {
instance_port = 8080
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = "${var.ssl_certs["sentry_cert"]}"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 10
timeout = 20
target = "TCP:8080"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 120
tags {
Name = "sentry elb"
Environment = "all"
Service = "sentry"
}
}
resource "aws_autoscaling_group" "sentry-blue" {
vpc_zone_identifier = [
"${aws_subnet.priv-subnets.0.id}",
"${aws_subnet.priv-subnets.1.id}",
"${aws_subnet.priv-subnets.2.id}"
]
name = "sentry-blue"
max_size = 2
min_size = 0
desired_capacity = 1
health_check_grace_period = 300
health_check_type = "ELB"
force_delete = true
load_balancers = ["${aws_elb.sentry.id}"]
launch_template = {
id = "${aws_launch_template.sentry-blue.id}"
version = "$$Latest"
}
tag {
key = "name"
value = "sentry http"
propagate_at_launch = true
}
tag {
key = "service"
value = "sentry"
propagate_at_launch = true
}
}
The SSL part needs to be handled by hand, just because we all might have different ways to get certificates. You should add your certificate to AWS Certificate Manager and put its ID into the aws_ssl_cert_id local, which is used to build the sentry_cert ARN.
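If the certificate already lives in ACM, you can also look it up with a data source instead of pasting its ID by hand; a small sketch, assuming the certificate was issued for sentry.example.org:
data "aws_acm_certificate" "sentry" {
  domain   = "sentry.example.org"
  statuses = ["ISSUED"]
}

# its ARN can then be used directly in the ELB listener:
# ssl_certificate_id = "${data.aws_acm_certificate.sentry.arn}"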
Setup process
Once the AutoScaling Group is started, you will need to use the first instance it launches as a stepping stone to launch another one like it.
The reason is simple: the instance will start in the ASG, but it will fail one way or another and the ASG will cycle it at some point. So we need to run a similar instance on the side, with exactly the same configuration, but outside the purview of the ASG (one way to do that is sketched below).
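One way to launch that one-off copy is to reuse the launch template with the AWS CLI; a sketch, assuming the aws CLI is configured and you replace the subnet ID with one of your private subnets:
#> aws ec2 run-instances \
     --launch-template LaunchTemplateName=sentry-blue,Version='$Latest' \
     --subnet-id <one of your private subnet ids> \
     --count 1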
Once that instance is running, SSH into it and run the following:
#> docker run --rm -it --dns <your DNS address> --env-file /etc/config.env mcansky/sentry:latest upgrade
The <your DNS address> bit should be replaced by the IP address of the DNS server in your VPC, usually the second address of your CIDR block (10.11.0.2 in our case).
This will start a container that mostly runs the database migrations and then asks you to decide a few things, including the creation of the first superuser. So keep an eye on it.
Once you have done this, you can kill that temporary instance and cycle the one in the ASG so that all its containers are restarted properly.
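Cycling can be done from the console, or with a command along these lines (the instance ID being the one currently attached to the ASG):
#> aws autoscaling terminate-instance-in-auto-scaling-group \
     --instance-id <current ASG instance id> \
     --no-should-decrement-desired-capacity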
You can then connect to the Sentry web UI through the ELB (if you have taken care of linking up the hostname to the ELB that is).
Sentry will ask you for a hostname such as “sentry.example.org”, so you should prepare such an entry and point it to the ELB DNS name through a CNAME, or use the following Route53 example if Route53 handles your domain.
resource "aws_route53_record" "sentry" {
zone_id = "${var.route53_zone_ids["dev_domain"]}"
name = "sentry"
type = "A"
alias {
name = "${aws_elb.sentry.dns_name}"
zone_id = "${aws_elb.sentry.zone_id}"
evaluate_target_health = true
}
}
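Or, if you handle DNS outside of Route53, a small output (not part of the original templates) makes it easy to grab the ELB DNS name for the CNAME mentioned above:
output "sentry_elb_dns_name" {
  value = "${aws_elb.sentry.dns_name}"
}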
Important note
Sentry expects applications to connect to it directly, wherever they are, to notify it of any exceptions. This means they use the public DNS name of the Sentry server (sentry.example.org). So even if your applications are running within AWS private subnets… they will look up the public address of the server and try to reach it over the internet. That explains why the ELB is public and open to “0.0.0.0/0”.
We are split on that idea, but we guess that the use of both SSL and API keys will give a more or less secure Sentry. Still… we don’t like the idea of having a service we don’t develop exposed to the internet.
Conclusion
From there you can set up your users, teams, and projects within Sentry, and you will quickly be handling exceptions as expected.
Keep in mind that while the ASG will avoid losing your Sentry instance if the EC2 instance fails, you don’t want to run more than one of them with the launch template included as an example. While you might be happy with more than one web and one background worker, you will run into trouble with more than one cron worker.
So if you need more power for your Sentry setup, you should either run a bigger EC2 instance, or add another ASG (probably running a micro instance) dedicated to the cron worker, while all the instances in the original ASG run both background and web workers.
As usual: don’t hesitate to contact us if you have questions or remarks on this article.
Have fun!