Amazon Web Services
VITYA-XY1, 2021-02-12 16:37:57

How do I set up Blue/Green deployment "the orthodox way" with Terraform on AWS?

Let's start with what led up to this broad question. Here is the picture:
[image: Blue/Green deployment diagram]

In contrast to the scheme above: at the moment, instead of replacing the autoscaling group, I only swap the launch configuration to a new AMI image. After deploying the new infra, regardless of the value of
deregistration_delay = 300
in this resource:

resource "aws_lb_target_group" "lb_tg_name" {
  name                 = "${var.env_prefix}-lb-tg"
  vpc_id               = var.vpc_id
  port                 = "80"
  protocol             = "HTTP"
  deregistration_delay = 300

  stickiness {
    type = "lb_cookie"
  }

  health_check {
    path                = "/"
    port                = "80"
    protocol            = "HTTP"
    healthy_threshold   = 3
    unhealthy_threshold = 3
    interval            = 30
    timeout             = 5
    matcher             = "200-308"
  }

  tags = {
    Name      = "app-alb-tg"
    Env       = var.env_prefix
    CreatedBy = var.created_by
  }
}


the target group immediately puts the old machine into draining status, and the balancer returns a 503. This is despite the fact that the previous EC2 instance is still alive and able to serve requests.
So we are effectively down for about a minute, until the new machine comes up and can respond to requests.
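For context, the replacement pattern described above presumably looks something like this (a sketch; the resource and variable names are assumptions, not from the post). Because the launch configuration has a fixed name and no create_before_destroy, changing the AMI replaces the resource in place, so the old instance is deregistered before a healthy replacement exists:

```hcl
# Hypothetical sketch of the current (problematic) setup: a fixed-name
# launch configuration is destroyed and recreated when the AMI changes,
# so the old instance starts draining before the new one passes health checks.
resource "aws_launch_configuration" "app" {
  name          = "${var.env_prefix}-lc" # fixed name: forces destroy-then-create
  image_id      = var.ami_id             # changing the AMI replaces this resource
  instance_type = "t3.micro"
}
```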

1) Where should I dig so that everything goes smoothly, without any 503s?
2) Maybe I need to replace not just the launch configuration but the autoscaling group as well? If so, how?


1 answer
chupasaurus, 2021-02-13

A trick from 2015 by a HashiCorp employee: you need to create an ASG with a lifecycle block per launch configuration and manage the traffic switch on the balancer.
In Terraform, you can do this:

resource "aws_launch_configuration" "myapp" {
  name_prefix = "myapp_"
...
}

resource "aws_autoscaling_group" "myapp" {
  name             = "myapp-${aws_launch_configuration.myapp.name}"
  min_elb_capacity = "${var.myapp_asg_min_size}"
...
  lifecycle {
    create_before_destroy = true
  }
}

Terraform will not consider the new ASG created until the number of instances in InService status reaches min_elb_capacity. The balancer's own health check is what marks the new instances InService and starts sending traffic to them; at that same moment TF begins deleting the old ASG.
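A fuller sketch of this pattern, under stated assumptions: the variables (myapp_ami_id, myapp_asg_min_size, myapp_asg_max_size, subnet_ids) and the reference to the questioner's target group are hypothetical, and the instance type is a placeholder:

```hcl
# Blue/green rotation sketch: a new LC name forces a new ASG, and
# create_before_destroy keeps the old ASG serving until the new one is healthy.
resource "aws_launch_configuration" "myapp" {
  name_prefix   = "myapp_"          # generated suffix: name changes with the AMI
  image_id      = var.myapp_ami_id  # bumping the AMI creates a new LC
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "myapp" {
  # Interpolating the LC name means every AMI change also creates a new ASG.
  name                 = "myapp-${aws_launch_configuration.myapp.name}"
  launch_configuration = aws_launch_configuration.myapp.name
  min_size             = var.myapp_asg_min_size
  max_size             = var.myapp_asg_max_size
  vpc_zone_identifier  = var.subnet_ids

  # Use the balancer's health check, not just EC2 status checks, so only
  # instances that actually serve traffic count as healthy.
  health_check_type         = "ELB"
  health_check_grace_period = 300

  # Register the new instances in the same target group as the old ASG.
  target_group_arns = [aws_lb_target_group.lb_tg_name.arn]

  # Terraform waits until this many instances are InService on the
  # balancer before it starts destroying the old ASG.
  min_elb_capacity = var.myapp_asg_min_size

  lifecycle {
    create_before_destroy = true
  }
}
```

With this in place, the old instances only enter draining after the new ones are already receiving traffic, which should close the 503 window described in the question.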
