Terraform

An Introduction to Terraform

Learn the basics of Terraform in this step-by-step tutorial on how to deploy a cluster of web servers and a load balancer on AWS

Update: we took this blog post series, expanded it, and turned it into a book called Terraform: Up & Running!

This is Part 2 of the Comprehensive Guide to Terraform series. In Part 1, we explained why we picked Terraform as our IAC tool of choice and not Chef, Puppet, Ansible, SaltStack, or CloudFormation. In this post, we’re going to introduce the basics of how to use Terraform to define and manage your infrastructure.

The official Terraform Getting Started documentation does a good job of introducing the individual elements of Terraform (i.e. resources, input variables, output variables, etc), so in this guide, we’re going to focus on how to put those elements together to create a fairly real-world example. In particular, we will provision several servers on AWS in a cluster and deploy a load balancer to distribute load across that cluster. The infrastructure you’ll create in this example is a basic starting point for running scalable, highly-available web services and microservices.

This guide is targeted at AWS and Terraform newbies, so don’t worry if you haven’t used either one before. We’ll walk you through the entire process, step-by-step:

  1. Set up your AWS account
  2. Install Terraform
  3. Deploy a single server
  4. Deploy a single web server
  5. Deploy a cluster of web servers
  6. Deploy a load balancer
  7. Clean up

You can find complete sample code for the examples below at: https://github.com/gruntwork-io/intro-to-terraform. Note that all the code samples are written for Terraform 0.7.x.

Set up your AWS account

Terraform can provision infrastructure across many different types of cloud providers, including AWS, Azure, Google Cloud, DigitalOcean, and many others. For this tutorial, we picked Amazon Web Services (AWS) because it is the most popular cloud infrastructure provider and because all of the examples below fit within the AWS free tier.

When you first register for AWS, you initially sign in as the root user. This user account has access permissions to everything, so from a security perspective, we recommend only using it to create other user accounts with more limited permissions (see IAM Best Practices). To create a more limited user account, head over to the Identity and Access Management (IAM) console, click “Users”, and click the blue “Create New Users” button. Enter a name for the user and make sure “Generate an access key for each user” is checked.

Click the “Create” button and you’ll be able to see security credentials for that user, which consist of Access Key ID and a Secret Access Key. You MUST save these immediately, as they will never be shown again. We recommend storing them somewhere secure (e.g. a password manager such as Keychain or 1Password) so you can use them a little later in this tutorial.

Once you’ve saved the credentials, click “Close” (twice) and you’ll be taken to the list of users. Click on the user you just created and select the “Permissions” tab. By default, a new IAM user does not have permissions to do anything in the AWS account. To be able to use Terraform for the examples in this tutorial, add the AmazonEC2FullAccess permission (learn more about Managed IAM Policies here).

Install Terraform

Follow the instructions here to install Terraform. When you’re done, you should be able to run the terraform command:

> terraform
usage: terraform [--version] [--help] <command> [args]
(...)

In order for Terraform to be able to make changes in your AWS account, you will need to set the AWS credentials for the user you created earlier as environment variables:

export AWS_ACCESS_KEY_ID=(your access key id)
export AWS_SECRET_ACCESS_KEY=(your secret access key)

Deploy a single server

Terraform code is written in a language called HCL in files with the extension “.tf”. It is a declarative language, so your goal is to describe the infrastructure you want, and Terraform will figure out how to create it. Terraform can create infrastructure across a wide variety of platforms, or what it calls providers, including AWS, Azure, Google Cloud, DigitalOcean, and many others. The first step to using Terraform is typically to configure the provider(s) you want to use. Create a file called “main.tf” and put the following code in it:

provider "aws" {
  region = "us-east-1"
}

This tells Terraform that you are going to be using the AWS provider and that you wish to deploy your infrastructure in the “us-east-1” region (AWS has data centers all over the world, grouped into regions and availability zones, and us-east-1 is the name for data centers in Virginia, USA). You can configure other settings for the AWS provider, but for this example, since you’ve already configured your credentials as environment variables, you only need to specify the region.
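For reference, the AWS provider also accepts credentials inline via its access_key and secret_key parameters, as in the sketch below, but we recommend sticking with environment variables so your secrets never end up committed to version control:

provider "aws" {
  access_key = "(your access key id)"
  secret_key = "(your secret access key)"
  region = "us-east-1"
}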

For each provider, there are many different kinds of “resources” you can create, such as servers, databases, and load balancers. Before we deploy a whole cluster of servers, let’s first figure out how to deploy a single server that will run a simple “Hello, World” web server. In AWS lingo, a server is called an “EC2 Instance.” To deploy an EC2 Instance, add the following code to main.tf:

resource "aws_instance" "example" {
  ami = "ami-2d39803a"
  instance_type = "t2.micro"
}

Each resource specifies a type (in this case, “aws_instance”), a name (in this case “example”) to use as an identifier within the Terraform code, and a set of configuration parameters specific to the resource. The aws_instance resource documentation lists all the parameters it supports. Initially, you only need to set the following ones:

  • ami: The Amazon Machine Image to run on the EC2 Instance. The example above sets this parameter to the ID of an Ubuntu 14.04 AMI in us-east-1.
  • instance_type: The type of EC2 Instance to run. Each EC2 Instance Type has a different amount of CPU, memory, disk space, and networking capacity. The example above uses “t2.micro”, which has 1 virtual CPU, 1GB of memory, and is part of the AWS free tier.

In a terminal, go into the folder where you created main.tf, and run the “terraform plan” command:

> terraform plan
Refreshing Terraform state in-memory prior to plan...
(...)
+ aws_instance.example
    ami:                      "ami-2d39803a"
    availability_zone:        "<computed>"
    ebs_block_device.#:       "<computed>"
    ephemeral_block_device.#: "<computed>"
    instance_state:           "<computed>"
    instance_type:            "t2.micro"
    key_name:                 "<computed>"
    network_interface_id:     "<computed>"
    placement_group:          "<computed>"
    private_dns:              "<computed>"
    private_ip:               "<computed>"
    public_dns:               "<computed>"
    public_ip:                "<computed>"
    root_block_device.#:      "<computed>"
    security_groups.#:        "<computed>"
    source_dest_check:        "true"
    subnet_id:                "<computed>"
    tenancy:                  "<computed>"
    vpc_security_group_ids.#: "<computed>"
Plan: 1 to add, 0 to change, 0 to destroy.

The plan command lets you see what Terraform will do before actually doing it. This is a great way to sanity check your changes before unleashing them onto the world. The output of the plan command is a little like the output of the diff command: resources with a plus sign (+) are going to be created, resources with a minus sign (-) are going to be deleted, and resources with a tilde sign (~) are going to be modified. In the output above, you can see that Terraform is planning on creating a single EC2 Instance and nothing else, which is exactly what we want.

To actually create the instance, run the “terraform apply” command:

> terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-2d39803a"
  availability_zone:        "" => "<computed>"
  ebs_block_device.#:       "" => "<computed>"
  ephemeral_block_device.#: "" => "<computed>"
  instance_state:           "" => "<computed>"
  instance_type:            "" => "t2.micro"
  key_name:                 "" => "<computed>"
  network_interface_id:     "" => "<computed>"
  placement_group:          "" => "<computed>"
  private_dns:              "" => "<computed>"
  private_ip:               "" => "<computed>"
  public_dns:               "" => "<computed>"
  public_ip:                "" => "<computed>"
  root_block_device.#:      "" => "<computed>"
  security_groups.#:        "" => "<computed>"
  source_dest_check:        "" => "true"
  subnet_id:                "" => "<computed>"
  tenancy:                  "" => "<computed>"
  vpc_security_group_ids.#: "" => "<computed>"
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Congrats, you’ve just deployed a server with Terraform! To verify this, you can log in to the EC2 console, where you’ll see your new Instance running.

It’s working, but it’s not the most exciting example. For one thing, the Instance doesn’t have a name. To add one, you can add a tag to the EC2 instance:

resource "aws_instance" "example" {
  ami = "ami-2d39803a"
  instance_type = "t2.micro"
  tags {
    Name = "terraform-example"
  }
}

Run the plan command again to see what this would do:

> terraform plan
aws_instance.example: Refreshing state... (ID: i-6a7c545b)
(...)
~ aws_instance.example
    tags.%:    "0" => "1"
    tags.Name: "" => "terraform-example"
Plan: 0 to add, 1 to change, 0 to destroy.

Terraform keeps track of all the resources it has already created for this set of templates, so it knows your EC2 Instance already exists (note how Terraform says “Refreshing state…” when you run the plan command), and it can show you a diff between what’s currently deployed and what’s in your Terraform code (this is one of the advantages of using a declarative language over a procedural one). The diff above shows that Terraform wants to create a single tag called “Name”, which is exactly what we want, so run the “apply” command again. When you refresh your EC2 console, you’ll see the Instance with its new name.

Deploy a single web server

The next step is to run a web server on this Instance. In a real-world use case, you’d probably install a full-featured web framework like Ruby on Rails or Django, but to keep this example simple, we’re going to run a dirt-simple web server that always returns the text “Hello, World”, using code borrowed from the big list of http static server one-liners:

#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p 8080 &

This is a bash script that writes the text “Hello, World” into index.html and runs a web server on port 8080 using busybox (which is installed by default on Ubuntu) to serve that file at the URL “/”. We wrap the busybox command with nohup to ensure the web server keeps running even after this script exits, and we put an “&” at the end of the command so the web server runs as a background process, allowing the script to exit rather than being blocked forever by the web server.
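If you want to try this script out before involving AWS, you can run it locally on any Ubuntu machine (hello-server.sh below is just a hypothetical name for wherever you saved the script):

> bash hello-server.sh
> curl http://localhost:8080
Hello, World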

How do you get the EC2 Instance to run this script? Normally, instead of using an empty Ubuntu AMI, you would use a tool like Packer to create a custom AMI that has the web server installed on it. But again, in the interest of keeping this example simple, we’re going to run the script above as part of the EC2 Instance’s User Data, which AWS will execute when the instance is booting:

resource "aws_instance" "example" {
  ami = "ami-2d39803a"
  instance_type = "t2.micro"
  
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p 8080 &
              EOF
  tags {
    Name = "terraform-example"
  }
}

The “<<-EOF” and “EOF” are Terraform’s heredoc syntax, which allows you to create multiline strings without having to put “\n” all over the place (learn more about Terraform syntax here).

You need to do one more thing before this web server works. By default, AWS does not allow any incoming or outgoing traffic from an EC2 Instance. To allow the EC2 Instance to receive traffic on port 8080, you need to create a security group:

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"
  ingress {
    from_port = 8080
    to_port = 8080
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

The code above creates a new resource called aws_security_group (notice how all resources for the AWS provider start with “aws_”) and specifies that this group allows incoming TCP requests on port 8080 from the CIDR block 0.0.0.0/0. CIDR blocks are a concise way to specify IP address ranges. For example, a CIDR block of 10.0.0.0/24 represents all IP addresses between 10.0.0.0 and 10.0.0.255. The CIDR block 0.0.0.0/0 is an IP address range that includes all possible IP addresses, so the security group above allows incoming requests on port 8080 from any IP.
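For example, if you wanted to allow requests from a single trusted address instead of the entire Internet, you could use a /32 CIDR block, which matches exactly one IP (203.0.113.12 below is just a hypothetical address):

ingress {
  from_port = 8080
  to_port = 8080
  protocol = "tcp"
  cidr_blocks = ["203.0.113.12/32"]
}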

Note that in the security group above, we copied & pasted port 8080. To keep your code DRY and to make it easy to configure the code, Terraform allows you to define input variables:

variable "server_port" {
  description = "The port the server will use for HTTP requests"
}

You can use this variable in your security group via Terraform’s interpolation syntax:

from_port = "${var.server_port}"
to_port = "${var.server_port}"

You can also use the same syntax in the user_data of the EC2 Instance:

nohup busybox httpd -f -p "${var.server_port}" &

If you now run the plan or apply command, Terraform will prompt you to enter a value for the server_port variable:

> terraform plan
var.server_port
  The port the server will use for HTTP requests
Enter a value: 8080

Another way to provide a value for the variable is to use the “-var” command line option:

> terraform plan -var server_port="8080"

If you don’t want to enter the port manually every time, you can specify a default value as part of the variable declaration (note that this default can still be overridden via the “-var” command line option):

variable "server_port" {
  description = "The port the server will use for HTTP requests"
  default = 8080
}
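Yet another option is to put the value into a file called terraform.tfvars in the same folder, which Terraform loads automatically every time you run plan or apply:

server_port = "8080"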

One last thing to do: you need to tell the EC2 Instance to actually use the new security group. To do that, you need to pass the ID of the security group into the vpc_security_group_ids parameter of the aws_instance resource. How do you get this ID?

In Terraform, every resource has attributes that you can reference using the same syntax as interpolation. You can find the list of attributes in the documentation for each resource. For example, the aws_security_group attributes include the ID of the security group, which you can reference in the EC2 Instance as follows:

vpc_security_group_ids = ["${aws_security_group.instance.id}"]

The syntax is “${TYPE.NAME.ATTRIBUTE}”. When one resource references another resource, you create an implicit dependency. Terraform parses these dependencies, builds a dependency graph from them, and uses that to automatically figure out in what order it should create resources (e.g. Terraform knows it needs to create the security group before using it with the EC2 Instance). In fact, Terraform will create as many resources in parallel as it can, which means it is very fast at applying your changes. That’s the beauty of a declarative language: you just specify what you want and Terraform figures out the most efficient way to make it happen.
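You can see this dependency graph yourself by running the terraform graph command, which prints the graph in DOT format (abridged below); you can render it as an image with a tool such as Graphviz:

> terraform graph
digraph {
        compound = "true"
        subgraph "root" {
                "aws_instance.example" -> "aws_security_group.instance"
                (...)
        }
}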

If you run the plan command, you’ll see that Terraform wants to replace the original EC2 Instance with a new one that has the new user data (the “-/+” means “replace”) and to add a security group:

> terraform plan
(...)
-/+ aws_instance.example
    ami:                      "ami-2d39803a" => "ami-2d39803a"
    instance_state:           "running" => "<computed>"
    instance_type:            "t2.micro" => "t2.micro"
    security_groups.#:        "0" => "<computed>"
    vpc_security_group_ids.#: "1" => "<computed>"
(...)
+ aws_security_group.instance
    description:                         "Managed by Terraform"
    egress.#:                            "<computed>"
    ingress.#:                           "1"
    ingress.516175195.cidr_blocks.#:     "1"
    ingress.516175195.cidr_blocks.0:     "0.0.0.0/0"
    ingress.516175195.from_port:         "8080"
    ingress.516175195.protocol:          "tcp"
    ingress.516175195.security_groups.#: "0"
    ingress.516175195.self:              "false"
    ingress.516175195.to_port:           "8080"
    owner_id:                            "<computed>"
    vpc_id:                              "<computed>"
Plan: 2 to add, 0 to change, 1 to destroy.

This is exactly what we want, so run the apply command again and you’ll see your new EC2 Instance deploying.

In the description panel at the bottom of the EC2 console, you’ll also see the public IP address of this EC2 Instance. Give it a minute or two to boot up and then try to curl this IP at port 8080:

> curl http://<EC2_INSTANCE_PUBLIC_IP>:8080
Hello, World

Yay, a working web server! However, having to manually poke around the EC2 console to find this IP address is no fun. Fortunately, you can do better by specifying an output variable:

output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}

We’re using the interpolation syntax again to reference the public_ip attribute of the aws_instance resource. If you run the apply command again, Terraform will not apply any changes (since you haven’t changed any resources), but it’ll show you the new output:

> terraform apply
aws_security_group.instance: Refreshing state... (ID: sg-db91dba1)
aws_instance.example: Refreshing state... (ID: i-61744350)
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
public_ip = 54.174.13.5
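You can also look up this value at any time, without running another apply, by using the terraform output command:

> terraform output public_ip
54.174.13.5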

Input and output variables are a big part of what make Terraform powerful, especially when combined with modules, a topic we’ll discuss in Part 4, How to create reusable infrastructure with Terraform modules.

Deploy a cluster of web servers

Running a single server is a good start, but in the real world, a single server is a single point of failure. If that server crashes, or if it becomes overwhelmed by too much traffic, users can no longer access your site. The solution is to run a cluster of servers, routing around servers that go down, and adjusting the size of the cluster up or down based on traffic (for more info, check out A Comprehensive Guide to Building a Scalable Web App on Amazon Web Services).

Managing such a cluster manually is a lot of work. Fortunately, you can let AWS take care of it for you by using an Auto Scaling Group (ASG). An ASG can automatically launch a cluster of EC2 Instances, monitor their health, automatically replace failed nodes, and adjust the size of the cluster in response to demand.

The first step in creating an ASG is to create a launch configuration, which specifies how to configure each EC2 Instance in the ASG. From deploying the single EC2 Instance earlier, you already know exactly how to configure it, and you can reuse almost exactly the same parameters in the aws_launch_configuration resource:

resource "aws_launch_configuration" "example" {
  image_id = "ami-2d39803a"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.instance.id}"]
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p "${var.server_port}" &
              EOF
  lifecycle {
    create_before_destroy = true
  }
}

The only new addition is the lifecycle block, which is required for using a launch configuration with an ASG. You can add a lifecycle block to any Terraform resource to customize its lifecycle behavior. One of the available lifecycle settings is create_before_destroy, which tells Terraform to always create a replacement resource before destroying an original (e.g. when replacing an EC2 Instance, always create the new Instance before deleting the old one).

The catch with the create_before_destroy parameter is that if you set it to true on resource X, you also have to set it to true on every resource that X depends on. In the case of the launch configuration, that means you need to set create_before_destroy to true on the security group:

resource "aws_security_group" "instance" {
  name = "terraform-example-instance"
  ingress {
    from_port = "${var.server_port}"
    to_port = "${var.server_port}"
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  lifecycle {
    create_before_destroy = true
  }
}

Now you can create the ASG itself using the aws_autoscaling_group resource:

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  min_size = 2
  max_size = 10
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

This ASG will run between 2 and 10 EC2 Instances (defaulting to 2 for the initial launch), each tagged with the name “terraform-asg-example.” The configuration of each EC2 Instance is determined by the launch configuration that you created earlier, which we reference using Terraform’s interpolation syntax.

To make this ASG work, you need to specify one more parameter: availability_zones. This parameter specifies into which availability zones (AZs) the EC2 Instances should be deployed. Each AZ represents an isolated AWS data center, so by deploying your Instances across multiple AZs, you ensure that your service can keep running even if some of the AZs fail. You could hard-code the list of AZs (e.g. set it to ["us-east-1a", "us-east-1b"]), but each AWS account has access to a slightly different set of AZs, so you can use the aws_availability_zones data source to fetch the exact list for your account:

data "aws_availability_zones" "all" {}

A data source represents a piece of read-only information that is fetched from the provider (in this case, AWS) every time you run Terraform. In addition to availability zones, there are data sources to look up AMI IDs, IP address ranges, and the current user’s identity. Adding a data source to your Terraform templates does not create anything new; it’s just a way to retrieve dynamic data.
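For example, here is a sketch of using the aws_ami data source to look up the latest Ubuntu 14.04 AMI instead of hard-coding its ID (099720109477 is Canonical’s AWS account ID, and the name filter matches their Ubuntu 14.04 images):

data "aws_ami" "ubuntu" {
  most_recent = true
  owners = ["099720109477"]

  filter {
    name = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }
}

You could then set the ami parameter of the aws_instance resource to "${data.aws_ami.ubuntu.id}".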

To use the availability zones data source in the ASG, you reference it using the standard interpolation syntax:

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  min_size = 2
  max_size = 10
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

Deploy a load balancer

Before launching the ASG, there is one more problem to solve: now that you have many Instances, you need a load balancer to distribute traffic across all of them. Creating a load balancer that is highly available and scalable is a lot of work. Once again, you can let AWS take care of it for you by using an Elastic Load Balancer (ELB). To create an ELB with Terraform, you use the aws_elb resource:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
}

This creates an ELB that will work across all of the AZs in your account. Of course, the definition above doesn’t do much until you tell the ELB how to route requests. To do that, you add one or more “listeners” which specify what port the ELB should listen on and what port it should route the request to:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  security_groups = ["${aws_security_group.elb.id}"]
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "${var.server_port}"
    instance_protocol = "http"
  }
}

In the code above, we are telling the ELB to receive HTTP requests on port 80 (the default port for HTTP) and to route them to the port used by the Instances in the ASG. Note that, by default, ELBs don’t allow any incoming or outgoing traffic (just like EC2 Instances), so you need to add a security group to explicitly allow incoming requests on port 80:

resource "aws_security_group" "elb" {
  name = "terraform-example-elb"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

And now you need to tell the ELB to use this security group by adding the security_groups parameter:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  security_groups = ["${aws_security_group.elb.id}"]
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "${var.server_port}"
    instance_protocol = "http"
  }
}

The ELB has one other nifty trick up its sleeve: it can periodically check the health of your EC2 Instances and, if an Instance is unhealthy, it will automatically stop routing traffic to it. Let’s add an HTTP health check where the ELB will send an HTTP request every 30 seconds to the “/” URL of each of the EC2 Instances and only mark an Instance as healthy if it responds with a 200 OK:

resource "aws_elb" "example" {
  name = "terraform-asg-example"
  security_groups = ["${aws_security_group.elb.id}"]
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    interval = 30
    target = "HTTP:${var.server_port}/"
  }
  listener {
    lb_port = 80
    lb_protocol = "http"
    instance_port = "${var.server_port}"
    instance_protocol = "http"
  }
}

To allow these health check requests, you need to modify the ELB’s security group to allow outbound requests:

resource "aws_security_group" "elb" {
  name = "terraform-example-elb"
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

How does the ELB know which EC2 Instances to send requests to? You can attach a static list of EC2 Instances to an ELB using the ELB’s instances parameter, but with an ASG, instances will be launching and terminating dynamically all the time, so that won’t work. Instead, you can use the load_balancers parameter of the aws_autoscaling_group resource to tell the ASG to register each Instance in the ELB when that instance is booting:

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  min_size = 2
  max_size = 10
  load_balancers = ["${aws_elb.example.name}"]
  health_check_type = "ELB"
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

Notice that we’ve also configured the health_check_type for the ASG to “ELB”. This tells the ASG to use the ELB’s health check to determine whether an Instance is healthy and to automatically replace Instances if the ELB reports them as unhealthy.

One last thing to do before deploying the load balancer: let’s add its DNS name as an output so it’s easier to test if things are working:

output "elb_dns_name" {
  value = "${aws_elb.example.dns_name}"
}

Run the plan command to verify your changes, and if everything looks good, run apply. When apply completes, you should see the elb_dns_name output:

Outputs:
elb_dns_name = terraform-asg-example-123.us-east-1.elb.amazonaws.com

Copy this URL down. It’ll take a couple of minutes for the Instances to boot and show up as healthy in the ELB. In the meantime, you can inspect what you’ve deployed. Open up the ASG section of the EC2 console, and you should see that the ASG has been created.

If you switch over to the Instances tab, you’ll see the two Instances in the process of launching.

And finally, if you switch over to the Load Balancers tab, you’ll see your ELB.

Wait for the “Status” indicator to say “2 of 2 instances in service.” This typically takes 1–2 minutes. Once you see it, test the elb_dns_name output you copied earlier:

> curl http://<elb_dns_name>
Hello, World

Success! The ELB is routing traffic to your EC2 Instances. Each time you hit the URL, it’ll pick a different Instance to handle the request. You now have a fully working cluster of web servers! As a reminder, the complete sample code for the example above is available at: https://github.com/gruntwork-io/intro-to-terraform.

At this point, you can see how your cluster responds to firing up new Instances or shutting down old ones. For example, go to the Instances tab, and terminate one of the Instances by selecting its checkbox, selecting the “Actions” button at the top, and setting the “Instance State” to “Terminate.” Continue to test the ELB URL and you should get a “200 OK” for each request, even while terminating an Instance, as the ELB will automatically detect that the Instance is down and stop routing to it. Even more interestingly, a short time after the Instance shuts down, the ASG will detect that fewer than 2 Instances are running and automatically launch a new one to replace it (self-healing!). You can also see how the ASG resizes itself by changing the min_size and max_size parameters or adding a desired_capacity parameter to your Terraform code.
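For example, here is a sketch of the same ASG resized to run between 3 and 10 Instances, initially launching 5 (the numbers here are arbitrary):

resource "aws_autoscaling_group" "example" {
  launch_configuration = "${aws_launch_configuration.example.id}"
  availability_zones = ["${data.aws_availability_zones.all.names}"]
  min_size = 3
  max_size = 10
  desired_capacity = 5
  load_balancers = ["${aws_elb.example.name}"]
  health_check_type = "ELB"
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}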

Of course, there are many other aspects to an ASG that we have not covered here. For a real deployment, you would need to attach IAM roles to the EC2 Instances, set up a mechanism to update the EC2 Instances in the ASG with zero-downtime, and configure auto scaling policies to adjust the size of the ASG in response to load. For a fully pre-assembled, battle-tested, documented, production-ready version of the ASG, as well as other types of infrastructure such as Docker clusters, relational databases, VPCs, and more, you may want to check out the Gruntwork Infrastructure Packages.

Clean up

When you’re done experimenting with Terraform, it’s a good idea to remove all the resources you created so AWS doesn’t charge you for them. Since Terraform keeps track of what resources you created, cleanup is a breeze. All you need to do is run the destroy command:

> terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.
Enter a value:

Once you type in “yes” and hit enter, Terraform will build the dependency graph and delete all the resources in the right order, using as much parallelism as possible. In about a minute, your AWS account should be clean again.
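If you want to preview exactly what the destroy command will delete before confirming, you can run the plan command in destroy mode (output abridged):

> terraform plan -destroy
(...)
- aws_autoscaling_group.example
- aws_elb.example
(...)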

Here is one more example of creating AWS resources using Terraform code: a VPC with public and private subnets, a NAT instance, and web and database servers.

EXAMPLE CODE:

variables.tf

variable "access_key" {
  description = "AWS access key"
  default = "ZKIAITH7YUGAZZIYYSZA"
}

variable "secret_key" {
  description = "AWS secret key"
  default = "UlNapYqUCg2m4MDPT9Tlq+64BWnITspR93fMNc0Y"
}

variable "region" {
  description = "AWS region for hosting our network"
  default = "ap-southeast-1"
}

variable "key_path" {
  description = "Key path for SSHing into EC2"
  default = "./ssh/linoxide-deployer.pem"
}

variable "key_name" {
  description = "Key name for SSHing into EC2"
  default = "linoxide-deployer"
}

variable "vpc_cidr" {
  description = "CIDR for VPC"
  default = "10.0.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for public subnet"
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for private subnet"
  default = "10.0.2.0/24"
}

variable "amis" {
  description = "Base AMI to launch the instances"
  default = {
    "ap-southeast-1" = "ami-83a713e0"
    "ap-southeast-2" = "ami-83a713e0"
  }
}

Let us define a VPC with a CIDR block of 10.0.0.0/16

vpc.tf

resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags {
    Name = "terraform-aws-vpc"
  }
}

Define the gateway

gateway.tf

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
  tags {
    Name = "linoxide gw"
  }
}

Define public subnet with CIDR 10.0.1.0/24

public.tf

resource "aws_subnet" "public-subnet-in-ap-southeast-1" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.public_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags {
    Name = "Linoxide Public Subnet"
  }
}

Define private subnet with CIDR 10.0.2.0/24

private.tf

resource "aws_subnet" "private-subnet-ap-southeast-1" {
  vpc_id = "${aws_vpc.default.id}"
  cidr_block = "${var.private_subnet_cidr}"
  availability_zone = "ap-southeast-1a"

  tags {
    Name = "Linoxide Private Subnet"
  }
}

Define route tables for the public and private subnets

route.tf

resource "aws_route_table" "public-subnet-in-ap-southeast-1" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.default.id}"
  }

  tags {
    Name = "Linoxide Public Subnet"
  }
}

resource "aws_route_table_association" "public-subnet-in-ap-southeast-1-association" {
  subnet_id = "${aws_subnet.public-subnet-in-ap-southeast-1.id}"
  route_table_id = "${aws_route_table.public-subnet-in-ap-southeast-1.id}"
}

resource "aws_route_table" "private-subnet-in-ap-southeast-1" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    instance_id = "${aws_instance.nat.id}"
  }

  tags {
    Name = "Linoxide Private Subnet"
  }
}

resource "aws_route_table_association" "private-subnet-in-ap-southeast-1-association" {
  subnet_id = "${aws_subnet.private-subnet-in-ap-southeast-1.id}"
  route_table_id = "${aws_route_table.private-subnet-in-ap-southeast-1.id}"
}

Define NAT security group

natsg.tf

resource "aws_security_group" "nat" {
  name = "vpc_nat"
  description = "NAT security group"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["${var.private_subnet_cidr}"]
  }
  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["${var.private_subnet_cidr}"]
  }
  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  egress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags {
    Name = "NATSG"
  }
}

Define security group for Web

websg.tf

resource "aws_security_group" "web" {
  name = "vpc_web"
  description = "Accept incoming connections."

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["${var.private_subnet_cidr}"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags {
    Name = "WebServerSG"
  }
}

Define security group for database in private subnet

dbsg.tf

resource "aws_security_group" "db" {
  name = "vpc_db"
  description = "Accept incoming database connections."

  ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    security_groups = ["${aws_security_group.web.id}"]
  }
  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  egress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  vpc_id = "${aws_vpc.default.id}"

  tags {
    Name = "DBServerSG"
  }
}

Define web-server instance

webserver.tf

resource "aws_instance" "web-1" {
  ami = "${lookup(var.amis, var.region)}"
  availability_zone = "ap-southeast-1a"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  vpc_security_group_ids = ["${aws_security_group.web.id}"]
  subnet_id = "${aws_subnet.public-subnet-in-ap-southeast-1.id}"
  associate_public_ip_address = true
  source_dest_check = false

  tags {
    Name = "Web Server LAMP"
  }
}

Define DB instance

dbinstance.tf

resource "aws_instance" "db-1" {
  ami = "${lookup(var.amis, var.region)}"
  availability_zone = "ap-southeast-1a"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  vpc_security_group_ids = ["${aws_security_group.db.id}"]
  subnet_id = "${aws_subnet.private-subnet-in-ap-southeast-1.id}"
  source_dest_check = false

  tags {
    Name = "Database Server"
  }
}

Define NAT instance

natinstance.tf

resource "aws_instance" "nat" {
  ami = "ami-1a9dac48" # this is a special AMI preconfigured to do NAT
  availability_zone = "ap-southeast-1a"
  instance_type = "t2.micro"
  key_name = "${var.key_name}"
  vpc_security_group_ids = ["${aws_security_group.nat.id}"]
  subnet_id = "${aws_subnet.public-subnet-in-ap-southeast-1.id}"
  associate_public_ip_address = true
  source_dest_check = false

  tags {
    Name = "NAT instance"
  }
}

Allocate EIPs for the NAT and web instances

eip.tf

resource "aws_eip" "nat" {
  instance = "${aws_instance.nat.id}"
  vpc = true
}

resource "aws_eip" "web-1" {
  instance = "${aws_instance.web-1.id}"
  vpc = true
}

Execute terraform plan first to find out what Terraform will do, and give your infrastructure a final review before executing terraform apply:
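> terraform plan
(...)
> terraform apply
(...)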