Managing Infrastructure on Digital Ocean with Terraform

As a general fan of the “automate all the things” approach, I decided it was time to do a little more work on this blog. I wanted to be able to set up my blog, from datacentre to browser, in one command. I decided to go with Terraform, as it looks like a really well documented tool. Let’s get started!

Installing

I use macOS, so I’ll be using brew. Check out Installing Terraform for guidance on your own system.

brew install terraform

Setup

Before we can create any resources, we need to configure our provider. You can use many providers in a single configuration, but at the moment we just need Digital Ocean.

variable "do_token" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

This tells Terraform how to authenticate with my Digital Ocean account. You can create an API token in the Digital Ocean control panel. The first part of the above code defines a variable named do_token. By not assigning a value, we give ourselves two options. Firstly, we can create a terraform.tfvars file, which will be loaded at runtime. Secondly, we can leave it blank and Terraform will prompt us for the value. We’ll go with the first option. Create terraform.tfvars and add the following content. It’s best not to commit this file to version control, just to be safe.

do_token = "<YOUR_TOKEN>"
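As an aside, Terraform can also pick variables up from the environment: any variable named TF_VAR_<name> is read automatically, which is handy in CI where you might not want a tfvars file on disk at all. A minimal sketch (the token value is a placeholder, as above):

```shell
# Terraform reads any environment variable of the form TF_VAR_<variable name>,
# so this is equivalent to setting do_token in terraform.tfvars.
export TF_VAR_do_token="<YOUR_TOKEN>"
```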

Now that we have our token set up, we can configure the digitalocean provider. At this point, we’ll need to run terraform init to download the Digital Ocean plugin for Terraform.

Creating a Server with SSH Keys

We’re now going to create a new server, with an accompanying SSH key so that we can log in to it if needed. To do this we can use the digitalocean provider we configured above.

variable "key_path" {}

resource "digitalocean_ssh_key" "markprovan_key" {
  name       = "Mark Provan Key"
  public_key = "${file("${var.key_path}")}"
}

This will upload our public key to Digital Ocean, so that it can be automatically added to our server. You’ll need to add a new variable, key_path, to your terraform.tfvars file, containing the path to your public key. Now, let’s get to work creating that server.
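As a checkpoint, here’s what terraform.tfvars might look like at this stage (the key path is an illustrative assumption, not from the post):

```hcl
do_token = "<YOUR_TOKEN>"
key_path = "~/.ssh/id_rsa.pub"
```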

resource "digitalocean_tag" "blog_tag" {
  name = "blog"
}

resource "digitalocean_tag" "personal_tag" {
  name = "personal"
}

resource "digitalocean_droplet" "blog" {
  image  = "coreos-stable"
  name   = "blog"
  region = "lon1"
  size   = "s-1vcpu-1gb"
  tags   = [
    "${digitalocean_tag.blog_tag.id}",
    "${digitalocean_tag.personal_tag.id}"
  ]
  ssh_keys = [
    "${digitalocean_ssh_key.markprovan_key.id}"
  ]

  depends_on = ["digitalocean_ssh_key.markprovan_key"]
}

We’re doing a few things here. Firstly, we’re creating two tags to use when provisioning the box, blog and personal. This is purely so that I can organize my server list in the Digital Ocean dashboard.

This is where Terraform gets really cool. We can use ${digitalocean_tag.blog_tag.id} to access the ID given to the tag by Digital Ocean and pass it to the API when creating a new server. You can see we also do this for the SSH key ID. Being able to access information about other resources becomes super useful when we want to automate things that depend on each other. Speaking of dependencies, notice the depends_on array, which contains a reference to the SSH key resource. This means that Terraform won’t attempt to create the droplet until the SSH key has been set up. Strictly speaking, referencing the key’s ID in ssh_keys already creates an implicit dependency, so depends_on is redundant here, but it makes the ordering explicit.
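That same interpolation syntax works in output blocks too. As a small sketch (not part of the original config), we could export the droplet’s address so it is printed after each apply:

```hcl
# Hypothetical output block: prints the droplet's public IPv4 address
# at the end of `terraform apply`, and via `terraform output blog_ip`.
output "blog_ip" {
  value = "${digitalocean_droplet.blog.ipv4_address}"
}
```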

DNS

Now that we have defined our server, let’s create a DNS entry on Cloudflare to point my domain name at it.

variable "cloudflare_email" {}
variable "cloudflare_token" {}

provider "cloudflare" {
  email = "${var.cloudflare_email}"
  token = "${var.cloudflare_token}"
}

resource "cloudflare_record" "markprovan" {
  domain = "markprovan.com"
  name   = "markprovan.com"
  proxied = true
  value  = "${digitalocean_droplet.blog.ipv4_address}"
  type   = "A"
}

Here we are creating a new provider instance for Cloudflare (remember to run terraform init again to install the plugin) and defining two new variables that need to go into our variables file. Lastly, we create a new A record on my domain name that points it to the IP address of the server we created. For reference, the documentation for each Terraform resource lists the attributes it exports; see the Digital Ocean Droplet resource for an example.

Provisioning Software

Now that we have a server running, we want to actually do something with it. As I already have my blog set up to run using Docker, I want to upload my docker-compose.yml file and run it. Let’s modify our server resource to make use of provisioners, which let us run commands on it.

variable "private_key_path" {}

resource "digitalocean_droplet" "blog" {
  image  = "coreos-stable"
  name   = "blog"
  region = "lon1"
  size   = "s-1vcpu-1gb"
  tags   = [
    "${digitalocean_tag.blog_tag.id}",
    "${digitalocean_tag.personal_tag.id}"
  ]
  ssh_keys = [
    "${digitalocean_ssh_key.markprovan_key.id}"
  ]

  depends_on = ["digitalocean_ssh_key.markprovan_key"]

  provisioner "file" {
    source = "docker-compose.yml"
    destination = "/home/core/docker-compose.yml"

    connection {
      type     = "ssh"
      user     = "core"
      private_key = "${file("${var.private_key_path}")}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "docker-compose up -d"
    ]

    connection {
      type     = "ssh"
      user     = "core"
      private_key = "${file("${var.private_key_path}")}"
    }
  }
}

I’ve created two provisioners here: one of type file, to handle copying my local docker-compose.yml file to the server, and a remote-exec provisioner, which executes commands on the server, in this case docker-compose. I’ve also introduced a new variable, private_key_path, for my local private SSH key; note that it needs to be declared with a variable block, just like the others. It’s used in the connection block by Terraform to define how the server is accessed to run commands. I’d like to be able to set that as a global config for all SSH connections, rather than define it twice, but I’m not sure how to do that at the moment. The provisioners are run in order, so my docker-compose.yml file is uploaded before it is run.
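One way to avoid the repetition, worth checking against the Terraform provisioner docs for your version, is to declare the connection block once at the resource level; provisioners that don’t define their own connection inherit it. A sketch of the same droplet restructured that way:

```hcl
resource "digitalocean_droplet" "blog" {
  # ... image, name, region, size, tags, ssh_keys as before ...

  # Resource-level connection block: inherited by every provisioner
  # below that doesn't declare its own.
  connection {
    type        = "ssh"
    user        = "core"
    private_key = "${file("${var.private_key_path}")}"
  }

  provisioner "file" {
    source      = "docker-compose.yml"
    destination = "/home/core/docker-compose.yml"
  }

  provisioner "remote-exec" {
    inline = [
      "docker-compose up -d"
    ]
  }
}
```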

Now that we’ve got all this defined, we can simply run terraform apply, type yes to confirm the changes when prompted, and we’re up and running! Terraform is a really handy tool to have in your arsenal and I’m looking forward to using it more in the future.