Distribution of HashiCorp Packer templated Virtual Machines



After creating our virtual machine templates in multiple clouds, testing them, and validating that all variants behave as expected, there is just one more question: how do we distribute them?

Broadly, there are two strategies for tackling this problem:

  • Build the template once, test it and then distribute it to different regions
  • Build it in every region and test it everywhere

Build Once, Copy Everywhere

If you went for the first strategy, you want to copy your existing template to different regions. For this use case, AWS offers the Terraform resource aws_ami_copy, which copies a given image to another region.

On the other hand, you might also have separate staging and production accounts. To solve this, you can use the Terraform resource aws_ami_launch_permission to update the launch permissions. In our example below we use a flag update_launch_permission that must be set to true to update the launch permissions; our pipeline sets it to true when a merge request is merged into the default branch.

provider "aws" {
  alias = "source"
}

provider "aws" {
  region = var.aws_ami_region
  alias  = "dest"
}

data "aws_region" "source" {
  provider = aws.source
}

variable "aws_ami_region" {
  type = string
}

variable "aws_ami_id" {
  type = string
}

variable "update_launch_permission" {
  type    = bool
  default = false
}

# Account that is granted permission to launch the image
variable "production_account_id" {
  type = string
}

data "aws_ami" "source" {
  provider = aws.source

  filter {
    name   = "image-id"
    values = [var.aws_ami_id]
  }
}

resource "aws_ami_copy" "copy" {
  provider = aws.dest

  name              = data.aws_ami.source.name
  description       = data.aws_ami.source.description
  tags              = data.aws_ami.source.tags
  source_ami_id     = var.aws_ami_id
  source_ami_region = data.aws_region.source.name
}

resource "aws_ami_launch_permission" "source_launch" {
  provider = aws.source
  count    = var.update_launch_permission ? 1 : 0

  image_id   = var.aws_ami_id
  account_id = var.production_account_id
}

resource "aws_ami_launch_permission" "dest_launch" {
  provider = aws.dest
  count    = var.update_launch_permission ? 1 : 0

  image_id   = aws_ami_copy.copy.id
  account_id = var.production_account_id
}
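The flag can then be driven from the pipeline, for example via a terraform.tfvars file. A minimal sketch, in which all values are hypothetical placeholders:

```hcl
# Hypothetical terraform.tfvars -- all values are illustrative placeholders
aws_ami_region        = "eu-west-1"
aws_ami_id            = "ami-0123456789abcdef0"
production_account_id = "123456789012"

# Set to true by the CI job once the merge request has landed on the default branch
update_launch_permission = true
```

With the flag left at its default of false, a plan on a feature branch only prepares the copy without touching launch permissions.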

Build everywhere

The other tactic is to build each image in every location/region and test it in those locations as well. In this case you spend more time building and testing, but you no longer need to track which locations the image must be copied to.

For this use case, HashiCorp has a solution in development. At the time of writing, HCP Packer (part of the HashiCorp Cloud Platform) is in beta and can be used for free to test it and get a first look at the implementation. The implementation is still evolving, so the information published here may change once it goes into production.

Keep in mind that you have to define all those sources within a single Packer configuration. Each build then becomes a new version within HCP Packer, and each version is associated with a certain commit. So if your base image gets an update, you also have to create a change in your repository to produce a new version in HCP Packer.

In our current example, we use the previous source definitions for Azure and AWS to build one common image name in HCP Packer. The same approach can be used to define sources in the different regions you are using.
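As a minimal sketch of such a multi-region setup, the AWS source could be duplicated once per target region. The source names, variables, instance type, and regions below are illustrative assumptions, not taken from the original configuration:

```hcl
# Hypothetical sketch: one amazon-ebs source per target region.
# Source names, variables, and regions are illustrative assumptions.
source "amazon-ebs" "core_us_east_1" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  source_ami    = var.base_ami_us_east_1
  ssh_username  = "ubuntu"
  ami_name      = "ubuntu-docker-us-east-1-${formatdate("YYYYMMDDhhmmss", timestamp())}"
}

source "amazon-ebs" "core_eu_west_1" {
  region        = "eu-west-1"
  instance_type = "t3.micro"
  source_ami    = var.base_ami_eu_west_1
  ssh_username  = "ubuntu"
  ami_name      = "ubuntu-docker-eu-west-1-${formatdate("YYYYMMDDhhmmss", timestamp())}"
}
```

Both sources would then be listed in the build block's sources list, so every region ends up as an artifact of the same HCP Packer version.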

Here we replace the build block with the one below. In this sample, HCP Packer is also set up as a common registry for the templates across cloud platforms. Afterwards you can retrieve a template's ID for a certain cloud platform by querying HCP Packer via Terraform.

{% raw %}
build {
  hcp_packer_registry {
    bucket_name = "UbuntuDocker"
    description = "Customized Ubuntu 21.04 Image with docker deployment"
  }

  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}.yml"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }

  sources = [
    "source.amazon-ebs.core",
    "source.azure-arm.core"
  ]
}
{% endraw %}

Within HCP Packer you can create channels to distinguish between different stages of availability. In our view, this feature is great for building promotion pipelines with development, staging and production channels:

  • development can be seen as the default channel, where all builds land first and are ready to be tested
  • staging is the channel for tested image templates that are ready for staging environments
  • production holds images that have run in staging and are flagged as ready for production

There is, however, a limitation in the current beta implementation: you cannot automate the channel assignment of a certain image version. At the moment, this must be done interactively via the UI.

Using HCP Packer Template information in Terraform

The last missing piece of HCP Packer is how to use the generated information. In our sample, we assume that you have promoted the current image to the production channel. You can then use the following code to query HCP Packer for the latest information about the customized Ubuntu template.

data "hcp_packer_iteration" "ubuntu" {
  bucket_name = "UbuntuDocker"
  channel     = "production"
}

data "hcp_packer_image" "ubuntu_us_east_1" {
  bucket_name    = "UbuntuDocker"
  cloud_provider = "aws"
  iteration_id   = data.hcp_packer_iteration.ubuntu.ulid
  region         = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = data.hcp_packer_image.ubuntu_us_east_1.cloud_image_id
  instance_type = "t2.micro"

  tags = {
    Name = "Ubuntu Docker Custom HCP"
  }
}
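Since the same bucket also contains the Azure build, the Azure artifact of the same iteration can be queried in the same way. A minimal sketch, where the region value is an assumption:

```hcl
# Hypothetical sketch: querying the Azure artifact of the same iteration.
# The region value "eastus" is an illustrative assumption.
data "hcp_packer_image" "ubuntu_azure" {
  bucket_name    = "UbuntuDocker"
  cloud_provider = "azure"
  iteration_id   = data.hcp_packer_iteration.ubuntu.ulid
  region         = "eastus"
}
```

The resulting cloud_image_id can then be referenced from Azure compute resources, just as the AWS image ID is referenced from aws_instance.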

Final thoughts

HCP Packer is a nice addition for maintaining all the different image templates, versions and variants in a common place. With the ability to use channels and promote changes to certain follower channels, we get a manageable release pipeline for our image creation process.

In the current beta stage, we are only missing the ability to promote changes as Infrastructure as Code via CI/CD automation. But we are hopeful that this feature will follow.
