In our previous post about Packer and Azure, we used Azure to introduce a HashiCorp Packer definition in HCL format that can easily be adapted to create any custom machine configuration. The next step is to use the same provisioning configuration for other cloud providers as well, and to get the same result every time, independent of the infrastructure your virtual machine runs on.
As a short recap, this is the configuration we defined last time for Azure in HashiCorp Packer:
1source "azure-arm" "core" {
2
3 client_id = var.client_id
4 client_secret = var.client_secret
5 subscription_id = var.subscription_id
6 tenant_id = var.tenant_id
7
8 managed_image_name = "UbuntuDocker"
9 managed_image_resource_group_name = "images"
10
11 os_type = "Linux"
12 image_publisher = "Canonical"
13 image_offer = "0001-com-ubuntu-server-hirsute"
14 image_sku = "21_04"
15 image_version = "latest"
16
17 location = "westeurope"
18 vm_size = "Standard_F2s"
19}
Now we are going to recreate the same definition for AWS to build an AWS AMI. This template will have the same custom configuration as our previous Azure VM, so we again use the Ubuntu 21.04 base image as the starting point for our customization process.
source "amazon-ebs" "core" {
  ami_description             = "Ubuntu Docker AMI"
  ami_name                    = "UbuntuDocker"
  ami_regions                 = ["us-east-1"]
  ami_virtualization_type     = "hvm"
  associate_public_ip_address = true
  instance_type               = "t3.medium"
  profile                     = var.aws_profile
  region                      = "us-east-1"
  ssh_clear_authorized_keys   = true
  ssh_timeout                 = "5m"
  ssh_username                = "ubuntu"

  source_ami_filter {
    filters = {
      architecture        = "x86_64"
      name                = "ubuntu/images/hvm-ssd/ubuntu-hirsute-21.04-amd64-server*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
}
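Note that, unlike the Azure source, the amazon-ebs source does not take explicit credentials here; it authenticates through a named AWS CLI profile via profile = var.aws_profile. The matching variable declaration, which also appears in the full template further down, simply reads the profile name from the AWS_PROFILE environment variable:

variable "aws_profile" {
  type    = string
  default = "${env("AWS_PROFILE")}"
}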
Once again, a small recap of our build configuration for customizing the image: we use Ansible to run the actual customization, and a variable in the Packer template defines which Ansible playbook is executed inside the virtual machine.
variable "playbook" {
  type    = string
  default = "docker.yml"
}

build {
  sources = [ ] # the cloud sources get referenced here; see the full template below

  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }
}
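Because the playbook is an input variable, the same build block can produce differently provisioned images without being edited: you can pass the value on the command line with -var "playbook=...", or, as a minimal sketch, place it in a value file that Packer picks up automatically (any file ending in .auto.pkrvars.hcl). The file name and playbook below are hypothetical:

# local.auto.pkrvars.hcl - loaded automatically by packer build
playbook = "another-playbook.yml"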
And finally, here is the full definition to build two virtual machine images, one for use within Azure, the other within AWS. Both images run through the same Ansible provisioning process. In this case, we have to set all the variables for each of the infrastructures we use and reference in this build process; otherwise Packer reports errors that the images cannot be built or that some sources cannot be found.
Also, the full template gets quite messy once you add all of your infrastructure to one single Packer definition.
variable "playbook" {
  type    = string
  default = "docker.yml"
}

variable "aws_profile" {
  type    = string
  default = "${env("AWS_PROFILE")}"
}

variable "subscription_id" {
  type    = string
  default = "${env("ARM_SUBSCRIPTION_ID")}"
}

variable "tenant_id" {
  type    = string
  default = "${env("ARM_TENANT_ID")}"
}

variable "client_id" {
  type    = string
  default = "${env("ARM_CLIENT_ID")}"
}

variable "client_secret" {
  type    = string
  default = "${env("ARM_CLIENT_SECRET")}"
}

source "amazon-ebs" "core" {
  ami_description             = "Ubuntu Docker AMI"
  ami_name                    = "UbuntuDocker"
  ami_regions                 = ["us-east-1"]
  ami_virtualization_type     = "hvm"
  associate_public_ip_address = true
  force_delete_snapshot       = true
  force_deregister            = true
  instance_type               = "t3.medium"
  profile                     = var.aws_profile
  region                      = "us-east-1"
  ssh_clear_authorized_keys   = true
  ssh_timeout                 = "5m"
  ssh_username                = "ubuntu"

  source_ami_filter {
    filters = {
      architecture        = "x86_64"
      name                = "ubuntu/images/hvm-ssd/ubuntu-hirsute-21.04-amd64-server*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
}

source "azure-arm" "core" {

  client_id       = var.client_id
  client_secret   = var.client_secret
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id

  managed_image_name                = "UbuntuDocker"
  managed_image_resource_group_name = "images"

  os_type         = "Linux"
  image_publisher = "Canonical"
  image_offer     = "0001-com-ubuntu-server-hirsute"
  image_sku       = "21_04"
  image_version   = "latest"

  location = "westeurope"
  vm_size  = "Standard_F2s"
}

build {
  sources = ["source.amazon-ebs.core", "source.azure-arm.core"]

  provisioner "shell" {
    inline = ["while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/setup.sh"
  }

  provisioner "ansible-local" {
    clean_staging_directory = true
    playbook_dir            = "ansible"
    galaxy_file             = "ansible/requirements.yaml"
    playbook_files          = ["ansible/${var.playbook}"]
  }

  provisioner "shell" {
    execute_command = "echo 'packer' | {{ .Vars }} sudo -S -E bash '{{ .Path }}'"
    script          = "packer/scripts/cleanup.sh"
  }
}
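Two things can keep this manageable. Packer treats every *.pkr.hcl file in a directory as part of one template, so the AWS source, the Azure source, the variables and the build block can live in separate files (for example aws.pkr.hcl, azure.pkr.hcl and build.pkr.hcl, hypothetical names) and still be built together by pointing packer build at the directory. And if you only want to produce one of the images, you can restrict the run to a single source, for example with packer build -only=amazon-ebs.core ., so the other builders are simply skipped.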
These definitions can be adapted to any further cloud or platform, e.g. Google Cloud, VMware, Vagrant, and so on.
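As an illustration, a Google Cloud source could look roughly like the following. This is a minimal, untested sketch: the exact keys depend on the version of the googlecompute plugin, and var.gcp_project is a hypothetical variable analogous to the credential variables above.

source "googlecompute" "core" {
  project_id              = var.gcp_project    # hypothetical variable holding your GCP project ID
  zone                    = "europe-west1-b"
  machine_type            = "e2-medium"
  source_image_family     = "ubuntu-2104"      # Ubuntu 21.04, matching the Azure and AWS base images
  source_image_project_id = ["ubuntu-os-cloud"]
  ssh_username            = "ubuntu"
  image_name              = "ubuntudocker"     # GCE image names must be lowercase
}

Adding source.googlecompute.core to the sources list of the build block would then run the same Ansible provisioning on Google Cloud as well.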
The outcome of this process should be identically provisioned virtual machines for every infrastructure you define as a source. It should be the same, but is it really? That is the next topic we are going to cover: how to ensure that the created virtual machines actually behave the same. This will open up the possibility to separate some definitions and to generate the template on the fly with very basic tooling.