Martin Buchleitner, Senior IT-Consultant

About the author

Martin Buchleitner is a Senior IT-Consultant for Infralovers and for Commandemy.


HashiCorp Vault SSH Authentication

In the last HashiCorp Vault post we described how to configure Vault and our infrastructure to use Vault's One Time Password feature. This time we review a way to use signed public keys.

This method is useful for automation tasks, but also for granting access to users, because you no longer have to maintain valid user keys on the servers - that is handled by the configuration of your HashiCorp Vault.

As a server administrator, this method allows you to configure servers to validate a user's SSH key against a trusted CA. If the CA has signed a user's key, that key is valid for login. These signed keys can also have a time to live (TTL) set, so that, for example, users can only authenticate within 30 minutes after signing the key.
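To get a feeling for what such a time-limited signed key looks like, the signing Vault performs can be mimicked locally with ssh-keygen. This is a sketch for illustration only - the key paths, the principal name martin, and the 30-minute validity are assumptions, and Vault performs this step for you in the setup described below:

```shell
# Illustration only: mimic Vault's CA signing with a local ssh-keygen CA
# to see what a time-limited signed key looks like.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ca"        # stand-in for Vault's CA key pair
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_user"   # the user's key pair
# sign the user's public key for principal "martin", valid for 30 minutes
ssh-keygen -s "$tmp/ca" -I martin -n martin -V +30m "$tmp/id_user.pub"
# inspect the resulting certificate: note the "Valid:" window
ssh-keygen -L -f "$tmp/id_user-cert.pub"
```

The "Valid:" line of the certificate output shows the window outside of which the SSH daemon will reject the key, no matter how correct the signature is.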

Since Vault can host multiple SSH public key roles with different address ranges, you can create different endpoints for different network zones, each allowing a certain subset of users to authenticate.

Configure HashiCorp Vault with Terraform

We start with the HashiCorp Vault configuration and use Terraform to set up the authentication backend. The first code block defines our Vault installation and creates a secrets backend of type ssh, mounted at the path ssh_signed_keys. You can adapt this to your design, as it is also possible to use more than one ssh backend. The default lease times are also set within this configuration.

provider "vault" {
  address = "https://vault.local:8200/"
}
resource "vault_mount" "ssh" {
  type = "ssh"
  path = "ssh_signed_keys"

  default_lease_ttl_seconds = "14400"  # 4h
  max_lease_ttl_seconds     = "604800" # 1 week
}

Next we configure the role which allows key signing. This role has the parameter key_type set to ca. It is also necessary to disallow host certificates, because users should not be able to generate certificates for our host infrastructure - how that is done will be part of the next article. Within the SSH role we also define a short lifetime for the signed keys: we want users to simply create a new signed key for each new connection or for another host.

This code block also includes a policy which must be assigned to users so that they are able to use this role and generate signed keys.

resource "vault_ssh_secret_backend_role" "client_keys" {
  name     = "client_keys"
  backend  = vault_mount.ssh.path
  key_type = "ca"

  allow_host_certificates = false
  allow_subdomains        = false
  allow_user_key_ids      = false
  allow_user_certificates = true
  default_extensions = {
    "permit-pty" = ""
  }
  allowed_extensions = "permit-pty,permit-port-forwarding"
  default_user       = "martin"
  allowed_users      = "martin,ubuntu"
  max_ttl            = "30m"
  ttl                = "10m"
  cidr_list          = "172.16.0.0/16"
}

resource "vault_policy" "user_signing" {
  name   = "user_signing"
  policy = <<EOT
path "${vault_mount.ssh.path}/sign/${vault_ssh_secret_backend_role.client_keys.name}" {
    capabilities = ["create", "read", "update"]
}
EOT
}
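Note that the backend still needs a CA key pair before it can sign anything. One way - a sketch, assuming the mount path from the configuration above and a running, unsealed Vault - is to let Vault generate the key pair via the CLI:

```shell
# one-time step: let vault generate a CA key pair for the ssh backend
vault write ssh_signed_keys/config/ca generate_signing_key=true
# read back the public CA key for distribution to the servers
vault read -field=public_key ssh_signed_keys/config/ca
```

The public key printed by the second command is exactly what the servers will later trust via TrustedUserCAKeys.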

Afterwards you can export a valid Vault token as an environment variable and run Terraform to apply these changes.

export VAULT_TOKEN="my-vault-token"
terraform apply

HashiCorp Vault AppRole Authentication

Since we are using this code within our automation, we do not want to use our own Vault token to authenticate and read the SSH backend's CA. For this purpose we add an AppRole authentication backend, plus a policy which allows this role to read the required data from Vault.

resource "vault_auth_backend" "approle" {
  type = "approle"
}
resource "vault_approle_auth_backend_role" "automated_access" {
  backend        = vault_auth_backend.approle.path
  role_name      = "automated_access"
  token_policies = [ vault_policy.ssh_ca_read.name ]
}
resource "vault_approle_auth_backend_role_secret_id" "automated_access" {
  backend   = vault_auth_backend.approle.path
  role_name = vault_approle_auth_backend_role.automated_access.role_name
}
resource "vault_policy" "ssh_ca_read" {
  name   = "ssh_ca_read"
  policy = <<EOT
path "${vault_mount.ssh.path}/config/ca" {
  capabilities = [ "read" ]
}
EOT
}
output "approle_id" {
  value = vault_approle_auth_backend_role.automated_access.role_id
}
output "secret_id" {
  value     = vault_approle_auth_backend_role_secret_id.automated_access.secret_id
  sensitive = true
}

Again, apply this change to your current Vault configuration. For this part we must use our own token, because it is assigned higher permissions than the AppRole, of course.

export VAULT_TOKEN="my-vault-token"
terraform apply
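The two outputs can then be handed to your automation, and the AppRole login can be verified directly with the Vault CLI. A sketch - the -raw flag assumes a recent Terraform version, and a running Vault is required:

```shell
# pull the approle credentials out of the terraform outputs
ROLE_ID=$(terraform output -raw approle_id)
SECRET_ID=$(terraform output -raw secret_id)

# verify that the approle login works and yields a token
# carrying the ssh_ca_read policy
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"
```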

Ansible to rollout SSH Configuration

Ansible Role

Because Ansible Galaxy has no role available for signing SSH keys, we wrote one ourselves. You can install it with the following ansible-galaxy command:

ansible-galaxy install git+https://github.com/infralovers/ansible-vault-ssh-signed-keys

The role interacts with the HTTP API of HashiCorp Vault, so you are not forced to install any other tool to use it.
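Under the hood this boils down to two HTTP calls against Vault, roughly like the following. This is a sketch of the API interaction, not the role's actual implementation; the credentials are placeholders for the AppRole outputs created above:

```shell
VAULT_ADDR=${VAULT_ADDR:-http://vault.local:8200}
ROLE_ID="<your-approle-id>"      # placeholder: the approle_id output
SECRET_ID="<your-secret-id>"     # placeholder: the secret_id output

# 1) log in with the approle credentials to obtain a client token
CLIENT_TOKEN=$(curl -s -X POST \
    -d "{\"role_id\": \"${ROLE_ID}\", \"secret_id\": \"${SECRET_ID}\"}" \
    "${VAULT_ADDR}/v1/auth/approle/login" | jq -r .auth.client_token)

# 2) read the public CA key the servers will trust via TrustedUserCAKeys
curl -s -H "X-Vault-Token: ${CLIENT_TOKEN}" \
    "${VAULT_ADDR}/v1/ssh_signed_keys/config/ca" | jq -r .data.public_key
```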

Ansible Rollout

To use the role, create a playbook which includes the role and sets the following variables. Additionally we suggest using Ansible Vault to provide vault_host_role_id and vault_host_secret_id, which were created by the AppRole authentication configuration. You can also provide these two parameters on the command line. For user key validation, the TrustedUserCAKeys option of the SSH daemon gets configured:

- hosts: all
  become: yes
  vars:
    vault_addr: http://vault.local:8200/
    user_ssh_path: "ssh_signed_keys"
  roles:
  - role: vault-ssh-signed-keys

When using Ansible Vault to load the variables, use the following command line:

ansible-playbook vault-ssh.yml

If you add the variables on the command line - which is not the preferred way to go:

ansible-playbook vault-ssh.yml -e vault_host_role_id="<your-approle-id>" -e vault_host_secret_id="<your-secret-id>"

After this step all hosts are configured so that users can connect with an SSH key signed by HashiCorp Vault.
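To verify the rollout on a host, you can inspect the effective configuration of the running SSH daemon. A sketch - the CA file path on disk depends on the role's defaults:

```shell
# show the effective TrustedUserCAKeys setting of the ssh daemon
sudo sshd -T | grep -i trustedusercakeys
```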

Using HashiCorp Signed Public Keys

Now we also want to use our infrastructure configuration as clients to connect to our infrastructure. This is a two-step process:

  • Sign our own SSH key
  • Open an SSH connection with the signed SSH key

Signing the SSH key can be done with the following function. All parameters of the function can be configured as environment variables. You might also review “directory based profiles” to configure/change these variables based on the current directory.

At this point the scripts/functions expect an environment variable VAULT_TOKEN to exist. You might combine these functions with another function which handles the login to Vault and obtains a valid token.

vault_sign_key () {
  VAULT_ADDR="${VAULT_ADDR:-http://vault.local:8200}"
  VAULT_MOUNT=${VAULT_MOUNT:-ssh_signed_keys}
  VAULT_ROLE=${VAULT_ROLE:-client_keys}

  VAULT_PUBLIC_SSH_KEY=${VAULT_PUBLIC_SSH_KEY:-"$HOME/.ssh/id_rsa.pub"}
  VAULT_SIGNED_KEY=${VAULT_SIGNED_KEY:-"$HOME/.ssh/vault_signed_key.pub"}
  SSH_USER=${SSH_USER:-ubuntu}

  if [[ -z "${VAULT_TOKEN}" ]]; then
    echo "[ERR] VAULT_TOKEN is not set"
    return 1
  fi

  TMP_DIR=$(mktemp -d)
  cat > "${TMP_DIR}/ssh-ca.json" << EOF
{
    "public_key": "$(cat "${VAULT_PUBLIC_SSH_KEY}")",
    "valid_principals": "${SSH_USER}"
}
EOF
  # request a signed certificate for our public key from vault
  if ! curl -s --fail -H "X-Vault-Token: ${VAULT_TOKEN}" -X POST -d @"${TMP_DIR}/ssh-ca.json" \
      "${VAULT_ADDR}/v1/${VAULT_MOUNT}/sign/${VAULT_ROLE}" | jq -r .data.signed_key > "${VAULT_SIGNED_KEY}" ; then
    echo "[ERR] Failed to sign public key."
  fi
  chmod 0600 "${VAULT_SIGNED_KEY}"
  rm -rf "${TMP_DIR}"
}

The function above provides a signed SSH key which can now be used to connect to an infrastructure host. So we can take the next step and write a wrapper function for the SSH connection. Within this wrapper the previous function is called to provide a signed key for the current connection. This key is then used together with our own private key to connect to the host.

vault_ssh () {

  if [[ -z "${1}" ]]; then
    echo "[INFO] Usage: vault_ssh user@host [-p 2222]"
    return
  fi

  if [[ "${1}" =~ ^-+ ]]; then
    echo "[ERR] Additional SSH flags must be passed after the hostname. e.g. 'vault_ssh user@host -p 2222'"
    return 1
  elif [[ "${1}" =~ ^[a-zA-Z]+@[a-zA-Z]+ ]]; then
    SSH_USER=$(echo "$1" | cut -d'@' -f1)
    SSH_HOST=$(echo "$1" | cut -d'@' -f2)
  else
    SSH_USER=$(whoami)
    SSH_HOST=${1}
  fi

  # honor a user set in the ssh client configuration for this host
  SSH_CONFIG_USER=$(ssh -G "$SSH_HOST" | awk '$1 == "user" { print $2 }')
  if [ -n "$SSH_CONFIG_USER" ]; then
    SSH_USER=$SSH_CONFIG_USER
  fi
  VAULT_PRIVATE_SSH_KEY=${VAULT_PRIVATE_SSH_KEY:-"$HOME/.ssh/id_ed25519_private"}
  VAULT_SIGNED_KEY="$HOME/.ssh/vault_signed_key.pub"

  # sign the public key
  vault_sign_key

  # shift arguments one to the left to remove target address
  shift 1

  # construct an SSH command with the credentials, and append any extra args
  ssh -i "${VAULT_SIGNED_KEY}" -i "${VAULT_PRIVATE_SSH_KEY}" "${SSH_USER}@${SSH_HOST}" "$@"
}

With these functions available you can open an SSH connection to any server that was configured with Vault's TrustedUserCAKeys, provided your signed key is valid for this address range. From the administrator's point of view, the user key does not have to be configured on the server - that is handled by our Vault configuration and the Ansible rollout of the user CA certificate.
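As an alternative to the wrapper function, the OpenSSH client can also pick up the certificate itself via the CertificateFile option in ~/.ssh/config. The paths below are assumptions matching the defaults used in the functions above, and you still need to re-run vault_sign_key when the certificate expires:

```
Host my-server
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519_private
    CertificateFile ~/.ssh/vault_signed_key.pub
```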

$ vault_ssh my-server
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1025-raspi aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Mar  3 10:16:00 UTC 2021

  System load:  0.08               Temperature:           42.3 C
  Usage of /:   10.6% of 29.05GB   Processes:             147
  Memory usage: 45%                Users logged in:       0
  Swap usage:   0%                 IPv4 address for eth0: 1.2.3.4

 * Introducing self-healing high availability clusters in MicroK8s.
   Simple, hardened, Kubernetes for production, from RaspberryPi to DC.

     https://microk8s.io/high-availability

0 updates can be installed immediately.
0 of these updates are security updates.

Last login: Wed Mar  8 00:00:00 2021 from 127.0.0.1
ubuntu at my-server in ~