Keeping Credentials Out of Code - A Practical Guide to 1Password and Vault
The Problem: Hardcoded Credentials

Every developer has faced this temptation: you need to test something quickly, so you hardcode an API key or database password directly in your code or config file. "I'll remove it before committing," you tell yourself. But we all know how that story ends - credentials leak into repositories, get shared in Slack channels, or end up in production configuration files.
The consequences are real. Data breaches, compromised systems, and security incidents often trace back to exposed credentials. The solution isn't just to "be more careful" - we need proper tooling and workflows that make it easier to do the right thing than the wrong thing.
In this guide, we'll explore two complementary approaches to credential management: the 1Password CLI, which brings your existing password manager into the terminal, and HashiCorp Vault, a dedicated secrets-management server. Both tools excel at keeping credentials out of your code and configuration files, but they serve different use cases and can work together beautifully.
While most people know 1Password as a browser extension for storing passwords, the 1Password CLI brings secrets management directly into your development workflow.
```shell
# Sign in
op signin

# List vaults
op vault list

# List items in a vault
op item list --vault "Development"

# Get a specific item
op item get "GitHub Token" --vault "Development"

# Get just a specific field
op item get "GitHub Token" --fields password
```
The real magic happens with op run, which allows you to inject secrets as environment variables without ever exposing them in your shell history or process list.
First, create a .env file with references to 1Password items:
```shell
# .env
DB_PASSWORD="op://Development/PostgreSQL/password"
API_KEY="op://Development/Stripe API/credential"
AWS_ACCESS_KEY_ID="op://Development/AWS/access_key_id"
AWS_SECRET_ACCESS_KEY="op://Development/AWS/secret_access_key"
GITHUB_TOKEN="op://Development/GitHub/token"
```
The syntax is: op://<vault>/<item>/<field>
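Since these references are just structured strings, they are easy to validate before handing a file to op run. Here's an illustrative Python helper (our own sketch, not part of any 1Password tooling) that splits a reference into its three parts; note that real references can also carry an optional section segment, which this sketch ignores:

```python
# Illustrative parser for op://<vault>/<item>/<field> references.
# Actual resolution is always done by the 1Password CLI or SDK.

def parse_op_reference(ref: str) -> tuple:
    """Split an op://<vault>/<item>/<field> reference into its parts."""
    prefix = "op://"
    if not ref.startswith(prefix):
        raise ValueError(f"not a 1Password secret reference: {ref!r}")
    parts = ref[len(prefix):].split("/")
    if len(parts) != 3 or not all(parts):
        raise ValueError(f"expected op://<vault>/<item>/<field>, got: {ref!r}")
    return tuple(parts)  # (vault, item, field)

print(parse_op_reference("op://Development/Stripe API/credential"))
# → ('Development', 'Stripe API', 'credential')
```

A check like this in a pre-commit hook can catch typos in your .env references before they cause a runtime failure.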
Now run your application with injected secrets:
```shell
# Run a Node.js application
op run -- node app.js

# Run a Python script
op run -- python manage.py runserver

# Run Docker Compose
op run -- docker-compose up

# Run Terraform
op run -- terraform apply

# Run any command with secrets injected
op run -- ./your-application
```
Let's look at real-world scenarios where 1Password CLI can transform your development workflow from insecure to secure.
Consider a typical Node.js application that needs to connect to a database. The insecure approach that many developers fall into looks like this in your config.js:
```javascript
// Never do this
module.exports = {
  database: {
    host: 'localhost',
    user: 'admin',
    password: 'SuperSecret123!', // Hardcoded!
    database: 'myapp'
  }
};
```
This is dangerous because the password is directly in your source code, visible to anyone with repository access, and will likely end up in version control history even if you try to remove it later.
Instead, refactor your configuration to read from environment variables:
```javascript
// Read from environment variables
module.exports = {
  database: {
    host: process.env.DB_HOST || 'localhost',
    user: process.env.DB_USER || 'admin',
    password: process.env.DB_PASSWORD, // Injected by op run
    database: process.env.DB_NAME || 'myapp'
  }
};
```
Now your code contains no secrets - it only references environment variables. The fallback values (using ||) provide defaults for non-sensitive settings, but notice that DB_PASSWORD has no fallback, ensuring the application fails fast if the secret isn't provided.
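The fail-fast behavior can also be made explicit. This is a sketch in Python (the helper name require_env is our own invention); the same pattern works in any language:

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, failing fast when it's missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required environment variable {name} is not set")
    return value

def load_db_config() -> dict:
    return {
        "host": os.environ.get("DB_HOST", "localhost"),  # non-sensitive: default ok
        "user": os.environ.get("DB_USER", "admin"),      # non-sensitive: default ok
        "password": require_env("DB_PASSWORD"),          # secret: must be injected
        "database": os.environ.get("DB_NAME", "myapp"),
    }
```

Run the application under op run and the check passes; run it bare and it stops immediately with a clear error instead of trying to connect with an empty password.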
Next, create a .env file in your project root with 1Password secret references. This file can safely be committed to version control because it doesn't contain actual secrets, only references to items in your 1Password vault:
```shell
DB_HOST=localhost
DB_USER=admin
DB_PASSWORD=op://Development/PostgreSQL/password
DB_NAME=myapp
```
The magic is in the op:// reference syntax. When you run your application through op run, 1Password CLI automatically resolves these references, fetching the actual password from your vault and injecting it as an environment variable - but only for the duration of your application's process.
Finally, run your application:
```shell
op run -- node app.js
```
The -- separator tells op run that everything after it is the command to execute. Your application runs normally, but with secrets securely injected, never touching your filesystem or appearing in process lists.
Beyond local development, 1Password integrates seamlessly with CI/CD pipelines. Here's how to securely inject secrets in GitHub Actions without exposing them in your workflow files or logs:
```yaml
name: Deploy
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Load secrets from 1Password
        uses: 1password/load-secrets-action@v1
        with:
          export-env: true
        env:
          OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
          AWS_ACCESS_KEY_ID: op://Production/AWS/access_key_id
          AWS_SECRET_ACCESS_KEY: op://Production/AWS/secret_access_key

      - name: Deploy to AWS
        run: |
          # Secrets are now available as environment variables
          aws s3 sync ./dist s3://my-bucket
```
The OP_SERVICE_ACCOUNT_TOKEN is stored as a GitHub secret (the only secret you need to store there!), and the action uses it to fetch all other secrets from 1Password. This approach centralizes secret management in 1Password while keeping your CI/CD workflows clean and auditable.
When you need more programmatic control over secret retrieval, 1Password provides official SDKs for Go, JavaScript, and Python. These SDKs are perfect for applications that need to fetch secrets dynamically at runtime rather than relying on environment variable injection:
```python
import asyncio
import os

from onepassword import Client

async def main():
    # Initialize client with service account token
    client = await Client.authenticate(
        auth=os.environ.get("OP_SERVICE_ACCOUNT_TOKEN"),
        integration_name="My App",
        integration_version="v1.0.0"
    )

    # Resolve a secret reference
    password = await client.secrets.resolve("op://Development/Database/password")

    # Use it in your application (connect_to_database is your own code)
    db_connection = await connect_to_database(
        host="localhost",
        user="admin",
        password=password,
        database="myapp"
    )

if __name__ == '__main__':
    asyncio.run(main())
```
The SDK approach gives you fine-grained control over when and how secrets are fetched. It's particularly useful for long-running applications like web servers that might need to refresh secrets periodically, or for applications that need to fetch different secrets based on runtime conditions.
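One such refresh pattern is caching a resolved secret and re-fetching it after a TTL. The sketch below is our own illustration, not part of the 1Password SDK; in a real service the fetch callable would wrap client.secrets.resolve:

```python
import time
from typing import Callable, Optional

class CachedSecret:
    """Cache a secret value and re-fetch it once it is older than ttl_seconds."""

    def __init__(self, fetch: Callable[[], str], ttl_seconds: float,
                 clock: Callable[[], float] = time.monotonic):
        self._fetch = fetch      # e.g. a function wrapping client.secrets.resolve(...)
        self._ttl = ttl_seconds
        self._clock = clock      # injectable for testing
        self._value: Optional[str] = None
        self._fetched_at = float("-inf")

    def get(self) -> str:
        now = self._clock()
        if self._value is None or now - self._fetched_at >= self._ttl:
            self._value = self._fetch()
            self._fetched_at = now
        return self._value
```

Injecting the clock keeps the helper testable; in production you would simply leave the default time.monotonic in place.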
```javascript
// JavaScript/Node.js example with 1Password SDK
import { createClient } from "@1password/sdk";

// Initialize the client (requires OP_SERVICE_ACCOUNT_TOKEN environment variable)
const client = await createClient({
  auth: process.env.OP_SERVICE_ACCOUNT_TOKEN,
  integrationName: "My App",
  integrationVersion: "v1.0.0",
});

// Resolve a secret reference
const password = await client.secrets.resolve("op://Development/Database/password");

// Use it in your application (connectToDatabase is your own code)
const connection = await connectToDatabase({
  host: "localhost",
  user: "admin",
  password: password,
  database: "myapp"
});
```
While 1Password is excellent for developer workflows and smaller teams, HashiCorp Vault is built for enterprise-scale secrets management with advanced features like dynamic secrets, encryption as a service, and comprehensive audit logging.
Let's start with fundamental Vault operations. For local experimentation, you can run a development server (note: never use dev mode in production as it stores data in memory and runs unsealed):
```shell
# Start a dev server (for testing only!)
vault server -dev

# In another terminal, set the address
export VAULT_ADDR='http://127.0.0.1:8200'

# Authenticate
vault login <root-token>

# Write a secret (KV v2)
vault kv put -mount=secret myapp/config \
    db_password="SuperSecret123" \
    api_key="sk_test_123456"

# Read a secret
vault kv get -mount=secret myapp/config

# Get just one field
vault kv get -mount=secret -field=db_password myapp/config
```
The Key-Value (KV) secrets engine is Vault's simplest storage backend, perfect for static secrets like API keys and passwords. The -mount=secret flag specifies which KV mount to use - secret is the default in dev mode.
There are several strategies for getting secrets from Vault into your applications. Each has different trade-offs in terms of complexity, security, and operational overhead.
The simplest approach is to write a wrapper script that fetches secrets from Vault with vault kv get and exports them as environment variables before starting your application:
```shell
#!/bin/bash
# load-secrets.sh

export DB_PASSWORD=$(vault kv get -mount=secret -field=db_password myapp/config)
export API_KEY=$(vault kv get -mount=secret -field=api_key myapp/config)
export AWS_ACCESS_KEY_ID=$(vault kv get -mount=secret -field=access_key_id aws/credentials)
export AWS_SECRET_ACCESS_KEY=$(vault kv get -mount=secret -field=secret_access_key aws/credentials)

# Run your application
exec "$@"
```
Use it:
```shell
./load-secrets.sh node app.js
./load-secrets.sh python manage.py runserver
```
This approach is straightforward but has a limitation: the secrets are exposed in the process environment, which can be visible to other processes on the system. It works well for development and testing but may not meet security requirements for production systems.
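You can see that exposure directly: every child process started by the wrapper inherits its environment. This small Python demonstration (no Vault required; the password value is made up) shows a child process reading a secret set by its parent:

```python
import os
import subprocess
import sys

# Simulate the wrapper script exporting a secret into the environment.
child_env = dict(os.environ, DB_PASSWORD="s3cret-from-vault")

# Any process launched with that environment can read the secret.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    env=child_env, capture_output=True, text=True,
)
print(result.stdout.strip())  # → s3cret-from-vault
```

On Linux, the environment of a running process is also readable via /proc/<pid>/environ by the same user (and by root), which is one reason stricter deployments prefer Vault Agent or direct SDK calls.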
For more sophisticated deployments, Vault Agent acts as a local daemon that handles authentication and can automatically render configuration files with secrets injected. This is particularly useful for applications that need configuration files rather than environment variables:
```hcl
# vault-agent-config.hcl
pid_file = "./pidfile"

vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method {
    type = "approle"

    config = {
      role_id_file_path   = "/etc/vault/role-id"
      secret_id_file_path = "/etc/vault/secret-id"
    }
  }
}

template {
  source      = "/etc/app/config.tmpl"
  destination = "/etc/app/config.json"
}
```
Template file (config.tmpl):
```
{
  "database": {
    "host": "{{ with secret "secret/data/myapp/config" }}{{ .Data.data.db_host }}{{ end }}",
    "password": "{{ with secret "secret/data/myapp/config" }}{{ .Data.data.db_password }}{{ end }}"
  },
  "api": {
    "key": "{{ with secret "secret/data/myapp/config" }}{{ .Data.data.api_key }}{{ end }}"
  }
}
```

Note the data/ segment in the template path: for the KV v2 engine, templates read from secret/data/<path>, which is why the values live under .Data.data.
Start the Vault Agent, which will authenticate using AppRole and continuously watch for changes, re-rendering the template whenever secrets are updated:
```shell
vault agent -config=vault-agent-config.hcl
```
Vault Agent is ideal for containerized environments and systems where you want to decouple secret retrieval from application code. The agent handles token renewal automatically, ensuring your application always has valid credentials.
For maximum flexibility and control, you can integrate Vault directly into your application code using official SDKs.
```python
# Python example with AppRole authentication
import os

import hvac

# Initialize Vault client
client = hvac.Client(url='http://127.0.0.1:8200')

# Authenticate with AppRole
auth_response = client.auth.approle.login(
    role_id=os.environ.get('VAULT_ROLE_ID'),
    secret_id=os.environ.get('VAULT_SECRET_ID')
)

# Read a secret (KV v2, default mount point "secret")
secret = client.secrets.kv.v2.read_secret_version(
    path='myapp/config'
)

db_password = secret['data']['data']['db_password']
api_key = secret['data']['data']['api_key']

# Use the secrets (connect_to_database is your own code)
connection = connect_to_database(password=db_password)
```
```javascript
// Node.js example with AppRole authentication
import vault from 'node-vault';

async function getSecrets() {
  // Initialize Vault client
  const client = vault({
    endpoint: 'http://127.0.0.1:8200'
  });

  // Authenticate with AppRole
  // (node-vault stores the returned token on the client automatically)
  await client.approleLogin({
    role_id: process.env.VAULT_ROLE_ID,
    secret_id: process.env.VAULT_SECRET_ID
  });

  // Read secrets (note the data/ segment for KV v2)
  const result = await client.read('secret/data/myapp/config');

  return {
    dbPassword: result.data.data.db_password,
    apiKey: result.data.data.api_key
  };
}
```
```go
// Go example with AppRole authentication
package main

import (
	"fmt"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// Initialize Vault client (reads VAULT_ADDR from the environment)
	config := vault.DefaultConfig()
	client, err := vault.NewClient(config)
	if err != nil {
		panic(err)
	}

	// Authenticate with AppRole
	authData := map[string]interface{}{
		"role_id":   os.Getenv("VAULT_ROLE_ID"),
		"secret_id": os.Getenv("VAULT_SECRET_ID"),
	}

	authResponse, err := client.Logical().Write("auth/approle/login", authData)
	if err != nil {
		panic(err)
	}

	client.SetToken(authResponse.Auth.ClientToken)

	// Read secrets (KV v2: note the data/ segment in the path)
	secret, err := client.Logical().Read("secret/data/myapp/config")
	if err != nil {
		panic(err)
	}

	data := secret.Data["data"].(map[string]interface{})
	dbPassword := data["db_password"].(string)
	apiKey := data["api_key"].(string)

	// Don't log secret values; confirm retrieval instead.
	fmt.Printf("DB password retrieved (%d chars)\n", len(dbPassword))
	fmt.Printf("API key retrieved (%d chars)\n", len(apiKey))
}
```
One of Vault's most powerful features is dynamic secrets - credentials that are generated on-demand and expire automatically. Unlike static secrets that you store and rotate manually, dynamic secrets are created when requested and automatically deleted after their lease expires. This dramatically reduces the attack surface: if credentials are compromised, they're only valid for a limited time.
Let's configure Vault to generate temporary PostgreSQL credentials on demand. First, enable the database secrets engine and configure it to connect to your database:
```shell
# Enable the database secrets engine
vault secrets enable database

# Configure PostgreSQL connection
vault write database/config/mydb \
    plugin_name=postgresql-database-plugin \
    allowed_roles="readonly,readwrite" \
    connection_url="postgresql://{{username}}:{{password}}@localhost:5432/myapp" \
    username="vault" \
    password="vault-password"

# Create a role that generates 1-hour credentials
vault write database/roles/readwrite \
    db_name=mydb \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' INHERIT; \
        GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="24h"

# Generate credentials (they'll automatically expire!)
vault read database/creds/readwrite
```
Each time you run this command, Vault generates a unique username and password, creates a PostgreSQL role with those credentials, and returns them to you. The output looks like this:
```
Key                Value
---                -----
lease_id           database/creds/readwrite/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration     1h
lease_renewable    true
password           A1a-kR0dZ7y2wP3lR6qS
username           v-token-readwrite-x4kt9txzx
```
After one hour (the lease_duration), Vault automatically revokes these credentials. Your application can renew the lease before expiration, or simply request new credentials - there's no manual rotation required.
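A common client-side convention is to renew well before expiry - often at half to two-thirds of the lease duration - so a failed renewal call can be retried before the credentials die. A tiny illustrative helper (our own sketch, not part of any Vault SDK):

```python
def next_renewal_delay(lease_duration_s: float, fraction: float = 2 / 3) -> float:
    """Seconds to wait before renewing a Vault lease.

    Renewing at a fraction of the TTL, rather than right at expiry,
    leaves headroom to retry if a renewal call fails.
    """
    if lease_duration_s <= 0:
        raise ValueError("lease duration must be positive")
    return lease_duration_s * fraction

# For the 1h lease above: renew after roughly 40 minutes.
print(round(next_renewal_delay(3600)))  # → 2400
```

A renewal loop would sleep for this delay, call the lease-renew endpoint, and fall back to requesting fresh credentials if the renewal is rejected.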
Dynamic secrets aren't limited to databases. Here's how to configure Vault to generate temporary AWS credentials with specific permissions:
```shell
# Enable AWS secrets engine
vault secrets enable aws

# Configure AWS credentials
vault write aws/config/root \
    access_key=AKIAIOSFODNN7EXAMPLE \
    secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
    region=us-east-1

# Create a role that generates temporary AWS credentials
vault write aws/roles/ec2-admin \
    credential_type=iam_user \
    policy_document=-<<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
EOF

# Generate temporary AWS credentials
vault read aws/creds/ec2-admin
```
This creates a real IAM user in AWS with the specified permissions, returns the credentials to your application, and automatically deletes the user when the lease expires. This is incredibly powerful for CI/CD pipelines and temporary access scenarios - no more long-lived AWS access keys sitting in configuration files.
Now that we've covered the technical implementation, let's discuss operational best practices that will help you maintain a secure credential management system over time.
First, keep files that might ever contain literal secrets out of version control:

```shell
# Add to .gitignore
echo ".env" >> .gitignore
echo ".env.local" >> .gitignore
echo "*.key" >> .gitignore
echo "*.pem" >> .gitignore
echo "secrets.yml" >> .gitignore
```
Prefer secret references over literal values in any file you do commit:

```shell
# Bad - .env file
DATABASE_PASSWORD=SuperSecret123

# Good - .env file with 1Password references
DATABASE_PASSWORD=op://Production/Database/password
```
Rotate credentials regularly - or better, let Vault handle rotation for you:

```shell
# For static roles, trigger manual rotation
vault write -f database/rotate-role/my-static-role

# Or use dynamic secrets that expire and rotate automatically
vault read database/creds/readwrite  # New credentials each time!
```
Never share secrets across environments. Development should use development credentials, production should use production credentials. With 1Password's environment file support, this becomes straightforward:
```shell
# Development
op run --env-file=.env.dev -- npm start

# Staging
op run --env-file=.env.staging -- npm start

# Production
op run --env-file=.env.prod -- npm start
```
Each .env.* file references different vaults or items in 1Password, ensuring complete separation between environments. This isolation prevents accidental production data access during development and limits the blast radius of compromised credentials.
```hcl
# Vault policy - only read access to specific paths
path "secret/data/myapp/*" {
  capabilities = ["read", "list"]
}

path "secret/data/shared/*" {
  capabilities = ["read", "list"]
}
```
Grant only the minimum necessary permissions. Applications should only read secrets, not write or delete them. Service accounts should only access the specific paths they need. This policy-based approach means a compromised application can't escalate its privileges or access secrets it doesn't need.
Comprehensive audit logging is critical for security compliance and incident response. Enable it early:
```shell
# Vault audit logging
vault audit enable file file_path=/var/log/vault-audit.log

# View who accessed what secret
cat /var/log/vault-audit.log | jq '.request.path'
```
Every secret access, authentication attempt, and policy change gets logged with full context: who made the request, when, from what IP address, and whether it succeeded. This audit trail is invaluable for security investigations, compliance audits, and understanding usage patterns.
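Because each audit entry is a JSON object on its own line, post-processing is straightforward. A quick Python sketch (the entries below are fabricated examples, trimmed to just the fields this script uses) counts accesses per secret path:

```python
import json
from collections import Counter

# Fabricated sample entries in the shape of Vault's file audit log.
sample_log = """\
{"type": "response", "request": {"path": "secret/data/myapp/config", "remote_address": "10.0.0.5"}}
{"type": "response", "request": {"path": "database/creds/readwrite", "remote_address": "10.0.0.7"}}
{"type": "response", "request": {"path": "secret/data/myapp/config", "remote_address": "10.0.0.5"}}
"""

counts = Counter(
    json.loads(line)["request"]["path"]
    for line in sample_log.splitlines()
    if line.strip()
)

# Print each path with its access count, most frequent first.
for path, n in counts.most_common():
    print(f"{n:3d}  {path}")
```

The same approach extends naturally to grouping by remote_address or flagging unexpected paths, feeding whatever alerting pipeline you already run.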
Keeping credentials out of code and configuration files isn't just a best practice - it's essential for security. Both 1Password and HashiCorp Vault provide powerful solutions: 1Password CLI fits individual developers and teams who already live in 1Password, while Vault adds dynamic secrets, fine-grained policies, and audit logging for larger deployments.
The key is making secret management frictionless. When it's easier to use op run or the Vault SDK than to hardcode a password, developers will naturally do the right thing.
Start small: pick one project, replace its hardcoded credentials with environment variables, and use op run or Vault to inject them. You'll immediately see the benefits, and you can expand from there.
Your future self (and your security team) will thank you.