How Claude Sonnet 4.5 and GitHub Copilot helped us navigate the maze of custom Backstage integrations
Spotify's Backstage platform promises a beautiful vision: a unified developer portal where teams can discover services, create resources from templates, and manage their entire infrastructure through a single pane of glass. The documentation is comprehensive, the community is vibrant, and the screenshots look amazing.
Then you actually try to implement it for your specific use case.
Suddenly, you're deep in YAML configurations, debugging GitHub action parameters, wrestling with ArgoCD sync policies, and wondering why your Crossplane resources aren't being created. The "just follow the quickstart" approach works perfectly... until it doesn't. And when it doesn't, you're on your own, piecing together Stack Overflow posts, GitHub issues, and documentation that assumes you already know what you're doing.
This is where things get tedious.
Not because Backstage is poorly designed - quite the opposite. It's incredibly flexible and powerful. But that flexibility means that ready-made solutions rarely fit your exact use case. You need to understand every layer of your stack.
And honestly? Sometimes you just want to build the thing, not become an expert in every technology in your stack.
Here's what changed when we brought AI into the development process:
Traditional development often works like this:
AI-assisted development changes the workflow:
The key difference? You spend less time searching and more time validating. The AI becomes a knowledgeable pair programmer who remembers everything about your project.
Let me share what happened when we tried to integrate Backstage with Crossplane and ArgoCD. This wasn't a tutorial scenario - this was real work with real frustrations.
The Ask: "Make Backstage templates create Crossplane resources and store them locally."
Simple enough, right? Create a template, generate some YAML, save it to disk. Classic Backstage scaffolder use case.
The Reality: Backstage's scaffolder runs in a sandboxed temporary directory. Your files exist for about 3 seconds before they're deleted. The documentation mentions this in passing, but you don't realize the implications until you've spent an hour wondering why your files disappear.
Traditional Approach:
AI-Assisted Approach:
User: "The files are being created but disappear. Why?"
AI: "Backstage's scaffolder uses temporary directories that are cleaned up after execution. For persistence, you have three options:
Given your Crossplane use case, option 1 enables GitOps patterns with ArgoCD. Would you like to explore that?"
Boom. We pivoted from a dead-end approach to proper GitOps architecture in one exchange. No Googling, no piecing together Stack Overflow answers, no reading through 50-page design docs.
Now we needed to push files to an existing GitHub repository. How hard could that be?
The Reality: Backstage has THREE different GitHub-related actions:
publish:github - Creates NEW repositories only
github:repo:push - Pushes to existing repos (but has limitations)
publish:github:pull-request - Creates pull requests
Each has different parameters. Documentation exists but is scattered. Examples are often outdated.
What Actually Happened:
Attempt 1: Used publish:github
steps:
  - id: publish
    action: publish:github
    input:
      repoUrl: github.com?owner=me&repo=my-existing-repo
Result: Error: "Repository already exists"
Time Lost: 15 minutes
Attempt 2: Switched to github:repo:push with force parameter
steps:
  - id: push
    action: github:repo:push
    input:
      force: true  # Found this in a GitHub issue
Result: Error: "force is not a valid parameter"
Time Lost: 20 minutes
Attempt 3: Researched documentation, found correct action
steps:
  - id: pr
    action: publish:github:pull-request
    input:
      repoUrl: github.com?owner=me&repo=my-repo
      branchName: add-resource
Result: It worked! But only after checking the official Backstage GitHub repo
Time Lost: Another 20 minutes
Total Wasted Time: Nearly an hour of trial-and-error
What Should Have Happened:
User: "I need to push files to an existing GitHub repository."
AI: [Searches Backstage documentation] "For existing repositories, use publish:github:pull-request. This creates a PR which is better for GitOps workflows. Here's the configuration:
[exact YAML configuration]
The publish:github action only creates NEW repos, and github:repo:push requires fast-forward merges which won't work for your use case."
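As a concrete sketch, the working configuration for that exchange might look like the following (the title and description values are illustrative additions; check the scaffolder action's input schema for the full parameter set):

```yaml
steps:
  - id: pr
    action: publish:github:pull-request
    input:
      repoUrl: github.com?owner=me&repo=my-repo
      branchName: add-resource
      title: Add new Crossplane resource
      description: Generated by the Backstage scaffolder template
```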
This is where AI shines: it can quickly parse documentation, understand context, and provide the specific solution for your exact scenario.
Our templates generated two files:
xnamespace.yaml - The actual Crossplane resource
catalog-info.yaml - Backstage's catalog entry
We pushed both to GitHub. ArgoCD monitored the repo and synced changes to Kubernetes.
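To make this concrete, the two generated files might look roughly like this (the claim kind, API group, names, and annotations are assumptions for illustration, not the exact output of our templates):

```yaml
# xnamespace.yaml - hypothetical Crossplane claim produced by the template
apiVersion: platform.example.org/v1alpha1
kind: XNamespace
metadata:
  name: team-a-namespace
spec:
  parameters:
    name: team-a
---
# catalog-info.yaml - Backstage catalog entry describing that resource
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: team-a-namespace
  annotations:
    backstage.io/managed-by-location: url:https://github.com/me/my-repo
spec:
  type: kubernetes-namespace
  owner: team-a
```

Note that only the first document is a real Kubernetes object; the second is meaningful to Backstage alone, which is exactly what tripped up ArgoCD.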
The Problem: ArgoCD tried to apply catalog-info.yaml to Kubernetes. Kubernetes said "What is this backstage.io/Resource thing?" and rejected it.
Traditional Debugging:
Time Lost: 30-40 minutes
AI-Assisted Debugging:
User: "ArgoCD is failing with 'could not find backstage.io/Resource'"
AI: "That's a Backstage catalog entity, not a Kubernetes resource. ArgoCD is trying to apply it to your cluster. You need to exclude catalog-info.yaml files from ArgoCD sync:
source:
  directory:
    exclude: 'catalog-info.yaml'
This tells ArgoCD to sync everything EXCEPT Backstage catalog entries."
Time Saved: Immediately understood the problem and solution.
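In context, that exclude sits under the source block of the ArgoCD Application. A minimal sketch (repo URL, path, and names are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane-resources
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/me/my-repo
    targetRevision: main
    path: resources
    directory:
      exclude: 'catalog-info.yaml'  # skip Backstage-only files
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}
```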
Here's what made working with AI fundamentally different:
We were working with:
That's SIX different technologies, each with their own mental model, terminology, and gotchas.
Traditional Approach: You need to context-switch constantly:
AI-Assisted: The model maintains context across ALL these systems:
User: "The GitHub push worked but ArgoCD isn't syncing."
AI: "Let me check:
You need to create a secret in the argocd namespace with label 'argocd.argoproj.io/secret-type=repository' containing your token."
The AI doesn't context-switch - it sees the whole system.
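The repository credential mentioned above is just a labeled Kubernetes Secret that ArgoCD discovers by its label. A hedged sketch (the secret name and token are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/me/my-repo
  username: not-used
  password: <github-token>  # personal access token with repo scope
```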
When we finally asked "What about certificates and RBAC for fresh clusters?", the AI immediately recognized:
This wasn't about knowing Backstage specifically - it was about understanding patterns:
You can't Google this. It's not in any single doc. It's the accumulated wisdom of having worked with these patterns before.
At the end of our session, we created an automation script. Initially, it just:
But then the question: "What about fresh cluster requirements?"
Traditional Approach: You'd deploy, encounter errors, then debug:
$ ./setup.sh
[cluster created]
[crossplane installed]
[resources applied]
[tests run]
ERROR: Crossplane provider can't create namespace
[spend 20 minutes debugging]
Oh, RBAC permissions!
AI-Assisted Approach:
AI: "For fresh clusters, we need:
Let me add functions to handle these automatically."
The AI anticipated problems based on understanding the system architecture, not just the immediate requirements.
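For instance, the "provider can't create namespace" failure from the transcript above usually comes down to a missing binding like this one (the service account name is an assumption; Crossplane generates provider service account names dynamically, which is why our script had to discover it at runtime):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: provider-kubernetes-admin
subjects:
  - kind: ServiceAccount
    name: provider-kubernetes-abc123  # placeholder; looked up at setup time
    namespace: crossplane-system
roleRef:
  kind: ClusterRole
  name: cluster-admin  # broad for a demo cluster; scope down in production
  apiGroup: rbac.authorization.k8s.io
```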
After completing the project, I asked the AI to reflect on what went well and what didn't. The results were... surprisingly insightful.
Adaptive Problem-Solving: When local storage failed, we pivoted to GitOps without losing momentum. This is where AI excels - it's not emotionally invested in the first solution.
Comprehensive Documentation: We created 6 detailed guides as we solved problems. The AI documented not just "how" but "why", capturing context while it was fresh.
Final Automation: The setup script was excellent - handling dynamic service accounts, providing clear feedback, considering fresh cluster scenarios.
Multiple Template Iterations (8+ versions): We edited templates 8 times trying different GitHub actions. We should have researched available actions BEFORE starting implementation.
Delayed RBAC Configuration: We didn't think about fresh cluster requirements until explicitly asked. Should have considered "clean slate" scenarios from the beginning.
Incomplete Requirements Gathering: Started implementing "local storage" without asking "Why? What's the bigger picture?"
Destination: Excellent - complete GitOps platform with automation
Journey: Educational but inefficient - could have been more direct
Path: Too many trial-and-error iterations
The self-critique was honest: "We could have reached this destination more efficiently by consulting documentation earlier and asking more clarifying questions upfront."
Even with AI assistance, building custom Backstage integrations is still tedious. But it's tedious in a different way:
Without AI: Tedious because you're constantly:
With AI: Tedious because you're:
The latter is much more productive tedium. You're spending time on high-level decisions, not low-level implementation details.
If you want to try AI-assisted development, here are practical patterns that worked:
Don't say: "Create a Backstage template that stores files locally"
Do say: "I want Backstage users to create Crossplane resources. The resources should be version-controlled, automatically applied to the cluster, and visible in a UI. What's the best approach?"
This invites the AI to suggest architecture, not just implement your potentially-flawed idea.
Don't accept: "Try using the force parameter"
Do ask: "Can you show me in the official documentation where this parameter is defined?"
Make the AI cite sources. This catches assumptions early.
Don't assume: Current environment will always exist
Do ask: "If I delete everything and start over, what's needed? What about certificates, RBAC, secrets?"
This catches hidden dependencies.
Don't just: "Make it work"
Do say: "Create a test checklist before implementing. What could go wrong at each step?"
This encourages proactive problem-solving.
Don't just accept: The first implementation
Do ask: "What are alternative approaches? What are the trade-offs?"
This helps you make informed decisions.
Our final deliverable was a 573-line bash script that:
Traditional approach: This would take days to build:
AI-Assisted approach: Created in one request:
User: "Create a setup script that automates everything we did manually, including handling fresh clusters, dynamic service accounts, and providing clear user feedback."
AI: [Generates 573-line script]
Was it perfect? No - we refined it based on the RBAC discussion. But it was 95% correct on first generation, and we could iterate on the specification ("add RBAC configuration") rather than the code.
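The structure of such a script can be sketched in a few lines. This is a hypothetical skeleton, not our actual 573-line script: each step is a function, and a DRY_RUN flag (an assumption added here for safe inspection) prints commands instead of executing them, so the flow can be reviewed without a cluster:

```shell
#!/usr/bin/env bash
# Hypothetical skeleton of a platform setup script.
# DRY_RUN=1 (the default here) echoes commands instead of running them.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"

run() {
  # Execute a command, or print it when dry-running.
  if [ "$DRY_RUN" = "1" ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

ensure_cluster()     { run kind create cluster --name platform; }
install_crossplane() { run helm upgrade --install crossplane crossplane-stable/crossplane \
                         -n crossplane-system --create-namespace; }
configure_rbac()     { run kubectl apply -f rbac/provider-binding.yaml; }
register_repo()      { run kubectl apply -f argocd/repo-secret.yaml; }

main() {
  ensure_cluster
  install_crossplane
  configure_rbac
  register_repo
  echo "setup complete"
}

main "$@"
```

The real script additionally discovered dynamic service account names and validated each step's output before continuing.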
Backstage isn't unique in being "tedious when ready-made solutions don't fit." This describes most of platform engineering:
Every platform tool promises "simple setup" and delivers "simple for the happy path."
AI doesn't eliminate this complexity. But it changes how you engage with it:
Instead of becoming an expert in 6 different technologies, you become an expert in describing what you need and validating what you get.
Instead of reading documentation for hours, you have conversations about trade-offs and approaches.
Instead of debugging for 30 minutes, you explain the error and get potential solutions in 30 seconds.
Building our Backstage + Crossplane + ArgoCD platform took about 6 hours of back-and-forth with Claude Sonnet 4.5. It included:
Without AI? I estimate this would have taken 2-3 days of work, possibly more. And the documentation would have been an afterthought.
Was it perfect? No - we could have been more efficient by:
Was it valuable? Absolutely. We went from zero to an actually working, automated GitOps platform in a single day, with comprehensive documentation and a deep understanding of how everything works.
That's the power of AI-assisted development: Spend your time validating and deciding, not searching and debugging.
If you want to experience this workflow:
And remember: AI is a collaborator, not a magic wand. You still need to:
But you'll spend your time on high-value activities (architecture, requirements, validation) instead of low-value activities (syntax debugging, documentation hunting, example copying).
That's a trade I'll take every time.
Looking back at our experience, we realized something important: we should have pursued truly spec-driven development.
Our workflow looked like this:
This is better than traditional development, but it's still reactive. We were still doing trial-and-error, just faster.
True spec-driven development would be:
Here are specific things we should have done differently:
What we did: Asked "How do I push to an existing GitHub repo?" and tried whatever the AI suggested first.
What we should have done:
"Before suggesting anything, search the official Backstage documentation for GitHub integration actions. List all available actions with their intended use cases, then recommend the best fit."
This forces verification before implementation. No guessing, no trial-and-error.
What we did: Started with "make templates that store files locally" and pivoted when it didn't work.
What we should have done:
"I want to create Kubernetes resources through Backstage. They need to be:
Create a design document that:
ONLY AFTER I approve the design should you generate code."
What we did: Built features, then realized we needed RBAC, then retrofitted it.
What we should have done:
"Before writing any code for the automation script, create a test plan that covers:
For each item, cite the official documentation requirement."
What we did: Iterated on code directly. Changed YAML, tweaked parameters, debugged in place.
What we should have done: Maintain a specification file that gets updated, then regenerate implementations:
# spec.md

## GitHub Integration Requirements
- Action: publish:github:pull-request (per Backstage docs v1.2.3)
- Target: Existing repository
- Branch strategy: Feature branches
- Source: Official docs link

## When requirements change:
1. Update this spec
2. Ask AI to regenerate implementation based on updated spec
3. Never edit generated code directly
What we did: Had a conversation that evolved organically. Hard to reproduce.
What we should have done:
Project Structure:
├── specs/
│ ├── v1-initial-requirements.md
│ ├── v2-added-gitops.md
│ ├── v3-added-rbac.md
├── implementations/
│ ├── template-v1.yaml (generated from spec v1)
│ ├── template-v2.yaml (generated from spec v2)
│ ├── template-v3.yaml (generated from spec v3)
└── validation/
└── test-results-v3.md
This makes the evolution traceable and reproducible.
AI-Assisted (what we did):
Spec-Driven (what we should have done):
Use AI-Assisted when:
Use Spec-Driven when:
We saved enormous time compared to traditional development. But we could have saved even more and built something more maintainable by:
By adopting truly spec-driven development, we can elevate AI-assisted workflows from "faster trial-and-error" to "precise, reproducible engineering." This is the future of platform engineering, and we're excited to continue refining our approach.