When Building with Agents, Remember That Some Friction Is Actually Load-Bearing
A hard-won lesson from building at agent speed
Today, one keystroke deleted my entire vault. What that mistake uncovered was the digital equivalent of me storing the combination to my safe—inside my safe.
I was learning yazi, a terminal file manager, because I'd long wanted something faster than Finder for navigating files. Day one with the tool. I had the cursor on my vault folder (160GB of notes, code, agent infrastructure, daily journals, a decade of accumulated work on an encrypted external SSD) and inadvertently pressed d.
Yazi deleted all 713,000 files of the vault. Instantly. No confirmation dialog. Just gone.
Now, I’m not a reckless person. I had backups. I had a restic repository on an 8TB external drive, running nightly snapshots at 3 AM. Encrypted. Verified. The backup system was, by any reasonable measure, solid.
There was just one problem. The password to decrypt those backups was stored in a .env file. Inside the vault. The vault I had just deleted.
The combination to the safe was locked inside the safe.
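For concreteness, the flawed arrangement looked roughly like this. It's a minimal sketch, not the actual script the assistant and I wrote; the paths, names, and retention policy are all illustrative:

```python
#!/usr/bin/env python3
"""Roughly the shape of the original nightly job. Paths, names, and the
retention policy are illustrative. The flaw: the restic password lives in
a .env file *inside* the vault this script backs up."""

import os
import subprocess
from pathlib import Path

VAULT = Path("/Volumes/vault")                       # the encrypted SSD
ENV_FILE = VAULT / "projects" / "backup" / ".env"    # <-- inside the vault
REPO = "/Volumes/backup-8tb/restic-repo"             # the 8TB backup drive


def load_password() -> str:
    # Read RESTIC_PASSWORD out of the .env file, which only exists for as
    # long as the vault itself exists.
    for line in ENV_FILE.read_text().splitlines():
        if line.startswith("RESTIC_PASSWORD="):
            return line.split("=", 1)[1].strip()
    raise RuntimeError("RESTIC_PASSWORD not found in .env")


def nightly_backup() -> None:
    env = {**os.environ, "RESTIC_PASSWORD": load_password()}
    subprocess.run(["restic", "-r", REPO, "backup", str(VAULT)], env=env, check=True)
    subprocess.run(["restic", "-r", REPO, "forget", "--keep-daily", "30", "--prune"],
                   env=env, check=True)
    subprocess.run(["restic", "-r", REPO, "check"], env=env, check=True)


if __name__ == "__main__":
    nightly_backup()
```

Every piece of that works. It's the one dependency at the top that makes the whole thing circular.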
The recovery that almost wasn’t
Let me walk you through the thirty seconds between pressing d and understanding what had just happened.
First: confusion. The folder just… wasn’t there anymore. Yazi’s file list refreshed and my vault was gone. I didn’t process it immediately. I thought maybe I’d navigated somewhere wrong.
Then: the adrenaline. That full-body “oh no” that anyone who’s ever run rm -rf on the wrong directory knows intimately. After flailing around a little bit, I finally checked the Trash. There it was—all 160GB of it, sitting calmly in the macOS Trash!
Then: the real fear. I knew the backup existed. But I also knew, with sudden and horrible clarity, exactly where the restic password lived. It lived in a .env file nested several directories deep inside the vault. The vault currently sitting in the Trash.
If I had emptied the Trash—which, by the way, was a habit I’d been actively trying to build—the backup would still exist on that 8TB drive. Encrypted. Permanently inaccessible. Every nightly snapshot, every careful verification, every byte of redundant data, all of it locked behind a password that no longer existed anywhere in the world.
As it happened, I restored the vault from the Trash with Finder's “Put Back.” Everything was intact. Twenty minutes of panic, no actual data loss.
But the vulnerability was now impossible to unsee.
How I built myself a trap
Here’s the part that stings: I didn’t build that backup system alone. I built it with an AI coding assistant. And the assistant did a great job. Really. The backup system worked flawlessly. It ran restic snapshots on a cron schedule. It pruned old backups. It verified integrity. If you asked me “do you have backups?” I would have said yes with complete confidence, because I did.
The assistant suggested storing the restic password in a .env file—standard practice for secrets in a development environment. I agreed. The .env file went into the project directory. The project directory was inside the vault. And neither of us—not me, not the assistant—ever asked the one question that mattered:
What happens if the vault disappears?
This is the trap. Not the technical mistake—that’s easy to fix. The trap is the process that created it.
When you build with an AI agent, you move fast. Genuinely fast. I went from “I should probably back this up” to a fully operational backup system in maybe forty-five minutes. Cron job, encryption, snapshot pruning, the works. That speed feels incredible. It feels like the future.
But speed has a cost, and the cost is the friction you skip over. When I’ve set up backup systems manually in the past—SSH keys, rsync scripts, offsite rotation—the tedium of the process forced me to think about every piece. Where do the credentials live? What happens if this machine dies? What’s the restore procedure? The boring questions. The ones that don’t feel productive.
The agent doesn’t ask those questions. It builds the thing you asked for. It builds it well. And then you move on to the next thing, because you’re moving at agent speed, not reflection speed.
The agent built a perfect backup system. It just couldn’t survive losing itself.
Joe Masilotti wrote something recently that crystallized this for me: AI removes the friction that used to force strategic thinking. I’d go further. Some friction is load-bearing. Remove it, and the structure looks fine until it collapses.
The pattern is everywhere
Once I saw this in my own system, I started seeing it everywhere. The bootstrap paradox—storing a thing’s recovery mechanism inside the thing itself—is remarkably common.
- Backup passwords stored in the same environment they back up
- Server recovery scripts that only exist on the server
- 2FA recovery codes stored exclusively in the password manager that’s protected by 2FA
- Agent system prompts and configuration that only exist inside the agent’s own context
- Docker Compose files for a service stored only in the container’s volume
- SSH keys for a remote machine stored only on that remote machine
Every one of these works perfectly until the one moment it needs to work, and then it doesn’t, because the prerequisite for recovery is the thing you’re trying to recover.
And the agent era makes this worse, not because agents are bad at building things, but because they’re so good at it that you stop scrutinizing what they build. A manually configured backup system gets questioned at every step because every step is annoying. An agent-built system gets a “looks good, ship it” because the output is clean and the tests pass.
The failure mode isn’t “the backup doesn’t work.” The backup worked great. The failure mode is “the entire context in which the backup makes sense has disappeared.” No test catches that. No CI pipeline flags it. It’s an architectural flaw that only shows up when the architecture itself is gone.
What I actually fixed
After the adrenaline wore off, I spent a few hours hardening things. Here’s what changed.
Broke the bootstrap paradox. The restic password moved to macOS Keychain. It now survives independently of the vault, the drive, the backups—it’s part of the operating system’s credential store. I also created a recovery kit on the local machine, completely outside the encrypted drive. Recovery runbook, bootstrap scripts, credential references. If the vault vanishes again, the instructions for getting it back don’t vanish with it.
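Here's a minimal sketch of the new arrangement, assuming the macOS security command-line tool; the Keychain service name (restic-vault) and the paths are illustrative, not my exact setup:

```python
#!/usr/bin/env python3
"""The same nightly job, minus the bootstrap paradox: the restic password
now comes from macOS Keychain, which survives the vault, the SSD, and the
backup drive. One-time setup (prompts for the password and stores it in
the login keychain):

    security add-generic-password -a "$USER" -s restic-vault -w
"""

import os
import subprocess
from pathlib import Path

VAULT = Path("/Volumes/vault")               # illustrative
REPO = "/Volumes/backup-8tb/restic-repo"     # illustrative


def keychain_password(service: str = "restic-vault") -> str:
    # "security find-generic-password -w" prints only the stored secret.
    out = subprocess.run(
        ["security", "find-generic-password", "-s", service, "-w"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def nightly_backup() -> None:
    env = {**os.environ, "RESTIC_PASSWORD": keychain_password()}
    subprocess.run(["restic", "-r", REPO, "backup", str(VAULT)], env=env, check=True)
    subprocess.run(["restic", "-r", REPO, "forget", "--keep-daily", "30", "--prune"],
                   env=env, check=True)
    subprocess.run(["restic", "-r", REPO, "check"], env=env, check=True)


if __name__ == "__main__":
    nightly_backup()
```

The only structural change from the original is where the password comes from, and that one change is what breaks the circular dependency. restic can also fetch the secret itself via its RESTIC_PASSWORD_COMMAND environment variable, pointed at that same security call, which keeps the password out of the script entirely.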
Added defense in depth. The 8TB nightly backup was already running. I added a second backup: a separate SSD, also restic-encrypted, also with its password in Keychain, stored in a fire safe (a true offsite copy is next on the list). I also added a cron job that alerts me if the backup hasn't run in 26 hours. Before, I'd have found out the backup was stale by needing it and discovering it wasn't there. Now I find out the next morning.
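The staleness check is a small script on a cron schedule. Again, a sketch under assumptions: the repository path and Keychain service name are illustrative, and the "alert" here is just a message and a non-zero exit for cron to surface however you already handle failures:

```python
#!/usr/bin/env python3
"""Cron-driven staleness check: complain if the newest restic snapshot is
older than 26 hours. Paths and the Keychain service name are illustrative."""

import json
import os
import re
import subprocess
import sys
from datetime import datetime, timedelta, timezone

REPO = "/Volumes/backup-8tb/restic-repo"  # illustrative repository path
MAX_AGE = timedelta(hours=26)


def latest_snapshot_age() -> timedelta:
    env = {
        **os.environ,
        # restic asks Keychain for the password itself, so nothing in this
        # check depends on the vault existing.
        "RESTIC_PASSWORD_COMMAND": "security find-generic-password -s restic-vault -w",
    }
    out = subprocess.run(
        ["restic", "-r", REPO, "snapshots", "--latest", "1", "--json"],
        capture_output=True, text=True, check=True, env=env,
    )
    snapshots = json.loads(out.stdout)
    if not snapshots:
        raise RuntimeError("repository has no snapshots")
    raw = snapshots[-1]["time"]
    # restic prints RFC 3339 timestamps with nanosecond precision; trim the
    # fractional part to microseconds so fromisoformat() accepts it.
    raw = re.sub(r"\.(\d{1,6})\d*", r".\1", raw).replace("Z", "+00:00")
    return datetime.now(timezone.utc) - datetime.fromisoformat(raw)


if __name__ == "__main__":
    age = latest_snapshot_age()
    if age > MAX_AGE:
        print(f"WARNING: newest backup is {age} old", file=sys.stderr)
        sys.exit(1)  # cron surfaces the failure; swap in any notifier you like
    print(f"OK: newest backup is {age} old")
```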
Fixed the tool. In yazi’s keymap.toml, I remapped the D key—permanent delete—to a no-op. Soft delete to Trash is now the only delete operation available. One-keystroke permanent deletion in a file manager is a footgun that shouldn’t exist, and I don’t care if that’s a controversial opinion. I kept the d key for Trash because soft delete is fine; the OS gives you a chance to recover. But permanent, irrecoverable deletion should require more ceremony than a single keypress.
Added a pre-flight checklist. This is the meta-fix, and honestly the most valuable one. Before building anything with an agent now, I force myself through three questions:
- What problem am I solving?
- What would be cool to build? (Recognize this as a trap.)
- What do I actually need?
Question two is the dangerous one. Agents make it trivially easy to build elaborate systems that feel productive and look impressive and don’t actually address the core need. The backup system I originally built was cool. What I needed was a recovery plan that could survive losing everything it was supposed to recover.
The principle
Your recovery plan must survive the thing it recovers.
That’s it. That’s the whole lesson. You can tape it to your monitor.
It sounds obvious when you say it out loud. But go check right now—go look at your backup system, your infrastructure recovery docs, your password manager’s recovery flow, your agent configurations. Trace the dependency chain all the way down. If any link in that chain lives exclusively inside the thing it’s supposed to recover, you have a safe with the combination locked inside.
The agent era makes this check more urgent, not less. We’re building faster than we ever have. The systems are more capable. The tests pass. The code is clean. And somewhere in the dependency graph, there’s a circular reference that nobody questioned because the build was fast and the output looked right.
Build fast with agents. I mean that sincerely—the speed is real, the productivity is real, don’t let this story scare you into doing everything by hand. But pause, even once, and ask: If this whole thing disappeared right now, what would I need to get it back? And where is that stored?
If the answer is “inside the thing”—fix it now. While you still can.
Because I was fortunate. The Trash hadn’t been emptied. The next person might not be.
For your agent
Here’s the practical payoff. Copy the prompt below and paste it into whatever AI coding assistant you use. It will audit your infrastructure for bootstrap paradoxes—circular dependencies where your recovery mechanism lives inside the thing it’s supposed to recover.
Audit my backup and recovery infrastructure for bootstrap paradoxes.
A bootstrap paradox is when a recovery credential, script, or procedure is stored exclusively inside the system it’s meant to recover—like a safe with the combination locked inside.
For each item below, trace the dependency chain and tell me if any link is circular:
- Backup encryption passwords. Where are they stored? Could I access them if every backed-up volume were gone?
- Recovery scripts and runbooks. Do they live on the machines or drives they’re supposed to restore?
- API keys and service credentials. If my primary environment disappeared, could I still authenticate to the services I need for recovery?
- 2FA recovery codes. Are they only in a password manager that itself requires 2FA?
- Agent configurations and system prompts. If this session disappeared, could a fresh session reconstruct the setup?
For anything that fails: suggest a fix that breaks the circular dependency. The recovery mechanism must survive the loss of the thing it recovers.
You will be surprised what this finds. I was.