FlexIT

The Double Lock Principle

In regulated industries, you never rely on a single control. You always have a backup. The double lock principle isn't just about redundancy — it's about building systems where a single point of failure can't compromise the whole thing.

When I was managing IT for medical device companies, this was second nature. Every access control had a secondary check. Every data flow had an audit trail. Every integration had a fallback. Not because we were paranoid, but because regulators would ask "what happens when this fails?" and you needed a good answer.

Now I'm building with AI, and I see the same principle applies — but most AI-assisted builders don't think about it.

Here's what I mean: when Claude or GPT generates code for me, that's one lock. But I never ship it without a second lock. That might be:

  • A manual review against the requirements I wrote (not the requirements the AI interpreted)
  • An automated test that validates the business logic independently
  • A compliance check against the regulatory framework we're operating in
  • A second AI pass with a different prompt, looking specifically for edge cases
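The second of those locks — an independent automated test — can be sketched in a few lines. Everything here is hypothetical for illustration: the tiered-discount rule, the function names, and the requirement cases stand in for whatever business logic the AI actually generated. The point is that the test cases are derived from the requirements I wrote, not from the AI's interpretation of them.

```python
def ai_generated_discount(order_total: float) -> float:
    """Stand-in for code an AI assistant produced (hypothetical logic)."""
    if order_total >= 500:
        return order_total * 0.10
    if order_total >= 100:
        return order_total * 0.05
    return 0.0

# Second lock: expected results written directly from MY requirements doc,
# independently of the generated code above.
REQUIREMENT_CASES = [
    (99.99, 0.0),       # below first tier: no discount
    (100.00, 5.00),     # first-tier boundary: 5%
    (500.00, 50.00),    # second-tier boundary: 10%
    (1000.00, 100.00),  # well inside second tier
]

def second_lock(fn) -> bool:
    """Pass only if the generated code matches every requirement case."""
    return all(abs(fn(total) - expected) < 1e-9
               for total, expected in REQUIREMENT_CASES)

if __name__ == "__main__":
    assert second_lock(ai_generated_discount), "second lock failed: do not ship"
```

If the AI had interpreted "orders of 100 or more" as a strict `>` comparison, the boundary case at 100.00 would fail the gate — exactly the class of quiet misreading a single lock lets through.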

This isn't about distrusting AI. It's about not trusting any single point in the chain. I never relied on a single developer's code review in my enterprise days either — we always had peer review, QA, and staging environments.

The builders who will succeed with AI are the ones who bring this kind of operational discipline to the new tools. The code writes faster now. The thinking about what can go wrong still needs to be slow and careful.