Governance Is Architecture
Ask ten people what “AI governance” means, and nine of them will describe a compliance function. Policy documents. Review committees. Usage guidelines. Acceptable use policies. Risk assessments that live in a SharePoint folder nobody opens.
That’s not governance. That’s ceremony.
I’ve been building AI-augmented systems for the past year — shipping production software, designing agent workflows, running adversarial security audits, and writing about all of it. Somewhere in the middle of that work, a pattern crystallized that I didn’t have a name for:
AI governance isn’t a compliance layer. It’s an architectural decision.
It needs to be designed into the system at the structural level — not applied after the system is built, and not delegated to a committee that reviews work they didn’t design.
The compliance trap
The default approach to AI governance looks like this: a team builds an AI system, then a separate group evaluates whether it meets policy requirements. The evaluation produces a report. The report produces a remediation list. Remediation is prioritized over feature work. Most of it ships eventually. Some of it doesn’t.
This is the same pattern that produces “secure” software that still ships with OWASP Top 10 vulnerabilities in production — issues that OWASP explicitly positions as design-time concerns, not a post-hoc audit checklist. Security review after implementation, compliance as an afterthought, the ceremony of oversight without the substance.
The problem isn’t that the review happens. The problem is that it happens after the architecture is set. By the time someone evaluates governance, the system’s boundaries are already drawn. The data flows are already established. The agent’s permissions are already scoped — or not, which is arguably more common.
Governance applied after architecture is remediation. Governance designed into architecture is prevention.
What this looks like when you build it
I didn’t arrive at this framework through theory. I arrived at it through building things and watching where they broke.
Governance documents as engineering artifacts. In Pass@1, the governance documents — ROADMAP.md, CHANGELOG.md, ARCHITECTURE.md, CLAUDE.md — aren’t project management overhead. They’re the constraints that make the AI agent produce correct implementations on the first attempt. The governance IS the product. The speed is a byproduct. Remove the governance documents, and the agent still generates code. It just tends to generate the wrong code, confidently, repeatedly.
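One way to make that constraint concrete is to load the governance documents into the agent’s context before every task, so the constraints travel with the request instead of living in a policy binder. A minimal sketch, assuming a file-per-document layout; the prompt-assembly format here is an illustration, not the actual Pass@1 tooling:

```python
from pathlib import Path

# Governance documents named in the post; order sets precedence in the prompt.
GOVERNANCE_DOCS = ["CLAUDE.md", "ARCHITECTURE.md", "ROADMAP.md", "CHANGELOG.md"]

def build_context(repo: Path, task: str) -> str:
    """Prepend every governance document that exists to the task prompt."""
    sections = []
    for name in GOVERNANCE_DOCS:
        doc = repo / name
        if doc.exists():
            sections.append(f"## {name}\n{doc.read_text()}")
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)
```

Delete the files and this still returns a prompt; the agent still generates code. It just no longer generates it inside the constraints.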
Adversarial review as a structural pattern. The Adversary isn’t a code review checklist. It’s a separate agent whose architectural purpose is to attack the work of the building agent. Same AI, different governance constraints, different objectives. The insight wasn’t “we need code review” — it was that review and construction need structural separation, the same way a financial auditor can’t also be the accountant. That’s not a policy. That’s architecture.
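The structural separation can be sketched as a pipeline in which the adversary’s verdict gates the merge. The agents below are stand-in functions, not real model calls; the shape of the pipeline is the point:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    approved: bool
    findings: list[str] = field(default_factory=list)

def build(task: str) -> str:
    # Stand-in for the building agent.
    return f"implementation of {task}"

def attack(artifact: str) -> Verdict:
    # Stand-in for the adversary: its only objective is to find faults.
    findings = [] if artifact.startswith("implementation") else ["no implementation found"]
    return Verdict(approved=not findings, findings=findings)

def pipeline(task: str) -> str:
    artifact = build(task)
    verdict = attack(artifact)   # review is a mandatory stage,
    if not verdict.approved:     # not an optional policy
        raise RuntimeError(f"adversary rejected: {verdict.findings}")
    return artifact
```

Same pipeline, two objectives: `build` optimizes for completion, `attack` optimizes for rejection, and nothing merges without the second.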
The perimeter as a design decision. The AI Perimeter isn’t a list of things AI can’t do. It’s a design boundary — a deliberate architectural decision about where automation should stop and human judgment should begin. The three-question framework (Can I verify the output? Is the cost of a wrong answer low? Does sufficient context exist?) isn’t governance theater. It’s a runtime decision function built into the workflow.
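A minimal sketch of the three questions as a runtime decision function. The question names come from the post; the boolean signals and the all-three-must-pass rule are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Task:
    verifiable: bool   # Can I verify the output?
    low_cost: bool     # Is the cost of a wrong answer low?
    has_context: bool  # Does sufficient context exist?

def inside_perimeter(task: Task) -> bool:
    """Automate only when all three answers are yes;
    otherwise hand the decision back to a human."""
    return task.verifiable and task.low_cost and task.has_context

assert inside_perimeter(Task(True, True, True))
assert not inside_perimeter(Task(True, False, True))  # high stakes: human decides
```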
Structural parallels as architectural insight. The ADHD–LLM isomorphism revealed something I didn’t expect: the architectural patterns that manage cognitive failures in ADHD brains are the same patterns that manage failures in language models. External memory, session continuity, confabulation detection: these aren’t metaphors. They’re the same engineering problem solved at different scales. The architectures of governance for AI agents and for human cognition share a common structure because their failure modes are structurally identical.
When I treat governance as architecture, there are a few decisions I stop delegating to policy: where review happens in the flow, where agents are allowed to write, and where I deliberately stop automation and hand back to humans.
Where the industry is getting it wrong
The current wave of AI governance frameworks treats governance as a layer — something you wrap around an AI system to make it safe. Usage policies, guardrails (a word that’s become meaningless through overuse), and human-in-the-loop as a checkbox rather than a design pattern. When I say “guardrails as theater,” I mean controls that only exist in a policy document. Guardrails as architecture means the workflow makes it impossible to skip the control step.
The problem with layers is that they can be bypassed, ignored, or simply never implemented. A governance policy that says “all AI outputs must be reviewed by a human” is architecturally meaningless if the system doesn’t have a review step built into its execution flow. The policy exists, but the architecture doesn’t enforce it.
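Here is one way that enforcement can live in the architecture rather than the document: encode review status in the types, so unreviewed output simply cannot reach the publish step. A sketch with illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Unreviewed:
    text: str

@dataclass(frozen=True)
class Reviewed:
    text: str
    reviewer: str

def generate(prompt: str) -> Unreviewed:
    # Every AI output starts life as Unreviewed.
    return Unreviewed(f"draft for: {prompt}")

def human_review(draft: Unreviewed, reviewer: str, approve: bool) -> Reviewed:
    # The only way to obtain a Reviewed value is through this step.
    if not approve:
        raise ValueError("rejected in review")
    return Reviewed(draft.text, reviewer)

def publish(output: Reviewed) -> str:
    # Accepts only Reviewed: skipping review is a type error,
    # not a policy violation discovered after the fact.
    return f"published: {output.text} (reviewed by {output.reviewer})"

draft = generate("release notes")
publish(human_review(draft, "reviewer-1", approve=True))
```

The policy sentence and the `publish` signature say the same thing, but only one of them executes.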
This is the same mistake enterprise software made with security twenty years ago. Write the code, then add security. That approach produced two decades of preventable breaches — the same vulnerability classes (injection, broken access control, insecure design) showing up in OWASP data year after year. We know better now — security is designed in, not added on. CISA’s Secure by Design initiative says it explicitly: security is a design-time responsibility for vendors, not a patch-time responsibility for customers.
AI governance is security’s sequel. The architectural thesis is already hiding underneath the formal frameworks — the EU AI Act’s risk-based controls, NIST’s AI Risk Management Framework, ISO/IEC 42001 — all assume governance is something you design into systems, not something you rubber-stamp after the fact. Most implementations haven’t caught up yet. And we’re making the same mistakes, faster, and with higher stakes.
The thread through everything I write
Every post on this site argues a version of this thesis. I just hadn’t said it outright until now.
When I write about governance documents as engineering artifacts, I’m arguing that governance should be structural. When I write about adversarial agents, I’m arguing that oversight should be architectural. When I write about the perimeter, I’m arguing that boundaries should be designed, not assumed. When I write about code that works versus code that belongs, I’m arguing that “functional” and “governed” are different things — and the gap between them is where the expensive failures live.
The thesis is the same every time: governance is architecture. Not policy. Not process. Not a committee. Architecture.
What I’m still figuring out
I don’t have clean answers for everything. These are the open questions I’m working through:
How should governance documents be version-controlled as a codebase scales? A single CLAUDE.md works for a solo developer. What happens when ten agents work on the same codebase with different governance contexts? The version-control problem gets interesting fast. I’m currently exploring the idea of “personalities”: individual agentic personas tailored to their specific tasks, sharing a common central directory where they can “get to know each other” as they evolve, plus a central governance document that drives their decisions and can only be changed through quorum. That idea is still in its early stages.
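An exploratory sketch of the quorum mechanic, with hypothetical persona names and a simple strict-majority threshold; none of this is settled:

```python
def apply_change(doc: str, proposed: str, votes: dict[str, bool]) -> str:
    """Replace the central governance document only on a strict majority."""
    approvals = sum(votes.values())
    if approvals * 2 <= len(votes):
        raise PermissionError("quorum not reached; governance document unchanged")
    return proposed

doc = "v1: review before merge"
doc = apply_change(doc, "v2: review, then adversary pass",
                   {"builder": True, "adversary": True, "scribe": False})
```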
Where does human-in-the-loop become human-in-the-way? I design for human oversight at every critical decision point. But I’ve seen cases where the oversight step becomes a bottleneck that degrades the system more than the risk it’s meant to mitigate. The boundary isn’t static, and I don’t have a formula for finding it.
What does governance look like for agent-to-agent systems? When AI agents coordinate with each other — passing context, delegating tasks, communicating via protocol — the governance model needs to handle delegation chains, permission inheritance, and audit trails across agents. Nobody’s building this well yet. The governance gap is a symptom of a missing discipline.
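One candidate primitive for those delegation chains is permission narrowing: a delegate can only receive a subset of its parent’s scopes, so authority monotonically shrinks as tasks are handed down. A sketch with illustrative scope names:

```python
class Agent:
    def __init__(self, name: str, scopes: frozenset[str]):
        self.name = name
        self.scopes = scopes

    def delegate(self, name: str, scopes: set[str]) -> "Agent":
        # A delegate never gets authority its parent lacks.
        if not scopes <= self.scopes:
            raise PermissionError(f"{name} requested scopes outside {self.name}'s grant")
        return Agent(name, frozenset(scopes))

root = Agent("orchestrator", frozenset({"read_repo", "write_branch", "open_pr"}))
coder = root.delegate("coder", {"read_repo", "write_branch"})
reviewer = coder.delegate("reviewer", {"read_repo"})
# reviewer.delegate("x", {"open_pr"}) would raise: open_pr was dropped upstream
```

Audit trails and context passing are harder problems; narrowing is just the part I’m confident belongs in the structure rather than in a policy.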
These aren’t hypothetical questions. They’re problems I’m actively building against.
A position, not a conclusion
This isn’t a manifesto. It’s a position — informed by building real systems, watching where they break, and noticing that failures almost always trace back to a governance decision made too late, or not made at all.
If you’re working on AI governance and treating it as a compliance function, you’re not wrong — you’re solving a smaller problem than the one in front of you. The compliance layer matters. But without the architectural foundation, it’s a policy document sitting atop a system that was never designed to enforce it.
Governance is architecture.
This post connects the threads running through everything I write. If you want the evidence: Pass@1 on governance as methodology, The Governance Documents on what each file contains and why, The Adversary on adversarial review as structural pattern, The AI Perimeter on boundary design, LLMs Are Practically ADHD on structural parallels, and managing agents like teams on applying these principles as organizational design. If you disagree with the premise or are working on these same problems, I’d like to hear from you.
Sources: OWASP Top 10 Web Application Security Risks · OWASP Top 10:2025 — A04: Insecure Design · CISA Secure by Design · EU AI Act, NIST AI RMF, and ISO/IEC 42001 comparison