TEN PRINCIPLES

Authored Systems

Every operation decomposes into six primitives. Every AI decision should trace to a rule a human wrote. These principles aren't aspirational. They're drawn from watching how hospitals run trauma bays, how airlines run pre-flight, and how every "AI agent" demo falls apart the moment you ask "what happens when it's wrong?"

01

Six primitives. Always six.

Every operation decomposes into six things: Policy, Procedure, Asset, Person, Event, Ledger. Everything else is a composition. Nothing decomposes further. Nothing is missing.

Proof: The Subtraction Test

Pick any operation you run. Write down the six primitives. Now remove one. Remove Policy — who decides what's allowed? Remove Person — who does the work? Remove Ledger — how do you prove it happened? If the operation still makes sense with five, you've found a flaw. Nobody has yet.
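As a sketch, the six primitives fit in a handful of plain record types. Python is used here purely for illustration; the field names and the leak-report example are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class Policy:       # what's allowed, and who decided
    rule: str
    approved_by: str

@dataclass
class Procedure:    # the steps, written down
    name: str
    steps: list

@dataclass
class Asset:        # the thing being acted on
    name: str

@dataclass
class Person:       # who does the work
    name: str

@dataclass
class Event:        # the trigger
    kind: str

@dataclass
class LedgerEntry:  # proof that it happened
    event: Event
    person: Person
    policy: Policy
    note: str

# A composed operation: a leak report, handled under a written rule,
# by a named person, recorded in the ledger.
entry = LedgerEntry(
    event=Event(kind="leak_reported"),
    person=Person(name="Jordan"),
    policy=Policy(rule="leaks are dispatched within 2 hours", approved_by="ops-lead"),
    note="plumber dispatched",
)
```

Delete any one of the six types and the entry above stops making sense, which is the Subtraction Test in code form.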

02

"Agent" is a noun that hides verbs.

When you say "AI agent," you stop asking what it actually does. Take the agent apart and you'll find the same six bricks every time. The agent didn't add a capability. It obscured the structure.

Proof: The Decomposition Challenge

Take any AI agent demo. List every action the agent takes. Classify each one: is it applying a rule? Following steps? Acting on a thing? Assigning work? Responding to a trigger? Recording what happened? You will run out of actions before you run out of primitives.

03

AI sees. It never writes.

AI's job is perception — turning unstructured input into structured data. It classifies. It drafts. It suggests. But it does not author. The moment AI-generated content enters the record without attribution and sourcing, the entire audit trail is compromised.

Proof: The Attribution Audit

Open any system where AI "manages" records. Find five entries. For each one, answer: Who created this? Was it a human, an automation, or a model? If a model, what source data did it use? Can you trace the entry back to a real event? If you can't answer all four for every entry, your system of record is already contaminated.
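The four audit questions can be run mechanically. This is a hypothetical sketch, assuming a record type with provenance fields; the field names are illustrative, not a standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecordEntry:
    content: str
    author_kind: Optional[str] = None   # "human", "automation", or "model"
    source_data: Optional[str] = None   # what the model read, if a model wrote it
    source_event: Optional[str] = None  # the real event the entry traces back to

def attribution_gaps(entry: RecordEntry) -> list:
    """Return every attribution question this entry cannot answer."""
    gaps = []
    if entry.author_kind is None:
        gaps.append("who created this?")
    if entry.author_kind == "model" and entry.source_data is None:
        gaps.append("what source data did the model use?")
    if entry.source_event is None:
        gaps.append("what real event does this trace back to?")
    return gaps
```

An entry that answers all four questions returns an empty list; anything else is contamination you can now count.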

04

The agent is commodity. Your policy is the asset.

Claude, GPT, Gemini — any of them can read email. They're interchangeable execution engines. What isn't interchangeable is your rules: which emails matter, what "urgent" means in your context, who gets escalated to. An agent without a policy is a contractor without a scope of work.

Proof: The Swap Test

Take your most sophisticated AI workflow. Replace the model with a different provider. What breaks? If the answer is "the routing logic, the rules, the escalation paths" — those aren't in the model. They're in your configuration, your prompts, your tribal knowledge. That's your policy. It's the valuable part, and it probably isn't versioned, reviewed, or even written down.
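The separation the Swap Test exposes can be made explicit in code. A minimal sketch, assuming email routing as the workload; the keywords, the two stand-in providers, and the `route` function are all hypothetical.

```python
# The policy: your rules, versioned and reviewable. Not in any model.
POLICY = {
    "urgent_keywords": ["outage", "refund", "legal"],
    "escalate_to": "ops-lead",
}

def route(body, classify):
    """Routing lives in the policy, not in whichever model classifies."""
    if classify(body) == "urgent" or any(
        k in body.lower() for k in POLICY["urgent_keywords"]
    ):
        return POLICY["escalate_to"]
    return "inbox"

# Two interchangeable stand-in providers. Swap them; the rules stay put.
def provider_a(text):
    return "urgent" if "asap" in text.lower() else "normal"

def provider_b(text):
    return "normal"
```

Swapping `provider_a` for `provider_b` changes nothing about which emails escalate, because "urgent means outage, refund, or legal" was never the model's knowledge. It was yours.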

05

Hallucinations are inevitable. Architecture decides if they matter.

Every AI hallucinates. This is a property of the technology, not a bug in the implementation. Primitives make hallucinations a classification error — caught at the next gate, corrected, logged. Agents make hallucinations a decision error — acted on before anyone reviews it.

Proof: The Misclassification Drill

Feed your AI a tricky input. Watch what happens. In a primitives architecture, the wrong classification triggers the wrong policy, but the policy is visible, the action is logged, and a human can reverse it. In an agent architecture, the wrong classification triggers... whatever the model decided. Can you undo it? How long before you even notice?
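The primitives side of the drill can be sketched in a few lines. This is a hypothetical pipeline, not a product: classification only selects from visible policies, every action lands in the ledger, and any entry can be flipped by a human.

```python
# Visible policies: classification picks one, it never invents one.
POLICIES = {"invoice": "file_to_accounting", "complaint": "escalate_to_manager"}

ledger = []

def act(classification):
    """Unknown labels fall through to review instead of to action."""
    action = POLICIES.get(classification, "hold_for_review")
    ledger.append({"classification": classification,
                   "action": action, "reversed": False})
    return action

def reverse(index):
    """A wrong classification is a logged entry a human can flip,
    not a fait accompli."""
    ledger[index]["reversed"] = True
    return ledger[index]
```

A hallucinated label like `"invoce"` triggers `hold_for_review`, and even a correctly routed action remains one index lookup away from reversal.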

06

Accountability requires explainability. Explainability requires structure.

The first time a customer asks "why did it do that?" and the answer is "the model decided," trust breaks. It doesn't come back. Every automated decision must trace to a policy a human wrote, approved, and can change.

Proof: The "Why?" Walkback

Pick any automated action your system took in the last week. Walk it back: Why did it happen? What rule triggered it? Who approved that rule? When was it last reviewed? If you hit a dead end — "the AI decided" — that's the gap. That's where trust will break.
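The walkback itself is a short traversal. A sketch, assuming actions carry a link to the rule that triggered them; the dictionary shape here is an assumption for illustration.

```python
def walk_back(action):
    """Trace an automated action to the human-authored rule behind it.
    The chain stops, loudly, at the first unanswerable question."""
    chain = ["action: " + action["name"]]
    rule = action.get("rule")
    if rule is None:
        chain.append("dead end: the AI decided")
        return chain
    chain.append("rule: " + rule["text"])
    chain.append("approved by: " + rule["approved_by"])
    chain.append("last reviewed: " + rule["reviewed_on"])
    return chain
```

The point is the `None` branch: a system that can hit it in production is a system where trust is one customer question away from breaking.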

07

You don't have procedures. You have habits.

The gap between "what we say we do" and "what we actually do" is where all operational risk lives. The expert does it right because they remember. The new hire does it wrong because nobody wrote it down.

Proof: The New Hire Test

Hand your most important process to someone who's never done it. Give them only what's written down. No phone calls, no Slack messages, no "just ask Jordan." Time how long it takes before they're stuck. That's the length of your actual procedure. Everything after that is tribal knowledge.

08

Structure first. AI second.

If you haven't written down how your business works, you're paying to automate operations you haven't defined. The most expensive AI deployment in the world can't fix a process that doesn't exist.

Proof: The Pencil Test

Before you automate anything, try running it on paper for a week. Printed checklists, handwritten logs. If the paper version works, you have a real procedure and AI will make it faster. If the paper version collapses, you don't have a process to automate. You have a conversation to have first.

09

The kill switch test.

What happens if you turn off the AI? If it stops your operation, the AI isn't assisting your operation — it is your operation. When AI is bounded to specific perception points, the rest of your system works fine without it. You lose speed. You don't lose the ability to operate.

Proof: Actually do it.

Pick a Saturday. Turn off every AI integration. Run your operation manually. What breaks? What slows down? What stops entirely? The things that slow down are good AI deployments. The things that stop entirely are dependencies you didn't know you had.
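Bounded AI looks like a flag, not a rewrite. A minimal sketch, assuming document intake as the perception point; the function names and the stand-in model are hypothetical.

```python
def intake(document, ai_enabled, classify):
    """With AI on, documents arrive pre-sorted. With AI off, they queue
    for a human. The operation slows down; it does not stop."""
    if ai_enabled:
        return classify(document)
    return "manual_triage"

def stand_in_model(document):
    """Placeholder for any provider's classifier."""
    return "maintenance" if "leak" in document.lower() else "general"
```

If turning the flag off routes everything to `manual_triage` and the rest of the system keeps running, the AI was assisting. If turning it off raises an exception somewhere downstream, the AI was the operation.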

10

Authored beats inferred.

An inferred semantic layer tells you what has happened. An authored system tells you what should happen — and alerts you when reality diverges. This is the whole thesis in one line: your operation already has six primitives. The question is whether they're authored — or whether they're just in someone's head, waiting to walk out the door.

Proof: The Resignation Test

Your best operator quits on Friday. No notice. What do you lose? If the answer is "their knowledge of how we actually do things" — that knowledge was never authored. It was inferred from their behavior. Authored systems survive personnel changes. Inferred systems last only as long as their people do.

How to Read These Principles

Founders

Start with #4 (The Swap Test) and #9 (The Kill Switch Test). They'll tell you how dependent your product is on things you don't own.

Operators

Start with #7 (The New Hire Test) and #10 (The Resignation Test). They'll tell you how much of your operation lives in someone's head.

Engineers

Start with #2 (The Decomposition Challenge) and #5 (The Misclassification Drill). They'll change how you think about "AI integration."

Investors

Start with #1 (The Subtraction Test) and #8 (The Pencil Test). They'll give you a framework for evaluating every AI-for-operations pitch you'll hear this year.

These principles emerged from building real systems for real operations — property management, fitness facilities, field services — and discovering that the same six structures kept appearing regardless of industry. We didn't design them. We discovered them. And then we authored them.