Essay · Declare Your Frame

Declared Relationships Beat Inferred Relationships

Traverse what was declared before you generate from what was inferred. That rule survives every project.

Diptych: declared relationships (structured, explicit) on one side, inferred relationships (probabilistic, derived) on the other. Same data, two approaches.

You've been in this conversation. Someone joins halfway. They catch a few words and start talking, confidently, even intelligently, but they've missed the purpose, missed the frame, missed what the room had already agreed was true. At some point, you look at them like they're crazy. Like they're hallucinating.

That's an AI without a frame. Technically fluent. Contextually lost.

Meaning is not in the words

The word 'bank' branching into three meanings: a financial institution, a riverbank, a plane banking in flight. Meaning comes from the frame around the word, not the word itself.

Meaning isn't inherent in language. It's constructed through interaction. One party defines a symbol. The other responds to it. Sociologists call this symbolic interactionism. The frame matters just as much as the words inside it, because the frame is what gives the words meaning.

In any conversation, someone goes first. Someone establishes the axioms.

When you're working with AI at your desk, that someone is you. Your priorities, your principles, the working assumptions you're operating on today, the things you know are true because you've lived them. You are the frame provider. That frame is the most valuable thing you bring to the model, and most of the time we don't say it out loud. We type a question and hope the model picks it up from our phrasing.

Scale that up to a business. In a customer-vendor relationship, the frame provider is the customer. They configured their reality into the software over years. Categories, hierarchies, approval chains, who reports to whom, which contract supersedes which, which submittal belongs to which spec section. That configuration isn't metadata. It's the frame. It's the most expensive knowledge in the building.

Same rule, two scales. The frame was already there. Then we begin.

Two nested shapes, the same structural pattern at two scales: one person at a desk, and an organization as a whole. The frame-provider pattern holds whether it's you at a desk or a customer configuring software.

The pattern almost no one names

Watch anyone working with AI, at a desk or across a team, and you'll see a pattern so common it's invisible.

We have a frame. We don't declare it. We hand the model a question and hope it picks up the context from our phrasing. The model does its best, which means it improvises a frame of its own. Then we call the improvisation "context" and wonder why the answer drifted.

Same pattern at the team scale. An organization has years of declared structure, categories, relationships, rules the humans already put in place, and the common move is to treat that structure as background. Pour the raw text into a vector database. Ask the model to figure out the relationships from scratch. The declarations were right there. We re-derive them from prose and call the hallucination "context."

The cost of this move has a name. Context debt: the gap between what you already know to be true and what the model has to guess.

Venn diagram: what you already know to be true, what the model has to guess, and the gap between them labeled context debt.

What I do instead

When I build with AI, the interactions go better when I start by sharing a frame. Ground principles. Things I know to be true. Then I communicate with the model the way I'd communicate with a collaborator: compassionately and clearly. If I want it to tell me when it doesn't know something, I give it explicit permission to say so, because it will always say something, and it will only take an out if you hand it one. If I'm operating on a working assumption I haven't fully tested, I say that too: "Treat this as true for now. Tell me if you see it break."
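Here is a minimal sketch of what that can look like as a reusable preamble, using the common system/user message shape most chat APIs accept. The wording, the project details, and the helper name are all illustrative, not a prescribed format.

```python
# A declared frame, stated up front instead of hoped-for in phrasing.
# The contents are illustrative; the point is the categories, not the wording.

FRAME = """\
Ground principles:
- Prefer the structure we declared in our data over anything inferred from prose.

Known truths (treat as axioms):
- Submittal 041-B belongs to spec section 03 30 00.

Working assumptions (treat as true for now; tell me if you see them break):
- The approval chain hasn't changed since the last contract revision.

Permissions:
- If you don't know something, say so. "I don't know" is a valid answer.
"""

def with_frame(question: str) -> list[dict]:
    """Build a chat payload that leads with the frame rather than
    hoping the model infers it from the question's phrasing."""
    return [
        {"role": "system", "content": FRAME},
        {"role": "user", "content": question},
    ]
```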

That's the work. Context engineering. Most of us are already doing this, just not calling it out explicitly.

A construction project has thousands of declared relationships: this submittal belongs to that spec section, this RFI blocks that activity, this approver outranks that one. The data is right there, structured, ready. An AI that re-infers any of it from PDF text is doing more work to be less correct.
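To make that concrete, here is a toy slice of those declared relationships in the structured shape they might already have. Every ID is invented for illustration; only the spec-section numbering follows a real convention (CSI MasterFormat).

```python
# A toy slice of a project's declared relationships. Every edge here
# was put in place by a human; nothing is inferred. IDs are invented.

DECLARED: dict[tuple[str, str], str] = {
    ("submittal:041-B", "belongs_to"): "spec:03 30 00",
    ("rfi:112", "blocks"): "activity:pour-level-2",
    ("approver:project-manager", "outranks"): "approver:superintendent",
}

def declared_target(source: str, relation: str) -> str | None:
    """Traverse a declared edge. Returns None only when no human declared one."""
    return DECLARED.get((source, relation))
```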

The general principle: traverse what was declared before you generate from what was inferred. Trust the human's structure first. Use the model only where structure is genuinely missing.
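As code, the rule is a few lines. This sketch builds on the toy DECLARED graph above; `infer_with_model` is a placeholder for whatever LLM call you'd otherwise make.

```python
# The rule as code: traverse first, generate only as a labeled fallback.

def resolve(source: str, relation: str, infer_with_model) -> tuple[str, str]:
    declared = declared_target(source, relation)  # human-declared edge, if any
    if declared is not None:
        return declared, "declared"  # auditable: a person put this here
    # Structure is genuinely missing: let the model guess, and label the guess.
    prompt = f"For {source}, infer the target of the '{relation}' relationship."
    return infer_with_model(prompt), "inferred"
```

The label matters as much as the answer: "declared" is something you can point at and defend; "inferred" is something you double-check.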

Why this matters now

Inferred structure is not governable. You cannot audit it. You cannot defend it in a regulated industry. You cannot point at it and say "the human told us this is true." It's a probability cloud doing its best.

Declared structure is the opposite. It's auditable, portable, and yours to govern. It survives across model upgrades. It compounds over time, because every new declaration adds to the graph rather than retraining the cloud.

The teams that figure this out first will build AI that earns trust in industries where trust is load-bearing: construction, healthcare, finance, law, infrastructure. The teams that don't will keep apologizing for confident hallucinations and calling it the cost of doing business.

The principle, again, as a practiced rule:

Traverse what was declared before you generate from what was inferred.

That's the rule. It survives every project I've worked on.