Your AI can write a Shakespearean sonnet on demand. It can summarize a 50-page legal contract in thirty seconds. It can generate working code in any programming language you throw at it.

It cannot apply your volume discount correctly.

This is the context gap. AI models are trained on the internet: billions of pages of public text, code, and documentation. They know everything about everything in general. They know nothing about your business specifically. Your pricing tiers, your customer segments, your approval workflows, your compliance requirements, the exception logic your operations director carries in her head: none of it exists in the model’s training data. And no amount of general intelligence compensates for specific ignorance.

The Knowledge That Lives Nowhere

Every organization runs on knowledge that exists in exactly one place: the people who do the work.

Your pricing model has twelve exceptions that evolved over seven years. Three of them contradict the published rate card. One applies only to a specific region. Another triggers only when a customer combines two product lines that the system doesn’t recognize as related. The senior account executive knows all twelve. The CRM contains none of them.

Your approval workflow has a documented version in the policy manual and an actual version that people follow. The documented version requires VP sign-off on purchases over $25K. The actual version skips that step for the engineering department because the VP approved a standing arrangement in 2022 and nobody updated the policy. Every employee in engineering knows this. The policy manual does not.

Your compliance requirements include a regulatory framework that was updated six months ago. The legal team sent an email explaining the changes. The operations team adapted their processes. The wiki page still describes the old framework. The training materials reference the old framework. The only accurate representation of current compliance requirements lives in the heads of the three people who attended the legal briefing and adjusted their work accordingly.

This is not unusual. This is every organization. The gap between documented knowledge and operational reality grows wider every quarter, and no one notices because the humans in the system compensate automatically. They carry the context. They apply the exceptions. They route around the outdated documentation.

AI agents cannot do any of this. They operate on what they can see. And what they can see is the documented world, which is incomplete, outdated, and wrong in ways that only domain experts would catch.

Three Approaches That Don’t Work

Organizations that hit the context gap tend to try three things before they find the real solution. All three fail for predictable reasons.

Prompt Stuffing

The first instinct: cram everything into the prompt. Paste in the policy manual, the product catalog, the customer data, the process documentation. Give the AI “all the context” and let it figure things out.

This fails at scale. A mid-market company’s operational knowledge exceeds any context window by orders of magnitude. Even if you could fit it all in, the model’s attention degrades across long contexts; critical details buried on page 37 get ignored while the model fixates on irrelevant information from page 3. Prompt stuffing also costs a fortune in token usage and produces inconsistent results because the model attends to different information on different runs.

The deeper problem: unstructured narrative documentation is ambiguous by design. It was written for humans who already have context. “Use your judgment” makes sense to a ten-year employee. It means nothing to an AI agent.

Fine-Tuning

The second approach: train the model on your data. Feed it your emails, your documents, your transaction history. Let it “learn” your business.

Fine-tuning teaches patterns, not facts. It can teach the model to write in your brand voice. It can teach it domain-specific vocabulary. It can shift the statistical distribution of its outputs toward your industry. What it cannot do is reliably encode your pricing rules, your org chart, your compliance requirements, or the 47 exceptions that your operations team manages manually.

Fine-tuning also freezes knowledge at training time. Your business changes daily. A fine-tuned model reflects the business as it was when you trained it, which, in a dynamic enterprise, means it’s already wrong by the time you deploy it. Continuous fine-tuning is expensive, slow, and still doesn’t solve the structural problem: the model needs to know facts, not just patterns.

RAG Without Structure

The third approach: Retrieval-Augmented Generation. Build a vector database of your documents, retrieve relevant chunks at query time, and inject them into the prompt.

RAG is a real technique with real value. It solves the factual recall problem: the model can look up information it wasn’t trained on. But RAG without structure is a search engine bolted onto a language model. It retrieves document chunks, not understanding.

Your pricing document says “Volume discounts apply to orders exceeding threshold values.” RAG retrieves this chunk. The model reads it. But which threshold values? For which products? With which exceptions? The chunk doesn’t say: those details are in a different document, or in a spreadsheet, or in Sarah’s head. The model fills the gaps with plausible-sounding defaults. The output looks right. It’s wrong.

RAG retrieves text. It does not understand relationships between entities, decision logic, exception handling, or the business context that makes retrieved text actionable. Without structure, RAG is a faster way to deliver incomplete context.
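The failure mode is easy to demonstrate. Here is a minimal sketch, using a toy keyword-overlap score in place of a real embedding model and vector store; the documents and query are invented for illustration:

```python
# Toy retrieval sketch: a real RAG system would use embeddings and a
# vector database, but the structural failure is the same.

def score(query: str, chunk: str) -> int:
    """Count overlapping words between query and chunk (toy relevance score)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

chunks = [
    "Volume discounts apply to orders exceeding threshold values.",
    "Standard shipping takes 5-7 business days.",
    # The actual thresholds, products, and exceptions live in a different
    # document (or a spreadsheet, or someone's head) and were never chunked
    # alongside the policy sentence above.
]

def retrieve(query: str) -> str:
    """Return the single best-matching chunk for the query."""
    return max(chunks, key=lambda ch: score(query, ch))

best = retrieve("What volume discount applies to this order?")
# 'best' is the pricing chunk -- but it carries no thresholds, no product
# scoping, no exceptions. The model must fill those gaps by guessing.
```

Retrieval succeeds; understanding doesn’t. The top-ranked chunk is the right topic and still leaves every decision-relevant detail unstated.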

The Actual Fix: Context Engineering and Business-as-Code

The context gap is a structural problem. It requires a structural solution.

Context Engineering is the discipline of structuring organizational knowledge so AI systems can operate on it accurately. Not dumping documents into a prompt. Not training on historical data. Structuring, with intention, with precision, with the same rigor you’d apply to a database schema or an API contract.

Business-as-Code is the methodology that implements Context Engineering. Three components do the work.

Schemas define what your business IS. JSON Schema definitions of your entities: customers, orders, products, workflows, approval chains. Every entity gets a formal definition with required fields, valid states, relationships, and constraints. A customer schema doesn’t describe a customer in a paragraph. It defines one as a data structure. An AI agent reading the schema knows exactly what a customer is and how it relates to every other entity in your business. No guessing. No hallucinating plausible-sounding attributes.
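As a sketch of what such a definition looks like, here is a hypothetical customer schema. The field names, enums, and constraints are invented for illustration, not drawn from any real NimbleBrain artifact:

```json
{
  "$id": "customer.schema.json",
  "type": "object",
  "required": ["customer_id", "segment", "status"],
  "properties": {
    "customer_id": { "type": "string" },
    "segment": { "enum": ["smb", "mid_market", "enterprise"] },
    "status": { "enum": ["prospect", "active", "churned"] },
    "region": { "type": "string" },
    "orders": {
      "type": "array",
      "items": { "$ref": "order.schema.json" }
    }
  },
  "additionalProperties": false
}
```

An agent reading this knows which fields must exist, which values are legal, and that an order is a separate, formally defined entity. There is nothing to hallucinate.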

Skills define what your business KNOWS. Structured documents that encode domain expertise: the decision logic, the exception handling, the judgment calls that experienced employees make automatically. A pricing skill defines the trigger conditions, the calculation steps, the discount rules, the approval thresholds, the exception cases, and the escalation paths. Skills-as-Documents means domain experts write them in structured markdown, not code. The VP of Sales writes a better pricing skill than any engineer because she knows the exceptions.
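A skill written as a structured document might look like the following sketch. The triggers, percentages, and exceptions are invented for illustration:

```markdown
# Skill: Volume Discount Pricing

## Trigger
Order total exceeds $10,000 OR quantity exceeds 500 units.

## Steps
1. Look up the customer's segment and published rate card.
2. Apply the base volume discount: 5% over $10K, 10% over $50K.
3. Check the exceptions below before quoting.

## Exceptions
- APAC region: discounts capped at 7% regardless of volume.
- Combined Product A + Product B orders qualify at half the thresholds.

## Escalation
Discounts above 10% require VP of Sales approval.
```

Note what this format forces: every "use your judgment" becomes an explicit condition, exception, or escalation path that an agent can actually follow.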

Context provides the background. Structured organizational knowledge that makes schemas and skills coherent: your industry, your strategic priorities, your regulatory environment, your team structure. Context is what ensures the same schema and skill produce different results in different business environments, because different businesses need different outcomes from the same data.
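A context artifact, in the same spirit, might be a small structured file. The contents here are hypothetical:

```json
{
  "industry": "specialty logistics",
  "regulatory_environment": ["DOT", "state-level carrier rules"],
  "strategic_priorities": [
    "protect enterprise renewals",
    "grow mid-market segment"
  ],
  "fiscal_year_start": "02-01"
}
```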

These artifacts are version-controlled in git. They evolve with your business. They’re modular; each agent gets the specific context it needs for its specific task. And they compound over time through The Recursive Loop: BUILD the context, OPERATE agents on it, LEARN from the gaps, BUILD deeper.

The context gap doesn’t close because you bought a better model. It closes because you did the work of encoding what your business knows in formats that AI can actually use. NimbleBrain builds this foundation in every engagement, typically 20-50 entity schemas and 30-80 operational skills in the first two weeks, with agents running on structured context by week three. The AI doesn’t need to guess your business. It reads the schemas.

Frequently Asked Questions

Can fine-tuning solve the context problem?

Partially. Fine-tuning teaches patterns, not facts. It can teach the model your writing style or domain vocabulary, but it won't reliably encode your pricing rules, org chart, or compliance requirements. You need structured context delivery (Business-as-Code), not just fine-tuning.

What is Context Engineering?

Context Engineering is the practice of structuring and delivering the right business knowledge to AI systems at the right time. It includes Business-as-Code schemas, retrieval systems, and context management, ensuring the AI has exactly the information it needs for each task.

How does NimbleBrain solve the context problem?

Business-as-Code. We encode your domain knowledge (processes, rules, terminology, constraints) as structured, version-controlled artifacts that AI systems consume. The AI doesn't need to guess your business. It reads the schemas.

Mat Goldsborough · Founder & CEO, NimbleBrain

Ready to put AI agents to work? Email hello@nimblebrain.ai