
Context Graphs: From AI Pilot to Production

Vivek Khandelwal, Chief Business Officer & Co-Founder @ CogniSwitch
Jan 19, 2026·8 Min Read·Updated May 2, 2026
Reviewed by: Dilip Ittyera — CEO & Co-Founder, CogniSwitch

Context graphs are having a moment, and I'm already hearing competing interpretations. Let's break down what context graphs actually are, why the idea is a good one, and why, in isolation, it moves nothing.

Here's What You Already Know

Documentation is a mess. Not the "data-in-silos" kind of mess: it's outdated, contradictory across departments, and missing the operational exceptions that actually matter. The problem is universal, from small businesses to enterprises, across every category: healthcare, consumer, retail, and more. Yet somehow enterprises manage to run on documentation in this state.

Truth be told, documentation never has been, and never will be, fully up to date. Its status is always incomplete. Tacit knowledge can't be fully articulated; that's what makes it tacit. Healthcare runs on workarounds. Finance runs on judgment calls. That's not even the insight anymore. It's table stakes.

The gap between "what's written" and "what's enforced" grows. Every working day.

The Documentation Gap

Fig 1: The Documentation Gap

What's Written: "All prior authorization requests require 6 weeks of documented conservative treatment before imaging approval."

The policy is clear. The language is specific. The system ingests it verbatim.

So What's the Hoopla All About?

Human teams magically route around bad documentation. They know which SOPs to ignore, which policy is actually enforced, which version is current even when the doc says otherwise.

But AI agents don't have that tribal know-how. They ingest the book as-is: SOPs, policies, everything that's written. The not-so-updated book. Agents suddenly have a "garbage-in" problem.

Related

For a deep dive on the garbage-in problem and why more documents make it worse, read: Garbage In, Garbage Out.

Garbage Has Reached the Foundation. What's the Fix?

A very reasonable response — the context graph proposition: if the book is incomplete, let's go around it. Capture what people actually do. The tribal knowledge. The patterns that work regardless of what's documented.

Context graphs say: stop trying to fix the book. Route around it. Capture the traces of actual decisions. Build the intelligence layer from reality.

This feels like the right answer. The documentation update process is broken or too slow and feels unfixable. The context graphs approach offers an end-run. Capture what people do. Formalize what works. Skip the mess. Problem solved?

Just One Problem

You can't route around something you also depend on. Context graphs capture tribal knowledge as correlations, not causation, and they are joined at the hip with the very SOPs they're trying to route around. In regulated industries, this dependency can't be ignored.

Fig 2: What Context Graphs Capture vs. What's Needed

Expert Behavior
  Captures (correlation): When condition X occurs, expert Y does Z.
  Needs (causation): Is Z correct because of policy Q, or despite violating it?

Outcome Patterns
  Captures (correlation): These inputs correlate with successful outcomes.
  Needs (causation): Would it still work if guidelines change next quarter?

Approval Sequences
  Captures (correlation): This sequence tends to get approved.
  Needs (causation): What's the audit trail linking the sequence to policy?


Trace ≠ Auditability

"We followed the pattern" isn't a satisfying compliance answer. When the auditor asks why the agent did X, "because experts usually do X" doesn't close the loop. That's correlation dressed as explanation. This is the explainability trap — agents work but can't prove why.

The Reconciliation Layer

The opportunity isn't unlocked by tribal knowledge alone, or by documentation alone. It comes together in the reconciliation layer.

  • When the workaround aligns with policy — formalize it. Update the book.
  • When the workaround violates policy — surface it. Decide intentionally.
  • When regulations change — update the book.

Context graphs capture operational reality. SOPs capture organizational intent. Neither alone qualifies as the foundation. The foundation is the domain-expert-curated sync between them. Get that right, and you have tackled the first problem, garbage-in, head on. At the source, not routed around.
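The reconciliation rules above can be sketched in code. This is a minimal illustration only; the trace and policy models, field names, and the `reconcile` function are all hypothetical, not an actual CogniSwitch API. The third rule (regulations change, so update the book) is a book-maintenance step outside this classifier.

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """An observed workaround or expert pattern captured by a context graph."""
    action: str
    rationale: str

@dataclass
class Policy:
    """A written SOP or regulation, versioned so decisions leave an audit trail."""
    rule: str
    version: str
    permitted_actions: set[str]

def reconcile(trace: DecisionTrace, policy: Policy) -> str:
    """Classify an observed practice against written intent.

    Returns one of two dispositions, mirroring the rules above:
    - 'formalize': the workaround aligns with policy, so update the book
    - 'surface':   the workaround violates policy, so escalate for an
                   intentional decision by a domain expert
    """
    if trace.action in policy.permitted_actions:
        return "formalize"  # aligned: fold the pattern into the SOP
    return "surface"        # conflict: a human must decide intentionally

# A workaround observed in practice, checked against the current policy version
trace = DecisionTrace(action="approve_imaging_early", rationale="chronic case")
policy = Policy(rule="6 weeks conservative treatment before imaging",
                version="2026-Q1",
                permitted_actions={"approve_imaging_after_6_weeks"})

print(reconcile(trace, policy))  # prints "surface"
```

The point of the sketch: the output isn't just a verdict, it's a verdict tied to a specific policy version, which is what makes the eventual agent decision auditable.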

That's also where explainability actually lives.

Fig 3: The Reconciliation Layer. Context graphs (tribal knowledge, expert patterns, operational workarounds, decision traces) feed into reconciliation, where practice is aligned with policy, producing formalized SOPs and versioned policies.

Context graphs capture operational reality. SOPs capture organizational intent. The foundation is the curated sync between them.

Continue reading: The Explainability Trap →

Frequently Asked Questions

Context graphs capturing what people actually do sounds faster than fixing documentation that takes months to update. Why is 'fix the book' the right answer?

The argument isn't 'fix the book before you do anything else.' It's that tribal knowledge alone can't be the foundation. When a workaround aligns with policy, formalize it. When it violates policy, surface it and decide intentionally. An agent acting on undocumented workarounds, without knowing whether they conflict with a current regulation, is making compliance decisions on unvalidated practice. Tribal knowledge is real signal. It needs to be reconciled against written intent, not substituted for it.

The gap between what's written and what's enforced — isn't that just a process problem, not a technology problem?

It's a process problem that becomes a technology problem the moment you give an agent access to the book. Human teams route around bad documentation using judgment about which SOP is actually enforced. AI agents don't have that judgment — they ingest the book as-is. What was tolerable as a process inefficiency becomes a systematic error source at scale. The technology question isn't how to fix the process. It's what to put between the documents and the agent.

A reconciliation layer managed by domain experts sounds like another human bottleneck in a system you're trying to automate. How is that not more overhead?

The bottleneck you're worried about is the one you already have — humans correcting agent outputs one at a time, with no visibility into why the agent was wrong. The reconciliation layer moves that work upstream: a domain expert resolves a conflict once, with an audit trail, and that resolution governs every downstream query. One decision at the source versus repeated corrections at the output. The human touch stays, but it's applied where it has leverage.

Decision traces from actual workflows are exactly what we want agents to learn from. What's wrong with building the intelligence layer from what actually works?

Decision traces are valuable signal. The question is what happens when a trace conflicts with a current regulation, or when the person whose decision is being traced had unique context that doesn't generalize. The reconciliation layer decides: does this trace align with policy, conflict with it, or reveal a policy gap needing a deliberate decision? Explainability doesn't live in the trace — it lives in the reconciled judgment that says this practice is validated, and here's why.

How does a reconciliation layer handle environments where the enforced rules change quarterly — like payer claim rules?

That's exactly why the reconciliation layer has to be maintained as a living artifact, not built once. When external contracts and policies change, someone inside the organization needs to explicitly map that change to internal SOPs and sign off — with an audit trail. At a quarterly cadence, that means a formal review cycle: what changed, which SOPs are affected, what's the blast radius across agent workflows that reason over those SOPs.
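That blast-radius question can be made concrete with a small sketch. All SOP names, rule IDs, and workflow names below are entirely hypothetical; the point is the two-hop mapping from an external rule change to affected SOPs to affected agent workflows.

```python
# Which internal SOPs cite which external payer rules (hypothetical data)
sop_to_external_rules = {
    "SOP-017 Prior Authorization": {"PAYER-RULE-42", "PAYER-RULE-97"},
    "SOP-023 Claims Appeal": {"PAYER-RULE-42"},
    "SOP-031 Discharge Billing": {"PAYER-RULE-88"},
}

# Which agent workflows reason over which SOPs (hypothetical data)
workflow_to_sops = {
    "auth-agent": {"SOP-017 Prior Authorization"},
    "appeals-agent": {"SOP-023 Claims Appeal", "SOP-017 Prior Authorization"},
    "billing-agent": {"SOP-031 Discharge Billing"},
}

def blast_radius(changed_rule: str) -> tuple[set[str], set[str]]:
    """Return (affected SOPs, affected agent workflows) for one rule change."""
    sops = {sop for sop, rules in sop_to_external_rules.items()
            if changed_rule in rules}
    workflows = {wf for wf, deps in workflow_to_sops.items()
                 if deps & sops}  # any workflow touching an affected SOP
    return sops, workflows

sops, workflows = blast_radius("PAYER-RULE-42")
print(sorted(sops))       # ['SOP-017 Prior Authorization', 'SOP-023 Claims Appeal']
print(sorted(workflows))  # ['appeals-agent', 'auth-agent']
```

A quarterly review cycle is then a loop over the quarter's rule changes, with a domain expert signing off on each affected SOP before the downstream agents see the update.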

About the Author
Vivek Khandelwal

Chief Business Officer, Co-Founder @ CogniSwitch · M.Sc. Chemistry, IIT Bombay

Vivek Khandelwal is the Chief Business Officer at CogniSwitch, where he leads go-to-market strategy, enterprise partnerships, and the company's thought leadership programs. He is the author of Signal, CogniSwitch's weekly newsletter that translates the complex machinery of enterprise AI infrastructure into clear, actionable intelligence for practitioners and executives in regulated industries.