Framework Analysis // No. 001 // Knowledge Architecture
V_1.0

Explainability is Not Auditability.

The industry has sold reasoning traces as compliance solutions. For regulated enterprises, that conflation becomes an existential risk the moment an examination begins.

The Conflation That's Costing You

Somewhere along the way, two very different questions got treated as one. They are orthogonal, and they require fundamentally different artifacts, architectures, and investments.

Question 1

Explainability

"What did it think?"

A narrative generated after the fact. It tells you what the system claims happened—but not what authorized it.

Question 2

Auditability

"What authorized it?"

Documented evidence of governance. It proves which policy, which version, which criteria governed the decision.

The Reality Check

A healthcare AI system approves a prior authorization. When questioned, it produces an impressive explanation:

"Based on clinical documentation indicating persistent symptoms over 6 weeks and failure of conservative treatment, I determined the MRI request meets medical necessity criteria per the retrieved payer policy."

Sounds compliant. Sounds defensible.

The auditor asks five questions:

1. Which policy version governed this decision?
2. Was that version effective on the decision date?
3. Did you evaluate ALL applicable criteria, or just the ones retrieved?
4. Who had the authority to approve this?
5. Can you reproduce this exact decision?

Silence.

What Explainability Actually Provides

The Hidden Gaps

Explainability exists on a spectrum. Each level tells you something—and leaves something unanswered.

Level               | What It Tells You            | What It Can't Answer
Source attribution  | This came from Document X    | Was X the governing document? The current version?
Retrieval trace     | I retrieved chunks A, B, C   | Did I miss chunk D that contradicts?
Reasoning narrative | Here's my step-by-step logic | Is this reproducible? Who authorized this logic?
Feature attribution | These inputs mattered most   | Did the right policy even get considered?

Every level assumes retrieval was correct and complete. None of them prove governance.
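
To make the gap concrete, here is a minimal sketch of the check that source attribution never performs: resolving which policy version was actually effective on the decision date. The policy IDs, registry structure, and dates are illustrative assumptions, not any particular product's schema.

```python
from datetime import date

# Hypothetical policy registry: each entry records a version's effective window.
# A chunk-level citation cannot answer this; it has to be resolved at decision time.
POLICY_REGISTRY = {
    "imaging-prior-auth": [
        {"version": "2023.2", "effective_from": date(2023, 7, 1), "effective_to": date(2024, 3, 31)},
        {"version": "2024.1", "effective_from": date(2024, 4, 1), "effective_to": None},
    ],
}

def governing_version(policy_id: str, decision_date: date) -> str | None:
    """Return the policy version that governed on the decision date, if any."""
    for entry in POLICY_REGISTRY.get(policy_id, []):
        started = entry["effective_from"] <= decision_date
        not_ended = entry["effective_to"] is None or decision_date <= entry["effective_to"]
        if started and not_ended:
            return entry["version"]
    return None  # no governing version on record: the decision cannot be defended

print(governing_version("imaging-prior-auth", date(2024, 2, 15)))  # -> "2023.2"
```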

The Real Picture

This is not a spectrum. It's a 2×2.

[2×2 matrix: Explainable / Not Explainable on one axis, Auditable / Not Auditable on the other]

Most enterprise AI today sits in the explainable-but-not-auditable quadrant. Vendors positioned it as both explainable and auditable.

Simulator_v1

"Why did the agent approve this medical claim?"

[REASONING_TRACE]: Based on clinical notes, symptoms indicated acute severity matching reimbursement criteria for diagnostic imaging (ID: 442)...

System Warning: Narrative is synthetic / non-authoritative

Analysis_01

The Retrieval Blind Spot

Explainability assumes retrieval was correct. But chunking destroys document structure. A "source attribution" only tells you where a piece of text came from; it doesn't tell you whether that document, in that version, actually governed the decision.

[!] SEMANTIC_SIMILARITY != POLICY_RELEVANCE
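
A minimal sketch of that blind spot, with hard-coded toy similarity scores standing in for an embedding model (chunk labels, scores, and policy identifiers are invented for illustration): the contradicting criterion falls out of top-k retrieval, while a deterministic lookup keyed to the governing policy section returns every criterion regardless of wording.

```python
# Toy similarity scores stand in for an embedding model. Chunk D is the criterion
# that would block approval ("step therapy must be exhausted first"), but its
# wording is not similar to the request, so it scores low and never surfaces.
SIMILARITY = {"A": 0.91, "B": 0.88, "C": 0.84, "D": 0.41}

def retrieve_top_k(scores: dict[str, float], k: int = 3) -> list[str]:
    """What the explanation is built from: the k most similar chunks, nothing more."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# The deterministic alternative: enumerate every criterion attached to the
# governing policy section, regardless of how its wording reads.
POLICY_CRITERIA = {"imaging-prior-auth/2023.2 §4.1": ["A", "B", "C", "D"]}

print(retrieve_top_k(SIMILARITY))                         # ['A', 'B', 'C'] -- D is missing
print(POLICY_CRITERIA["imaging-prior-auth/2023.2 §4.1"])  # ['A', 'B', 'C', 'D'] -- D is always evaluated
```
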
Analysis_02

Generated Reasoning

LLMs do pattern matching, not policy application. Generated narratives sound authoritative, but inconsistency is the hallmark of stochastic systems: ask twice and you can get two different explanations for the same decision.

[REF]: NARRATIVES ARE PERSUASIVE, NOT AUTHORITATIVE.
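
By contrast, here is a minimal sketch of what policy application looks like when criteria are discrete rules (the criterion IDs, thresholds, and case facts are hypothetical): the same case facts always yield the same per-criterion results, which is exactly what a sampled narrative cannot promise.

```python
# Illustrative case facts and criteria for a prior-authorization decision.
CASE = {"symptom_weeks": 7, "conservative_treatment_failed": True, "step_therapy_exhausted": False}

# Each criterion is a discrete, named rule -- not a retrieved passage.
CRITERIA = {
    "C1_duration":     lambda c: c["symptom_weeks"] >= 6,
    "C2_conservative": lambda c: c["conservative_treatment_failed"],
    "C3_step_therapy": lambda c: c["step_therapy_exhausted"],
}

def evaluate(case: dict) -> dict[str, bool]:
    """Policy application, not pattern matching: every criterion, every time."""
    return {cid: bool(rule(case)) for cid, rule in CRITERIA.items()}

results = evaluate(CASE)
print(results)                # {'C1_duration': True, 'C2_conservative': True, 'C3_step_therapy': False}
print(all(results.values()))  # False -> the approval was not defensible
```
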
Audit_Checklist

Required Artifacts

  • Policy Provenance: Version, section, effective date.
  • Criteria Mapping: Logic evaluated against discrete rules.
  • Reproducibility: Deterministic results, not stochastic ones.
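
As a sketch of how those three artifacts might be captured at decision time (the field names, identifiers, and values are assumptions, not a standard schema):

```python
from dataclasses import dataclass, field
from datetime import date, datetime, timezone
import hashlib, json

@dataclass
class DecisionRecord:
    """Written at decision time, never reconstructed afterward."""
    decision_id: str
    decision_date: date
    policy_id: str
    policy_version: str             # provenance: version and section actually applied
    policy_section: str
    criteria_results: dict[str, bool]  # criteria mapping: discrete rule -> pass/fail
    approver: str                   # who, or which delegated authority, approved
    input_hash: str                 # reproducibility: fingerprint of the exact inputs
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def fingerprint(case_facts: dict) -> str:
    """Stable hash of the inputs so the decision can be re-derived and compared later."""
    return hashlib.sha256(json.dumps(case_facts, sort_keys=True).encode()).hexdigest()

record = DecisionRecord(
    decision_id="PA-2024-000123",
    decision_date=date(2024, 2, 15),
    policy_id="imaging-prior-auth",
    policy_version="2023.2",
    policy_section="4.1",
    criteria_results={"C1_duration": True, "C2_conservative": True, "C3_step_therapy": False},
    approver="delegated:utilization-review-agent-07",
    input_hash=fingerprint({"symptom_weeks": 7, "conservative_treatment_failed": True}),
)
```
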
The Question to Ask Your Vendor

"When an auditor asks which policy version governed a decision made six months ago, and whether all applicable criteria were evaluated—can your system answer from records, or does it reconstruct?"

If it reconstructs → You have explainability
If it answers from records → You have auditability
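
The difference in practice, sketched under the same assumed record structure (log contents are illustrative): answering from records is a keyed lookup over what was persisted at decision time, with no generation step involved.

```python
# Append-only audit log keyed by decision ID; values are what was written at decision time.
AUDIT_LOG = {
    "PA-2024-000123": {
        "policy_id": "imaging-prior-auth",
        "policy_version": "2023.2",
        "policy_section": "4.1",
        "decision_date": "2024-02-15",
        "criteria_results": {"C1_duration": True, "C2_conservative": True, "C3_step_therapy": False},
        "approver": "delegated:utilization-review-agent-07",
        "input_hash": "sha256:...",
    },
}

def answer_auditor(decision_id: str) -> dict:
    # Raises KeyError if no record exists -- which is itself the audit finding.
    return AUDIT_LOG[decision_id]

print(answer_auditor("PA-2024-000123"))
```
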
The Bottom Line

Explainability tells you what the system claims happened. Auditability proves you governed.

Regulated industries need both—and they are not the same investment. If you are building on embeddings alone, you are architecting for a future audit failure.