A log tells you what happened. An audit trail tells you why — and proves it.
Every AI system produces logs. Regulators don’t want logs. They want evidence. There’s a difference — and in a prior auth dispute or a CMS audit, it’s the difference that determines liability.
Three Things People Confuse
A Log
A system record that something happened. "The agent ran at 2:47pm and returned an output." Useful for debugging. Inadmissible as compliance evidence.
A Reasoning Trail
The chain of logic an AI system used to reach a decision. "The model considered factors A, B, and C and concluded X." Useful for explainability. But in LLM systems, reasoning is reconstructed after the output. It is narrative, not evidence.
An Audit Trail
A structured, immutable record of: the input received, the policy version that governed, the specific rule applied, the output produced, the human review, and the timestamp. It is reproducible. It is defensible in front of a regulator.
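As a minimal sketch of what such a record could look like — field names are illustrative, not a CogniSwitch schema — a frozen record with a content hash gives immutability and tamper evidence:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: fields cannot be mutated after creation
class AuditRecord:
    input_received: str
    policy_version: str   # exact version in effect, e.g. "4.2"
    rule_applied: str     # specific clause, e.g. "criterion 3.1.4"
    output_produced: str
    reviewed_by: str      # human-in-the-loop role
    timestamp: str

    def digest(self) -> str:
        """Tamper evidence: changing any field changes the hash."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()
```

A production system would additionally need append-only storage and an external anchor for the hashes; the point here is only that every field regulators ask about is captured at decision time, not reconstructed later.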
“A reasoning trail is the model explaining itself. An audit trail is the system proving itself — to someone who doesn’t trust the model’s explanation.”
What Regulators Actually Ask
When OCR audits an AI-assisted prior authorization denial, or a payer-provider dispute reaches legal, these are the six questions that determine whether your AI deployment survives scrutiny.
What policy was active at the moment of the decision?
Not the policy in general — the exact version in effect on that date.
Which specific clause triggered the outcome?
Not "the model said no." Which SOP section, which clinical guideline, which policy rule.
Was the knowledge the AI used clean?
Were there conflicting policies active simultaneously? If so, which one governed — and how was the conflict resolved?
Is the output reproducible?
Submit the same input again. Do you get the same answer? If not, you cannot defend the original decision.
Who reviewed it, and when?
The human-in-the-loop record — not just the AI’s output.
How long is this retained?
HIPAA requires six years. Most AI logging is architected for 30–90 day debugging cycles.
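The reproducibility question above reduces to a simple property: the decision function, pinned to the same policy version, must be a pure function of its input. A hypothetical replay check (the `decide` function below is a toy placeholder, not a real decision engine) might look like:

```python
def decide(input_doc: str, policy_version: str) -> str:
    # Placeholder for a deterministic, policy-pinned decision function.
    # A governed system guarantees this is pure: no sampling, no
    # retrieval ranking that can drift between runs.
    if "medical necessity" not in input_doc:
        return f"denied per criterion 3.1.4 of policy {policy_version}"
    return "approved"


def is_reproducible(input_doc: str, policy_version: str,
                    recorded_output: str) -> bool:
    # Replay the original input against the original policy version
    # and compare against what the audit trail recorded.
    return decide(input_doc, policy_version) == recorded_output
```

A probabilistic pipeline — sampled LLM outputs, similarity-ranked retrieval — cannot pass this check reliably, which is why the replay property has to be designed in rather than bolted on.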
Three Stages. Three Ceilings.
Enterprise AI did not arrive fully formed. It has evolved in stages. Each stage solved a real problem. Each stage also introduced a ceiling — a point beyond which it cannot go without a different architecture underneath it.
Vector RAG
Ceiling: Retrieval is probabilistic. You cannot build a legally defensible audit trail on a probabilistic retrieval substrate.
Fits: Consumer apps, internal search, low-stakes Q&A
LLM + Chain-of-Thought
Ceiling: LLM reasoning is reconstructed, not recorded. A reasoning trail you cannot reproduce is a narrative, not proof.
Fits: Complex decisions, multi-step analysis
Neuro-Symbolic AI
Ceiling: None — the reasoning trail is the execution path. Deterministic, reproducible, and policy-anchored. Reasoning and audit trails converge.
Fits: Regulated industries, compliance-critical deployments
Concrete Scenarios
Scenario: Prior Authorization Denial — A patient’s prior authorization request is denied by an AI agent. The patient’s attorney requests documentation of the decision within 48 hours.
Retrieval-Based System
The agent ran. Here is the output. Here is a log entry with a timestamp.
Governed Reasoning System
The agent traversed the prior auth policy graph, version 4.2 (effective March 1, 2026). Criterion 3.1.4 — "documentation of medical necessity for outpatient procedure" — was evaluated as not met. The knowledge graph contained no conflicting guidance on this criterion as of that date. The traversal path is immutable and reproducible. Human review was completed by [role] at [timestamp].
The Unanswerable Question
Scenario: Payer-Provider Dispute — A provider disputes a claim denial. The payer’s AI made the original determination. The dispute goes to arbitration.
"Which version of the coverage policy was in effect on the date of the determination — and can you prove the AI used that version, and only that version?"
Vector retrieval cannot answer this. It retrieved what was most similar. That is not a policy attribution. That is not admissible.
The Governed Answer
Here is the coverage policy version. Here is the specific exclusion clause. Here is the traversal log showing exactly how the system reached the determination. Here is proof the same input produces the same output. Here is the complete chain from input to decision to source rule.
The Six Components of Defensibility
Regardless of architecture, if your AI makes consequential decisions in a regulated environment, a defensible audit trail must contain all six of the following. If any one is missing, the trail is incomplete — and in a dispute, an incomplete trail is the same as no trail.
Decision Record
The input received, the output produced, and the timestamp. Immutable. Tamper-evident.
Policy Attribution
The exact version of every policy, SOP, or clinical guideline active at the moment of the decision — not the policy in general, the version in effect on that date.
Rule Lineage
The specific clause, criterion, or rule node that governed the outcome. Not "the policy said no." Which section. Which criterion. Which logic branch.
Knowledge State Verification
Confirmation that the knowledge base used was conflict-free at the time of the decision. If contradictory policies existed, which one governed — and why.
Reproducibility Proof
The same input, run today, produces the same output. Deterministic, not probabilistic.
Retention Compliance
The trail is retained for the duration required by applicable regulation. For HIPAA: six years. Not 90 days.
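The "incomplete is the same as none" rule can be sketched as a completeness gate over the six components (component names here are illustrative labels, not a product API):

```python
REQUIRED_COMPONENTS = (
    "decision_record",
    "policy_attribution",
    "rule_lineage",
    "knowledge_state_verification",
    "reproducibility_proof",
    "retention_compliance",
)


def is_defensible(trail: dict) -> bool:
    # An incomplete trail is treated the same as no trail:
    # every component must be present and non-empty.
    return all(trail.get(component) for component in REQUIRED_COMPONENTS)
```

The gate is all-or-nothing by design: a trail with five of six components does not get partial credit, because the missing component is exactly the one a regulator or opposing counsel will ask about.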
The audit trail your compliance team hasn’t seen yet.
CogniSwitch Audit is built on this architecture. It does not produce a log. It does not reconstruct a reasoning trail. It generates an independent, deterministic compliance record — linking every agent decision to the specific SOP clause it was evaluated against, covering 100% of interactions.
If your AI agent is in production, or approaching it, and your compliance team has not yet seen a defensible audit trail — that is the conversation to start.
See how Audit works
Deterministic compliance trails for every AI decision.
Evidence, Not Explanation
Your AI is making consequential decisions. The question is not whether those decisions are good. The question is whether you can prove it — deterministically, reproducibly, to a regulator who was not in the room.
That is the standard. CogniSwitch Audit meets it.
A log tells you what happened. A reasoning trail tells you why. An audit trail proves it — to someone who doesn’t trust either.