Decision Traces & AI-Readiness

AI agents are increasingly taking real actions in production systems — booking flights, approving loans, executing trades, resolving tickets. For them to be trustworthy, two conditions have to hold:

  1. Before acting, the agent needs the full context of similar prior decisions — precedent, exceptions, policy history. Without it, the agent is guessing.
  2. After acting, someone needs to reconstruct what the agent did and why — for audit, for dispute resolution, for debugging the agent itself.

A traditional CRUD database fails at both. The state shows where things ended up, not how they got there. Decision context lives in Slack threads, email chains, and experienced employees’ heads.

A decision trace is the full narrative behind each business action: what command came in, what the system state was at that moment, what the decide function’s inputs were, what it returned, what component version executed it, and what the surrounding context looked like.

In EvidentSource, decision traces aren’t a bolt-on — every event is one.
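Concretely, such a trace can be pictured as a single record carrying all of those fields. A minimal Python sketch — every field name here is an illustrative assumption, not EvidentSource's actual event schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions chosen to mirror
# the narrative above, not EvidentSource's real API.
@dataclass(frozen=True)
class DecisionTrace:
    command: str                # the command that came in
    causation_id: str           # links this event back to that command
    caller_identity: str        # signed at the API boundary
    component_version: str      # which component version executed the decision
    read_revisions: dict = field(default_factory=dict)  # state views queried, by revision
    effective_time: str = ""    # the time the event claims to apply
    transaction_time: str = ""  # the time the event was recorded
    correlation_id: str = ""    # links to the upstream request

trace = DecisionTrace(
    command="ApproveLoan",
    causation_id="cmd-42",
    caller_identity="agent:underwriter-bot",
    component_version="loan-decider@1.3.0",
    read_revisions={"applicant-view": 17},
    effective_time="2024-06-01T09:00:00Z",
    transaction_time="2024-06-01T09:00:02Z",
    correlation_id="req-7f3a",
)
print(trace.causation_id)
```

Because the record is immutable (`frozen=True`), a stored trace cannot drift from what was actually decided.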

Because of how EvidentSource is structured, every appended event carries (or is directly linkable to):

  • The command that triggered it (via causation ID)
  • The caller identity (from metadata signed at the API boundary)
  • The component version that produced it (from State Change provenance)
  • The state the component read (via the revisions of the state views queried — the decision’s inputs)
  • The effective time it claims and the transaction time it was recorded at
  • The correlation context linking it to the upstream request

Replay is lossless. You can reconstruct, at any point in the past, exactly what any component saw when it made a decision.
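Point-in-time reconstruction amounts to folding events up to a cutoff revision and no further. A hedged sketch under assumed names (a toy ticket-status view; nothing here is EvidentSource's actual replay API):

```python
# Minimal point-in-time replay sketch. `events` is a revision-ordered list
# of (revision, payload) pairs; we stop folding at the cutoff revision to
# rebuild exactly the view a component saw when it decided.
def view_as_of(events, revision):
    """Rebuild a ticket-status view as of the given revision."""
    view = {}
    for rev, (ticket_id, status) in events:
        if rev > revision:
            break  # later events did not exist for that decision
        view[ticket_id] = status
    return view

events = [
    (1, ("T-1", "open")),
    (2, ("T-2", "open")),
    (3, ("T-1", "resolved")),
]
print(view_as_of(events, revision=2))  # → {'T-1': 'open', 'T-2': 'open'}
print(view_as_of(events, revision=3))  # → {'T-1': 'resolved', 'T-2': 'open'}
```

The same fold with a different cutoff answers "what did the component see then?" versus "what is true now?" — no separate audit log required.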

  • Context before acting. The agent can query the decision-trace history for similar subjects: how were past cases resolved, what policies applied, which exceptions were carved out. That’s the precedent you want an agent to consult before it writes to a ledger.
  • Auditability after acting. Regulators will not ask “what did the AI decide?” — they will ask “why, and what did it know when it decided?” EvidentSource answers both without you instrumenting anything special.
  • The right shape for LLM generation. The programming model is pure functions. LLMs produce correct decide and evolve functions with far less prompting than they need to produce correct SQL or ORM code, because the function signatures constrain the problem.
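The decide/evolve shape mentioned above can be sketched as two pure functions. The signatures follow a common event-sourcing convention and the domain is invented for illustration; this is not EvidentSource's exact API:

```python
# Hedged sketch of the decide/evolve programming model: both functions are
# pure, so the same inputs always produce the same outputs — the property
# that makes them easy for an LLM to generate and easy to test.
def decide(state, command):
    """Given current state and a command, return the resulting events (or reject)."""
    kind, amount = command
    if kind == "withdraw":
        if amount > state:
            raise ValueError("insufficient funds")  # deterministic rejection
        return [("debit", amount)]
    return [("credit", amount)]

def evolve(state, event):
    """Fold one event into the state view."""
    kind, amount = event
    return state + amount if kind == "credit" else state - amount

events = decide(100, ("withdraw", 40))
print(events)  # → [('debit', 40)]
print(evolve(100, events[0]))  # → 60
```

Because neither function touches a database or a clock, the constrained signatures leave an LLM far fewer ways to be wrong than open-ended SQL does.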

See MCP for AI agents for how agents can reach EvidentSource directly via the Model Context Protocol.