April 6, 2026 · 12 min read

The AI Governance Stack: Verified Data In, Governed Decisions Out, Cryptographic Proof at Every Step

Every enterprise deploying AI agents faces three problems simultaneously: agents hallucinate data, decisions lack oversight, and there's no legal proof of what happened. We built two platforms that solve all three.

The Problem: Three Failures, One Root Cause

A lending agent approves a €200,000 loan. Six months later, the borrower defaults. The regulator asks three questions:

  1. What data did the agent use? Was it the real credit report, or did it hallucinate a credit score?
  2. Who authorized the decision? Did a human review it? Was there a policy in place?
  3. Can you prove it? Not a database log — cryptographic, tamper-proof, independently verifiable proof.

Most companies can't answer any of these. The agent pulled data from an embedding index that was three months stale. Nobody reviewed the decision. The only "proof" is a log file the company itself controls.

FINRA 2026, the EU AI Act (applicable from August 2026), and SR 11-7 all demand answers to these questions. Under the EU AI Act alone, penalties range from €7.5M to €35M.

The Solution: A Two-Layer Stack

Qanatix — Verified Input

Push business data from ERPs, databases, and APIs. Agents query structured, verified data — not hallucinated content. MCP native. <20ms latency.

Aira — Governed Output

Policies evaluate every action. Humans approve high-stakes decisions. Cryptographic receipts prove it all. Ed25519 + RFC 3161 timestamps.

Layer 1: Verified Data with Qanatix

The first failure point is input. When an AI agent needs customer data, it typically searches a vector database built from embeddings. Those embeddings are stale, lossy, and unverifiable.

Qanatix replaces this with deterministic data access. You push structured data in and agents query it through MCP or REST. No embeddings. No chunking. No hallucination.

# Push customer data from your CRM
import qanatix

qx = qanatix.Qanatix("sk_live_...")
qx.push("customers", [
    {"id": "C-4521", "name": "Maria Schmidt", "credit_score": 742,
     "annual_income": 85000, "existing_debt": 12000, "status": "active"}
])

# Agent queries verified data
result = qx.search("customers", "Maria Schmidt credit score")
# Returns: {"credit_score": 742, "source": "CRM", "updated_at": "2026-04-06T08:00:00Z"}
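The difference from embedding search can be pictured in a few lines: a deterministic store returns the exact stored record or nothing, never an approximate neighbor. This toy store is purely illustrative — it is not the Qanatix implementation, and the filter syntax is an assumption for the sketch.

```python
# Toy deterministic store: exact records in, exact records out.
# Illustrative only — not the actual Qanatix implementation.
class DeterministicStore:
    def __init__(self):
        self.collections = {}

    def push(self, collection, records):
        rows = self.collections.setdefault(collection, {})
        for record in records:
            rows[record["id"]] = record  # keyed by id, last write wins

    def search(self, collection, **filters):
        # Return only records whose fields match the filters exactly —
        # no embeddings, no chunking, no nearest-neighbor approximation.
        return [r for r in self.collections.get(collection, {}).values()
                if all(r.get(k) == v for k, v in filters.items())]

store = DeterministicStore()
store.push("customers", [{"id": "C-4521", "name": "Maria Schmidt",
                          "credit_score": 742}])
print(store.search("customers", name="Maria Schmidt")[0]["credit_score"])  # → 742
```

The point of the sketch: a stale or missing record produces an empty result, not a plausible-looking guess.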

Layer 2: Governed Decisions with Aira

The agent has verified data. Now it makes a decision. This is where Aira's governance pipeline takes over.

Step 1: Policy Evaluation

Before the action is notarized, Aira's policy engine evaluates it. Three modes:

Rules mode — deterministic conditions, instant:

# Policy: "All loan decisions over €100,000 require human approval"
# Conditions: action_type == "loan_decision" AND amount > 100000
# Decision: require_approval
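Rules mode reduces to a pure function over the action's fields. The sketch below mirrors the policy above; the rule syntax is illustrative, not Aira's actual policy format.

```python
# Minimal rules-mode evaluator: deterministic conditions, instant decision.
# Rule structure is illustrative — not Aira's actual policy format.
def evaluate(action: dict) -> str:
    # Policy: "All loan decisions over €100,000 require human approval"
    if action.get("action_type") == "loan_decision" and action.get("amount", 0) > 100_000:
        return "require_approval"
    return "allow"

print(evaluate({"action_type": "loan_decision", "amount": 200_000}))  # → require_approval
print(evaluate({"action_type": "loan_decision", "amount": 50_000}))   # → allow
```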

AI mode — a single LLM evaluates against natural language policy:

# Admin writes: "Any action involving customer PII or financial transactions
# over €5,000 must be reviewed by the compliance team."
# Claude evaluates each action against this policy automatically.

Consensus mode — multiple LLMs evaluate independently:

# Claude: "DENY — loan amount exceeds 2x income threshold"
# GPT-5.2: "DENY — credit score 742 borderline for €200K"
# Gemma 4:  "REVIEW — needs more context on existing debt"
# Disagreement detected → held for human review
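The consensus step comes down to a small aggregation rule: unanimous verdicts pass through, anything else escalates to a human. A sketch — the verdict labels follow the example above, but the aggregation logic is an assumption, not Aira's documented behavior:

```python
# Consensus aggregation sketch: escalate unless all evaluators agree.
# Illustrative logic — not Aira's documented aggregation behavior.
def aggregate(verdicts: list[str]) -> str:
    if len(set(verdicts)) == 1:
        return verdicts[0]          # unanimous → adopt the shared verdict
    return "held_for_human_review"  # any disagreement → a human decides

print(aggregate(["DENY", "DENY", "REVIEW"]))   # → held_for_human_review
print(aggregate(["ALLOW", "ALLOW", "ALLOW"]))  # → ALLOW
```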

Step 2: Human Approval

When a policy triggers require_approval, the action is held. Approvers receive an email with Approve/Deny links. No receipt is issued until a human decides.

from aira import Aira

aira = Aira(api_key="aira_live_xxx")

receipt = aira.notarize(
    action_type="loan_decision",
    details="Approved €200,000 loan for Maria Schmidt.",
    agent_id="lending-agent",
)
# receipt.status == "pending_approval"
# Compliance team gets email → clicks Approve → receipt minted

Step 3: Cryptographic Receipt

After approval, Aira mints a receipt: Ed25519 signature, RFC 3161 trusted timestamp, payload hash. Tamper-proof, independently verifiable, court-admissible.
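A receipt's tamper-evidence starts with the payload hash: recompute it over a canonical serialization of the payload and compare. The sketch below shows that check with SHA-256 over canonical JSON; the field names are assumptions, and a real verification would also check the Ed25519 signature and the RFC 3161 timestamp, which this sketch omits.

```python
import hashlib
import json

# Hash the payload in canonical form (sorted keys, fixed separators) so
# any byte-level change is detectable. Field names are illustrative.
def payload_hash(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

receipt = {"payload": {"action_type": "loan_decision", "amount": 200_000}}
receipt["payload_hash"] = payload_hash(receipt["payload"])

# Verification: recompute and compare.
assert payload_hash(receipt["payload"]) == receipt["payload_hash"]

receipt["payload"]["amount"] = 500_000  # tamper with the payload
print(payload_hash(receipt["payload"]) == receipt["payload_hash"])  # → False
```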

The Full Pipeline

import qanatix
from aira import Aira

# Layer 1: Verified data
qx = qanatix.Qanatix("sk_live_...")
customer = qx.search("customers", "applicant Maria Schmidt")

# Layer 2: Agent decides
decision = evaluate_loan(customer)

# Layer 3: Aira governs
aira = Aira(api_key="aira_live_xxx")
receipt = aira.notarize(
    action_type="loan_decision",
    details=f"Decision: {decision['result']} for €200K loan.",
    agent_id="lending-agent",
)

# Automatically:
# 1. Policy engine evaluates (receipt for evaluation)
# 2. Held for human approval (email sent)
# 3. Compliance approves (authorization receipt)
# 4. Action notarized (Ed25519 + RFC 3161 receipt)

What This Replaces

Approach             Verified Data   Policy Eval              Human Approval   Crypto Proof
RAG + vector DB      No              No                       No               No
Logging (Langfuse)   No              No                       No               No
SidClaw              No              Static rules             Yes              No
Auth (Okta)          No              No                       No               No
Qanatix + Aira       Yes             Rules + AI + Consensus   Yes              Ed25519 + RFC 3161

Regulatory Mapping

Each of the regulator's three questions maps to a layer of the stack. Qanatix answers "what data did the agent use?" with verified, timestamped records; Aira's policies and approval workflow answer "who authorized the decision?"; and the cryptographic receipts answer "can you prove it?"

Getting Started

pip install qanatix aira-sdk

# Push your data
import qanatix
qx = qanatix.Qanatix("sk_live_...")
qx.push("customers", your_data)

# Govern your agent's actions
from aira import Aira
aira = Aira(api_key="aira_live_xxx")
receipt = aira.notarize(action_type="loan_decision", details="...", agent_id="lending-agent")
Try Aira — free · Try Qanatix — free

© 2026 Softure UG (haftungsbeschränkt) · Berlin, Germany