Axiograf Foundation · Protocol Infrastructure

Machine-verifiable
enterprise AI action
before it executes.

Enterprise AI systems are beginning to make decisions that move money, approve vendors, access data, and trigger real-world outcomes. As autonomous action scales, a question will become unavoidable — not whether the AI intended the outcome, but whether the action was legitimate: authorised, evaluated against declared policy, and executed within scope at the moment it occurred. Today, no enterprise can answer that question with machine-verifiable proof. Axiograf is the protocol that makes legitimacy provable.

See the gap → Read the Whitepaper ↓
Zero Trust assumptions required for verification
100% Action coverage — no agent action unrecorded
Any AI model — protocol is vendor-neutral

Enterprise Reality

The gap that
must be closed.

Within the next few years, AI agents will negotiate contracts, approve procurement, move financial value, alter regulated data, and make compliance decisions. When something goes wrong — and it will — the central question will not be intent. It will be proof. Four structural gaps in current enterprise architecture mean that proof does not exist.

Four architectural gaps — present regardless of existing tooling
What Axiograf provides
G1
No evaluation record at execution time
Existing systems log what happened — not whether the action was evaluated against a declared constraint set before it executed. The evaluation is implicit, internal, and unverifiable. Audit trails record outcomes; they do not prove admissibility.
Evaluation record at execution time
Every action passes through the evaluator before execution. The Admissibility Record is generated at that moment — cryptographically anchored, tamper-evident. The evaluation is not reconstructed later. It is proven at the time it occurs.
G2
Policy and execution are not bound together
The policy document that authorised an action and the system log that recorded it are separate artefacts, reconciled manually after the fact. There is no structural mechanism binding the constraint version active at execution time to the execution record. Retroactive disputes cannot be resolved — only argued.
Policy and execution bound in one object
The constraint version active at execution is pinned by cryptographic hash inside the same Admissibility Record as the execution outcome. Policy and action are structurally inseparable. No manual reconciliation. No retroactive ambiguity.
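As an illustration only (Python, with hypothetical field names; the protocol does not mandate this particular encoding), pinning a constraint version by hash inside the same record as the execution outcome can be sketched as:

```python
import hashlib
import json

def pin_constraint_version(constraint_set: dict) -> str:
    """Hash the canonical JSON form of a constraint set so the exact
    version active at execution time can be embedded in the record."""
    canonical = json.dumps(constraint_set, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical constraint set and record fields, for illustration.
constraints_v3 = {"version": 3, "max_amount": 50_000, "jurisdiction": "CA-QC"}

record = {
    "action": "approve_vendor_invoice",
    "outcome": "EXECUTED",
    "constraint_hash": pin_constraint_version(constraints_v3),
}

# Any later edit to the constraint set yields a different hash, so the
# binding between policy version and execution is tamper-evident.
```

Because the hash lives inside the same object as the outcome, policy and action cannot drift apart after the fact: reconciling them is a recomputation, not an argument.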
G3
Verification requires access to internal systems
Proving an AI action was authorised means giving a regulator, insurer, or counterparty access to your internal logs, your policy management system, and your AI platform — all proprietary, none sharing a common verification interface. There is no external verification path.
External verification without system access
Any authorised third party — regulator, insurer, counterparty — verifies the Admissibility Record independently using only the public protocol. No access to internal systems required. The record is the evidence.
G4
Compliance frameworks assume human decision-makers
SOC 2, ISO 27001, financial conduct frameworks — all designed for systems where a human authorises consequential actions. They have no native concept of machine-speed autonomous action, no evaluation record requirement, and no cross-party verification mechanism. AI agents operate in a compliance gap these frameworks structurally cannot close.
A compliance framework designed for autonomous action
Axiograf is not an extension of frameworks designed for human decision-makers. It is a protocol substrate built specifically for machine-speed autonomous action — with evaluation, constraint enforcement, and verifiable records as first-class protocol properties.

Just as TLS made secure web transactions possible at scale, Axiograf makes verifiable AI action possible at enterprise scale. AI will not be trusted with consequential autonomous action until that action is provable. Axiograf is what makes it provable.


First Principles

Why Axiograf is
structurally resistant.

Axiograf is not assembled from best practices. It is derived from a formal mathematical framework — CIMD — that proves why constraint-based systems produce stable, verifiable outcomes. The protocol's properties are not claimed. They are derived. Everything follows from three primitives.

Ω,  C,  S' ∈ C(S)
Ω
Possibility Space
All logically conceivable states prior to admissibility evaluation. No empirical or causal commitment — Ω is a prerequisite for exclusion, not a claim about what is feasible.
C
Constraint Operator
Acts solely by elimination. Does not generate states, impose preferences, or imply direction. Defines admissibility only — what is impermissible is excluded, what remains is equally admissible.
Ωᵥ
Viable Subspace
The subset of Ω surviving constraint application. Any transition S' must belong to C(S). All states within Ωᵥ are equally admissible — no preference imposed among them.

The paradigm shift: Constraints are not goals. They define the boundary of admissibility. Non-invariant states cannot persist — admissible transitions occur, and configurations that remain admissible across ordered evaluations constitute stable structure.
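The three primitives can be illustrated in a few lines of Python. This is a toy model, not the formal framework: the state space and the cutoff are arbitrary. The point is that C only removes states, and a set that survives re-application is stable by definition.

```python
# Ω: possibility space — all conceivable states (here, integers 0–99
# standing in for proposed transfer amounts).
omega = set(range(100))

# C: constraint operator. Acts purely by elimination; it generates
# nothing, ranks nothing, and imposes no preference among survivors.
def C(states):
    return {s for s in states if s < 42}   # arbitrary admissibility cutoff

# Ωᵥ: viable subspace — the states that survive constraint application.
omega_v = C(omega)

# Elimination only: C never adds states.
assert omega_v <= omega
# Stability (S*): a configuration already inside Ωᵥ remains admissible
# under re-evaluation. Persistence, not preference, defines structure.
assert C(omega_v) == omega_v
```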

Diagram 01 — Foundation

Ω: The Possibility Space

Every conceivable state prior to evaluation. Before constraint is applied, all logically admissible states exist simultaneously within Ω. Move your cursor over the field to explore.

Ω — Possibility Space
All logically conceivable states prior to any evaluation. The total universe of what could occur.
Ωᵥ — Viable Subspace
States surviving constraint application C(S). The admissible zone — not chosen, merely not eliminated.
S* — Stable Fixed Point
A state where S' ∈ C(S) holds across ordered evaluations — stable structure by persistence alone.
Hover the field · Blue = unconstrained states · Amber = viable subspace Ωᵥ · Green = stable attractor S*
Diagram 02 — Core Mechanic

C: Constraint acts by elimination

Constraint does not generate actions or impose preferences. It defines only what is impermissible. Adjust the slider to watch the viable subspace shift.

✗  Goal-based (not CIMD)
Problem: Goal-based systems maximise toward a target.
They generate actions and impose preferences.
Audit trail is: intent → output (black box).
✓  Constraint-based (CIMD)
C eliminates inadmissible states.
Viable subspace Ωᵥ: ~42% of Ω remains.
Any S' ∈ C(S) is permissible — no preference imposed.
Policy scope 62%

Axiograf Protocol

The mathematics
becomes infrastructure.

Every element of the Axiograf protocol maps directly to a formal CIMD primitive. The protocol's properties — verifiability, replayability, constraint enforcement — are not design goals. They are mathematical consequences. This is what makes them provable, not just claimed.

CIMD — Mathematical Framework
Axiograf — Protocol Implementation
Ω
Possibility Space
All logically conceivable states prior to evaluation. Includes every action an AI agent could propose.
AE
Action Envelope
A structured representation of a proposed enterprise action in the Canonical Fact Language — the shared, typed substrate enabling independent verification across enterprise boundaries.
C
Constraint Operator
Maps possibility space to viable subspace by elimination only. No preferences imposed — only boundaries enforced.
CS
Constraint Set
The enterprise's declared admissibility conditions expressed against the Canonical Fact Language. Defines what is impermissible — by elimination only. Versioned and immutable once published.
Ωᵥ
Viable Subspace
States surviving constraint application. All equally admissible. The protocol does not select among them.
AA
Admissible Actions
Actions whose fact representation satisfies all declared constraints. The protocol does not select among them — no preference is imposed. Any admissible action may proceed.
S'∈C(S)
Admissibility-Preserving Transition
Each transition must belong to the viable subspace generated by the current state's constraint application. Ordering is logical, not temporal.
AR
Admissibility Record
The signed, cryptographically anchored result of constraint evaluation: ADMISSIBLE, BLOCKED, or ESCALATE. The core protocol artefact — machine-verifiable proof of evaluation.
S*
Stable Fixed Point
A state where S' ∈ C(S) holds across ordered evaluations. Equilibrium reached without external direction.
R
Cryptographic Receipt
Signed, verifiable record of the complete lifecycle. Tamper-evident. Verifiable by any third party — permanent.

Placement

Infrastructure,
not another tool.

Axiograf is not an AI model, a governance dashboard, or a compliance platform. It is a protocol substrate — narrow, invariant, and positioned at the one point where every AI-proposed action must pass before execution. Before anything acts, Axiograf evaluates it.

Layer 4
AI Agents
Your AI models, orchestration engines, and automation tools. Any system generating actions on behalf of your enterprise. Agents propose actions — they do not authorise them.
  • Submits proposed Action Envelopes to the protocol layer
  • Receives structured verdict: ADMISSIBLE / BLOCKED / ESCALATE
  • No access to constraint logic — separation is structural
Model-agnostic Vendor-neutral Replaceable
Layer 3
Axiograf Protocol
The invariant substrate. Receives proposed actions, evaluates them against the active constraint set, issues cryptographic Admissibility Records, and makes the full record verifiable to third parties. Does not generate, prefer, or direct actions — enforces admissibility boundaries only.
  • Evaluates every action against the constraint set before execution
  • Issues tamper-evident, cryptographically signed Admissibility Records
  • Exposes verification API — no enterprise access required to verify
  • Constraint versions immutable once published — full history preserved
Axiograf-defined Open standard Immutable
Layer 2
Policy Layer
Enterprise-defined constraint sets. Your admissibility conditions — authorisation scope, schema references, and data governance rules — expressed against the Canonical Fact Language. Legal and compliance teams define what is impermissible. All versions immutable once published.
  • Authorisation scope declarations
  • Schema and data classification rules
  • Escalation paths for edge cases
Enterprise-defined Versioned Auditable
Layer 1
Enterprise Systems
Your ERP, CRM, procurement platforms, compliance tools, and data infrastructure. These systems receive only actions admitted by the protocol layer. Nothing reaches this layer without a cryptographic Admissibility Record confirming it passed constraint evaluation.
  • Receives only protocol-admitted actions
  • Every incoming action has a verifiable record
  • Existing systems unchanged — protocol is additive
Existing infrastructure Unchanged

Enterprise Briefs

One protocol,
understood two ways.

The same formal properties — verifiability, constraint enforcement, replayability — answer different urgent questions depending on who is facing them. Architects need to know where it fits. Legal teams need to know what it proves.

The three infrastructure components that make Axiograf work — a canonical fact language, a structured action object, and a stateless evaluator. Each has a defined role and a defined contract. None requires trust in any other party to function.

A · 01 The Canonical Fact Language A typed, bounded vocabulary of enterprise fact types. The shared substrate that makes constraints universally composable.
What it is
Typed fact schema
A canonical, typed representation of enterprise reality — not a programming language, not free-form data. A formally defined vocabulary of enterprise fact types: agent identity, resource type, quantity, jurisdiction, timestamp, prior action reference, and context. Defined by the protocol. Universal across all enterprises and all constraint authors.
  • Agent identity — cryptographically bound URI of the acting system
  • Resource type — what the action addresses, typed and schema-referenced
  • Quantity, jurisdiction, timestamp — bounded numeric and categorical fields
  • Prior action reference — typed field enabling trajectory evaluation
  • Context envelope — operational metadata declared by the submitting agent
Protocol-defined Typed Bounded
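One possible shape for the typed fact vocabulary, sketched as a Python dataclass. The field names follow the list above, but this is an illustration, not the protocol schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CanonicalFact:
    """Illustrative typed fact: bounded, schema-like fields rather than
    free-form data. Field names and types are a sketch only."""
    agent_identity: str                    # cryptographically bound URI of the acting system
    resource_type: str                     # what the action addresses, schema-referenced
    quantity: float                        # bounded numeric field
    jurisdiction: str                      # bounded categorical field
    timestamp: str                         # when the fact was asserted
    prior_action_ref: Optional[str] = None # typed reference enabling trajectory evaluation

fact = CanonicalFact(
    agent_identity="urn:agent:ap-bot",
    resource_type="vendor_invoice",
    quantity=1200.50,
    jurisdiction="CA-QC",
    timestamp="2026-03-11T09:00:00Z",
)
```

Because every constraint author writes against the same typed fields, a constraint written by one party is evaluable by any other without translation.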
Why it matters
The SQL of admissibility
The canonical fact language is the architectural insight that makes Axiograf genuinely novel. OPA, Cedar, runtime verification — none have a canonical fact language as a first-class protocol component. They evaluate policies against system-specific data structures. That is why they are enterprise-specific. Axiograf is cross-enterprise universal because every constraint, regardless of author, references the same fact types.
  • A regulator's constraint is directly evaluable by any conformant enterprise
  • Constraints from different authors compose because they share a substrate
  • The language is expressive enough for enterprise reality, yet bounded enough to remain decidable
Cross-enterprise Composable Decidable
Adaptor layer
Enterprise translation
Enterprise operational systems — ERP, CRM, procurement platforms — do not natively speak the canonical fact language. The adaptor layer translates. Each adaptor converts system-specific output into conformant canonical facts before an Action Envelope is assembled. The adaptor is enterprise-defined and extensible. Third parties build adaptors for SAP, Workday, Stripe, or any system — the protocol does not change.
  • Adaptor is enterprise-defined — the protocol remains unchanged
  • Any system with a conformant adaptor participates in the protocol
  • Adaptor conformance is independently verifiable against the canonical schema
Enterprise-defined Extensible Verifiable
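A minimal sketch of what one such adaptor could look like. The SAP-style field names on the input side, and the canonical field names on the output side, are hypothetical:

```python
def sap_invoice_to_canonical(row: dict) -> dict:
    """Hypothetical adaptor: translate one SAP-style invoice row into
    canonical fact fields. Names on both sides are illustrative."""
    return {
        "agent_identity": f"urn:agent:{row['created_by_system']}",
        "resource_type": "vendor_invoice",
        "quantity": float(row["amount"]),          # normalise to a bounded numeric type
        "jurisdiction": row["company_code_region"],
        "timestamp": row["posting_date"],
    }

sample_row = {
    "created_by_system": "sap-ap-bot",
    "amount": "1200.50",
    "company_code_region": "CA-QC",
    "posting_date": "2026-03-11",
}
fact = sap_invoice_to_canonical(sample_row)
```

The translation logic is enterprise-owned and replaceable; only its output must conform to the canonical schema, which is what makes adaptor conformance independently checkable.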
Source: Axiograf Protocol, §3 — Canonical Fact Language · §4 — Adaptor Layer architecture
A · 02 The Action Envelope A self-contained, typed object representing one proposed enterprise action. Everything the evaluator needs — nothing more.
Structure
Self-contained object
The Action Envelope is the unit of submission to the evaluator. It contains all canonical facts required for admissibility evaluation — assembled by the submitting agent before submission. The evaluator never fetches, resolves references, or makes external calls. It receives a complete fact set and evaluates against it in a single stateless computation.
  • Agent identity — who is acting
  • Action type and canonical fact payload — what is being proposed
  • Constraint version reference — which constraint set applies
  • Trajectory facts (if applicable) — prior actions as typed values, not references to retrieve
  • Timestamp and context — when and under what operational conditions
Typed Versioned Complete
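The envelope's self-contained shape can be sketched as a frozen dataclass. Fields follow the list above; names and the hash placeholder are illustrative, not the specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionEnvelope:
    """Illustrative unit of submission: everything the evaluator needs,
    assembled by the agent. The evaluator never fetches or resolves."""
    agent_identity: str       # who is acting
    action_type: str          # what is being proposed
    facts: dict               # canonical fact payload
    constraint_version: str   # which constraint set applies (pinned by hash)
    trajectory: tuple = ()    # prior actions as typed values, not references
    timestamp: str = ""       # when, and under what operational conditions

env = ActionEnvelope(
    agent_identity="urn:agent:proc-bot",
    action_type="approve_purchase_order",
    facts={"quantity": 1200.0, "jurisdiction": "CA-ON"},
    constraint_version="sha256:<hash-of-active-constraint-set>",
)
```

The frozen declaration mirrors the design intent: once assembled and submitted, the envelope is a fixed fact set, not a live reference into enterprise systems.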
Design principle
Statefulness in the agent
The statefulness lives in the agent and the adaptor — which must assemble a complete envelope before submitting. The evaluator itself sees only a complete, self-contained fact set. This is the critical design choice: it preserves evaluator purity while allowing trajectory-aware constraints. An agent submitting a horizon-N trajectory assembles all N action facts into the envelope. The evaluator evaluates them as a complete set — no sequencing, no lookup, no state.
  • Agent assembles complete trajectory before submission
  • Evaluator receives complete facts — no external calls ever made
  • Logical statelessness is preserved regardless of envelope complexity
Stateless eval Trajectory-aware Deterministic
Source: Axiograf Protocol, §4 — Action Envelope specification · §5.1 — Stateless evaluation design
A · 03 The Distributed Stateless Evaluator Trustworthy without requiring trust in any party that operates it. Deterministic. Parallelisable. Policy-blind.
Core design
Two inputs, one output
The evaluator receives exactly two inputs: an Action Envelope and a Constraint Object. It produces exactly one output: an Admissibility Record containing ADMISSIBLE, BLOCKED, or ESCALATE. It retains no state between evaluations. It has no knowledge of what the constraint means, what enterprise authored it, or what operational context surrounds the action. It only determines whether the facts in the Envelope satisfy the predicate in the Constraint.
  • Input: Action Envelope + Constraint Object
  • Output: Admissibility Record — ADMISSIBLE / BLOCKED / ESCALATE
  • No state retained between evaluations
  • No knowledge of constraint meaning or enterprise context
  • Same inputs always produce same output — on any infrastructure, by any operator
Stateless Policy-blind Deterministic
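The two-inputs, one-output contract can be sketched as a pure function. The predicate here (a quantity ceiling and a required-fact check) is a stand-in for the constraint language, and the field names are illustrative:

```python
def evaluate(envelope: dict, constraint: dict) -> dict:
    """Pure function: (Action Envelope, Constraint Object) -> Admissibility
    Record. No I/O, no retained state, no knowledge of what the
    constraint *means* — only whether the facts satisfy the predicate."""
    facts = envelope["facts"]
    if any(k not in facts for k in constraint["required_facts"]):
        verdict = "ESCALATE"      # incomplete fact set: route to a human
    elif facts["quantity"] <= constraint["max_quantity"]:
        verdict = "ADMISSIBLE"
    else:
        verdict = "BLOCKED"
    return {"verdict": verdict, "constraint_version": constraint["version"]}

constraint = {"required_facts": ["quantity"], "max_quantity": 50, "version": "cs-v3"}
record = evaluate({"facts": {"quantity": 30}}, constraint)
```

Because the output is fully determined by the two inputs, re-running the same pair on any conformant instance reproduces the record exactly, which is what makes the evaluation replayable by a third party.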
Why policy-blind
The source of trustworthiness
Policy-blindness is not a limitation. It is the source of the evaluator's trustworthiness. An evaluator that understood policy context could be manipulated through that context. An evaluator that only processes typed facts against a logical predicate cannot be. Its output is fully determined by its inputs. Given the same inputs, any evaluator instance — operated by any party, on any infrastructure — produces the same output. This is what makes cross-enterprise verification possible without a central authority.
  • Cannot be manipulated through context it does not receive
  • Output verifiable by any party — run the same inputs, get the same result
  • No central evaluator required — any conformant instance is equivalent
Tamper-resistant Independently verifiable
Distribution
No bottleneck by design
Because the evaluator is stateless and deterministic, it can be deployed as distributed infrastructure without consensus requirements. Multiple evaluator instances process different actions simultaneously. No evaluator instance holds privileged state. No central evaluator is required. High-frequency autonomous agents operate at machine speed because admissibility evaluation is a distributed, parallelisable computation — not a serialised approval queue.
  • Horizontally scalable — no consensus between instances required
  • No privileged central instance — any conformant deployment is equivalent
  • Evaluation throughput scales with agent volume, not approval queue depth
Distributed Horizontally scalable No bottleneck
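The no-coordination property can be demonstrated with a stand-in evaluator fanned out across a thread pool. Nothing here is protocol-specified; it only shows that pure, stateless evaluations need no shared state or ordering between workers:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(envelope, constraint):
    """Minimal stand-in evaluator: pure, stateless, deterministic."""
    ok = envelope["facts"]["quantity"] <= constraint["max_quantity"]
    return "ADMISSIBLE" if ok else "BLOCKED"

constraint = {"max_quantity": 50}
envelopes = [{"facts": {"quantity": q}} for q in (10, 99, 7)]

# No consensus, no privileged instance: each evaluation is an independent
# computation, so throughput scales with worker count, not queue depth.
with ThreadPoolExecutor(max_workers=3) as pool:
    verdicts = list(pool.map(lambda e: evaluate(e, constraint), envelopes))
```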
Source: Axiograf Protocol, §5 — The Distributed Stateless Evaluator · §5.1 Stateless and Policy-Blind · §5.2 Distributed Without Bottleneck

The liability exposure created by AI-driven enterprise action — and how the protocol transforms each phase from assertion to provable fact.

C · 01 Where liability lives in AI-driven enterprise action Each phase carries a distinct legal question. The difference is whether your answer is an assertion or a provable fact.
Phase 01
Intent & authorisation
Without Axiograf
No structured record exists of what the AI agent was authorised to do at the time of action. Authorisation is reconstructed from policy documents, emails, and staff recollection. Defensibility depends on documentation culture, not system design.
With Axiograf
Authorisation scope is declared in the constraint set before execution and is immutable once signed. The constraint version active at action time is pinned to the record. Intent is a system record — not a reconstruction.
Phase 02
Constraint evaluation
Without Axiograf
No evidence the action was evaluated against current policy before execution. The AI "checked the rules" in a process that is opaque and not independently verifiable. A challenger can allege the check did not occur.
With Axiograf
Evaluation is deterministic and replayable. Any third party can re-run the evaluation using the constraint version that was active — and will get the same result. The evaluation cannot be disputed on procedural grounds.
Phase 03
Execution record
Without Axiograf
Action log exists but is not linked to the authorisation record. Two documents must be reconciled manually. Gaps between authorised scope and actual execution may not be discoverable until challenged.
With Axiograf
Execution outcome is appended to the same Action Envelope as intent and constraint declarations. One signed document. The entire lifecycle is bound together — no reconciliation required.
Phase 04
Audit & dispute
Without Axiograf
Audit response is an assertion. "We believe the action was within scope." The burden of proof falls on the enterprise to reconstruct, not on the challenger to disprove. Litigation risk is highest here.
With Axiograf
Audit response is a cryptographic record. Share the record ID. The challenger verifies independently. The enterprise's assertion is not the evidence — the protocol record is the evidence.
Source: Axiograf Protocol, §4 — Mandatory Enforcement, Replayability · §5 — Receipt and Verification architecture
C · 02 The Axiograf evidence chain Five elements, each legally distinct, all bound in a single signed record.
01
Agent identity
The verified URI of the AI agent that proposed the action. Not a human assertion — a cryptographically bound agent identifier tied to the operational context.
02
Constraint version at time of action
The exact policy version active when the action was evaluated. Pinned by cryptographic hash. Cannot be changed retroactively.
03
Evaluation result
Output of constraint evaluation: ADMISSIBLE, BLOCKED, or ESCALATE — with the constraint clauses applied and facts evaluated. Deterministically replayable by any third party.
04
Execution outcome
What actually occurred — appended to the same object as the authorisation record. Any divergence between authorised scope and actual execution is structurally detectable.
05
Cryptographic receipt
Ed25519 signature over the entire record. Tamper-evident. Verifiable by any party — including regulators, trading partners, and courts — without access to the enterprise's systems.
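The tamper-evidence property can be sketched in stdlib Python. The protocol specifies Ed25519; HMAC-SHA256 is used below purely as a stand-in so the sketch runs without external libraries, and the record fields are illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"   # stand-in; the protocol specifies Ed25519 keypairs

def sign_record(record: dict) -> str:
    """Sign the canonical form of the whole record. Changing any field
    invalidates the signature: tamper evidence, not secrecy.
    (HMAC-SHA256 here is an illustrative stand-in for Ed25519.)"""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

record = {
    "agent": "urn:agent:ap-bot",
    "constraint_hash": "<hash-of-active-constraint-set>",
    "verdict": "ADMISSIBLE",
    "outcome": "EXECUTED",
}
sig = sign_record(record)

# Divergence between the signed record and a later edit is structurally
# detectable: the recomputed signature no longer matches.
tampered = dict(record, outcome="EXECUTED_OVER_SCOPE")
```

With a real Ed25519 keypair, verification needs only the public key and the record itself, which is why a regulator or court can check it without any access to the enterprise's systems.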
Source: Axiograf Protocol, §5.4 — Receipt Architecture · Ed25519 signature standard

How two enterprises can verify each other's AI actions without trusting each other's judgment — using only the shared protocol as substrate.

CE · 01 Cross-enterprise trust without a central authority Neither enterprise trusts the other's policy judgment. Both trust the shared protocol. Legitimacy is verified, not assumed.
Enterprise A — initiating party
01
Agent proposes action
Enterprise A's AI agent generates a proposed action expressed in the Canonical Fact Language common to both parties.
Action Envelope — shared schema
02
Evaluated against A's constraints
Enterprise A's constraint set is applied. The evaluation is deterministic — the same Action Envelope, evaluated against the same constraint version, always produces the same result.
Constraint set — A's policy
03
Signed Admissibility Record issued
A cryptographically signed, tamper-evident record is produced. It asserts only that Enterprise A evaluated the action against A's declared constraints.
Record — cryptographically signed
Axiograf
Protocol

Shared
substrate
Enterprise B — receiving party
04
Receives the same Action Envelope
Enterprise B receives the Action Envelope in the Canonical Fact Language. It does not rely on A's evaluation result. It runs its own.
Same schema — independent read
05
Evaluated against B's constraints
B applies its own constraint set independently. If B's constraints are satisfied and A's record integrity is verified — the interaction proceeds. No negotiation. No central authority.
Constraint set — B's policy
06
Interaction proceeds — or doesn't
Both records exist independently. Disputes resolved by replaying the evaluation against historical constraint versions. The protocol record is the evidence.
Both records verifiable — permanently
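The dual-evaluation pattern above can be sketched in a few lines. The constraints and fact fields are hypothetical; the point is that each party runs its own evaluation over the same shared fact schema, and neither relies on the other's judgment:

```python
def evaluate(envelope, constraint_set):
    """Stand-in evaluator: deterministic, so each party can replay the
    other's evaluation rather than trust its result."""
    return all(rule(envelope["facts"]) for rule in constraint_set)

# One Action Envelope in the shared fact schema, read by both parties.
envelope = {"facts": {"quantity": 30, "jurisdiction": "CA-ON"}}

# Each enterprise authors its own constraints against the same fact types.
a_constraints = [lambda f: f["quantity"] <= 50]                       # A's policy
b_constraints = [lambda f: f["jurisdiction"] in {"CA-ON", "CA-QC"}]   # B's policy

a_ok = evaluate(envelope, a_constraints)   # A's own evaluation
b_ok = evaluate(envelope, b_constraints)   # B's independent evaluation
proceed = a_ok and b_ok                    # no negotiation, no central authority
```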
Core insight "Axiograf enables a new kind of commercial relationship: one in which legitimacy is verified, not assumed — and verified without requiring trust in any counterparty." — Axiograf Protocol, §6
Source: Axiograf Protocol, §6 — Cross-Enterprise Trust Without Central Authority · Canonical Fact Language substrate

Publication

The formal
specification.

The complete specification — from the CIMD mathematical foundations through to the protocol architecture, distributed stateless evaluator, cross-enterprise verification model, and cryptographic receipt structure. The protocol is an open standard. Read it, verify it, build on it.

Axiograf Protocol · 2026
Axiograf: A Protocol for Machine-Verifiable Enterprise AI Action
This paper formalises the Axiograf protocol — a structured schema for AI agent actions that enables cryptographic verification, constraint enforcement, and enterprise audit trail generation. The core properties are derived from the CIMD framework, ensuring that verifiability and auditability are structural features of the protocol rather than procedural overlays.
Download PDF Axiograf_Pub_20260311.pdf

Contact

Founding
members.

Invitation

Axiograf is a formal protocol proposal exploring how enterprise AI action may become machine-verifiable, constraint-enforced, and independently auditable. At this stage it is not production-deployable — the objective of this publication is to establish first principles and invite early collaboration.

We welcome interest from enterprise architects, researchers, regulators, and infrastructure builders who recognise the emerging need for verifiable autonomous decision systems and wish to contribute to shaping the specification and evaluation pathways.

If you are working on enterprise AI governance, formal methods, or protocol standardisation — we want to hear from you.

Live demonstration

Carrier eligibility
certification — corridor pilot

Real-time admissibility proof for load commitment in the Quebec–Ontario corridor. Nine predicates across four domains. Toggle Pass / Fail — expand any row for full protocol depth.

axiograf · carrier-eligibility · corridor-pilot · v0.1
Axiograf Protocol demo