Verifiable Agents

Signed rules. Runtime enforcement. Verifiable proof.

Middleware that forces AI agents to follow signed rules, blocks violations at runtime, and produces tamper-evident logs anyone can audit.

Agents have permissions. Nobody has proof.

AI agents are beginning to manage economic activity across payments, trading, and procurement—with zero behavioral accountability. MCP handles tool I/O. A2A handles agent messaging. AP2 handles payments. Nobody handles proof.

Until now.

The trust stack

Together, these turn “policy” from a promise into a proof.

Identity

DID

Decentralized identifiers. Who is this agent? Cryptographic binding to keys and lineage.

Covenant

Behavioral spec

What it will do, and what it won't. Signed, immutable constraints. Only narrowable, never loosened.
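The "only narrowable" property can be checked mechanically. A minimal sketch, assuming a covenant is represented as a set of permitted actions plus numeric limits (this representation is illustrative, not the actual Nobulex format):

```python
# Hypothetical sketch: covenant "narrowing" as a subset check.
# A derived covenant may only tighten limits and drop permissions
# relative to its parent -- never loosen limits or add permissions.

def narrows(parent: dict, child: dict) -> bool:
    # The child may not permit actions the parent does not permit.
    if not set(child["permit"]) <= set(parent["permit"]):
        return False
    # Every numeric limit must be at least as strict as the parent's.
    return all(
        child["limits"].get(k, v) <= v
        for k, v in parent["limits"].items()
    )

parent = {"permit": {"read", "transfer"}, "limits": {"transfer_max": 500}}
tighter = {"permit": {"read"}, "limits": {"transfer_max": 100}}
looser = {"permit": {"read", "transfer"}, "limits": {"transfer_max": 900}}

print(narrows(parent, tighter))  # True: strictly narrower
print(narrows(parent, looser))   # False: raises a limit
```

A registry enforcing this check at covenant-derivation time would guarantee that delegated agents can never hold looser constraints than their parent.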

Attestation

W3C VC

Verifiable credentials. Prove identity and compliance without revealing internals.

Action log

Hash-chained

Every action logged. Tamper-evident. Audit trail that anyone can verify.

Verification

Deterministic

Same inputs, same result. Anyone can re-run verification independently and get the same answer; no need to trust a particular verifier.
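The action-log and verification layers fit together: each entry commits to the hash of the previous one, and verification is a deterministic recomputation that anyone can run. A sketch, assuming SHA-256 over canonical JSON (the real log schema may differ):

```python
# Illustrative hash-chained action log. Tampering with any entry
# breaks its own hash and every "prev" link after it.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append(log, entry):
    """Append an entry whose hash commits to the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"prev": prev, **entry}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    log.append({**entry, "prev": prev, "hash": digest})

def verify(log):
    """Deterministically recompute the chain: same input, same result."""
    prev = GENESIS
    for e in log:
        entry = {k: v for k, v in e.items() if k not in ("prev", "hash")}
        payload = json.dumps({"prev": prev, **entry}, sort_keys=True)
        if e["prev"] != prev or \
           hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append(log, {"action": "transfer", "amount": 600, "result": "blocked"})
append(log, {"action": "read", "result": "allowed"})
print(verify(log))           # True: chain intact
log[0]["result"] = "allowed" # tamper with history
print(verify(log))           # False: chain breaks at the tampered entry
```

Because `verify` uses only the log itself, a third party needs no trust in the operator: re-running it on the same bytes always yields the same verdict.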

Enforcement

Staking / slashing

Skin in the game. Violations are costly. Stake at risk aligns behavior with commitments.

From action to verifiable proof

1 Agent decides action
2 Middleware intercepts
3 Evaluate covenant
4 Block or allow
5 Log to action log
6 Verifiable by anyone
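The six steps above can be sketched as a minimal enforcement loop. Names and the rule shape are hypothetical, and the sketch default-allows anything not explicitly forbidden; the real middleware API differs:

```python
# Minimal sketch of the intercept -> evaluate -> block/allow -> log
# pipeline. A covenant here is a list of forbid-rules with predicates.

def evaluate(covenant, action):
    """Step 3: return 'blocked' if any forbid-rule matches, else 'allowed'.
    (Default-allow is an assumption of this sketch.)"""
    for rule in covenant:
        if rule["forbid"] == action["action"] and rule["where"](action):
            return "blocked"
    return "allowed"

def execute(covenant, action, log, perform):
    result = evaluate(covenant, action)       # step 3: evaluate covenant
    log.append({**action, "result": result})  # step 5: log the attempt
    if result == "allowed":                   # step 4: block or allow
        perform(action)                       # forbidden actions never run
    return result

covenant = [{"forbid": "transfer", "where": lambda a: a["amount"] > 500}]
log = []
print(execute(covenant, {"action": "transfer", "amount": 600}, log, print))
# prints: blocked -- nothing is performed, but the attempt is still logged
```

Note that the attempt is logged whether or not it is blocked, which is what makes the audit trail complete rather than a record of successes only.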

Agent tries $600 transfer → middleware blocks → log verifies

nobulex-demo
$ mw.execute({ action: 'transfer', params: { amount: 600 } }, ...)
Agent: Attempting transfer of $600
Middleware: BLOCKED — covenant: amount > 500
Action log: { action: "transfer", amount: 600, result: "blocked", hash: "0x7a3f..." } ✓ verifiable
Full demo →

Impossible or costly

Tier 1

Impossible violation

Middleware runs in a TEE. Policy bypass is prevented inside the enforcement boundary (TEE + attestation assumptions).

Assumptions & threat model: We assume the TEE is uncompromised and correctly attests its identity. The enclave intercepts all agent actions before execution; forbidden actions never reach the host. If the enclave is compromised, an attacker could bypass enforcement — attestation lets you verify which software is running. See the Spec for details.

Tier 2

Costly violation

Stake at risk. Slashing on breach. Rational agents don’t violate when the cost exceeds the gain.

Not just guardrails

              Guardrails / policy engines    Nobulex
Enforcement   Best-effort; can be bypassed   Signed commitments; TEE or staking
Verification  Trust the operator             Third-party verifiable; anyone can audit
Consequences  Policy violation = incident    Economic enforcement; slashing on breach

See the covenant engine in action

covenant.nbl
permit read;
forbid transfer where amount > 500;
require log_all;
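A toy reading of the snippet above, to make the semantics concrete. The real .nbl grammar is richer; this sketch handles only the three statement forms shown and assumes actions not explicitly forbidden are allowed:

```python
# Toy parser/evaluator for the three covenant statement forms:
#   permit <action>; forbid <action> where <field> > <n>; require <flag>;
import re

def parse(src):
    rules = []
    for stmt in filter(None, (s.strip() for s in src.split(";"))):
        m = re.fullmatch(r"forbid (\w+) where (\w+) > (\d+)", stmt)
        if m:
            rules.append(("forbid", m[1], m[2], int(m[3])))
        elif stmt.startswith("permit "):
            rules.append(("permit", stmt.split()[1]))
        elif stmt.startswith("require "):
            rules.append(("require", stmt.split()[1]))
    return rules

def check(rules, action):
    """Block if any forbid-rule's condition holds for this action."""
    for r in rules:
        if r[0] == "forbid" and r[1] == action["action"] \
           and action.get(r[2], 0) > r[3]:
            return "blocked"
    return "allowed"

rules = parse("permit read; forbid transfer where amount > 500; require log_all;")
print(check(rules, {"action": "transfer", "amount": 600}))  # prints: blocked
print(check(rules, {"action": "transfer", "amount": 200}))  # prints: allowed
```

The point of a declarative rule language like this is that the signed text of the covenant, not the agent's code, is what gets evaluated at runtime.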

Start free. Scale when ready.

The accountability primitive for AI agents.

Open Source

$0 — forever

Self-hosted. MIT licensed. Full protocol.

  • Unlimited agents
  • Unlimited verifications
  • Full covenant engine
  • Hash-chained audit logs
  • LangChain integration
  • Community support
View on GitHub
RECOMMENDED

Nobulex Cloud

From $0.005 / verification

Like Stripe, but for AI agent compliance.

Managed compliance infrastructure. Pay per check.

  • Everything in Open Source, plus:
  • Managed hosting — zero ops
  • Compliance dashboard & reporting
  • Audit-ready exports
  • Nobulex Verified badge for your agents
  • Team collaboration & RBAC
  • 99.9% uptime SLA
  • Email + priority support

Volume discounts at 1M+ verifications/month.

Join the Cloud Waitlist

Enterprise

Custom

For regulated industries with complex requirements.

  • Everything in Cloud, plus:
  • Custom compliance frameworks (EU AI Act, NIST AI RMF, ISO 42001)
  • On-prem / VPC deployment
  • SSO / SAML / SCIM
  • Insurance-linked compliance coverage
  • Dedicated compliance engineer
  • Custom SLA & support
  • Compliance intelligence reports
Contact Us
Per-Action Toll: $0.005–$2.00 per verification, based on action complexity
Certification: Nobulex Verified badges for trusted AI agents
Intelligence: Anonymized compliance benchmarks & risk scores
Insurance: Compliance-linked coverage that reduces AI liability premiums
Middleware: Embedded in LangChain, CrewAI, MCP, A2A; ships with every framework

Built in the open

134K+ lines of TypeScript · 6,115 tests · 60 packages · CLI tooling · ElizaOS plugin

MIT · Open source · No vendor lock-in

Add accountability in minutes

JavaScript:

const { protect } = require('@nobulex/quickstart');
const agent = protect('permit read; forbid transfer where amount > 500; require log_all;');
const result = agent.check({ action: 'transfer', amount: 200 });

Python (LangChain):

from langchain_nobulex import NobulexComplianceMiddleware

agent = create_agent(
    model='gpt-4',
    tools=tools,
    middleware=[NobulexComplianceMiddleware(
        rules='permit read; forbid transfer where amount > 500;'
    )]
)

Works with LangChain · Available on PyPI