Safe Execution for AI Agents
IBITI EPK is an on-chain execution kernel that enforces deterministic constraints for autonomous agents: spend limits, allowlists/denylists, typed intents (EIP-712), nonces/deadlines, and instant revoke. No “trust the bot”.
AI is probabilistic. Money is not.
Modern agents and LLM-driven bots are not deterministic systems. They hallucinate, get prompt-injected,
misread context, and occasionally do the exact wrong thing with very high confidence.
In normal software that’s a bug. In finance it’s an incident.
The moment an agent is given a private key, a signer, or unlimited token approvals, you’ve created a single point of failure
that can execute irreversible actions. UI confirmations, monitoring dashboards, and off-chain “guardrails” help only up to the
point where a valid on-chain call is submitted. The chain does not care that the agent was “confused”.
For wallets, exchanges, protocols, and institutions this is a hard blocker: you cannot ship “autonomous execution” as a feature
unless you can prove a deterministic safety boundary — not in policy docs, but in the execution path itself.
- Unlimited approvals turn a small bug into unbounded, uncontrolled loss.
- Prompt injection can redirect actions without changing the user’s perceived intent.
- Off-chain checks fail the moment the on-chain call is allowed — execution is final.
- Wrong target / wrong calldata mistakes are common and irreversible once mined.
- Replay and stale signatures become attack surface without strict nonces + deadlines.
- “Trust the bot” is not a security model — institutions need enforceable constraints.
Deterministic constraints on-chain.
EPK solves this by moving safety from “best effort” off-chain logic into an on-chain execution kernel.
Agents can propose actions — but the kernel enforces a policy that defines exactly what may execute.
If a policy rule fails, the transaction reverts atomically. No partial execution. No “we’ll catch it later”.
The policy is enforced at execution time: spend caps (per-tx and rolling windows), allow/deny rules for targets and call-keys,
and human-auditable EIP-712 typed intents. Every intent is bound to nonce + deadline replay protection so signatures cannot be reused
outside their intended window.
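To make the policy concrete, here is a minimal off-chain sketch of the checks described above — per-tx caps, a rolling spend window, a target allowlist, and nonce + deadline replay protection. All names (`Policy`, `check_intent`) and the exact rule set are illustrative, not EPK’s actual interface; on-chain, a failed check would revert the transaction rather than return `False`.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical mirror of the kernel's on-chain policy state."""
    per_tx_cap: int        # max spend per transaction
    window_cap: int        # max spend per rolling window
    window_secs: int       # rolling-window length in seconds
    allowed_targets: set   # contracts the agent may call
    spent: list = field(default_factory=list)       # (timestamp, amount) history
    used_nonces: set = field(default_factory=set)   # consumed intent nonces

def check_intent(policy, target, amount, nonce, deadline, now=None):
    """Return True iff every rule passes; on-chain this would revert instead."""
    now = int(time.time()) if now is None else now
    if now > deadline:
        return False   # stale signature: past its deadline
    if nonce in policy.used_nonces:
        return False   # replay: nonce already consumed
    if target not in policy.allowed_targets:
        return False   # unknown contract: denied by default
    if amount > policy.per_tx_cap:
        return False   # exceeds per-transaction cap
    window_start = now - policy.window_secs
    rolling = sum(a for t, a in policy.spent if t >= window_start)
    if rolling + amount > policy.window_cap:
        return False   # exceeds rolling-window cap
    # All rules passed: record state so limits and nonces stay enforced.
    policy.used_nonces.add(nonce)
    policy.spent.append((now, amount))
    return True
```

The ordering matters less than the atomicity: either every rule passes and state is updated, or nothing executes at all.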
Result: even if the agent hallucinates, is compromised, or behaves maliciously, it cannot exceed predefined limits or interact with
unapproved contracts. Autonomy becomes deployable for real platforms because the safety boundary is mathematical — enforced by the chain.
- Limits enforced in the kernel, not in UI — constraints survive front-end failures.
- Per-tx caps + rolling windows reduce blast radius and enable risk-tiered automation.
- Allowlist/denylist (targets + call-keys) blocks unknown interactions by default.
- EIP-712 typed intents make approvals auditable — wallets can display what’s authorized.
- Nonce + deadline prevent replay and stale signatures — each intent is time-bounded.
- Instant revoke (panic) halts agent execution immediately when risk is detected.
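For a sense of what a wallet would display, here is what an EIP-712 typed intent could look like as a signing payload. The field names, domain values, and `Intent` struct are hypothetical (not EPK’s actual schema); the shape follows the standard EIP-712 `types` / `domain` / `message` layout.

```python
# Illustrative EIP-712 typed-data payload for an agent intent.
# Struct fields, domain values, and addresses are hypothetical examples.
typed_intent = {
    "types": {
        "EIP712Domain": [
            {"name": "name", "type": "string"},
            {"name": "version", "type": "string"},
            {"name": "chainId", "type": "uint256"},
            {"name": "verifyingContract", "type": "address"},
        ],
        "Intent": [
            {"name": "target",   "type": "address"},  # contract to call
            {"name": "callKey",  "type": "bytes4"},   # allowed function selector
            {"name": "amount",   "type": "uint256"},  # spend bound for this call
            {"name": "nonce",    "type": "uint256"},  # single-use replay guard
            {"name": "deadline", "type": "uint256"},  # unix time after which void
        ],
    },
    "primaryType": "Intent",
    "domain": {
        "name": "ExampleKernel",
        "version": "1",
        "chainId": 1,
        "verifyingContract": "0x0000000000000000000000000000000000000001",
    },
    "message": {
        "target": "0x0000000000000000000000000000000000000002",
        "callKey": "0xa9059cbb",   # e.g. the ERC-20 transfer selector
        "amount": 10**18,
        "nonce": 7,
        "deadline": 1_900_000_000,
    },
}
```

Because the payload is typed rather than an opaque hash, a wallet can render exactly which target, selector, amount, and expiry the user is authorizing.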
Offer controlled agent execution with explicit policy boundaries. Reduce support risk, improve trust, and enable agentic features without giving bots unlimited power.
Don’t reinvent security primitives. Plug into a hardened permission kernel and focus on intelligence, strategy, and UX.
Delegate execution while preserving governance: limits, allowlists, auditability, and revocation by design.
Tell us your use case (wallet integration, exchange automation, protocol execution, or agent runtime). We’ll map policies and ship a clean pilot plan.