The Autonomous
AI Agent Platform
Deploy long-running AI agents that reason, plan, and act autonomously for hours or days. Built in Rust for performance and token efficiency. Enterprise-grade multi-agent orchestration with governed outcomes.
Not just a wrapper.
A complete AI agent framework.
Persistent memory, multi-agent orchestration, 14 production plugins, enterprise security, and scheduling that never misses a beat — all in Rust.
Long-Running Autonomy
Agents run for hours or days, not seconds. They handle multi-step pipelines, recover from failures, manage their own context, and produce results without hand-holding.
4-Tier Persistent Memory
Short-term, long-term, episodic, and semantic memory with intelligent task-aware routing. Agents build expertise over time, remember past outcomes, and learn from mistakes.
DAG Task Orchestration
Agents decompose work into sub-task DAGs with parallel and sequential execution, cascade cancellation, and depth limits. Complex pipelines, orchestrated automatically.
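As a rough sketch (not steffi.ai's actual code), a sub-task graph with a depth limit and cascade cancellation might look like this; the `TaskGraph` type and its limits are illustrative assumptions:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum TaskState { Pending, Cancelled }

/// Illustrative sub-task graph: tracks parent/child links, enforces a
/// depth limit on spawn, and cascades cancellation to descendants.
struct TaskGraph {
    parent: HashMap<u32, Option<u32>>,
    children: HashMap<u32, Vec<u32>>,
    state: HashMap<u32, TaskState>,
    max_depth: usize,
}

impl TaskGraph {
    fn new(max_depth: usize) -> Self {
        TaskGraph {
            parent: HashMap::new(),
            children: HashMap::new(),
            state: HashMap::new(),
            max_depth,
        }
    }

    /// Depth of a task: number of ancestors above it.
    fn depth(&self, id: u32) -> usize {
        let mut d = 0;
        let mut cur = self.parent.get(&id).cloned().flatten();
        while let Some(p) = cur {
            d += 1;
            cur = self.parent.get(&p).cloned().flatten();
        }
        d
    }

    /// Spawn a sub-task; rejected if it would exceed the depth limit.
    fn spawn(&mut self, id: u32, parent: Option<u32>) -> Result<(), String> {
        if let Some(p) = parent {
            if self.depth(p) + 1 >= self.max_depth {
                return Err(format!("task {id}: depth limit {} exceeded", self.max_depth));
            }
            self.children.entry(p).or_default().push(id);
        }
        self.parent.insert(id, parent);
        self.state.insert(id, TaskState::Pending);
        Ok(())
    }

    /// Cancel a task and cascade the cancellation to all descendants.
    fn cancel(&mut self, id: u32) {
        self.state.insert(id, TaskState::Cancelled);
        for child in self.children.get(&id).cloned().unwrap_or_default() {
            self.cancel(child);
        }
    }
}
```

Cancelling a root task tears down its whole subtree, so no orphaned sub-tasks keep burning tokens after their parent is gone.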
Agent Council System
Multi-agent deliberation for decisions that exceed a single agent. Four modes: Brainstorm (idea synthesis), Debate (advocate + judge), Review (producer + critics), and Consensus.
6-Layer Prompt Injection Defense
Detects role hijacking, instruction override, context manipulation, data exfiltration, recursive injection, and credential exfiltration. Weighted composite scoring with co-occurrence bonuses catches attacks no single signal would flag.
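To make "weighted composite scoring with co-occurrence bonuses" concrete, here is a minimal illustrative sketch; the weights, thresholds, and `composite_score` function are assumptions, not steffi.ai's actual parameters:

```rust
/// One detection layer's output: a score in [0.0, 1.0] plus a weight.
struct LayerScore { name: &'static str, score: f64, weight: f64 }

/// Combine per-layer scores into a composite: a weighted average, plus a
/// bonus when multiple layers fire together — co-occurring signals are
/// stronger evidence of injection than any single layer alone.
fn composite_score(layers: &[LayerScore], fire_threshold: f64, co_bonus: f64) -> f64 {
    let weighted: f64 = layers.iter().map(|l| l.score * l.weight).sum();
    let total_weight: f64 = layers.iter().map(|l| l.weight).sum();
    let base = if total_weight > 0.0 { weighted / total_weight } else { 0.0 };
    // Count how many layers fired, and add a bonus per extra fired layer.
    let fired = layers.iter().filter(|l| l.score >= fire_threshold).count();
    let bonus = if fired >= 2 { co_bonus * (fired as f64 - 1.0) } else { 0.0 };
    (base + bonus).min(1.0)
}
```

With this shape, a prompt that weakly trips several layers at once can score higher than one that strongly trips a single layer, which matches how real injection attempts tend to blend techniques.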
14 Production Plugins
Email, Slack, Mattermost, sandboxed Python/Node.js, file manager, Git, Google Drive, HTTP requests, webhooks, RAG, object store, and browser automation — pair with Owl Browser for undetectable web automation at scale.
Intelligent Tool Routing
Intent-based dynamic tool selection with ONNX embeddings keeps agent context focused. Only relevant tools are injected per turn, minimizing tokens while maximizing output.
Hybrid Storage Engine
redb (source of truth) + PostgreSQL (queryable views with JSONB, tsvector FTS, pgvector) with CDC sync. AES-256-GCM encrypted storage.
Multi-Model AI Integration
Unified LLM provider abstraction across OpenAI and Anthropic. Tier-based routing (Frontier / Standard / Fast) with automatic fallback chains and per-provider circuit breakers.
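A fallback chain guarded by per-provider circuit breakers can be sketched roughly like this; the `Breaker` type, its threshold, and the tier names are illustrative assumptions:

```rust
/// Minimal circuit-breaker sketch: opens after `threshold` consecutive
/// failures so a struggling provider stops receiving traffic.
struct Breaker { consecutive_failures: u32, threshold: u32 }

impl Breaker {
    fn new(threshold: u32) -> Self {
        Breaker { consecutive_failures: 0, threshold }
    }
    fn is_open(&self) -> bool { self.consecutive_failures >= self.threshold }
    /// Record a call outcome: success resets the count, failure increments it.
    fn record(&mut self, ok: bool) {
        if ok { self.consecutive_failures = 0 } else { self.consecutive_failures += 1 }
    }
}

/// Walk the fallback chain and pick the first provider whose breaker is
/// still closed; returns None when every provider is tripped.
fn route<'a>(chain: &'a [(&'a str, &'a Breaker)]) -> Option<&'a str> {
    chain.iter().find(|(_, b)| !b.is_open()).map(|(name, _)| *name)
}
```

Production breakers would also add a half-open state with timed recovery probes; the point here is just how routing and failure isolation compose.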
Advanced Scheduling
Cron, interval, one-shot, and event-driven triggers with full timezone and DST awareness. Overlap policies, missed trigger recovery, and auto-disable on consecutive failures.
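The overlap-policy idea above reduces to a small decision: what happens when a trigger fires while the previous run is still active. A hedged sketch, with policy and decision names that are assumptions rather than steffi.ai's actual API:

```rust
/// What to do when a trigger fires while the previous run is still active.
#[derive(Debug, PartialEq)]
enum OverlapPolicy { Skip, Queue, Concurrent }

#[derive(Debug, PartialEq)]
enum FireDecision { Run, Skipped, Queued }

/// Resolve a trigger firing against the schedule's overlap policy.
fn on_trigger(policy: &OverlapPolicy, already_running: bool) -> FireDecision {
    match (policy, already_running) {
        (_, false) => FireDecision::Run,
        (OverlapPolicy::Skip, true) => FireDecision::Skipped,
        (OverlapPolicy::Queue, true) => FireDecision::Queued,
        (OverlapPolicy::Concurrent, true) => FireDecision::Run,
    }
}
```

`Skip` suits idempotent polling jobs, `Queue` suits jobs where every firing must eventually run, and `Concurrent` suits independent workloads.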
Built-In Reliability
Erlang-OTP-style supervision trees, circuit breakers per LLM provider, an emergency kill switch, and behavioral loop detection. Agents recover from failures automatically.
MCP Integration
Connect to external Model Context Protocol servers via HTTP or stdio. Automatically discover and register tools, resources, and prompts with health tracking and reconnection.
5x fewer tokens. Rust-native logic.
Most AI agent frameworks burn tokens on orchestration the LLM shouldn't be doing. steffi.ai handles planning, tool routing, and memory in compiled Rust — so the model focuses on reasoning.
Tokens per task completion
avg. across comparable workloads
Control stays in compiled Rust. The LLM only sees what it needs to reason about.
Intent-Based Tool Routing
ONNX embeddings match intent to tools. Only relevant tools are injected per turn — no bloated system prompts.
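Once intent and tool descriptions are embedded (by an ONNX model in steffi.ai's case; the embedding step is omitted here), selection comes down to similarity ranking. A minimal sketch with toy 2-d vectors and hypothetical tool names:

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Pick the top-k tools whose description embeddings best match the
/// intent embedding; only these get injected into the next turn.
fn select_tools<'a>(intent: &[f32], tools: &'a [(&'a str, Vec<f32>)], k: usize) -> Vec<&'a str> {
    let mut scored: Vec<(&str, f32)> =
        tools.iter().map(|(name, emb)| (*name, cosine(intent, emb))).collect();
    // Sort by similarity, highest first.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(n, _)| n).collect()
}
```

With 14 plugins exposing many capabilities each, injecting only the top few per turn is where the token savings come from.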
Smart Context Compaction
Automatic conversation compaction preserves critical context while aggressively pruning redundant history.
Rust-Native Logic
Task decomposition, scheduling, memory routing, and security checks happen in Rust — not in LLM calls.
See every decision, in real time
A management dashboard with live WebSocket events, deep analytics, and full CRUD for agents, tasks, memory, tools, and credentials.
Agent Dashboard
Real-time operational overview with metric cards, active agents, recent activity feed, task distribution charts, and quick actions.
AI agent framework in Rust, one binary
A modular Rust workspace with clean separation of concerns. Single-writer engine pattern eliminates lock contention. Deploy as a single binary — no Python, no dependency hell.
REST API, WebSocket subscriptions, dual auth (API keys + JWT), rate limiting
Agent executors, LLM providers, model router, compaction, council executors, ONNX tool routing
Plugin manager, event bus, tool registry, MCP client, messaging abstraction
Cron/interval/one-shot triggers, overlap policy, missed trigger recovery
Prompt injection detection (6 layers), encoding normalization, DLP output filtering
Single-writer engine, DAG task manager, agent manager, memory manager, supervisor, budget tracker
redb + PostgreSQL hybrid with CDC sync, AES-256-GCM encryption, JSONB columns, tsvector FTS, pgvector embeddings
Logging, Prometheus metrics, audit log with chain hashing, health checks, incident tracking
Single-writer engine pattern
Enterprise AI agent security, governed outcomes
Every autonomous agent runs within enforced boundaries — resource budgets, capability-gated permissions, immutable audit trails, and encrypted credentials. Full governance with SOC 2 controls.
The LLM is a powerful component inside the system, not the system itself. Planning, routing, scheduling, security, and memory management happen in compiled Rust — predictable, auditable, and deterministic.
No Black Box
Every decision, tool call, and reasoning step is logged in structured JSONL execution logs. Full conversation history, compaction traces, and council deliberations — nothing is hidden.
Resource Budgets
Per-agent and per-task hard limits on API calls, tokens, and wall-clock time. Warning thresholds fire alerts before limits hit. No runaway agents, no surprise bills.
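A budget with a warning threshold ahead of the hard limit can be sketched as follows; the `TokenBudget` type and the 80% threshold are illustrative assumptions, not steffi.ai's defaults:

```rust
#[derive(Debug, PartialEq)]
enum BudgetStatus { Ok, Warning, Exceeded }

/// Track token spend against a hard limit, with a warning threshold
/// (a fraction of the limit) that fires alerts before the limit hits.
struct TokenBudget { used: u64, limit: u64, warn_at: f64 }

impl TokenBudget {
    fn new(limit: u64, warn_at: f64) -> Self {
        TokenBudget { used: 0, limit, warn_at }
    }

    /// Charge usage and report where the budget now stands.
    fn charge(&mut self, tokens: u64) -> BudgetStatus {
        self.used += tokens;
        if self.used >= self.limit {
            BudgetStatus::Exceeded
        } else if (self.used as f64) >= (self.limit as f64) * self.warn_at {
            BudgetStatus::Warning
        } else {
            BudgetStatus::Ok
        }
    }
}
```

The same shape applies per API call count or wall-clock seconds; `Exceeded` is where a supervisor would halt the agent rather than let spend run away.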
Encrypted Credential Vault
AES-256-GCM encrypted storage with HKDF key derivation. Plugins reference credentials by label — no plaintext secrets in config. Per-agent scoped access with full audit trail.
Capability-Gated Permissions
Every plugin capability is permission-gated at registration time. Agents only access what they are explicitly allowed to. Circular dependency detection at startup.
Immutable Audit Logs
Every state mutation is recorded with actor, outcome, timestamp, and chain hash for tamper detection. Data lifecycle tracking, change attribution, and compliance report generation.
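The chain-hash idea works like a tiny blockchain: each entry's hash covers its content plus the previous entry's hash, so editing any record invalidates everything after it. A minimal sketch, using `DefaultHasher` as a stand-in for a real cryptographic hash such as SHA-256:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One audit record: who did what, plus the chained hash.
struct AuditEntry { actor: String, action: String, chain_hash: u64 }

/// Hash this entry's content together with the previous entry's hash.
/// (DefaultHasher stands in for a cryptographic hash in this sketch.)
fn link(prev_hash: u64, actor: &str, action: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    actor.hash(&mut h);
    action.hash(&mut h);
    h.finish()
}

/// Append an entry, chaining it to the last entry's hash (0 for genesis).
fn append(log: &mut Vec<AuditEntry>, actor: &str, action: &str) {
    let prev = log.last().map(|e| e.chain_hash).unwrap_or(0);
    let chain_hash = link(prev, actor, action);
    log.push(AuditEntry { actor: actor.into(), action: action.into(), chain_hash });
}

/// Recompute the whole chain; any tampered entry breaks verification.
fn verify(log: &[AuditEntry]) -> bool {
    let mut prev = 0u64;
    for e in log {
        if link(prev, &e.actor, &e.action) != e.chain_hash { return false; }
        prev = e.chain_hash;
    }
    true
}
```

Because each hash folds in its predecessor, an attacker who rewrites one record would have to rewrite every later hash too, which is what makes the trail tamper-evident.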
DLP Output Filtering
Pattern-based output scanning prevents data leakage in agent responses. Custom regex patterns, configurable masking, and automatic blocking of sensitive content before it leaves the system.
Production-grade, by every metric
Not a proof of concept. A platform shipping today for teams running agents in production.
Pair with Owl Browser
Owl Browser is a high-performance browser engine that replaces Playwright and Puppeteer with self-hosted infrastructure you control. 175+ automation tools, MCP protocol support, and hardware-level stealth — built for agents, not retrofitted from human browsers.
256 Parallel Sessions
Run hundreds of browser sessions from a single instance with sub-12ms cold start. Process 1,000+ pages per hour per node.
Undetectable by Design
Hardware-level fingerprint virtualization produces unique browser identities. 100% pass rate against Cloudflare, DataDome, and fingerprint.com.
Self-Hosted, Zero Telemetry
Deploy on your own infrastructure. Built-in CAPTCHA solving, Tor integration, proxy management, and content extraction — no external services.
Interested in steffi.ai?
steffi.ai is a private project. Access is available on request for teams building with autonomous AI agents. Pair with Owl Browser for undetectable web automation at scale.
Learn more at www.olib.ai