Autonomous AI Agents

The Autonomous AI Agent Platform

Deploy long-running AI agents that reason, plan, and act autonomously for hours or days. Built in Rust for performance and token efficiency. Enterprise-grade multi-agent orchestration with governed outcomes.

Rust 1.93+ · Single Binary · Docker Ready · SOC 2 Controls
steffi.ai - Agent Dashboard
Built for Production

Not just a wrapper.
A complete AI agent framework.

Persistent memory, multi-agent orchestration, 14 production plugins, enterprise security, and scheduling that never misses a beat — all in Rust.

Long-Running Autonomy

Agents run for hours or days, not seconds. They handle multi-step pipelines, recover from failures, manage their own context, and produce results without hand-holding.

4-Tier Persistent Memory

Short-term, long-term, episodic, and semantic memory with intelligent task-aware routing. Agents build expertise over time, remember past outcomes, and learn from mistakes.
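The tier split above can be sketched as a routing decision. This is a minimal illustration, not steffi.ai's actual router: the tier names come from the text, but the routing signals (`is_task_outcome`, `is_durable_fact`, `is_distilled`) are hypothetical.

```rust
// Hypothetical sketch of task-aware memory routing across the four
// tiers named in the text. The routing signals are illustrative.

#[derive(Debug, PartialEq)]
enum MemoryTier {
    ShortTerm, // current conversation window
    LongTerm,  // durable facts and preferences
    Episodic,  // past task runs and their outcomes
    Semantic,  // distilled domain knowledge
}

/// Route a memory write to a tier based on what the record describes.
fn route_write(is_task_outcome: bool, is_durable_fact: bool, is_distilled: bool) -> MemoryTier {
    if is_task_outcome {
        MemoryTier::Episodic
    } else if is_distilled {
        MemoryTier::Semantic
    } else if is_durable_fact {
        MemoryTier::LongTerm
    } else {
        MemoryTier::ShortTerm
    }
}

fn main() {
    // A finished task run lands in episodic memory, so the agent
    // can recall the outcome next time a similar task appears.
    assert_eq!(route_write(true, false, false), MemoryTier::Episodic);
    // A user preference is a durable fact.
    assert_eq!(route_write(false, true, false), MemoryTier::LongTerm);
}
```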

DAG Task Orchestration

Agents decompose work into sub-task DAGs with parallel and sequential execution, cascade cancellation, and depth limits. Complex pipelines, orchestrated automatically.
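Cascade cancellation, mentioned above, is the property that cancelling a task cancels its whole subtree. A minimal sketch of that behavior over a parent-child task graph (the data layout is an assumption, not steffi.ai's internal representation):

```rust
use std::collections::HashMap;

// Illustrative sketch of cascade cancellation over a sub-task DAG,
// assuming tasks are keyed by id and record their child task ids.

#[derive(Default)]
struct TaskDag {
    children: HashMap<u64, Vec<u64>>,
    cancelled: Vec<u64>,
}

impl TaskDag {
    fn add_edge(&mut self, parent: u64, child: u64) {
        self.children.entry(parent).or_default().push(child);
    }

    /// Cancelling a task cancels its entire subtree.
    fn cancel(&mut self, id: u64) {
        self.cancelled.push(id);
        for child in self.children.get(&id).cloned().unwrap_or_default() {
            self.cancel(child);
        }
    }
}

fn main() {
    let mut dag = TaskDag::default();
    dag.add_edge(1, 2); // task 1 spawned sub-task 2
    dag.add_edge(2, 3); // which spawned sub-task 3
    dag.cancel(1);
    // The whole pipeline under task 1 is cancelled in one call.
    assert_eq!(dag.cancelled, vec![1, 2, 3]);
}
```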

Agent Council System

Multi-agent deliberation for decisions too complex for a single agent. Four modes: Brainstorm (idea synthesis), Debate (advocate + judge), Review (producer + critics), and Consensus.

6-Layer Prompt Injection Defense

Role hijack, instruction override, context manipulation, data exfiltration, recursive injection, and credential exfiltration detection. Weighted composite scoring with co-occurrence bonuses.
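"Weighted composite scoring with co-occurrence bonuses" can be pictured as follows. This is a hedged sketch of the general technique, not steffi.ai's detector: the weights, threshold, and bonus size are all invented for illustration.

```rust
// Illustrative composite scoring across detection layers, with a
// bonus when multiple layers fire together (co-occurrence).
// All numeric values here are assumptions, not production tuning.

/// layer_scores: (score in 0.0..=1.0, weight) per detection layer.
fn composite_score(layer_scores: &[(f64, f64)]) -> f64 {
    let weighted: f64 = layer_scores.iter().map(|(s, w)| s * w).sum();
    let total_weight: f64 = layer_scores.iter().map(|(_, w)| w).sum();
    let base = if total_weight > 0.0 { weighted / total_weight } else { 0.0 };

    // Co-occurrence bonus: +0.05 per additional layer firing above 0.5.
    let fired = layer_scores.iter().filter(|(s, _)| *s > 0.5).count();
    let bonus = 0.05 * fired.saturating_sub(1) as f64;
    (base + bonus).min(1.0)
}

fn main() {
    // Two layers firing together outscores one strong layer alone:
    // an attack that trips both "role hijack" and "instruction
    // override" is more suspicious than either signal by itself.
    let one = composite_score(&[(0.9, 1.0), (0.0, 1.0)]);
    let two = composite_score(&[(0.7, 1.0), (0.7, 1.0)]);
    assert!(two > one);
}
```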

14 Production Plugins

Email, Slack, Mattermost, sandboxed Python/Node.js, file manager, Git, Google Drive, HTTP requests, webhooks, RAG, object store, and browser automation — pair with Owl Browser for undetectable web automation at scale.

Intelligent Tool Routing

Intent-based dynamic tool selection with ONNX embeddings keeps agent context focused. Only relevant tools are injected per turn, minimizing tokens while maximizing output.
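The selection step above amounts to ranking tool embeddings against an intent embedding and injecting only the top matches. A toy sketch with hand-written 2-d vectors (steffi.ai computes real embeddings with ONNX models; the tool names here are illustrative):

```rust
// Illustrative intent-based tool selection: rank tools by cosine
// similarity to the intent embedding, keep only the top-k.

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Names of the k tools most similar to the intent embedding.
fn select_tools<'a>(intent: &[f32], tools: &[(&'a str, Vec<f32>)], k: usize) -> Vec<&'a str> {
    let mut ranked: Vec<_> = tools
        .iter()
        .map(|(name, emb)| (*name, cosine(intent, emb)))
        .collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked.into_iter().take(k).map(|(name, _)| name).collect()
}

fn main() {
    let tools = vec![
        ("send_email", vec![1.0, 0.0]),
        ("git_commit", vec![0.0, 1.0]),
    ];
    // An intent vector near the email tool selects only that tool,
    // so the git tool's schema never enters the context window.
    assert_eq!(select_tools(&[0.9, 0.1], &tools, 1), vec!["send_email"]);
}
```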

Hybrid Storage Engine

redb (source of truth) + PostgreSQL (queryable views with JSONB, tsvector FTS, pgvector) with CDC sync. AES-256-GCM encrypted storage.

Multi-Model AI Integration

Unified LLM provider abstraction across OpenAI and Anthropic. Tier-based routing (Frontier / Standard / Fast) with automatic fallback chains and per-provider circuit breakers.
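A fallback chain with per-provider circuit breakers reduces to: walk the tier's provider list, skip any provider whose breaker is open. A minimal sketch, with the breaker model collapsed to two states for clarity (a real breaker also has a half-open probe state):

```rust
// Hedged sketch of tier fallback with per-provider circuit breakers.
// Provider names and the two-state breaker are illustrative.

#[derive(Clone, Copy, PartialEq)]
enum Breaker {
    Closed, // healthy: requests flow through
    Open,   // tripped by repeated failures: skip this provider
}

struct Provider {
    name: &'static str,
    breaker: Breaker,
}

/// First provider in the chain whose circuit breaker is closed.
fn pick<'a>(chain: &'a [Provider]) -> Option<&'a str> {
    chain.iter().find(|p| p.breaker == Breaker::Closed).map(|p| p.name)
}

fn main() {
    // Frontier tier: primary is failing, so the request falls back.
    let frontier = [
        Provider { name: "anthropic", breaker: Breaker::Open },
        Provider { name: "openai", breaker: Breaker::Closed },
    ];
    assert_eq!(pick(&frontier), Some("openai"));
}
```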

Advanced Scheduling

Cron, interval, one-shot, and event-driven triggers with full timezone and DST awareness. Overlap policies, missed trigger recovery, and auto-disable on consecutive failures.
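An overlap policy answers one question: what happens when a trigger fires while the previous run is still active? The sketch below models three common answers; the policy names are assumptions drawn from typical scheduler designs, not steffi.ai's exact API.

```rust
// Illustrative overlap-policy handling for a recurring trigger.
// Policy and decision names are assumptions, not the real API.

enum OverlapPolicy {
    Skip,       // drop the new firing
    Queue,      // run it after the current run finishes
    Concurrent, // allow both runs at once
}

enum Decision {
    RunNow,
    Enqueue,
    Drop,
}

fn on_trigger(previous_running: bool, policy: &OverlapPolicy) -> Decision {
    if !previous_running {
        return Decision::RunNow; // nothing to overlap with
    }
    match policy {
        OverlapPolicy::Skip => Decision::Drop,
        OverlapPolicy::Queue => Decision::Enqueue,
        OverlapPolicy::Concurrent => Decision::RunNow,
    }
}

fn main() {
    // A long-running job with Skip policy quietly drops the overlap.
    assert!(matches!(on_trigger(true, &OverlapPolicy::Skip), Decision::Drop));
    // When nothing is running, every policy fires immediately.
    assert!(matches!(on_trigger(false, &OverlapPolicy::Skip), Decision::RunNow));
}
```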

Built-In Reliability

Erlang-OTP supervision trees, circuit breakers per LLM provider, emergency kill switch, and behavioral loop detection. Agents recover from failures automatically.

MCP Integration

Connect to external Model Context Protocol servers via HTTP or stdio. Automatically discover and register tools, resources, and prompts with health tracking and reconnection.

Token Efficiency

5x fewer tokens. Rust-native logic.

Most AI agent frameworks burn tokens on orchestration the LLM shouldn't be doing. steffi.ai handles planning, tool routing, and memory in compiled Rust — so the model focuses on reasoning.

Tokens per task completion
(avg. across comparable workloads)

Other frameworks: ~60k tokens/task
steffi.ai: ~12k tokens/task (5x less)

Control stays in compiled Rust. The LLM only sees what it needs to reason about.

Intent-Based Tool Routing

ONNX embeddings match intent to tools. Only relevant tools are injected per turn — no bloated system prompts.

Smart Context Compaction

Automatic conversation compaction preserves critical context while aggressively pruning redundant history.

Rust-Native Logic

Task decomposition, scheduling, memory routing, and security checks happen in Rust — not in LLM calls.

Management Dashboard

See every decision, in real time

A management dashboard with live WebSocket events, deep analytics, and full CRUD for agents, tasks, memory, tools, and credentials.


Agent Dashboard

Real-time operational overview with metric cards, active agents, recent activity feed, task distribution charts, and quick actions.

Rust Architecture

AI agent framework in Rust, one binary

A modular Rust workspace with clean separation of concerns. Single-writer engine pattern eliminates lock contention. Deploy as a single binary — no Python, no dependency hell.

rb-api

REST API, WebSocket subscriptions, dual auth (API keys + JWT), rate limiting

rb-agent

Agent executors, LLM providers, model router, compaction, council executors, ONNX tool routing

rb-plugin

Plugin manager, event bus, tool registry, MCP client, messaging abstraction

rb-scheduler

Cron/interval/one-shot triggers, overlap policy, missed trigger recovery

rb-guardrail

Prompt injection detection (6 layers), encoding normalization, DLP output filtering

rb-core

Single-writer engine, DAG task manager, agent manager, memory manager, supervisor, budget tracker

rb-storage

redb + PostgreSQL hybrid with CDC sync, AES-256-GCM encryption, JSONB columns, tsvector FTS, pgvector embeddings

rb-observability

Logging, Prometheus metrics, audit log with chain hashing, health checks, incident tracking

Single-writer engine pattern

mpsc command channel → actor agents → redb + PostgreSQL CDC
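The core of the single-writer pattern fits in a few lines of std Rust: every mutation is a message on an mpsc channel, and exactly one thread consumes it, so shared state needs no locks at all. A minimal sketch (the `Command` variant and state are illustrative, not rb-core's actual types):

```rust
use std::sync::mpsc;
use std::thread;

// Minimal single-writer engine sketch: all mutations flow through
// one channel into one writer thread, eliminating lock contention.
// The command set and state shape are illustrative.

enum Command {
    CreateAgent(String),
}

fn run_engine(commands: Vec<Command>) -> Vec<String> {
    let (tx, rx) = mpsc::channel::<Command>();

    // The ONLY thread that ever mutates engine state.
    let writer = thread::spawn(move || {
        let mut agents: Vec<String> = Vec::new();
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Command::CreateAgent(name) => agents.push(name),
            }
        }
        agents
    });

    // Producers only send commands; none touch state directly.
    for cmd in commands {
        tx.send(cmd).unwrap();
    }
    drop(tx); // closing the channel lets the writer drain and exit

    writer.join().unwrap()
}

fn main() {
    let agents = run_engine(vec![Command::CreateAgent("researcher".into())]);
    assert_eq!(agents, vec!["researcher".to_string()]);
}
```

In a real engine the writer would also persist each mutation (here, to redb) before acknowledging it; the channel ordering gives a natural serialization point for the CDC stream.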
Enterprise Ready

Enterprise AI agent security, governed outcomes

Every autonomous agent runs within enforced boundaries — resource budgets, capability-gated permissions, immutable audit trails, and encrypted credentials. Full governance with SOC 2 controls.

The LLM is a powerful component inside the system, not the system itself. Planning, routing, scheduling, security, and memory management happen in compiled Rust — predictable, auditable, and deterministic.

No Black Box

Every decision, tool call, and reasoning step is logged in structured JSONL execution logs. Full conversation history, compaction traces, and council deliberations — nothing is hidden.

Resource Budgets

Per-agent and per-task hard limits on API calls, tokens, and wall-clock time. Warning thresholds fire alerts before limits hit. No runaway agents, no surprise bills.
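The warn-then-stop behavior described above can be sketched as a small state machine. The 80% warning threshold and the token limit below are illustrative values, not steffi.ai defaults:

```rust
// Hedged sketch of a per-agent token budget with a warning
// threshold that fires before the hard limit. Values illustrative.

struct Budget {
    limit: u64,   // hard cap on tokens
    used: u64,
    warn_at: f64, // fraction of limit at which to alert
}

#[derive(Debug, PartialEq)]
enum BudgetState {
    Ok,
    Warning,  // fire an alert, keep running
    Exceeded, // hard stop: the task is halted
}

impl Budget {
    fn charge(&mut self, tokens: u64) -> BudgetState {
        self.used += tokens;
        if self.used >= self.limit {
            BudgetState::Exceeded
        } else if (self.used as f64) >= self.warn_at * self.limit as f64 {
            BudgetState::Warning
        } else {
            BudgetState::Ok
        }
    }
}

fn main() {
    let mut b = Budget { limit: 100_000, used: 0, warn_at: 0.8 };
    assert_eq!(b.charge(50_000), BudgetState::Ok);        // 50% used
    assert_eq!(b.charge(35_000), BudgetState::Warning);   // 85%: alert fires
    assert_eq!(b.charge(20_000), BudgetState::Exceeded);  // 105%: agent halted
}
```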

Encrypted Credential Vault

AES-256-GCM encrypted storage with HKDF key derivation. Plugins reference credentials by label — no plaintext secrets in config. Per-agent scoped access with full audit trail.

Capability-Gated Permissions

Every plugin capability is permission-gated at registration time. Agents only access what they are explicitly allowed to. Circular dependency detection at startup.

Immutable Audit Logs

Every state mutation is recorded with actor, outcome, timestamp, and chain hash for tamper detection. Data lifecycle tracking, change attribution, and compliance report generation.
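Chain hashing means each entry's hash covers the previous entry's hash, so editing any record invalidates every hash after it. A minimal sketch, using std's `DefaultHasher` as a stand-in for a real digest (a production audit log would use SHA-256 or similar):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative chain-hashed audit log. DefaultHasher stands in for
// a cryptographic digest; entry fields mirror the text (actor,
// outcome, timestamp) but the exact schema is an assumption.

fn entry_hash(prev_hash: u64, actor: &str, outcome: &str, timestamp: u64) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h); // links this entry to everything before it
    actor.hash(&mut h);
    outcome.hash(&mut h);
    timestamp.hash(&mut h);
    h.finish()
}

/// Recompute the chain and check it against the stored hashes.
/// Entries: (actor, outcome, timestamp, stored_hash).
fn verify(entries: &[(&str, &str, u64, u64)]) -> bool {
    let mut prev = 0u64;
    for &(actor, outcome, ts, stored) in entries {
        let computed = entry_hash(prev, actor, outcome, ts);
        if computed != stored {
            return false; // tamper detected from this point on
        }
        prev = computed;
    }
    true
}

fn main() {
    let h1 = entry_hash(0, "agent-7", "task_created", 1);
    let h2 = entry_hash(h1, "agent-7", "task_completed", 2);
    assert!(verify(&[
        ("agent-7", "task_created", 1, h1),
        ("agent-7", "task_completed", 2, h2),
    ]));
    // Rewriting the first entry's actor breaks the whole chain.
    assert!(!verify(&[
        ("admin", "task_created", 1, h1),
        ("agent-7", "task_completed", 2, h2),
    ]));
}
```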

DLP Output Filtering

Pattern-based output scanning prevents data leakage in agent responses. Custom regex patterns, configurable masking, and automatic blocking of sensitive content before it leaves the system.

By The Numbers

Production-grade, by every metric

Not a proof of concept. A platform shipping today for teams running agents in production.

14 Production Plugins: browser, email, Slack, Git, sandboxes & more
6 Security Layers: multi-layer prompt injection defense
4 Memory Tiers: persistent knowledge that grows over time
4 Council Modes: Brainstorm, Debate, Review, Consensus
1 Binary to Deploy: single-binary AI agent deployment, Docker ready
24/7 Autonomous Runtime: agents that run for hours or days
Better Together

Pair with Owl Browser

Owl Browser is a high-performance browser engine that replaces Playwright and Puppeteer with self-hosted infrastructure you control. 175+ automation tools, MCP protocol support, and hardware-level stealth — built for agents, not retrofitted from human browsers.

256 Parallel Sessions

Run hundreds of browser sessions from a single instance with sub-12ms cold start. Process 1,000+ pages per hour per node.

Undetectable by Design

Hardware-level fingerprint virtualization produces unique browser identities. 100% pass rate against Cloudflare, DataDome, and fingerprint.com.

Self-Hosted, Zero Telemetry

Deploy on your own infrastructure. Built-in CAPTCHA solving, Tor integration, proxy management, and content extraction — no external services.

Private Access

Interested in steffi.ai?

steffi.ai is a private project. Access is available on request for teams building with autonomous AI agents. Pair with Owl Browser for undetectable web automation at scale.

Learn more at www.olib.ai