# Arcan

The agent runtime daemon -- cognition, LLM provider calls, tool execution, and streaming.

Arcan is the agent runtime daemon and the primary implementation of the aiOS kernel contract. It handles the core agent loop: receiving user input, managing conversation state, calling LLM providers, executing tools through Praxis, and streaming responses.

The name comes from "arcane" -- the hidden, fundamental mechanism behind visible intelligence.
## Architecture
Arcan is structured as a Rust workspace with these crates:
| Crate | Role |
|---|---|
| `arcan-core` | Core agent loop, session management, message history, context compiler |
| `arcan-harness` | Test harness and benchmarking infrastructure |
| `arcan-aios-adapters` | Adapters between Arcan internals and aiOS kernel types |
| `arcan-store` | Session storage abstraction |
| `arcan-provider` | LLM provider abstraction (Anthropic, OpenAI-compatible, Mock) |
| `arcan-tui` | Terminal UI for interactive sessions |
| `arcan-lago` | Bridge to Lago persistence (event journal, blob store) |
| `arcan-spaces` | Bridge to Spaces distributed networking |
| `arcand` | HTTP daemon (axum server, SSE streaming) |
| `arcan` | CLI binary (session management, log inspection) |
## The agent loop
The core design principle: the agent's message history IS the application state. Every action produces an immutable event. The agent loop follows this cycle:
### Phase 1: Reconstruct
Load the session from the Lago journal and rebuild the conversation state from events. This is a deterministic fold -- given the same event stream, you always get the same state. Each session maps to a stream in the journal, identified by its `session_id`.
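The fold can be sketched as a pure function over the event stream. The `Event` and `SessionState` types below are illustrative stand-ins, not the actual `arcan-core` types:

```rust
// Hypothetical sketch of the reconstruction fold -- the real event and
// state types live in arcan-core; these names are illustrative only.
#[derive(Clone, Debug, PartialEq)]
enum Event {
    UserMessage(String),
    AssistantMessage(String),
}

#[derive(Default, Debug, PartialEq)]
struct SessionState {
    messages: Vec<String>,
}

// Pure fold: the same event stream always rebuilds the same state.
fn reconstruct(events: &[Event]) -> SessionState {
    events.iter().fold(SessionState::default(), |mut state, event| {
        match event {
            Event::UserMessage(text) => state.messages.push(format!("user: {text}")),
            Event::AssistantMessage(text) => state.messages.push(format!("assistant: {text}")),
        }
        state
    })
}

fn main() {
    let journal = vec![
        Event::UserMessage("hello".into()),
        Event::AssistantMessage("hi there".into()),
    ];
    // Replaying the same journal twice yields identical state.
    assert_eq!(reconstruct(&journal), reconstruct(&journal));
}
```

Because the fold is pure, reconstruction is also the basis for replay and branching described below.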
### Phase 2: Regulate
Before making an LLM call, Arcan consults the Autonomic controller:
```
GET http://localhost:3002/v1/autonomic/gating
```

The gating profile determines which operations are allowed for this tick. If Autonomic is unreachable, Arcan falls back to an allow-all default -- regulation never blocks the core loop.
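The fallback behavior can be sketched as follows. The types and function names here are hypothetical, and the HTTP call is simulated; only the "unreachable means allow-all" invariant is the point:

```rust
// Hypothetical gating types -- illustrative, not the real arcan-core API.
#[derive(Debug, PartialEq)]
struct GatingProfile {
    allow_all: bool,
}

// In Arcan this would be an HTTP GET to the Autonomic daemon; here we
// simulate the service being unreachable.
fn fetch_gating() -> Result<GatingProfile, String> {
    Err("connection refused".into())
}

// Regulation never blocks the loop: an unreachable Autonomic yields the
// allow-all default instead of an error.
fn regulate() -> GatingProfile {
    fetch_gating().unwrap_or(GatingProfile { allow_all: true })
}

fn main() {
    assert_eq!(regulate(), GatingProfile { allow_all: true });
}
```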
### Phase 3: Compile context
The context compiler assembles the prompt from typed blocks with per-block budgets:
- System prompt -- agent personality, constraints, soul profile
- Memory -- relevant memories retrieved from the Lago knowledge index
- Conversation history -- previous messages in the session
- Tool definitions -- available tools from Praxis (filesystem, shell, skills, MCP)
- Observations -- external signals and sensor data
Each block has a token budget. The compiler deterministically assembles blocks in priority order until the context window is filled.
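The budgeted assembly above can be sketched like this. The block names, the 4-characters-per-token estimate, and the drop-on-overflow strategy are assumptions for illustration, not the actual `arcan-core` compiler:

```rust
// Illustrative sketch of priority-ordered, budgeted block assembly.
struct Block {
    name: &'static str,
    content: String,
    budget_tokens: usize, // per-block cap
}

// Rough heuristic stand-in for real tokenization.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4 + 1
}

// Walk blocks in priority order, spending the window budget; once a
// block no longer fits, lower-priority blocks are dropped. Deterministic
// for a given block list and window size.
fn compile_context(blocks: &[Block], window_tokens: usize) -> String {
    let mut remaining = window_tokens;
    let mut prompt = String::new();
    for block in blocks {
        let cost = estimate_tokens(&block.content).min(block.budget_tokens);
        if cost > remaining {
            break;
        }
        remaining -= cost;
        prompt.push_str(&format!("[{}]\n{}\n", block.name, block.content));
    }
    prompt
}

fn main() {
    let blocks = vec![
        Block { name: "system", content: "You are Arcan.".into(), budget_tokens: 64 },
        Block { name: "history", content: "user: hello".into(), budget_tokens: 64 },
    ];
    println!("{}", compile_context(&blocks, 128));
}
```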
### Phase 4: Provider call
Send the compiled context to the configured LLM provider. The provider handles token counting, retry logic, and format translation between Arcan's internal message format and the provider's API.
### Phase 5: Execute tools
If the model requests tool use, execute tools through Praxis and collect results. Tool execution is governed by two policies:
- `FsPolicy` -- workspace boundary enforcement (prevents reads/writes outside the workspace)
- `SandboxPolicy` -- allowed commands and resource limits
Tool results are appended to the message history and the loop returns to Phase 4 (provider call) with the updated context.
### Phase 6: Stream and persist
Emit response events to the client via SSE, persisting each event to the Lago journal as it is generated. The loop continues until the model produces a final text response without tool calls, or a budget limit (token, time, or cost) is reached.
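The overall turn cycle (Phases 4 through 6) can be sketched as a driver loop. The real loop is async and streams over SSE; this synchronous version, with a scripted stand-in for the model and a turn count standing in for the token/time/cost budget, only illustrates the two termination conditions:

```rust
// Hypothetical driver loop -- names and shapes are illustrative.
enum ModelTurn {
    ToolCalls(Vec<String>),
    FinalText(String),
}

// Scripted stand-in for the LLM: one tool call, then a final answer.
fn call_provider(turn: usize) -> ModelTurn {
    if turn == 0 {
        ModelTurn::ToolCalls(vec!["fs.read".into()])
    } else {
        ModelTurn::FinalText("done".into())
    }
}

// Loop until the model produces final text without tool calls, or the
// budget (here: a turn limit) is exhausted.
fn run_session(max_turns: usize) -> Option<String> {
    for turn in 0..max_turns {
        match call_provider(turn) {
            ModelTurn::ToolCalls(calls) => {
                // Execute tools, append results to history, loop back.
                let _ = calls;
            }
            ModelTurn::FinalText(text) => return Some(text),
        }
    }
    None // budget limit reached before a final response
}

fn main() {
    assert_eq!(run_session(8), Some("done".into()));
    assert_eq!(run_session(1), None);
}
```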
## Event-sourced state
All state is derived from events. There is no mutable database -- the event journal is the single source of truth. To recover state, replay the events from the beginning of the session. This gives you:
- Full auditability -- every decision and action is recorded
- Replayability -- sessions can be replayed from the journal for debugging or evaluation
- Branching -- fork a session at any point by replaying events up to that point and continuing differently (Lago supports this, Arcan defaults to "main" branch)
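Branching follows directly from the replay model: take a prefix of the journal, then continue with different events. A minimal sketch, with a toy `Event` type standing in for Lago's full `EventEnvelope`:

```rust
// Toy event type -- Lago stores full EventEnvelopes with ULIDs and checksums.
#[derive(Clone, Debug, PartialEq)]
struct Event(String);

// Stand-in for state reconstruction: collect event payloads in order.
fn replay(events: &[Event]) -> Vec<String> {
    events.iter().map(|e| e.0.clone()).collect()
}

fn main() {
    let main_branch = vec![Event("a".into()), Event("b".into()), Event("c".into())];

    // Fork at event 2: copy the prefix, then continue differently.
    let mut fork = main_branch[..2].to_vec();
    fork.push(Event("c-alt".into()));

    // Both branches agree on the shared prefix, then diverge.
    assert_eq!(replay(&fork)[..2], replay(&main_branch)[..2]);
    assert_ne!(replay(&fork), replay(&main_branch));
}
```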
## LLM providers
Arcan abstracts LLM providers behind a trait interface, allowing the same agent loop to work with any model:
| Provider | Implementation | Notes |
|---|---|---|
| Anthropic | Native Claude API | Full tool use, system prompts, caching, extended thinking |
| OpenAI-compatible | Any OpenAI-format endpoint | GPT, Gemini, Ollama, vLLM, Together, Groq, etc. |
| Mock | Deterministic test provider | Scripted responses for testing, no network calls |
Provider selection is per-session. The provider trait handles:

- Token counting for budget tracking
- Retry logic with exponential backoff
- Format translation between Arcan's `EventKind` and the provider's wire format
- Streaming token delivery
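The retry behavior can be sketched as follows. The real provider layer in `arcan-provider` is async; this synchronous version, with hypothetical names and a short base delay, only illustrates the backoff schedule:

```rust
use std::time::Duration;

// Minimal retry-with-exponential-backoff sketch (hypothetical API).
fn call_with_retry<T, E>(
    mut attempt: impl FnMut() -> Result<T, E>,
    max_retries: u32,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(10);
    for retry in 0..=max_retries {
        match attempt() {
            Ok(value) => return Ok(value),
            // Out of retries: surface the last error.
            Err(err) if retry == max_retries => return Err(err),
            Err(_) => {
                std::thread::sleep(delay);
                delay *= 2; // exponential backoff: 10ms, 20ms, 40ms, ...
            }
        }
    }
    unreachable!("loop always returns on the final retry")
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds on the third attempt.
    let result = call_with_retry(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok("response") }
        },
        5,
    );
    assert_eq!(result, Ok("response"));
    assert_eq!(calls, 3);
}
```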
```shell
# Use Anthropic (requires API key)
ANTHROPIC_API_KEY=sk-ant-... cargo run -p arcan

# Use OpenAI-compatible endpoint (Ollama example)
ARCAN_PROVIDER=openai OPENAI_API_BASE=http://localhost:11434/v1 cargo run -p arcan

# Use mock provider (for testing)
ARCAN_PROVIDER=mock cargo run -p arcan
```

## Streaming formats
Arcan supports four SSE output formats, selectable per-request via the `format` query parameter or `Accept` header:

| Format | Query | Use case |
|---|---|---|
| Lago | `format=lago` | Native event format -- full `EventEnvelope` with ULID, checksum, metadata |
| OpenAI | `format=openai` | Compatible with OpenAI client libraries (`choices[0].delta.content`) |
| Anthropic | `format=anthropic` | Compatible with Anthropic client libraries (`content_block_delta`) |
| Vercel | `format=vercel` | Compatible with AI SDK v6 `useChat` and `streamText` (`UiPart` objects) |
The Vercel format is used by the broomva.tech chat application and emits `UiPart` objects with `text-delta`, `tool-call`, `tool-result`, and `finish` events.
## Tool execution (Praxis)
Arcan delegates tool execution to Praxis, the canonical tool engine. Praxis is consumed by Arcan as the tool backend but has no dependency on Arcan, Lago, or Autonomic -- it depends only on aios-protocol.
Praxis provides:

- Filesystem tools -- read, write, list files within a sandboxed workspace
- Hashline editing -- content-hash-addressed line edits using Blake3. Each edit references lines by their content hash, not line number, making edits robust against concurrent modifications
- Command execution -- run shell commands within a sandbox policy
- Skill discovery -- find and invoke skills defined by `SKILL.md` files in the workspace
- MCP bridge -- `PraxisMcpServer` exposes tools as an MCP server (stdio or Streamable HTTP). The client bridge connects to external MCP servers via subprocess (using `rmcp` 0.15)

Tool permissions are governed by:

- `FsPolicy` -- workspace boundary enforcement (cannot read/write outside the designated workspace directory)
- `SandboxPolicy` -- allowlisted commands and resource limits
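The hashline idea can be sketched as below. Praxis uses Blake3; the standard-library `DefaultHasher` stands in here so the example needs no external crates, and the function names are illustrative, not the Praxis API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Content hash of a single line (Blake3 in the real system).
fn line_hash(line: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    line.hash(&mut hasher);
    hasher.finish()
}

// Replace the line whose content hash matches `target`. A stale hash
// (the line changed since the edit was authored) finds no match and the
// edit is rejected, rather than silently hitting the wrong line.
fn edit_by_hash(lines: &mut Vec<String>, target: u64, replacement: &str) -> bool {
    if let Some(pos) = lines.iter().position(|l| line_hash(l) == target) {
        lines[pos] = replacement.to_string();
        true
    } else {
        false
    }
}

fn main() {
    let mut file = vec!["fn main() {".to_string(), "}".to_string()];
    let target = line_hash("fn main() {");
    assert!(edit_by_hash(&mut file, target, "fn main() { // edited"));
    // A second edit with the now-stale hash fails: the content changed.
    assert!(!edit_by_hash(&mut file, target, "anything"));
}
```

Addressing by content rather than line number is what makes concurrent edits safe: an edit applies only if the line it targets is still byte-identical to what the author saw.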
## Running Arcan

### As a daemon

```shell
cd arcan
cargo run -p arcan
# Listening on http://localhost:3000
```

### CLI usage
```shell
# Create a new session
cargo run -p arcan -- session new

# List sessions
cargo run -p arcan -- session list

# View session events
cargo run -p arcan -- log <session-id>

# Concatenate events as text
cargo run -p arcan -- cat <session-id>

# Initialize a new workspace
cargo run -p arcan -- init

# Interactive TUI
cargo run -p arcan-tui
```

### With Lago persistence
By default, Arcan uses the `arcan-lago` bridge to persist events to a local redb database. The data directory defaults to `~/.arcan/data/`:
```shell
# Specify a custom data directory
cargo run -p arcan -- --data-dir /path/to/data
```

### With Spaces networking
To connect Arcan to a Spaces instance for multi-agent communication:
```shell
cargo run -p arcan -- --spaces-url http://localhost:3000 --spaces-db my-space
```

## Configuration
Arcan is configured through command-line flags and environment variables:
| Flag | Env var | Default | Description |
|---|---|---|---|
| `--port` | `ARCAN_PORT` | `3000` | HTTP server port |
| `--data-dir` | `ARCAN_DATA_DIR` | `~/.arcan/data` | Persistent storage directory |
| `--provider` | `ARCAN_PROVIDER` | `anthropic` | Default LLM provider |
| `--model` | `ARCAN_MODEL` | `claude-sonnet-4-20250514` | Default model |
| `--lago-data-dir` | -- | embedded | Lago journal data directory |
**Rust 2024 Edition note:** The codebase uses `edition = "2024"` with `rust-version = "1.85"`. The keyword `gen` is reserved -- do not use it as an identifier. `std::env::set_var` and `std::env::remove_var` require `unsafe {}` blocks.
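For example, setting a provider override from code looks like this under edition 2024 (the variable name is just the one from the configuration table above):

```rust
fn main() {
    // Under edition 2024, mutating the process environment is unsafe
    // because it can race with concurrent getenv calls on other threads.
    unsafe {
        std::env::set_var("ARCAN_PROVIDER", "mock");
    }
    assert_eq!(std::env::var("ARCAN_PROVIDER").as_deref(), Ok("mock"));

    unsafe {
        std::env::remove_var("ARCAN_PROVIDER");
    }
    assert!(std::env::var("ARCAN_PROVIDER").is_err());
}
```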