Arcan

The agent runtime daemon — cognition, LLM provider calls, tool execution, and streaming.

Arcan is the agent runtime daemon and the primary implementation of the aiOS kernel contract. It handles the core agent loop: receiving user input, managing conversation state, calling LLM providers, executing tools through Praxis, and streaming responses.

The name comes from "arcane" -- the hidden, fundamental mechanism behind visible intelligence.

Architecture

Arcan is structured as a Rust workspace with these crates:

Crate                Role
arcan-core           Core agent loop, session management, message history, context compiler
arcan-harness        Test harness and benchmarking infrastructure
arcan-aios-adapters  Adapters between Arcan internals and aiOS kernel types
arcan-store          Session storage abstraction
arcan-provider       LLM provider abstraction (Anthropic, OpenAI-compatible, Mock)
arcan-tui            Terminal UI for interactive sessions
arcan-lago           Bridge to Lago persistence (event journal, blob store)
arcan-spaces         Bridge to Spaces distributed networking
arcand               HTTP daemon (axum server, SSE streaming)
arcan                CLI binary (session management, log inspection)

The agent loop

The core design principle: the agent's message history IS the application state. Every action produces an immutable event. The agent loop follows this cycle:

Phase 1: Reconstruct

Load the session from the Lago journal and rebuild the conversation state from events. This is a deterministic fold -- given the same event stream, you always get the same state. Each session maps to a stream in the journal, keyed by its session_id.
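
The fold can be pictured as a pure function from an event stream to session state. A minimal sketch, where the event and state shapes are illustrative stand-ins rather than Arcan's actual types:

```rust
// Illustrative event and state types -- not Arcan's real definitions.
#[derive(Clone, Debug)]
enum Event {
    UserMessage(String),
    AssistantMessage(String),
    ToolResult { name: String, output: String },
}

#[derive(Default, Debug, PartialEq)]
struct SessionState {
    messages: Vec<String>,
}

// Deterministic fold: same event stream in, same state out.
fn reconstruct(events: &[Event]) -> SessionState {
    events.iter().fold(SessionState::default(), |mut state, ev| {
        match ev {
            Event::UserMessage(t) => state.messages.push(format!("user: {t}")),
            Event::AssistantMessage(t) => state.messages.push(format!("assistant: {t}")),
            Event::ToolResult { name, output } => {
                state.messages.push(format!("tool {name}: {output}"))
            }
        }
        state
    })
}

fn main() {
    let journal = vec![
        Event::UserMessage("hi".into()),
        Event::AssistantMessage("hello".into()),
    ];
    // Replaying the same journal twice yields identical state.
    assert_eq!(reconstruct(&journal), reconstruct(&journal));
}
```

Because the fold is pure, recovery, debugging, and evaluation all reduce to replaying the journal.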

Phase 2: Regulate

Before making an LLM call, Arcan consults the Autonomic controller:

GET http://localhost:3002/v1/autonomic/gating

The gating profile determines which operations are allowed for this tick. If Autonomic is unreachable, Arcan uses an allow-all default -- regulation never blocks the core loop.
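
The fail-open behavior can be sketched as follows; the GatingProfile fields and the fetch_gating helper are hypothetical stand-ins for the real HTTP call and response type:

```rust
// Sketch of fail-open regulation: if the Autonomic controller cannot be
// reached, fall back to an allow-all profile so the loop never blocks.
#[derive(Debug, Clone, PartialEq)]
struct GatingProfile {
    allow_llm_calls: bool,
    allow_tool_exec: bool,
}

impl GatingProfile {
    fn allow_all() -> Self {
        GatingProfile { allow_llm_calls: true, allow_tool_exec: true }
    }
}

// Stand-in for GET /v1/autonomic/gating; a real implementation would
// perform the HTTP request and deserialize the response body.
fn fetch_gating() -> Result<GatingProfile, std::io::Error> {
    Err(std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "unreachable"))
}

fn gating_for_tick() -> GatingProfile {
    // Any error (timeout, refused connection, bad payload) degrades to allow-all.
    fetch_gating().unwrap_or_else(|_| GatingProfile::allow_all())
}

fn main() {
    let profile = gating_for_tick();
    assert_eq!(profile, GatingProfile::allow_all());
}
```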

Phase 3: Compile context

The context compiler assembles the prompt from typed blocks with per-block budgets:

  • System prompt -- agent personality, constraints, soul profile
  • Memory -- relevant memories retrieved from the Lago knowledge index
  • Conversation history -- previous messages in the session
  • Tool definitions -- available tools from Praxis (filesystem, shell, skills, MCP)
  • Observations -- external signals and sensor data

Each block has a token budget. The compiler deterministically assembles blocks in priority order until the context window is filled.
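
The assembly step can be sketched as a deterministic greedy pass over priority-ordered blocks; the block names mirror the list above, but the budgets and packing rule are illustrative:

```rust
// Sketch of priority-ordered block assembly under a total token budget.
struct Block {
    name: &'static str,
    priority: u32,   // lower value = packed first
    tokens: usize,   // estimated token cost of the block
}

fn compile_context(mut blocks: Vec<Block>, window: usize) -> Vec<&'static str> {
    blocks.sort_by_key(|b| b.priority); // deterministic order
    let mut used = 0;
    let mut out = Vec::new();
    for b in blocks {
        // Skip any block that would overflow the remaining window.
        if used + b.tokens <= window {
            used += b.tokens;
            out.push(b.name);
        }
    }
    out
}

fn main() {
    let blocks = vec![
        Block { name: "system",  priority: 0, tokens: 500 },
        Block { name: "memory",  priority: 2, tokens: 2_000 },
        Block { name: "history", priority: 1, tokens: 3_000 },
        Block { name: "tools",   priority: 3, tokens: 1_500 },
    ];
    // With a 5_000-token window, the memory block no longer fits and is
    // skipped, while the smaller tools block still does.
    assert_eq!(compile_context(blocks, 5_000), vec!["system", "history", "tools"]);
}
```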

Phase 4: Provider call

Send the compiled context to the configured LLM provider. The provider handles token counting, retry logic, and format translation between Arcan's internal message format and the provider's API.

Phase 5: Execute tools

If the model requests tool use, execute tools through Praxis and collect results. Tool execution is governed by two policies:

  • FsPolicy -- workspace boundary enforcement (prevents reads/writes outside the workspace)
  • SandboxPolicy -- allowed commands and resource limits

Tool results are appended to the message history and the loop returns to Phase 4 (provider call) with the updated context.

Phase 6: Stream and persist

Emit response events to the client via SSE, persisting each event to the Lago journal as it is generated. The loop continues until the model produces a final text response without tool calls, or a budget limit (token, time, or cost) is reached.
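
The termination conditions above can be sketched as a bounded loop over model turns; the ModelTurn type, the single-limit Budget, and the scripted stand-in model are all illustrative:

```rust
// Sketch of the loop's termination: stop on a final text response with
// no tool calls, or when a budget limit is hit.
enum ModelTurn {
    Text(String),
    ToolCalls(Vec<String>),
}

struct Budget {
    max_turns: u32, // stand-in for token / time / cost limits
}

// Scripted "model" for illustration: requests one tool, then answers.
fn call_model(turn: u32) -> ModelTurn {
    if turn == 0 {
        ModelTurn::ToolCalls(vec!["read_file".into()])
    } else {
        ModelTurn::Text("done".into())
    }
}

fn run_loop(budget: Budget) -> Option<String> {
    for turn in 0..budget.max_turns {
        match call_model(turn) {
            // Final text response without tool calls ends the loop.
            ModelTurn::Text(answer) => return Some(answer),
            ModelTurn::ToolCalls(tools) => {
                // Execute tools, append results, return to the provider.
                for _tool in tools { /* praxis would execute here */ }
            }
        }
    }
    None // budget exhausted before a final answer
}

fn main() {
    assert_eq!(run_loop(Budget { max_turns: 4 }), Some("done".to_string()));
    assert_eq!(run_loop(Budget { max_turns: 1 }), None);
}
```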

Event-sourced state

All state is derived from events. There is no mutable database -- the event journal is the single source of truth. To recover state, replay the events from the beginning of the session. This gives you:

  • Full auditability -- every decision and action is recorded
  • Replayability -- sessions can be replayed from the journal for debugging or evaluation
  • Branching -- fork a session at any point by replaying events up to that point and continuing differently (Lago supports this, Arcan defaults to "main" branch)
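
Branching by prefix replay can be sketched like this; the one-field Event type is a stand-in for Lago's real event envelope:

```rust
// Sketch of forking a session: take the shared prefix of the journal,
// then append different events on the new branch.
#[derive(Clone, Debug, PartialEq)]
struct Event(String);

fn fork_at(journal: &[Event], index: usize) -> Vec<Event> {
    // The shared prefix becomes the new branch's history.
    journal[..index].to_vec()
}

fn main() {
    let main_branch = vec![Event("a".into()), Event("b".into()), Event("c".into())];
    let mut fork = fork_at(&main_branch, 2);
    fork.push(Event("c'".into())); // diverge after the shared prefix
    assert_eq!(fork, vec![Event("a".into()), Event("b".into()), Event("c'".into())]);
    // The original branch is untouched.
    assert_eq!(main_branch.len(), 3);
}
```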

LLM providers

Arcan abstracts LLM providers behind a trait interface, allowing the same agent loop to work with any model:

Provider           Implementation               Notes
Anthropic          Native Claude API            Full tool use, system prompts, caching, extended thinking
OpenAI-compatible  Any OpenAI-format endpoint   GPT, Gemini, Ollama, vLLM, Together, Groq, etc.
Mock               Deterministic test provider  Scripted responses for testing, no network calls

Provider selection is per-session. The provider trait handles:

  • Token counting for budget tracking
  • Retry logic with exponential backoff
  • Format translation between Arcan's EventKind and the provider's wire format
  • Streaming token delivery

Provider selection examples:
# Use Anthropic (requires API key)
ANTHROPIC_API_KEY=sk-ant-... cargo run -p arcan

# Use OpenAI-compatible endpoint (Ollama example)
ARCAN_PROVIDER=openai OPENAI_API_BASE=http://localhost:11434/v1 cargo run -p arcan

# Use mock provider (for testing)
ARCAN_PROVIDER=mock cargo run -p arcan
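
The shape of such a trait abstraction can be sketched as follows; the trait and method names are illustrative, not arcan-provider's actual API:

```rust
// Sketch of a provider trait that lets one agent loop target any model.
trait Provider {
    fn name(&self) -> &str;
    fn count_tokens(&self, text: &str) -> usize;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// Deterministic mock provider: scripted output, no network calls.
struct MockProvider {
    reply: String,
}

impl Provider for MockProvider {
    fn name(&self) -> &str { "mock" }
    // Crude whitespace-based estimate, standing in for real tokenization.
    fn count_tokens(&self, text: &str) -> usize { text.split_whitespace().count() }
    fn complete(&self, _prompt: &str) -> Result<String, String> { Ok(self.reply.clone()) }
}

// The agent loop only sees the trait, never a concrete provider.
fn run(provider: &dyn Provider, prompt: &str) -> Result<String, String> {
    let _budgeted = provider.count_tokens(prompt); // budget-tracking hook
    provider.complete(prompt)
}

fn main() {
    let mock = MockProvider { reply: "ok".into() };
    assert_eq!(run(&mock, "hello world"), Ok("ok".to_string()));
}
```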

Streaming formats

Arcan supports four SSE output formats, selectable per-request via the format query parameter or Accept header:

Format     Query             Use case
Lago       format=lago       Native event format -- full EventEnvelope with ULID, checksum, metadata
OpenAI     format=openai     Compatible with OpenAI client libraries (choices[0].delta.content)
Anthropic  format=anthropic  Compatible with Anthropic client libraries (content_block_delta)
Vercel     format=vercel     Compatible with AI SDK v6 useChat and streamText (UiPart objects)

The Vercel format is used by the broomva.tech chat application and emits UiPart objects with text-delta, tool-call, tool-result, and finish events.

Tool execution (Praxis)

Arcan delegates tool execution to Praxis, the canonical tool engine. Praxis is consumed by Arcan as the tool backend but has no dependency on Arcan, Lago, or Autonomic -- it depends only on aios-protocol.

Praxis provides:

  • Filesystem tools -- read, write, list files within a sandboxed workspace
  • Hashline editing -- content-hash-addressed line edits using Blake3. Each edit references lines by their content hash, not line number, making edits robust against concurrent modifications
  • Command execution -- run shell commands within a sandbox policy
  • Skill discovery -- find and invoke skills defined by SKILL.md files in the workspace
  • MCP bridge -- PraxisMcpServer exposes tools as an MCP server (stdio or Streamable HTTP). The client bridge connects to external MCP servers via subprocess (using rmcp 0.15)
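
The hashline idea can be sketched as follows; std's DefaultHasher stands in for Blake3 here so the example needs no external crates, and the function names are illustrative:

```rust
// Sketch of content-hash-addressed line editing: an edit targets a line
// by the hash of its content, not its line number. (Praxis uses Blake3;
// DefaultHasher is a dependency-free stand-in.)
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn line_hash(line: &str) -> u64 {
    let mut h = DefaultHasher::new();
    line.hash(&mut h);
    h.finish()
}

// Replace the line whose *content hash* matches, wherever it has moved
// to -- this is what makes edits robust to concurrent modifications.
fn edit_by_hash(text: &str, target: u64, replacement: &str) -> Option<String> {
    let mut lines: Vec<String> = text.lines().map(String::from).collect();
    let idx = lines.iter().position(|l| line_hash(l) == target)?;
    lines[idx] = replacement.to_string();
    Some(lines.join("\n"))
}

fn main() {
    let target = line_hash("fn b() {}");
    // Even after a line is inserted above, the hash still finds the target.
    let moved = "// new comment\nfn a() {}\nfn b() {}";
    let edited = edit_by_hash(moved, target, "fn b() { todo!() }").unwrap();
    assert!(edited.ends_with("fn b() { todo!() }"));
}
```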

Tool permissions are governed by:

  • FsPolicy -- workspace boundary enforcement (cannot read/write outside the designated workspace directory)
  • SandboxPolicy -- allowlisted commands and resource limits
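
A workspace-boundary check in the spirit of FsPolicy might look like the sketch below. This is illustrative only: a real policy must also resolve symlinks (e.g. via canonicalization), and this sketch treats every requested path as workspace-relative.

```rust
// Sketch: a path is allowed only if it stays inside the workspace root
// after normalizing each component.
use std::path::{Component, Path, PathBuf};

fn within_workspace(root: &Path, requested: &Path) -> bool {
    let mut resolved = root.to_path_buf();
    for comp in requested.components() {
        match comp {
            Component::ParentDir => {
                // Refuse to climb above the workspace root.
                if !resolved.pop() || !resolved.starts_with(root) {
                    return false;
                }
            }
            Component::Normal(c) => resolved.push(c),
            _ => {} // `.`, root, and prefix components are ignored here
        }
    }
    resolved.starts_with(root)
}

fn main() {
    let root = PathBuf::from("/workspace");
    assert!(within_workspace(&root, Path::new("src/main.rs")));
    assert!(!within_workspace(&root, Path::new("../etc/passwd")));
    assert!(!within_workspace(&root, Path::new("src/../../secrets")));
}
```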

Running Arcan

As a daemon

cd arcan
cargo run -p arcan
# Listening on http://localhost:3000

CLI usage

# Create a new session
cargo run -p arcan -- session new

# List sessions
cargo run -p arcan -- session list

# View session events
cargo run -p arcan -- log <session-id>

# Concatenate events as text
cargo run -p arcan -- cat <session-id>

# Initialize a new workspace
cargo run -p arcan -- init

# Interactive TUI
cargo run -p arcan-tui

With Lago persistence

By default, Arcan uses the arcan-lago bridge to persist events to a local redb database. The data directory defaults to ~/.arcan/data/:

# Specify a custom data directory
cargo run -p arcan -- --data-dir /path/to/data

With Spaces networking

To connect Arcan to a Spaces instance for multi-agent communication:

cargo run -p arcan -- --spaces-url http://localhost:3000 --spaces-db my-space

Configuration

Arcan is configured through command-line flags and environment variables:

Flag             Env var         Default                   Description
--port           ARCAN_PORT      3000                      HTTP server port
--data-dir       ARCAN_DATA_DIR  ~/.arcan/data             Persistent storage directory
--provider       ARCAN_PROVIDER  anthropic                 Default LLM provider
--model          ARCAN_MODEL     claude-sonnet-4-20250514  Default model
--lago-data-dir  --              embedded                  Lago journal data directory

Rust 2024 Edition note: The codebase uses edition = "2024" with rust-version = "1.85". The keyword gen is reserved -- do not use it as an identifier. std::env::set_var and std::env::remove_var require unsafe {} blocks.
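
For example, code that mutates the environment must wrap the calls in unsafe blocks under edition 2024:

```rust
// In Rust 2024, std::env::set_var / remove_var are unsafe because other
// threads may be reading the environment concurrently.
fn main() {
    // SAFETY: no other threads are running in this example.
    unsafe {
        std::env::set_var("ARCAN_PROVIDER", "mock");
    }
    assert_eq!(std::env::var("ARCAN_PROVIDER").unwrap(), "mock");

    unsafe {
        std::env::remove_var("ARCAN_PROVIDER");
    }
    assert!(std::env::var("ARCAN_PROVIDER").is_err());
}
```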
