Agent lifecycle

The agent daemon persists agents, mailbox messages, and conversation state across engine runs. This page covers every command in the agent lifecycle, the flags that control behavior, execution modes, and the tools available to each role.

Every agent runs in one of four execution modes, set at spawn time via --exec-mode:

| Mode | Behavior | Typical use |
| --- | --- | --- |
| reactive (default) | Waits for mailbox asks/commands and responds turn-by-turn | Standard assistants, interactive Q&A |
| autonomous | Runs the initial prompt plus bounded continuation turns, then exits | Bounded research tasks, batch runs |
| proactive | Runs autonomous turns and stays alive with periodic think cycles and mailbox polling | Long-lived workers, research agents that should remain available for follow-up |
| tick | Stays alive and runs one scheduled interval-driven cycle each tick | Simulations, scheduled workers, bridge-driven BEAM agents |

Tick mode is intentionally cost-constrained:

  • Execution is forced onto lmstudio at runtime to keep costs local and predictable.
  • --think-interval controls the cadence between ticks (in seconds).
  • --timeout 0 means “no outer session deadline” — the agent can run indefinitely.
  • Individual LLM/tool turns remain bounded even when the outer session is infinite.

Tick agents can terminate themselves by calling the end_tick tool, which cleanly ends the long-running tick loop instead of leaving the agent alive and idling.
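The tick loop can be pictured as an interval-driven scheduler that runs until the agent signals end_tick. A minimal Python sketch of that control flow (the run_turn callback and the "end_tick" return value are illustrative stand-ins, not the real runtime API):

```python
import time

def tick_loop(run_turn, think_interval, timeout=0):
    """Run one cycle per tick until run_turn signals end_tick.

    timeout=0 means no outer session deadline, matching --timeout 0;
    each individual turn is still expected to bound its own work.
    """
    deadline = None if timeout == 0 else time.monotonic() + timeout
    ticks = 0
    while deadline is None or time.monotonic() < deadline:
        ticks += 1
        if run_turn(ticks) == "end_tick":  # agent called the end_tick tool
            break
        time.sleep(think_interval)  # cadence set by --think-interval
    return ticks

# An agent that ends itself on its third tick stops the loop cleanly.
ran = tick_loop(lambda n: "end_tick" if n == 3 else "ok",
                think_interval=0, timeout=0)
```

Note how the outer loop can be infinite (`timeout=0`) without the process idling forever: the agent itself decides when the work is done.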

```sh
foxctl agent spawn \
  --role researcher \
  --prompt "Research the storage architecture" \
  --exec-mode proactive \
  --max-auto-turns 3 \
  --max-iterations 20
```
```sh
foxctl agent spawn \
  --role researcher \
  --prompt "Advance the simulation one step per tick" \
  --exec-mode tick \
  --think-interval 5 \
  --timeout 0
```
| Flag | Default | Description |
| --- | --- | --- |
| --role |  | Agent role: overseer, researcher, coder, planner, reviewer |
| --prompt |  | Initial instructions for the agent |
| --exec-mode | reactive | One of reactive, autonomous, proactive, tick |
| --llm-provider | auto-detect | Provider: openrouter, cerebras, groq, openai, anthropic |
| --llm-model | provider default | Model name (e.g., openrouter/aurora-alpha) |
| --max-iterations | 10 | Maximum tool calls per engine run |
| --max-auto-turns | 1 | Maximum autonomous continuation turns |
| --max-context-tokens | 0 | Context budget (0 means no limit) |
| --think-interval |  | Seconds between think cycles (for proactive and tick modes) |
| --timeout |  | Session timeout in seconds (0 = no deadline) |

After spawning, the agent runs via the foreground daemon loop:

```sh
foxctl agent run <agent-id>
```

This reads mailbox messages for the agent’s namespace, executes turns, and writes replies and events back to storage.

```sh
# View agent metadata and current status
foxctl agent info <agent-id>

# Stream live events from the agent
foxctl agent watch <agent-id>
```

agent info returns the agent’s role, execution mode, current state, and session metadata. agent watch provides a live NDJSON event stream including agent starts, job submissions, mailbox messages, and termination.
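Because the watch stream is NDJSON, each line is an independent JSON object, so any stream processor can filter it. A small illustrative Python filter (the event field names used here are assumptions for the example, not the documented schema):

```python
import json

def filter_events(stream, wanted):
    """Yield parsed events whose type is in `wanted` from an NDJSON stream."""
    for line in stream:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        event = json.loads(line)
        if event.get("type") in wanted:
            yield event

# Hypothetical sample of a watch stream: start, mailbox traffic, termination.
sample = [
    '{"type": "agent_start", "agent": "a1"}',
    '{"type": "mailbox_message", "agent": "a1"}',
    '{"type": "terminated", "agent": "a1"}',
]
terminal = list(filter_events(sample, {"terminated"}))
```

The same one-object-per-line property makes the stream easy to pipe through tools like jq.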

```sh
# Send a question and wait for the reply
foxctl agent ask <agent-id> --question "What did you find?" --wait
```

The ask pipeline sends a MessageTypeAsk to the agent’s mailbox. After the engine processes it, a MessageTypeReply is sent back with correlation headers. The CLI polls its own namespace for the matching reply:

CLI (agent ask)
→ mailbox.Send(ask)
→ daemon poll loop receives message
→ agent runtime executes turn(s)
→ mailbox.Send(reply)
→ CLI polls caller namespace and returns reply
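The final step depends on the correlation headers: the CLI keeps polling its own namespace until a reply carrying the ask's correlation ID appears. A simplified sketch of that matching loop (the mailbox is modeled as a plain callable and the message shape is hypothetical; the real CLI also sleeps between polls):

```python
def poll_for_reply(fetch_messages, correlation_id, max_polls=10):
    """Return the reply whose correlation header matches, or None on timeout."""
    for _ in range(max_polls):
        for msg in fetch_messages():  # read the caller's namespace
            if (msg.get("type") == "reply"
                    and msg.get("correlation_id") == correlation_id):
                return msg
    return None

# The caller namespace eventually contains the matching reply.
inbox = [{"type": "reply", "correlation_id": "abc", "body": "done"}]
reply = poll_for_reply(lambda: inbox, "abc")
```

Matching on the correlation ID rather than on arrival order is what lets multiple concurrent asks share one caller namespace safely.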

agent ask supports two dispatcher paths:

  • Mailbox dispatcher (default): Routes through the daemon’s mailbox system.
  • Jido dispatcher: Routes through the v2 AskService with Jido runtime bridge dispatch.

To use the Jido-backed path:

```sh
foxctl agent ask <agent-id> --question "..." --dispatcher jido
```

Be explicit about which path you are exercising — findings from the classic agent run path are not evidence about the Jido bridge unless that dispatcher was actually used.

```sh
foxctl agent resume <session-id> --prompt "Continue from the prior summary"
```

Resume continues a previous session with full context from the earlier conversation history. The agent retains its accumulated ConversationHistory across runs.

```sh
foxctl agent kill <agent-id>
```

Stops the agent’s daemon loop. The agent’s conversation history and mailbox state are preserved in storage unless explicitly cleaned.

```sh
# Show the agent tree for a session
foxctl agent hierarchy <session-id>

# List all agents
foxctl agent list
```

Conversation history is retained on the session state and reused across daemon turns, bounded by context and token limits. The --max-context-tokens flag prevents unbounded growth.
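A context budget like --max-context-tokens is typically enforced by dropping the oldest turns until the remaining history fits. A hedged sketch of that policy (whitespace splitting stands in for real tokenization, which the daemon's actual accounting may not use):

```python
def trim_history(history, max_tokens):
    """Keep the most recent turns whose combined token count fits the budget.

    max_tokens == 0 means unlimited, matching the flag's semantics.
    """
    if max_tokens == 0:
        return list(history)
    kept, total = [], 0
    for turn in reversed(history):  # walk newest to oldest
        tokens = len(turn.split())  # crude stand-in for real token counting
        if total + tokens > max_tokens:
            break  # oldest turns beyond the budget are dropped
        kept.append(turn)
        total += tokens
    return list(reversed(kept))  # restore chronological order

history = ["old turn one", "middle turn two", "latest turn three"]
trimmed = trim_history(history, max_tokens=6)
```

Trimming from the oldest end preserves the most recent context, which is usually what a continuing turn needs.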

Task continuity surfaces provide structured summaries for cross-session context:

```sh
# Structured command for agents and scripts
foxctl context task-history-summary
```

For hook injection, use the wrapper script:

```sh
configs/hooks/task-continuity-summary.sh
```

Each role has a defined set of tools it can access:

| Tool | All roles | researcher | coder | overseer |
| --- | --- | --- | --- | --- |
| fs_read_file |  |  |  |  |
| fs_list_dir |  |  |  |  |
| code_search |  |  |  |  |
| think |  |  |  |  |
| context_search |  |  |  |  |
| smart_search |  |  |  |  |
| context_grep |  |  |  |  |
| repo_index_search |  |  |  |  |
| repo_index_expand |  |  |  |  |
| repo_index_open |  |  |  |  |
| repo_index_dag_grep |  |  |  |  |
| memory_query |  |  |  |  |
| session_recall |  |  |  |  |
| session_timeline |  |  |  |  |
| fs_write_file |  |  |  |  |
| agent_spawn |  |  |  |  |
| agent_list |  |  |  |  |
| agent_status |  |  |  |  |
| agent_kill |  |  |  |  |
| agent_hierarchy |  |  |  |  |
| agent_wait |  |  |  |  |

Provider and model are configured at spawn time via --llm-provider and --llm-model, or through environment configuration. The auto-detection priority (first available key wins) is: openrouter → cerebras → groq → openai.
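First-available-key detection amounts to walking the priority list and returning the first provider whose key is configured. A minimal sketch (the environment variable names are assumptions for illustration; check your deployment's actual configuration):

```python
import os

# Assumed key names, in the documented priority order.
PRIORITY = [
    ("openrouter", "OPENROUTER_API_KEY"),
    ("cerebras", "CEREBRAS_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
]

def detect_provider(env=None):
    """Return the first provider with a configured key, else None."""
    env = os.environ if env is None else env
    for provider, key in PRIORITY:
        if env.get(key):
            return provider
    return None

# With both groq and openai keys set, groq wins on priority.
provider = detect_provider({"GROQ_API_KEY": "x", "OPENAI_API_KEY": "x"})
```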

Default models by provider:

| Provider | Default model | Notes |
| --- | --- | --- |
| openrouter | openrouter/aurora-alpha | Free tier, good tradeoff |
| cerebras | Provider default | Fast inference |
| groq | Provider default | Fast inference |
| openai | Provider default | |
| anthropic | Provider default | |

Keep API keys in environment variables or secure secret mounts — never in inline prompt text.

The researcher role combines semantic search tools with file access for deep codebase investigation. The recommended spawn command:

```sh
foxctl agent spawn --role researcher \
  --prompt "Research <topic>. Read the actual source files and include code snippets." \
  --exec-mode autonomous \
  --llm-provider openrouter --llm-model openrouter/aurora-alpha \
  --max-auto-turns 3 --max-iterations 20
```

The researcher’s built-in strategy follows five phases:

  1. DISCOVER — Use context_search or smart_search to find relevant files
  2. READ — Use fs_read_file to read key source files (at least 3–5)
  3. GREP — Use code_search and context_grep for exact patterns
  4. DEEPEN — Use memory_query for past gotchas, session_recall for prior sessions
  5. GRAPH — Use repo_index_dag_grep for call and reference relationships

| Model | Time | Output | Notes |
| --- | --- | --- | --- |
| openrouter/aurora-alpha | ~20s | ~12,890 chars | Best tradeoff: free, fast, deep |
| minimax/minimax-m2.5 | ~120s | ~8,761 chars | Slower and shallower |
| Claude Code reviewer (Opus) | ~173s | ~24,901 chars | Deepest but ~$15/run |

With --max-auto-turns 3 --max-iterations 20, aurora-alpha produces approximately 52% of Opus depth at zero cost and 10× speed.

Different agent commands exercise different runtime paths. This table shows the current routing:

| Surface | Current path | Notes |
| --- | --- | --- |
| agent run | internal/agent/daemon.Run | Foreground mailbox-driven runtime |
| agent ask | v2 AskService | Dispatcher can be mailbox or Jido-backed |
| agent ask-status | v2 projections + event store | Reads run state and terminal callback data |
| agent spawn | Daemon first, legacy fallback | CLI is not fully hard-cut to v2 |
| agent list | Local agent store path in CLI | Not v2-service-only today |
| agent kill | Mixed local/daemon management | v2 kill service exists but is not the only live path |

When debugging nested tool execution, record which binary path invoked foxctl and which dispatcher/runtime path was active.