AI Agents

SAM supports five AI coding agents. Each runs inside a workspace container and communicates via the Agent Communication Protocol (ACP).

Claude Code

| Property | Value |
| --- | --- |
| Provider | Anthropic |
| API Key | ANTHROPIC_API_KEY |
| OAuth Support | Yes (Claude Max/Pro subscriptions) |
| Get a Key | Anthropic Console |

Claude Code supports dual authentication: API keys (pay-per-use) and OAuth tokens (from Claude Max/Pro subscriptions via claude setup-token). Toggle between them in Settings.
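The Settings toggle effectively chooses between two credential sources. A minimal sketch of that selection logic (the `AgentCredentials` class and its field names are illustrative, not SAM's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentCredentials:
    api_key: Optional[str] = None      # pay-per-use key (ANTHROPIC_API_KEY)
    oauth_token: Optional[str] = None  # subscription token from `claude setup-token`
    prefer_oauth: bool = False         # the Settings toggle

    def resolve(self) -> tuple:
        """Return (auth_kind, secret), honoring the toggle with a fallback."""
        if self.prefer_oauth and self.oauth_token:
            return ("oauth", self.oauth_token)
        if self.api_key:
            return ("api_key", self.api_key)
        if self.oauth_token:
            return ("oauth", self.oauth_token)
        raise ValueError("no Claude Code credential configured")
```

Either credential works on its own; the toggle only matters when both are configured.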

Codex

| Property | Value |
| --- | --- |
| Provider | OpenAI |
| API Key | OPENAI_API_KEY |
| OAuth Support | Yes (via ~/.codex/auth.json) |
| Get a Key | OpenAI Platform |

Gemini CLI

| Property | Value |
| --- | --- |
| Provider | Google |
| API Key | GEMINI_API_KEY |
| Get a Key | Google AI Studio |

Mistral Vibe

| Property | Value |
| --- | --- |
| Provider | Mistral |
| API Key | MISTRAL_API_KEY |
| Get a Key | Mistral Console |

Mistral Vibe is installed via uv (Python package manager) and requires Python 3.12.

OpenCode

| Property | Value |
| --- | --- |
| Provider | OpenCode (SST) |
| API Key | Uses Scaleway credentials (SCW_SECRET_KEY) |
| Get a Key | Scaleway Console |

OpenCode uses Scaleway’s Generative APIs for inference. If you already have a Scaleway cloud provider credential configured, OpenCode can use that — no separate API key required.
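That fallback can be sketched as follows (the function name and the explicit-key parameter are illustrative; only the SCW_SECRET_KEY variable comes from the docs above):

```python
import os

def resolve_opencode_secret(explicit_key=None, env=None):
    """Prefer an explicitly configured key; otherwise fall back to the
    Scaleway cloud-provider credential already present as SCW_SECRET_KEY."""
    env = os.environ if env is None else env
    if explicit_key:
        return explicit_key
    scw = env.get("SCW_SECRET_KEY")
    if scw:
        return scw
    raise KeyError("no OpenCode credential: set SCW_SECRET_KEY or an explicit key")
```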

  1. Go to Settings in the SAM web UI
  2. Open the Agents tab
  3. Add your API key (or OAuth token) for each agent you want to use. Connection and configuration for each agent are grouped together on a single card.

Keys are encrypted at rest using AES-256-GCM.

You can configure credentials for multiple agents simultaneously and switch between them per project.

Each project can set a default agent type that’s used when executing ideas. If no default is set, you’ll need to specify the agent when starting execution.

To set the default:

  1. Open the project settings
  2. Select your preferred agent from the dropdown
  3. Save changes

The agent selection follows this precedence:

  1. Explicit override on execution
  2. Project default agent
  3. Platform default (claude-code)
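The precedence above amounts to a three-way fallback; a minimal sketch (agent identifiers other than claude-code are hypothetical examples):

```python
def resolve_agent(explicit=None, project_default=None,
                  platform_default="claude-code"):
    """Apply the selection precedence: explicit override on execution,
    then the project default, then the platform default."""
    return explicit or project_default or platform_default
```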

When running an agent, you can choose between two workspace profiles:

Full profile

  • Builds the complete devcontainer from your project’s .devcontainer configuration
  • Includes all custom build steps, extensions, and dependencies
  • Startup time: 2-3 minutes depending on build complexity

Minimal profile

  • Skips the devcontainer build entirely
  • Uses a minimal base image with core tools pre-installed
  • Starts 30-120 seconds faster than the full profile
  • Best for quick conversations that don’t need custom environments

Agent output streams to your browser in real time via WebSocket. You see code being written, commands being executed, and decisions being made as they happen.

You can fork a conversation from any message to explore an alternative approach:

  1. Hover over a message in the chat
  2. Click the Fork button
  3. SAM generates an AI context summary of the conversation up to that point
  4. A new session starts with the context and awareness of the previous conversation

Fork depth is limited to 10 levels (configurable via ACP_SESSION_MAX_FORK_DEPTH).
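The depth check can be sketched like this (function and variable names are illustrative; only the ACP_SESSION_MAX_FORK_DEPTH variable and its default of 10 come from the text):

```python
import os

# Default limit of 10, overridable via the environment.
MAX_FORK_DEPTH = int(os.environ.get("ACP_SESSION_MAX_FORK_DEPTH", "10"))

def fork_session(parent_depth, max_depth=MAX_FORK_DEPTH):
    """Return the forked session's depth, refusing to exceed the limit."""
    if parent_depth + 1 > max_depth:
        raise RuntimeError(f"fork depth limit of {max_depth} exceeded")
    return parent_depth + 1
```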

Speak your message or follow-up prompts using the microphone button. SAM transcribes audio using Whisper (via Workers AI) and submits the text.

Agent responses can be played back as audio using Deepgram Aura 2 (via Workers AI). TTS audio is cached in R2 for subsequent playback.

Each agent session follows this state machine:

pending → assigned → running → completed/failed/interrupted
  • Pending: Session created, waiting for workspace assignment
  • Assigned: Workspace ready, agent starting up
  • Running: Agent actively executing
  • Completed: Agent finished successfully
  • Failed: Agent encountered an error
  • Interrupted: VM heartbeat lost (detected after 5 minutes of silence)
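The state machine above can be encoded as a transition table; a minimal sketch (not SAM's actual implementation):

```python
# Allowed transitions for an agent session; the last three states are terminal.
TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"running"},
    "running": {"completed", "failed", "interrupted"},
    "completed": set(),
    "failed": set(),
    "interrupted": set(),
}

def advance(state, new_state):
    """Move a session to new_state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```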

Running agents have access to project-aware MCP tools:

| Tool | Description |
| --- | --- |
| dispatch_task | Spawn a follow-up idea for execution |
| create_idea | Create a new idea |
| update_idea | Update an idea’s title, content, priority, or status |
| list_ideas | View project ideas |
| get_idea | Read idea details |
| search_ideas | Search ideas by keyword |
| link_idea | Link an idea to a chat session |
| unlink_idea | Remove an idea-session link |
| find_related_ideas | Find ideas related to a session |
| list_linked_ideas | List ideas linked to a session |
| list_sessions | View chat sessions |
| get_session_messages | Read conversation history (consecutive streaming tokens are concatenated into logical messages) |
| search_messages | Search messages by keyword — uses FTS5 full-text search for completed sessions; keyword matching for active sessions |
| update_task_status | Report progress |
| complete_task | Mark current work as done |
| request_human_input | Ask for user decision (blocks until answered) |
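The FTS5 behavior behind search_messages can be illustrated with SQLite's FTS5 extension (the schema and sample data here are invented for the example; SAM's actual storage may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes message text for full-text MATCH queries.
conn.execute("CREATE VIRTUAL TABLE messages USING fts5(session_id, content)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?)",
    [("s1", "refactor the websocket streaming layer"),
     ("s1", "add retry logic to the heartbeat check"),
     ("s2", "websocket reconnect after interruption")],
)
# MATCH performs tokenized full-text search rather than substring matching.
rows = conn.execute(
    "SELECT session_id, content FROM messages WHERE messages MATCH ?",
    ("websocket",),
).fetchall()
```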