
Spawning

Spawning is how a directive becomes a running grunt. A Colonel writes a spawn request; the system turns it into a live Claude Code session that claims the directive and starts working.

The spawn flow

When a Colonel calls spawn_grunt, Vinculum writes a spawn request entry to the graph and fires a PostgreSQL NOTIFY on the spawn_requests channel. The spawner service — running as a systemd unit on the host — is listening on that channel via a persistent LISTEN connection.
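
LISTEN/NOTIFY here is plain PostgreSQL, so when debugging you can subscribe to the channel yourself from an interactive psql session (connection details depend on your install):

sql
-- inside an interactive psql session against the Vinculum database
LISTEN spawn_requests;

-- psql only reports pending notifications just before printing a new
-- prompt, so issue a no-op to poll for anything that arrived:
SELECT 1;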

The spawner picks up the notification, reads the spawn request to get the directive ID, role, and any context parameters, then activates a vinculum-grunt@<uuid>.service systemd unit. That unit runs a script that opens a tmux pane and launches Claude Code with the grunt system prompt pre-injected.

spawn flow
spawn_grunt(directive_id=42)
  │
  ├─ writes spawn_request entry to graph
  ├─ PG NOTIFY → spawn_requests channel
  │
spawner service (listening via LISTEN)
  ├─ reads spawn request
  ├─ systemctl start vinculum-grunt@<uuid>.service
  │
grunt unit
  ├─ opens tmux pane
  ├─ launches Claude Code with system prompt
  ├─ grunt claims spawn_uuid
  ├─ grunt claims directive
  └─ grunt works
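
On a live host you can watch this pipeline from the systemd side with standard tooling, using the unit names above:

bash
# follow the spawner's log as it reacts to spawn requests
journalctl -u vinculum-spawner.service -f

# list the grunt units currently running
systemctl list-units 'vinculum-grunt@*'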

The grunt boot protocol

Every grunt session runs the same four-step bootstrap before touching the directive:

  1. claim_spawn(spawn_uuid) — registers the tmux pane and role. First-writer-wins; if another session already claimed this UUID, the grunt stops.
  2. declare_focus(branch, session_label, session_color) — shows up on the dashboard so the Colonel can see what's running.
  3. claim_directive(entry_id) — atomic claim. If already claimed by another session, the grunt stops and writes a note.
  4. amend_directive(stage='claimed', payload.plan) — records intent in the directive tail so the Colonel knows what the grunt understood.
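
Written out in the same notation the docs use for tool calls, a bootstrap looks roughly like this (all values illustrative):

text
> use claim_spawn with spawn_uuid="..."
> use declare_focus with branch="fix/login", session_label="grunt-7", session_color="teal"
> use claim_directive with entry_id=42
> use amend_directive with stage="claimed", payload={"plan": "..."}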

After bootstrap the grunt executes. It writes checkpoint amendments as it progresses and finishes with an implementation amendment listing every file touched.

Post-#2338 architecture

Prior to #2338, spawning went through a separate daemon process. That daemon has been removed. The spawner now runs under direct systemd socket activation — simpler, more reliable, and no extra process to babysit.

Claim semantics

Both claim_spawn and claim_directive are first-writer-wins with a database-level lock. There's no race: if two grunt sessions boot at the same time (an edge case, but possible), exactly one will claim the directive; the other gets already_claimed and stops.

Stale claims — directives claimed more than 30 minutes ago with no progress — are auto-reaped. The next grunt to attempt a claim on a stale directive will succeed.
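
For intuition, a conditional UPDATE is one way to get both behaviors at once. This is a sketch against a hypothetical directives table, not Vinculum's actual schema:

sql
-- first-writer-wins: the row lock taken by UPDATE makes the
-- check-and-set atomic, so only one session's UPDATE matches
UPDATE directives
   SET claimed_by = $1, claimed_at = now()
 WHERE id = $2
   AND (claimed_by IS NULL                                -- unclaimed, or
        OR claimed_at < now() - interval '30 minutes');   -- stale, reclaimable
-- zero rows updated => already_claimed: stop and write a note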

Self-hosted vs cloud

Cloud (vinculum.run)

On vinculum.run, the spawner service runs on Vinculum's infrastructure. When you call spawn_grunt, a grunt session spins up in Vinculum's cloud environment with access to your project's MCP tools. You don't need to do anything — it just works.

Self-hosted

On a self-hosted install, you run the spawner service yourself. It's a single systemd unit that you enable after setting up the database:

bash
systemctl enable --now vinculum-spawner.service

The grunt units also need to be in place. The install script sets these up, but you can inspect them at:

bash
ls /etc/systemd/system/vinculum-grunt@.service
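
systemctl cat prints the unit content systemd actually loaded, including any drop-in overrides, which is a quicker sanity check than reading the files off disk:

bash
# show the unit files as systemd sees them
systemctl cat vinculum-spawner.service
systemctl cat vinculum-grunt@.service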

See self-hosting for the full setup guide.

Watching spawns

The dashboard workers row shows all active grunt sessions and their status (booting, working, done, stale). The live feed shows spawn events and grunt bootstrap entries in real time.
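
Since each grunt runs in a tmux pane (see the spawn flow above), the panes themselves are also inspectable on the host. These are standard tmux commands; the session layout depends on your install:

bash
# list every pane across tmux sessions; grunt panes appear here
tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index} #{pane_title}'

# attach to the session hosting a grunt pane
tmux attach -t <session-name>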

You can also check spawn status from Claude:

text
> use list_active_grunts with project="my-app"

# Returns all running grunt sessions with
# their current directive and status.

Next: Directives

Now you know how grunts come alive. Next: what they're actually working on. Read about directives →