Substrate, not scaffolding.
Vinculum is substrate memory for AI — the typed graph that parallel LLM sessions share, so they coordinate without you in the middle.
The graph is the memory. Sessions are ephemeral.
Vinculum is a typed graph database with an MCP-compatible tool surface. Claude sessions write structured entries — decisions, specs, checkpoints, questions, implementations — into a shared working memory. Other sessions read those entries. The real-time dashboard shows you what's happening across all active sessions.
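As a sketch of the idea — the entry kinds below are the ones named above, but the field names and shape are illustrative, not Vinculum's actual schema — a typed entry in the shared graph might look like:

```typescript
// Illustrative only: Vinculum's real schema may differ.
type EntryKind =
  | "decision"
  | "spec"
  | "checkpoint"
  | "question"
  | "implementation";

interface Entry {
  id: string;
  kind: EntryKind;
  session: string;   // which session wrote it
  body: string;      // the structured content
  links: string[];   // ids of related entries in the graph
  createdAt: string; // ISO-8601 timestamp
}

// A decision one session records for other sessions to find later.
const decision: Entry = {
  id: "dec-001",
  kind: "decision",
  session: "session-a",
  body: "Use Postgres 17 for the graph store.",
  links: [],
  createdAt: new Date().toISOString(),
};

console.log(decision.kind); // "decision"
```

Because the entries are typed, a reading session can filter for exactly what it needs — open questions, recent checkpoints — instead of replaying a transcript.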
The design philosophy is substrate over scaffolding. A scaffolding tool wraps Claude in predefined task templates and tells it what to do next. Vinculum doesn't do that. It provides a substrate — a persistent, shared, structured medium — that you and your Claude sessions use however your work demands. The intelligence stays with the model. The substrate stays out of the way.
The sessions are ephemeral. The graph is not. That's the architectural claim.
The human was the message bus. That needed to stop.
When you run more than one Claude session at a time, the human becomes the coordination layer. You copy output from one session into another. You maintain a mental model of who decided what. You relay context. This breaks down past two or three sessions, and it's coordination work that shouldn't require a human at all.
Vinculum routes that communication through a shared graph instead. Sessions read what other sessions wrote. Decisions land somewhere the next session can find them. The human gets to direct, not relay.
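A minimal in-memory sketch of that flow — a hypothetical stand-in, not Vinculum's API — where one session writes a decision into a shared store and a later session reads it without the human relaying anything:

```typescript
// Hypothetical in-memory stand-in for the shared graph.
class SharedGraph {
  private entries: { kind: string; session: string; body: string }[] = [];

  // A session records a structured entry for others to find.
  write(kind: string, session: string, body: string): void {
    this.entries.push({ kind, session, body });
  }

  // Any session can read what earlier sessions wrote.
  read(kind: string): { session: string; body: string }[] {
    return this.entries.filter((e) => e.kind === kind);
  }
}

const graph = new SharedGraph();

// Session A lands a decision in the shared substrate.
graph.write("decision", "session-a", "API errors use problem+json.");

// Session B, started later, finds it directly.
const decisions = graph.read("decision");
console.log(decisions[0].body);
```

The human's role in this loop is gone: the graph, not the clipboard, carries the decision from one session to the next.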
“Past three sessions, you stop being a strategic director and start being a clipboard.”
Why now.
The problem gets concrete.
Running six Claude sessions in parallel for Boise Gun Club — 181K business listings, 67 Docker containers, built solo for under $5K total infrastructure cost. The coordination overhead between sessions was real: copy-pasting decisions, re-explaining context, watching sessions drift. What was missing was a shared memory between them.
Agent delegation becomes a primitive.
Multi-agent coordination becomes a first-class API primitive. AI sessions can delegate work to other AI sessions. The orchestration substrate exists. The memory substrate doesn't.
The spawn primitive is real.
Process and lifecycle management for long-running agents. Spinning up a worker on a remote host is a documented, supported operation. The spawn primitive is real. The memory those spawns share is still ephemeral.
The memory layer is the gap.
Every competitor answered "how do I run AI sessions in a better wrapper?" Nobody answered "how do those sessions remember anything across the team's work?"
Steve Duskett. Solo. Boise, Idaho.
I've been a full-stack developer for 15+ years. I built Vinculum because I needed it: I was running 6–8 parallel Claude sessions for my own software projects and spending too much time as the coordination layer between them.
The entire stack runs on a single OVH bare-metal machine in France: Postgres 17, Next.js 15, a custom MCP server, and a real-time dashboard. No Kubernetes. No microservices. No serverless cold starts. Fast, predictable, easy to reason about. Built solo. Runs lean.
You can see the kind of work I do at duskett.com and Innervate.Agency. Boise Gun Club is the track record — 181K listings, 67 containers, $5K total infrastructure cost, built solo.
Whalefall Media LLC is the legal entity. Idaho.
AGPL v3. No moat.
The Vinculum codebase is open source under the AGPL v3 license at github.com/whalefall-media/vinculum. Self-host it on your own hardware. The hosted service at vinculum.run runs the same code with managed auth, multi-tenant Postgres, and background intelligence on our infrastructure. The hosted tier exists for people who don't want to run a box — not as a moat around features.
If you find a bug or want to contribute, GitHub Issues and PRs are open.
Still early. Working fast.
Team tier is shipping next. After that: smarter background intelligence (clustering, semantic search, delta-aware compaction), webhook integrations, and a public API for building on top of the substrate.
The roadmap is public. If there's something you need, email me or open a GitHub issue. I read everything.