Same spec. Any LLM.
Bring your keys for Anthropic, OpenAI, Gemini — or use Cloudflare AI free. Switch mid-conversation. The contracts hold.
Write a spec — in plain language, on a canvas, or by remixing a template. SpecDriver runs it: orchestrates models, calls tools, coordinates sub-agents, persists memory. Bring your own LLM keys, or start free on Cloudflare AI.
Three things that make SpecDriver different. Watch them work.
Swap providers mid-conversation. Anthropic, OpenAI, Gemini with your own keys, or Cloudflare AI free. The contracts hold.
Wire specialists together. A coordinator delegates, sub-agents return — the whole graph visible in real time.
Reasoning, tool calls, memory recall, results — all streaming live. Click any step to inspect.
Six rails of agent capability. All in one place. All swappable.
Provider, model, system prompt. Your spec. Swap providers any time without touching the contract.
Upload files or paste text. Inlined into the system prompt at run time — works for every provider.
A built-in web_fetch tool today. The model decides when to call it. Watch it happen in the trace.
Vector buffer with BGE embeddings. Recalls the most relevant past exchanges automatically.
Any agent becomes a callable tool. Build coordinators, gatekeepers, specialist teams.
Mark sensitive actions. The agent asks before acting; approve or deny right in chat.
No config files. No deploy. No CLI. Just sign in and ship.
One click with GitHub. We never see your inference; you bring your own LLM keys (or start free on Cloudflare).
Pick a template — Quick Helper, Web Researcher, Memory Tester, Multi-Agent Orchestra — or write your own from scratch.
Chat with your agent. Switch the provider mid-conversation. The trace panel shows every reasoning step, tool call, and recalled memory.
Make any agent a sub-agent of another. Build orchestras. The Lobby visualizes the full constellation.
Free to start. No credit card. Spawn a templated agent in under 60 seconds.
Start free with GitHub