The Superpowers plugin hit 107,000 GitHub stars in five months. It has 8,600 forks. It launched in October 2025 and is already one of the fastest-growing developer tools repos on GitHub — faster than most established frameworks, faster than most design systems, faster than almost anything that wasn't backed by a major company's marketing budget.
That doesn't happen by accident. It happens when a tool solves a problem developers actually have, in a way that actually works.
The problem: undisciplined AI coding is slow, not fast. I know that sounds backward. But I learned it the hard way building Ding.
Ding is a project I've believed in for a long time: a stateless alerting daemon with strong use cases in the observability space, where I spent years working. The general idea is appealing and solid. The execution has been a challenge.
Like many developers, I've spent the past year and a half building my pet idea, unbuilding it, and rebuilding it from scratch with every new wave of AI agents. Every time a new model or tool drops, Ding becomes my test. Not because it's a toy project — because it's genuinely hard. It requires real architectural decisions, real concurrency handling, real config design. The kind of thing that exposes whether an agent actually thinks or just types fast.
The agents have been getting better. But I'd never seen anything build Ding as cleanly as I did with Superpowers.
Without structure, Claude races ahead on your first vague description, builds confidently in the wrong direction, and leaves you three hours deep into something that needs to be torn out. The speed is real. So is the waste.
If this problem sounds familiar at the team level: Builder.io is an agentic development platform that runs parallel AI agents with structured review gates built in — cloud containers, no local setup, free to start. Worth a look if you want this kind of discipline across the whole team, not just solo Claude Code sessions.
Superpowers fixes that. It's a Claude Code plugin that enforces a structured workflow before a single line of code gets written. Brainstorm first. Isolate your branch. Write a detailed plan. Then execute. Every step gates the next.
Here's what it's actually like to use — and where it still has rough edges.
Superpowers is a Claude Code plugin built by Jesse Vincent and the team at Prime Radiant. It ships a library of skills — markdown files with instructions, checklists, and process diagrams that Claude reads before taking action. The plugin is the container. The skills are the workflows it provides.
The mandatory sequence is four steps:
- Brainstorm — no code until you have a design document that a human has approved
- Git worktree — isolated branch so main is never touched
- Write a plan — a document that breaks the work into 2–5 minute tasks with exact file paths, exact commands, and complete code
- Execute — subagents implement each task; two-stage review after each one
Think of it like hiring a contractor. Most AI coding tools are like handing them a vague brief and a set of keys. Superpowers is the contract review, the blueprint sign-off, and the punch list — before anyone touches a wall.
Before you install it, you're probably thinking: this sounds slow. It isn't. Let me show you why.
This is the most important thing the plugin does, and the easiest to dismiss as overhead.
Before the Ding v1 implementation touched a single Go file, the brainstorm session produced a 424-line specification document. Not a rough outline — a detailed design that resolved every significant architectural decision before implementation began.
Three of those decisions would have been expensive to get wrong mid-build:
Per-label-set cooldowns, not per-rule cooldowns. The spec locked in that cpu_spike firing for host=web-01 doesn't block it from firing for host=web-02. That's the correct behavior for a multi-host alerting tool. It's also not the obvious first implementation. Without the spec, the cooldown tracker would almost certainly have been keyed per-rule, then needed a refactor when the first real multi-host use case appeared.
The stdout built-in notifier. The spec defined stdout as a special notifier name you can reference directly in a rule without declaring it in the notifiers: map. Simple decision, six fewer lines in the minimal config, made in the design doc. Without it, the plan would have scaffolded a full StdoutNotifier declaration from the start.
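Under that rule, a hypothetical minimal config (using the field names from this article's examples, not necessarily Ding's exact schema) can reference `stdout` directly, with no `notifiers:` block at all:

```yaml
# Hypothetical minimal config; "stdout" is a built-in notifier,
# so it needs no entry in the notifiers: map.
rules:
  - name: cpu_spike
    metric: cpu_usage
    condition: value > 95
    alert:
      - notifier: stdout
```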
Hot-reload semantics. The spec defined that in-flight evaluations complete before the engine swap — write lock on swap, read locks during evaluation. Getting this wrong would have produced a data race that's nearly impossible to reproduce and debug. It was decided correctly in the spec and implemented correctly on the first pass.
The brainstorm also produced a clear out-of-scope list: compound conditions (AND, OR), native PagerDuty support, retry logic for failed webhooks. Every time an "it would be nice to add…" impulse came up during execution, the answer was right there in the spec. No discussion, no drift.
The questions feel slow. They are not slow. They're preventing three days of work from going in the wrong direction.
Most plans are vague. "Add validation to the config parser." Fine, but what does that mean, exactly?
The Superpowers plan format is different. The writing-plans skill produces plans with complete code, not pseudocode. For the Ding v1 config validation task, the plan contained the complete failing test:
```go
func TestConfig_UnknownNotifier(t *testing.T) {
	cfg := &Config{...}
	err := cfg.Validate()
	assert.ErrorContains(t, err, `rule "cpu_spike": alert references unknown notifier "nonexistent"`)
}
```

Along with the exact `go test` command, the expected output ("FAIL — Validate not defined"), the complete `Validate()` implementation, the command to verify it passes, and the git commit message.
The v1 plan covered 17 files across the entire project, mapped in a file table at the top that showed exactly what each file was responsible for. The implementation produced 26 commits in sequence, each corresponding to a plan task.
A plan this specific is also reviewable. After the plan was written, a spec-compliance subagent reviewed it and caught that BenchmarkEngineSwap didn't match the spec's EngineReinit terminology — a naming inconsistency that would have produced a confusing benchmark result. That got fixed before execution began.
The lesson: a plan should be detailed enough that someone with no project context could implement it correctly. If you're writing vague instructions, that vagueness will become a bug.
The workflow is excellent for building features. It's not designed for fighting your environment.
Environment debugging is not in the plan. The Ding benchmark suite was planned for a generic Unix environment. Running it on macOS triggered 10 separate fix commits after the initial implementation:
- `date +%s%N` doesn't work on macOS; BSD `date` has no `%N`. Every latency and startup script used nanosecond timestamps. All of them needed a portable `ns_now()` function.
- `eval "$start_cmd" &` followed by `kill $!` doesn't reliably kill the target when bash keeps a subshell wrapper. `$!` was the subshell's PID, not Ding's PID. Ding kept running as an orphan, holding the port, causing the next startup attempt to fail. Fixed by adding `pkill -9 -f "ding serve"` to catch orphaned processes.
- `docker run -d` returns a container ID on stdout, but `$!` is the PID of the docker CLI process. Ten consecutive Prometheus container startups failed because the first container was never stopped.
None of this was plannable in advance. macOS shell behavior isn't something you can reason about from a spec. The debugging was fundamentally iterative: run, watch it fail, form a hypothesis, test it, find a different failure.
When the environment is fighting you, step out of the plan workflow. Fix the environment. Then come back. The plan is for building features, not for firefighting.
Plans inherit spec errors. The plan is generated from the spec. If the spec is wrong, the plan is wrong in the same way.
The benchmark spec was written from reading the Ding documentation and example configs. The configs in the examples looked like this:
```yaml
rules:
  - name: high_cpu
    metric: cpu_usage
    condition: value > 95
    alert:
      type: webhook
      url: http://localhost:9998/
```

But the actual config parser required a separate `notifiers:` map:
```yaml
notifiers:
  bench-wh:
    type: webhook
    url: http://localhost:9998/

rules:
  - name: high_cpu
    metric: cpu_usage
    condition: value > 95
    alert:
      - notifier: bench-wh
```

Every benchmark script was wrong in the same way. When the tests ran, all 100 latency samples timed out because Ding rejected the configs on startup. The spec compliance reviewer caught this — but it caught it because someone noticed the tests were failing, not because it caught the spec error before it propagated.
The fix: before finalizing a spec that interacts with existing code, run a quick sanity check against the actual code. Read the validation functions, not just the example files. Specs written from documentation miss edge cases the code implements.
The spec is the source of truth for everything that follows. Every "is this right?" question gets resolved by reading the spec, not by guessing. When scope creep came up during Ding v1 — "it would be natural to add AND conditions here" — the answer was immediate: compound conditions are listed under "Out of Scope (v1)." No discussion. No drift.
The plan needs the test before the code. The writing-plans skill is built around TDD: write the failing test, verify it fails, write the minimal implementation, verify it passes, commit. That order matters. When you read the plan the skill generates, check that every task has a test step before the implementation step. If it doesn't, the plan is incomplete.
Check off tasks as you go. The plan document is your recovery mechanism if a session dies. The plan has checkboxes for each step — marking them isn't administrative work, it's the state log. In the Ding work, the git commit log served this function because each completed task was committed before the session ended. But in a session where tasks are partially complete when the session dies, unchecked checkboxes in the plan are the only way the next session knows where to resume.
Superpowers is a discipline enforcement plugin, not a feature delivery system. It doesn't make Claude faster at writing individual lines of code. It makes the overall process faster by preventing expensive mistakes and making the work resumable, reviewable, and correctable.
The three hard gates:
You cannot build what you haven't designed. The brainstorm gate is hard. Skip it and you'll implement the wrong thing. In the Ding project, the brainstorm decided per-label-set cooldowns, the hot-reload mutex semantics, and the stdout built-in notifier before implementation began. Getting those wrong mid-implementation would have required significant rework.
You cannot execute what you haven't planned. The plan gate is hard. Skip it and subagents drift. The Ding v1 plan specified 17 files and 26 tasks. The benchmark plan specified 13 files and 22 tasks. Subagents followed those plans without ambiguity.
You cannot ship what you haven't reviewed. The review gate is hard. Skip it and bugs slip through. The spec compliance review caught the config format error in the benchmark scripts before it became a multi-hour debugging mystery.
The system is honest about what it doesn't solve: environment debugging, session recovery from catastrophic interruptions, and spec errors that passed review because the spec itself was wrong about the code's actual behavior. For those, you step outside the workflow, fix the problem, and return. The workflow is a guide, not a straitjacket.
```
/plugin install superpowers@claude-plugins-official
```

Start a new Claude Code session. Describe something you want to build. Don't try to trigger the skills manually — they activate automatically. If you want to verify installation, say "I want to build a new feature for this project" and watch the brainstorming skill activate before Claude asks a clarifying question.
Find the Ding project discussed in this article at github.com/zuchka/ding. The spec documents (docs/superpowers/specs/), implementation plans (docs/superpowers/plans/), and benchmark results (benchmarks/results/latest.json) are all in the repository.
- Source: github.com/obra/superpowers (MIT)
- Discord: discord.gg/Jd8Vphy9jq
- Built by: Jesse Vincent and the team at Prime Radiant
At Builder.io, we've been thinking a lot about similar structured workflows inside Fusion — the idea that AI-assisted development works best when the process enforces discipline, not just capability. If you're experimenting with Superpowers or structured Claude Code workflows, I'd love to hear how it's going.
What is the Superpowers plugin for Claude Code?
Superpowers is a Claude Code plugin built by Jesse Vincent and the team at Prime Radiant. It enforces a structured four-step workflow — brainstorm, isolate a git worktree, write a detailed plan, then execute — before any code is written. It ships as a library of "skills": markdown files with instructions, checklists, and process diagrams that Claude reads before taking action.
How do I install the Superpowers plugin?
Run this command in a Claude Code session:
```
/plugin install superpowers@claude-plugins-official
```

Once installed, the skills activate automatically. You don't trigger them manually — just describe what you want to build and the brainstorming skill will activate on its own.
What are the four steps in the Superpowers workflow?
- Brainstorm — Claude produces a design document you approve before any code is written
- Git worktree — work happens on an isolated branch so main is never touched
- Write a plan — a detailed document with exact file paths, commands, and complete code broken into 2–5 minute tasks
- Execute — subagents implement each task with a two-stage review after each one
Does the structured workflow slow things down?
No — it makes the overall process faster by preventing expensive mistakes. Skipping the brainstorm means implementing the wrong thing. Skipping the plan means subagents drift. The upfront questions feel slow; the hours they save mid-implementation are not visible. Every gate exists because the cost of getting it wrong downstream is much higher.
What is a "skill" in the Superpowers plugin?
A skill is a markdown file with instructions, checklists, and process diagrams. The plugin is the container; the skills are the workflows it provides. When you trigger an action like brainstorming or plan writing, Claude reads the relevant skill file before proceeding. The skill library is what separates Superpowers from just prompting Claude directly.
Can Superpowers handle environment debugging and unexpected failures?
Not by design. The workflow is built for feature development, not firefighting. When the environment is actively breaking things — wrong shell behavior, platform-specific quirks, infrastructure mismatches — step outside the plan workflow, fix the environment, then return. The plan is a guide, not a straitjacket.
What happens if the spec has an error?
Plans inherit spec errors. If the spec is wrong, the plan will be wrong in the same direction. The mitigation is to run a quick sanity check against the actual code before finalizing a spec — read the validation functions, not just the example files. Specs written from documentation alone can miss edge cases the code actually enforces.
Is the Superpowers plugin free and open source?
Yes. The plugin is available at github.com/obra/superpowers under the MIT license. There's also a community Discord at discord.gg/Jd8Vphy9jq.
Builder.io visually edits code, uses your design system, and sends pull requests.