Monday morning. You open your laptop and find 40 issues filed over the weekend, all unlabeled and unassigned. Three pull requests have been waiting since Thursday. The Friday deploy went out but nobody verified it held. You'll spend the entire day on triage instead of real work.
Now imagine the same Monday. Every weekend issue is already labeled and assigned. The three PRs each have a review summary with inline comments covering security, performance, and style. A Slack message in #releases confirms the Friday deploy is clean. Claude did all of this while your laptop was closed, running on Anthropic's cloud infrastructure.
That's Claude Code Routines. This tutorial walks through all three trigger types (schedule, API, and GitHub) with real copy-paste prompt templates, a complete /fire endpoint example, and the gotchas that didn't make it into the news coverage.
If you want to see what happens when you scale this further, Builder 2.0 runs more than 20 Claude agents in parallel across content and engineering workflows. Routines keep working when your laptop is closed; Builder 2.0 goes further by keeping entire teams of agents running around the clock.
Claude Code Routines are saved configurations (a prompt, repositories, and connectors) that run automatically on Anthropic-managed cloud infrastructure. They activate via a recurring schedule, an HTTP API call, or a GitHub event. Unlike /loop (session-bound) and Desktop scheduled tasks (machine-bound), routines keep running when your laptop is off.
It helps to think of Claude Code's scheduling options as three separate layers:
| Layer | Runtime | Machine required? | Best for |
|---|---|---|---|
| Routines | Anthropic cloud | No | Unattended, recurring, cross-repo work |
| Desktop scheduled tasks | Your machine | Yes | Local filesystem, local tooling |
| Current session (/loop) | Your machine | Yes | Run now, within an active terminal |
A routine is made of three parts: (1) a prompt (the most important piece, since the routine runs without human approval at each step), (2) one or more repositories that Claude clones and works in, and (3) optional connectors (MCP integrations like Slack, Sentry, Linear, or GitHub) that give Claude access to external services.
One Claude Code routine can combine all three trigger types. A PR review routine can run on a schedule, fire via API call, and react to GitHub events, all from the same saved configuration.
Routines are in research preview. Behavior, limits, and the API surface may change before the feature reaches general availability.
Create a routine at claude.ai/code/routines by clicking New Routine, writing a name and prompt, connecting repositories, and selecting a trigger. The prompt is the most critical piece: routines run without approval prompts, so be explicit about what to do, which connectors to use, and what success looks like.
Three creation paths exist:
- Web UI at claude.ai/code/routines — supports all three trigger types; the canonical path
- CLI with /schedule — creates schedule-only routines from within an active Claude Code session; add API or GitHub triggers afterward from the web
- Desktop app (New Task > New Remote Task) — distinct from local Desktop scheduled tasks, which run on your machine
For web UI creation:
- Go to claude.ai/code/routines and click New Routine
- Give it a name (your reference only; Claude doesn't use it during runs)
- Write the prompt
- Connect one or more GitHub repositories
- Select a trigger type or combine multiple
- Remove any connectors the routine doesn't need; all connected MCP connectors are included by default
What separates a working autonomous prompt from a broken one is specificity. Routines run without human approval at each step, so the prompt carries the full cognitive load. Specify what "done" looks like: a Slack message, a draft PR, a labeled issue. Name which specific connectors to use; don't assume Claude knows your Slack workspace or Sentry project. Describe what to do when something unexpected happens.
A bad prompt: "Check for issues." A good one: "Read all GitHub issues opened today in {repo}, apply a label from [bug, feature, docs, question, needs-triage] to each, assign it based on which files it references, and post a summary to #dev-standup with the count and breakdown."
The schedule trigger runs a routine on a recurring cadence: hourly, daily, weekdays-only, weekly, or a custom cron expression with a minimum interval of one hour. Schedules are timezone-aware; enter the time in your local zone and it converts automatically. Runs may start a few minutes after the scheduled target; that stagger is small and consistent per routine.
Choose from four preset cadences: hourly, daily, weekdays (Monday through Friday), or weekly. For anything more precise, like every Tuesday at 9am or the first of each month, use a custom cron expression. Set it from the web UI or via /schedule update in the CLI. The minimum interval is one hour; sub-hourly expressions are rejected.
Design for "sometime overnight," not exact timing. If a routine needs to fire at precisely 23:00:00, the schedule trigger is the wrong tool. If a window works, it's the right one.
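For reference, here are a few custom cron expressions for common cadences, written in standard five-field cron syntax (verify the exact dialect Routines accepts in the docs before relying on these):

```
0 9 * * 2      # every Tuesday at 09:00 (local time)
0 0 1 * *      # first of each month at midnight
0 */2 * * *    # every 2 hours (allowed: interval >= 1 hour)
*/30 * * * *   # every 30 minutes (rejected: below the 1-hour minimum)
```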
Here's a prompt template for nightly backlog grooming. Copy it, replace {repo} with your repository name, and adjust the Slack channel:
```
# Nightly backlog grooming

It's end of day. Read all GitHub issues opened today in {repo}.

For each issue:
- Apply the appropriate label from: bug, feature, docs, question, needs-triage
- Assign it to the relevant owner based on which files or directories it
  references (check CODEOWNERS if one exists)
- If the issue is unclear or missing reproduction steps, leave a comment
  requesting more information — don't label it until the reporter responds

After processing all issues, post a summary to #dev-standup in Slack:
- Total issues processed today
- Breakdown by label
- Any issues flagged as needing human attention

Keep the Slack message concise. Use bullet points. If zero issues were
filed today, post a single line: "No new issues today."
```

With this running on weekdays, the team starts each morning with a labeled, assigned queue. The API trigger works differently: instead of a clock, an HTTP call starts the run.
The API trigger gives each routine a dedicated HTTP endpoint. POST to it with a bearer token and an optional freeform text field to pass runtime context: alert bodies, deploy metadata, or any string you want Claude to work with. The bearer token is shown exactly once when you generate it; store it immediately, since it cannot be retrieved after that.
The endpoint follows this pattern:
```
POST https://api.anthropic.com/v1/claude_code/routines/{trigger_id}/fire
```

Full curl example with all required headers (none of the four are optional):
```bash
curl -X POST https://api.anthropic.com/v1/claude_code/routines/{trigger_id}/fire \
  -H "Authorization: Bearer {your_token}" \
  -H "anthropic-beta: experimental-cc-routine-2026-04-01" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"text": "Production alert: error rate on /api/checkout exceeded 5% threshold. Alert ID: ALR-4821. Environment: prod-us-east-1."}'
```

On success, you get back a session ID and a URL:
```json
{
  "type": "routine_fire",
  "claude_code_session_id": "session_01HJKLMNOPQRSTUVWXYZ",
  "claude_code_session_url": "https://claude.ai/code/session_01..."
}
```

Log that session_url. It links to the live run so you can watch what Claude is doing, review changes, or continue the conversation manually.
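For programmatic callers, the curl invocation can be wrapped in a small Python helper. This is a sketch using only the standard library; the endpoint pattern and headers mirror the curl example above, while build_fire_request and fire_routine are hypothetical names of our own.

```python
import json
import urllib.request

API_BASE = "https://api.anthropic.com/v1/claude_code/routines"

def build_fire_request(trigger_id: str, token: str, text: str) -> urllib.request.Request:
    """Construct the POST request without sending it (handy for testing)."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/{trigger_id}/fire",
        data=body,
        method="POST",
        headers={
            # All four headers are required; the beta header may rotate,
            # so confirm the current dated value in the docs.
            "Authorization": f"Bearer {token}",
            "anthropic-beta": "experimental-cc-routine-2026-04-01",
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        },
    )

def fire_routine(trigger_id: str, token: str, text: str) -> dict:
    """Send the request and return the parsed JSON response."""
    req = build_fire_request(trigger_id, token, text)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Log the claude_code_session_url field from the returned dict so you can find the live run later.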
Three things to know before you wire this into production:
The text field is a literal string. Whatever you put in it reaches Claude as plain text. If you send {"alert_id": "123"} in the text field, Claude reads the JSON notation as a string. Write it as human-readable prose ("Alert ID 123 fired in prod") rather than structured data.
Each token is scoped to one routine. Rotating one token doesn't affect others. Revoke it from the API trigger modal in the routine's edit form.
The beta header may rotate. experimental-cc-routine-2026-04-01 is currently required. The two most recent previous header versions continue to work temporarily; migrate when Anthropic ships a new dated header. Verify the current header in the Claude Code Routines documentation before shipping any integration.
A practical use case: wire your monitoring tool to call /fire when an error rate threshold is crossed, passing the alert body as text. The routine pulls the stack trace, correlates it with recent commits, and opens a draft PR with a proposed fix. Your on-call engineer reviews a PR instead of starting from a blank terminal at 2am.
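As a minimal sketch of that glue, assuming an invented alert payload shape (the field names endpoint, threshold_pct, id, and env are made up for illustration), the structured alert can be flattened into the prose string the text field expects:

```python
# Hypothetical glue code: flatten a structured monitoring alert into the
# prose string the /fire "text" field expects. Field names are invented.
def alert_to_text(alert: dict) -> str:
    return (
        f"Production alert: error rate on {alert['endpoint']} exceeded "
        f"{alert['threshold_pct']}% threshold. "
        f"Alert ID: {alert['id']}. Environment: {alert['env']}."
    )

alert = {
    "endpoint": "/api/checkout",
    "threshold_pct": 5,
    "id": "ALR-4821",
    "env": "prod-us-east-1",
}
# Send as prose, not nested JSON, so Claude reads it as a plain sentence.
print(alert_to_text(alert))
```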
The GitHub trigger flips the activation model: instead of your toolchain calling Claude, GitHub calls it automatically on repository events.
The GitHub trigger fires a new routine session on matching pull request or release events. It requires the Claude GitHub App installed on the target repository, separate from running /web-setup. Filter rules let you scope exactly which events activate the routine. Events beyond the per-routine hourly cap are dropped, not queued.
Supported events: pull_request (opened, closed, assigned, labeled, synchronized, or otherwise updated) and release (created, published, edited, or deleted).
Setup requires two separate steps; many people stop after the first one:
1. Run /web-setup in Claude Code to grant repository access for cloning (already done if you've used Claude Code with this repo)
2. Install the Claude GitHub App on the target repository to enable webhook delivery. Running /web-setup does not install the GitHub App. Both are required. The UI prompts you, but it's easy to stop after step 1 and wonder why triggers aren't firing.
Filtering narrows which events activate the routine. Filter on: Author, Title, Body, Base branch, Head branch, Labels, Is draft, Is merged, From fork. All filter conditions must match for the routine to fire.
The regex operator gotcha: matches regex tests the entire field value, not a substring. To match any PR title containing "hotfix", write .*hotfix.*. Without the surrounding .*, the filter only matches a title that is exactly the word "hotfix" with nothing before or after it. For simple substring matching, use contains instead.
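The distinction maps onto Python's regex functions, as a rough analogy rather than the filter engine's actual implementation: full-field matching behaves like re.fullmatch, while a contains-style filter behaves like re.search.

```python
import re

# Full-field vs. substring regex semantics, illustrated with a PR title.
title = "urgent hotfix for checkout"

# Whole field must match the pattern exactly, so a bare word fails:
assert re.fullmatch(r"hotfix", title) is None

# Surrounding .* permits text before and after, so this matches:
assert re.fullmatch(r".*hotfix.*", title) is not None

# Substring semantics (like a "contains" filter) succeed with the bare word:
assert re.search(r"hotfix", title) is not None
```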
Branch permissions: By default, Claude can only push to claude/-prefixed branches. To push elsewhere, enable "Allow unrestricted branch pushes" in the routine settings. Commits and PRs appear under your personal GitHub identity, not a bot account.
Session model: Each matching GitHub event starts a fresh Claude Code session with no state carryover from previous runs. Write prompts that are self-contained per event.
Events at the cap are dropped. If your repo is high-volume, keep filters narrow. Events that arrive after the per-routine hourly cap is hit are gone until the window resets; they are not retried.
Here's a PR code review prompt template. Adapt the checklist to your team's actual standards:
```
# PR code review checklist

A new pull request has been opened. Review it against our team checklist.

## Security
- Any hardcoded secrets, API keys, or credentials in the diff?
- Any unvalidated user inputs that could enable injection attacks?
- Any new dependencies? If so, check for known CVEs.

## Performance
- N+1 query patterns in ORM calls?
- Missing database indexes for new query patterns in this PR?
- Large synchronous operations that should be async?

## Code style
- Follows our eslint configuration?
- Consistent naming with the rest of the codebase?
- Functions and variables named clearly enough to be self-documenting?

Leave an inline comment on each specific issue found. Give the line
number and the specific fix, not vague observations like "there might
be a performance issue here."

Post a top-level summary comment with a pass/fail for each category.
Human reviewers should focus on design decisions, not mechanical checks.
```

Run this on pull_request.opened and reviewers stop spending attention on SQL injection checks and naming conventions. That frees up code review for the work that actually requires human judgment.
Claude Code Routines are purpose-built for AI-powered dev automation: they run Claude natively on Anthropic's cloud with no YAML required. GitHub Actions wins for language-agnostic CI/CD pipelines. n8n and Zapier win for connecting non-coding tools across hundreds of app integrations. Cron is best for simple local scripts that don't need AI reasoning.
| | Routines | GitHub Actions | cron | n8n / Zapier |
|---|---|---|---|---|
| Runtime | Anthropic cloud | GitHub / self-hosted | Your machine | Cloud service |
| Machine required? | No | No | Yes | No |
| Triggers | Schedule, API, GitHub | GitHub events, schedule | Schedule only | 400+ app triggers |
| Claude native | ✅ | ❌ | ❌ | ❌ (via API add-on) |
| Code review / ops | ✅ | ✅ (needs scripting) | ❌ | ❌ |
| Custom AI prompts | ✅ | ❌ (needs API calls) | ❌ | ✅ (extra cost) |
| YAML required | No | Yes | No | No |
| Best for | AI dev workflows | CI/CD pipelines | Simple recurring scripts | No-code app automation |
Routines and GitHub Actions complement each other. Use Actions for build, test, and deploy pipelines. Use Routines for the AI reasoning work around those pipelines: reviewing what got merged, triaging what failed, verifying what deployed.
n8n and Zapier win when you're connecting 10+ SaaS tools without writing code. Routines win when the job requires Claude to reason about developer artifacts: code diffs, issue descriptions, error logs, stack traces. These are different use cases, and the answer for most teams is both.
Cron still has a place. A 20-line bash script that runs nightly and produces clean output is a cron job. When the job needs judgment, reach for Routines.
Each plan has a daily run cap visible at claude.ai/code/routines and claude.ai/settings/usage. GitHub trigger events beyond the per-routine hourly cap are dropped, not queued, until the window resets. Organizations with metered usage enabled can continue on overage; others are rejected until the daily window resets.
Daily run cap: Every plan has one. Check claude.ai/settings/usage to see your current remaining runs. Anthropic hasn't published official per-plan numbers in the docs; don't build critical workflows around figures circulating on social media until they're confirmed.
GitHub hourly cap: Separate from the daily cap. Events that arrive after the hourly limit is hit are dropped. They're gone until the next window opens. Keep filter rules narrow so only genuinely relevant events consume your budget.
Metered overage: Team and Enterprise plans with extra usage enabled can continue running on overage billing when the daily cap is hit. Individual and non-metered plan users are rejected until the window resets. Enable extra usage from Settings > Billing on claude.ai.
Routine ownership is individual. Routines belong to your personal claude.ai account, not your team or organization. Commits and PRs appear under your personal GitHub identity. There's no team-sharing or co-ownership during the research preview. If teammates need the same routine, each one sets up their own copy.
All of the above applies to the current research preview and may change as the feature matures.
Do Claude Code Routines run when my laptop is off?
Yes, and that's the core differentiator. Routines execute on Anthropic-managed cloud infrastructure, not your local machine. Unlike Desktop scheduled tasks (machine-bound) and /loop (session-bound), routines keep running when your laptop is closed. Set a schedule or a GitHub trigger and close the lid.
What's the difference between Claude Code Routines and /schedule?
/schedule is a CLI shortcut for creating schedule-triggered routines from within a Claude Code session. It creates the same underlying routine object, but only supports the schedule trigger type. To add an API or GitHub trigger, edit the routine at claude.ai/code/routines afterward.
How many times can I run a Claude Code Routine per day?
Each plan has a daily run cap, but Anthropic hasn't published official per-plan numbers in the documentation at time of writing. Check claude.ai/settings/usage to see your current limit and remaining runs. Don't plan around unconfirmed figures.
What happens when a routine hits its event cap?
GitHub trigger events that arrive after the per-routine hourly cap is exceeded are dropped, not queued for the next window. Keep filter rules narrow so only the events that matter consume your hourly budget. Schedule-triggered and API-triggered runs follow the daily cap, not the hourly one.
Can I share Claude Code Routines with teammates?
Not currently. Routines belong to your individual claude.ai account. Pull requests and commits from a routine appear under your personal GitHub identity. There's no team-sharing, transfer, or co-ownership mechanism in the research preview.
Claude Code Routines shift Claude from a tool you invoke to one that works alongside you, running on a schedule, responding to API calls, and reacting to GitHub events on Anthropic-managed infrastructure. The three trigger types handle nearly any recurring dev workflow without CI/CD infrastructure or YAML.
Start with the schedule trigger and the backlog grooming template above. It's the lowest-friction way to see a routine complete a full end-to-end run. After the first nightly run finishes, you'll have enough intuition to write the prompt for your actual use case.
Head to your routines dashboard, click New Routine, and paste one of the templates. The first run teaches you more than any documentation. Check the Claude Code Routines documentation for the full API reference and limit updates as the research preview matures.