Orchestrator setup in Claude Code
I've been using Claude Code for a few months now and at some point I got tired of giving it the same context over and over. Every time I'd ask it to build something, I'd have to remind it how the project is set up, what patterns we use, where the tests go. So I started messing with the agent system and ended up building something that actually works pretty well.
The setup
The idea is simple. Instead of one AI doing everything, you have an orchestrator that reads your request and delegates it to the right specialist. I have three agents right now:
- Developer handles all the actual code. Feature implementation, bug fixes, refactoring. It has full read/write access to source files but it does not write tests. That's someone else's job.
- Tester writes and runs tests. Unit tests, integration, E2E. It only touches test files. If it finds a bug in the source code, it reports back instead of fixing it directly. There's an auto-fix loop where it can ask the developer agent to fix things, capped at 2 rounds so it doesn't go in circles.
- UX Designer reviews UI implementations for accessibility and usability. This one is read-only. It can't edit any code. It just produces a report with findings prioritized by severity, and then the orchestrator decides what to do with it.
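Each agent is just a markdown file with YAML frontmatter in .claude/agents/. For a concrete picture, here's a trimmed sketch of what the tester agent could look like — the prose and exact tool list here are illustrative, not my full config:

```markdown
---
name: tester
description: Writes and runs tests. Use after the developer agent finishes a feature.
tools: Read, Write, Edit, Bash, Grep, Glob
---

You are the testing specialist. Read your memory file at
.claude/memory/tester.md before starting any task.

Rules:
- Only create or edit test files (tests/, *.test.*, *.spec.*).
- If a test fails because of a source bug, report the bug to the
  developer agent instead of fixing the source yourself.
- At most 2 fix rounds with the developer, then escalate to the
  orchestrator with a summary of what's still broken.
```

The frontmatter `tools` line is what enforces the boundaries: the UX designer's file would list only Read, Grep, and Glob, which is how it stays read-only.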
The orchestrator itself is defined in the project's CLAUDE.md. It has routing rules for when to use which agent, when to run them in parallel vs sequentially, and how to handle the handoffs between them.
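The routing section is plain markdown too. A condensed sketch of the kind of rules I mean (wording illustrative, my real version has more cases):

```markdown
## Agent routing

| Request looks like...         | Route to    | Mode                  |
| ----------------------------- | ----------- | --------------------- |
| implement / fix / refactor    | developer   | sequential, first     |
| write or repair tests         | tester      | sequential, after dev |
| review UI / accessibility     | ux-designer | read-only, last       |

- Always run developer before tester; the tester needs the changed-file list.
- Run ux-designer only when the change touches UI.
- Tester <-> developer auto-fix loop: max 2 rounds, then escalate to me.
```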
How a feature gets built
Say I type /implement-feature user-auth. Here's what actually happens.
The orchestrator reads the plan file at features/user-auth/plan.md. This plan was created earlier using /plan-feature, which scans the existing codebase and produces a structured spec with requirements, technical design, implementation steps, and testing strategy. Pretty standard stuff, but it references actual files in the project so the agents have real context.
Then it kicks off the sequential workflow. Developer agent gets the plan and builds it. When it's done, the tester agent gets the list of files that were created or changed and writes tests against the spec. If tests fail because of a bug, the tester delegates back to the developer with a description of what broke. They go back and forth up to twice. After that it escalates back to the orchestrator.
If there's UI involved, the UX designer agent runs last. It goes through its checklist (ARIA labels, keyboard nav, color contrast, responsive layouts, loading states, all of that) and produces a report. Critical issues get sent back to the developer for fixing.
The whole thing runs as a pipeline. I just kick it off and check back later.
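The /implement-feature entry point is itself just another markdown file. A rough sketch of its shape, assuming the skill format with a SKILL.md and $ARGUMENTS standing in for the feature name (steps paraphrased):

```markdown
---
name: implement-feature
description: Run the developer -> tester -> ux-designer pipeline for a planned feature.
---

1. Read features/$ARGUMENTS/plan.md. If missing, stop and suggest /plan-feature.
2. Delegate implementation to the developer agent with the plan as context.
3. Pass the list of created/changed files to the tester agent.
4. If tests fail on a source bug: developer fixes, tester re-runs (max 2 rounds).
5. If UI files changed: run the ux-designer agent and triage its report.
6. Summarize what was built, test status, and any open findings.
```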
The memory thing
This is the part that actually surprised me. Each agent has a memory file that persists between sessions: .claude/memory/developer.md, .claude/memory/tester.md, .claude/memory/ux-designer.md. The agents read their memory at the start of every task and append new learnings when they discover something noteworthy.
So the developer agent has sections for project conventions, patterns that worked well, and mistakes it made. The tester tracks testing setups, mocking strategies, flaky test fixes. The UX designer remembers design conventions and recurring accessibility issues.
Memory files are append-only. Agents never delete previous entries. Over time they accumulate context about the specific project. Like a junior dev who actually takes notes and reads them before starting work. The first time the developer agent figures out that the project uses a specific import pattern or that a certain approach caused a build error, it writes that down. Next time it won't make the same mistake.
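Concretely, a developer memory file that has picked up a few things might look like this — the entries here are invented for illustration:

```markdown
# Developer memory

## Project conventions
- Use the @/ path alias for imports, never relative ../../ paths.
- Server actions live in src/app/actions/, one file per domain.

## Patterns that worked
- Matching the existing form pattern (schema validation + action state)
  avoided a review round.

## Mistakes to avoid
- Importing a server-only module into a client component broke the build.
  Check for the "use client" directive before adding imports.
```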
Right now the memory files in my orchestrator project are mostly empty because it's a fresh setup. But I've seen them fill up on other projects and the difference is noticeable. Fewer repeated mistakes, less context I need to provide manually.
Bootstrapping a new project
One thing that annoyed me early on was that every new project needed the same boilerplate context. The orchestrator needs to know what framework you're using, where your routes live, what your test setup looks like. Typing all of that out manually felt wrong.
So I made an /init-orchestrator skill. You run it in a new project and it crawls through your package.json, tsconfig, Next.js config, database setup, CI files, whatever it can find. Then it writes a ## Project Information section into your CLAUDE.md with the detected tech stack, project structure, key scripts, environment variables, conventions.
It checks for Drizzle or Prisma, looks for Tailwind configs, finds your test runner, detects your package manager. If something doesn't exist it just skips that section. The output is pretty concise, just the facts. But it means the orchestrator and all the agents immediately know what they're working with without me having to write any of it.
If you run it again later it replaces the old section with fresh data. So if you add Playwright to a project that only had Vitest before, just re-run it and the config updates.
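The generated section stays short. For a typical Next.js project it might come out something like this — the values obviously depend on what the scan finds:

```markdown
## Project Information

- Stack: Next.js (App Router), TypeScript, Tailwind CSS
- Database: Postgres via Drizzle ORM (schema in src/db/schema.ts)
- Package manager: pnpm
- Tests: Vitest (unit), Playwright (E2E)
- Key scripts: pnpm dev, pnpm test, pnpm db:migrate
- Conventions: path alias @/* -> src/*, strict TypeScript
```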
What I'd add next
Security would be the obvious one. A security auditor agent that reviews code for vulnerabilities, checks for leaked secrets, audits auth logic. Same pattern as the UX designer, read-only, produces a report. You'd slot it into the pipeline right after the developer finishes, before or alongside the tester.
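Since it's the same read-only pattern as the UX designer, the agent file would be short. Something like this — hypothetical, I haven't built it yet:

```markdown
---
name: security-auditor
description: Read-only security review. Use after the developer agent finishes.
tools: Read, Grep, Glob
---

Review the changed files for vulnerabilities: injection, authz gaps,
leaked secrets, unsafe deserialization. Never edit code. Produce a
report with findings ordered by severity; the orchestrator triages.
```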
Is it worth it
For small scripts and one-off fixes, no. Just talk to Claude directly. But if you have a project with established patterns that you're actively building features on, the orchestrator setup saves a lot of repetitive prompting. The agents learn your project's conventions and you stop having to explain the same things.
The whole config is just markdown files in a .claude folder. Agents are defined in .claude/agents/, memory in .claude/memory/, skills in .claude/skills/. No external tooling, no framework. Just structured prompts that Claude Code knows how to interpret.
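The full layout, roughly (assuming the feature commands are packaged as skills too):

```
.claude/
  agents/
    developer.md
    tester.md
    ux-designer.md
  memory/
    developer.md
    tester.md
    ux-designer.md
  skills/
    init-orchestrator/
      SKILL.md
    plan-feature/
      SKILL.md
    implement-feature/
      SKILL.md
CLAUDE.md
```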