← Back to Konde Blog
BEST PRACTICES

Agents 101 — building your first multi-agent workflow

A practical guide to chaining agents in Konde Studio. The four anti-patterns that make multi-agent workflows fall over, the three patterns that make them work, and the worked example we use internally for shipping product changes.

Multi-agent workflows have a credibility problem. Half the demos you see on X are theatre — three agents passing messages back and forth, accomplishing the same thing one agent could have done in half the tokens. The other half are genuinely useful but built so specifically for the demo that the moment you change a single requirement, the whole thing collapses.

We have been running real multi-agent workflows in Konde Studio for the past six months — for product development, for support triage, for content production. Here is what worked, what did not, and the patterns we extracted.

The four anti-patterns

If you remember nothing else from this post, remember these four. They are how multi-agent workflows fail in practice.

Anti-pattern 1 — too many agents

The most common failure mode is "one agent per role." Researcher → Planner → Writer → Editor → Publisher. Five hops, five context handoffs, five chances for one agent to misread the previous one's output. By the time the fifth agent runs, the original intent has drifted by 30%.

The fix: Use the smallest number of agents that gives you genuine specialisation. In our internal product workflow, we use three. Researcher (reads code, summarises). Implementer (writes code, runs tests). Reviewer (reads diff, flags issues). One agent could do all three, but the role separation forces a sharper review pass.

Anti-pattern 2 — async without checkpoints

The second most common failure: kick off an agent, walk away, come back to find it has been confidently wrong for an hour. Multi-agent workflows compound this — agent two builds on agent one's confidently-wrong output, agent three builds on agent two's, and by the time you discover the error at the end, the whole chain has to be redone.

The fix: Insert human or programmatic checkpoints between agent hops. After Researcher emits its summary, something should validate it before Implementer reads it. The cheapest validator is often a one-line assertion ("does the summary mention the function we asked about?"); the most expensive is a human review. Pick the cheapest one that catches the failure mode you actually see.
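As a sketch of the cheap end of that spectrum, here is what a one-line-style assertion checkpoint can look like. The names (`validate_brief`, the sample briefs) are hypothetical, not a Konde Studio API:

```python
def validate_brief(brief: str, required_terms: list[str]) -> bool:
    """Cheapest validator: does the brief mention what we asked about?"""
    missing = [term for term in required_terms if term not in brief]
    return not missing

# Hypothetical researcher output for a ticket about `parse_config`.
brief = "The bug is in parse_config: it ignores the `env` override."
assert validate_brief(brief, ["parse_config"])            # checkpoint passes
assert not validate_brief("General refactor notes.", ["parse_config"])  # would halt
```

If this check fails often, that is the signal to upgrade to a stricter programmatic validator or a human review at that hop.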

Anti-pattern 3 — sharing too much context

You think more context is better. It is not. When you pass an entire conversation history from agent one to agent two, agent two has to re-read all of it, which (a) burns tokens and (b) gives it a wider attention surface to drift on. Agents are sharper when they read only what they need.

The fix: Define a contract between agent hops. Agent one emits a structured artifact (JSON, Markdown with specific sections, a file diff) that agent two consumes. The contract is the shape of the data, not the conversation. Conversation history stays with the agent that produced it.
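A minimal sketch of such a contract, assuming a hypothetical Researcher → Implementer handoff with three required fields (the field names here are illustrative, not a fixed schema):

```python
import json

# The contract is the shape of the data, not the conversation.
BRIEF_FIELDS = {"what_changes", "what_files", "what_tests"}

def check_contract(artifact: str) -> dict:
    """Fail fast if the upstream agent emitted a malformed artifact."""
    data = json.loads(artifact)  # raises on invalid JSON
    missing = BRIEF_FIELDS - data.keys()
    if missing:
        raise ValueError(f"brief missing fields: {sorted(missing)}")
    return data

artifact = json.dumps({
    "what_changes": "handle env override in parse_config",
    "what_files": ["config.py"],
    "what_tests": ["test_config.py::test_env_override"],
})
brief = check_contract(artifact)  # the Implementer reads only this, not the chat log
```

The Implementer never sees the Researcher's conversation history, only the validated artifact.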

Anti-pattern 4 — no termination condition

Agents will happily loop. "Did we accomplish the task? Should I keep trying?" When the spec is fuzzy, the default answer is "yes, keep trying." We had a workflow once where an agent retried a flaky test eighty-seven times before timing out.

The fix: Hard caps. Maximum iterations, maximum spend, maximum wall-clock time. When any cap trips, the workflow halts and a human gets pinged. Cheap insurance.
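A sketch of what those caps can look like in an orchestration loop. Everything here is hypothetical (`CapTripped`, `run_with_caps`, the step signature); the point is that every cap is checked on every iteration and any trip halts the loop:

```python
import time

class CapTripped(Exception):
    """Raised when any hard cap trips; the caller pings a human."""

def run_with_caps(step, max_iters=5, max_seconds=300, max_tokens=100_000):
    spent_tokens, start = 0, time.monotonic()
    for i in range(max_iters):
        if time.monotonic() - start > max_seconds:
            raise CapTripped("wall-clock cap")
        result, tokens = step(i)          # step returns (result_or_None, tokens_used)
        spent_tokens += tokens
        if spent_tokens > max_tokens:
            raise CapTripped("spend cap")
        if result is not None:            # the explicit termination condition
            return result
    raise CapTripped("iteration cap")

# A step that never succeeds trips the iteration cap instead of looping forever.
try:
    run_with_caps(lambda i: (None, 1_000), max_iters=3)
except CapTripped as e:
    print(e)  # iteration cap
```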

The three patterns that work

Pattern 1 — pipeline with typed handoffs

A linear chain of agents, each emitting a structured artifact the next one consumes. Researcher → Implementer → Reviewer. Each handoff is a JSON file with a known schema. If Researcher produces malformed JSON, the workflow fails fast — no propagating garbage.

This is the workhorse pattern. It is what 70% of useful multi-agent workflows actually look like.
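The whole pattern fits in a few lines once the agent calls are stubbed out. This sketch uses placeholder functions in place of real LLM calls; only the fail-fast handoff structure is the point:

```python
import json

def researcher(ticket: str) -> str:
    # Stand-in for an LLM call; emits a JSON artifact with a known schema.
    return json.dumps({"summary": f"root cause for {ticket}", "files": ["api.py"]})

def implementer(artifact: str) -> str:
    brief = json.loads(artifact)                 # fails fast on malformed JSON
    assert {"summary", "files"} <= brief.keys()  # schema check before any work
    return json.dumps({"diff": f"patched {brief['files'][0]}", "tests_pass": True})

def reviewer(artifact: str) -> list[str]:
    result = json.loads(artifact)
    return [] if result["tests_pass"] else ["tests failing"]

# Linear chain: each hop consumes only the previous hop's artifact.
comments = reviewer(implementer(researcher("TICKET-42")))
```

If any hop emits garbage, the next hop's `json.loads` or schema check raises immediately instead of propagating it downstream.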

Pattern 2 — fan-out, judge-merge

Send the same task to three agents in parallel with different prompts or different models. A judge agent reviews all three outputs and either picks the best or merges them into a synthesis. Useful when the task is ambiguous and you want diversity (creative writing, design exploration, ideation). Not useful for deterministic work — three agents will give you three slightly different correct answers, and the judge wastes tokens picking between them.
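A minimal sketch of the fan-out and judge steps, with stand-ins for the real model calls (a real judge would be another LLM call scoring candidates, not the toy heuristic here):

```python
from concurrent.futures import ThreadPoolExecutor

def worker(prompt_style: str, task: str) -> str:
    # Stand-in for one of three parallel agent calls with different prompts.
    return f"[{prompt_style}] answer to: {task}"

def judge(candidates: list[str]) -> str:
    # Toy judge: pick one candidate deterministically (here, the longest).
    return max(candidates, key=len)

task = "name the new feature"
styles = ["terse", "exploratory", "contrarian"]

# Fan out the same task to all three in parallel, then judge-merge.
with ThreadPoolExecutor() as pool:
    candidates = list(pool.map(lambda s: worker(s, task), styles))
best = judge(candidates)
```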

Pattern 3 — recursive subdelegation

A planner agent decomposes a big task into smaller tasks, dispatches each to a worker agent, then synthesises the results. Like map-reduce for agents. Powerful when the task is genuinely decomposable; awkward when it is not, because the planner spends too long pretending to decompose something that is actually one job.
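The map-reduce shape can be sketched like this, with placeholder functions standing in for the planner, worker, and synthesiser agent calls:

```python
def plan(task: str) -> list[str]:
    # Stand-in planner: decompose the task (a real planner is an LLM call).
    return [f"{task}/part-{i}" for i in range(3)]

def work(subtask: str) -> str:
    # Stand-in worker: each subtask is dispatched independently.
    return f"done:{subtask}"

def synthesise(results: list[str]) -> str:
    # Stand-in reducer: merge worker outputs into one report.
    return "; ".join(results)

# Map-reduce for agents: plan -> dispatch -> merge.
report = synthesise([work(s) for s in plan("migrate-billing")])
```

The awkward case is when `plan` returns a single subtask: you have paid for a planner hop that added nothing, which is the "pretending to decompose" failure described above.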

A worked example

Here is the workflow we use internally for shipping a product change.

Trigger: a Linear ticket lands tagged agent-eligible.

Step 1 — Researcher (Claude Haiku). Reads the ticket, the relevant code paths, the recent commit history. Emits a Markdown brief with three sections: what changes, what files, what tests. Cap: 30k tokens, 3 minutes.

Checkpoint: A human (rotating engineer-on-duty) eyeballs the brief. Approve or reject. Median review time: 90 seconds. About 15% of briefs get rejected and the workflow halts.

Step 2 — Implementer (Claude Sonnet). Reads the brief, opens the affected files, writes the change, runs the test suite. Emits a Git diff. Cap: 100k tokens, 15 minutes.

Checkpoint (programmatic): Tests must pass. If not, retry once with the test failures appended to the brief. If the second pass also fails, halt and ping a human.
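The retry-once-then-halt logic at this checkpoint can be sketched as follows. `implement` is a hypothetical stand-in for the Implementer call (here it succeeds only once failures have been appended to the brief, to exercise the retry path):

```python
def implement(brief: str) -> tuple[str, list[str]]:
    # Stand-in for the Implementer agent: returns (diff, failing_tests).
    succeeded = "FAILURES:" in brief
    return ("diff", [] if succeeded else ["test_env_override"])

def step_with_retry(brief: str) -> str:
    """Tests must pass; retry once with failures appended, then halt."""
    diff, failures = implement(brief)
    if not failures:
        return diff
    diff, failures = implement(brief + "\nFAILURES: " + ", ".join(failures))
    if not failures:
        return diff
    raise RuntimeError("second pass failed; ping a human")
```

The key property is that the second pass sees the concrete test failures, not just the original brief, and there is never a third pass.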

Step 3 — Reviewer (Claude Opus). Reads the diff, reads the brief, flags issues. Emits a comment list. Cap: 50k tokens, 5 minutes.

Checkpoint (human): PR opens with the diff and the reviewer comments. A human merges or requests changes.

The whole chain typically runs in 8-12 minutes wall-clock. Roughly 40% of agent-eligible tickets ship without a single line of human-written code. The 60% that need human edits still ship faster than they would have without the agent pass.

What you should try first

If you are starting fresh, build a two-agent pipeline (Pattern 1) for one specific task you do repeatedly. Not three agents, not branching, not recursion. Just two agents, one handoff, structured contract. Get that working end-to-end before adding complexity.

The temptation to skip ahead to a fancy seven-agent symphony is strong. Resist it. Two agents that ship reliably beat seven that almost work.

Where to learn more

The Konde Studio agents panel lets you wire up these workflows visually — drag an agent, draw an arrow, define the handoff contract. Public docs at docs.konde.io/konde-studio/en/docs/agents. The examples library lives in your Konde Studio install at Examples → Multi-agent.

If you build something interesting, post it on X with #kondebuilds. We highlight the best ones in our weekly digest.