Exploring Multi-Agent Systems with OpenCode: A Practical Journey
Introduction
The landscape of software development is evolving. AI assistants that once operated as single-purpose tools are beginning to work together as collaborative systems—each handling specific tasks while coordinating toward larger goals. This shift toward multi-agent architectures opens up new possibilities for how we approach development work.
In this article, I want to share my experience exploring OpenCode's multi-agent system. Rather than presenting a "solution," this is a practical journey of experimentation and observation. I'll discuss what I've learned about designing agent configurations, how I approach workflow orchestration, and share some real examples from my daily development practice. The goal is to offer insights that might be useful to others exploring similar systems.
The Multi-Agent Landscape
Before diving into specific configurations, I think it's valuable to understand the broader context. Several industry reports and surveys provide useful perspective on where this technology is heading.
Market Observations
Recent industry analysis suggests significant growth in AI agent adoption. Deloitte's 2025 Tech Predictions report projects substantial growth in the autonomous AI agent market over the coming years, and enterprise adoption is accelerating, with a notable share of organizations already running agentic AI pilots.
Stack Overflow's 2025 Developer Survey offers complementary insights, indicating that a majority of agent users report time savings and productivity improvements. Research papers have also documented efficiency gains when AI coding agents become part of regular workflows.
These trends suggest that multi-agent systems will become increasingly relevant in development practices. For those of us experimenting with these tools now, there's an opportunity to understand their strengths and limitations before they become widespread.
Protocol Developments
One area I've been following with interest is the standardization of communication protocols for AI agents. Just as the Language Server Protocol (LSP) created a common interface between editors and language tools, new protocols are emerging to standardize how agents interact with development environments and external services.
The Agent Client Protocol (ACP), introduced by Zed Editor in 2025, provides a standardized way for editors to connect with different agents. This addresses a practical challenge: when agents are tied to specific tools, switching between environments becomes cumbersome. ACP enables a more modular approach, where the agent and editor can evolve independently.
The Model Context Protocol (MCP), proposed by Anthropic in late 2024, has gained traction as a standard for connecting AI models with external data sources and tools. Its adoption by major platforms like VS Code and Cursor indicates a growing consensus around certain interface patterns.
I've integrated both protocols into my OpenCode configuration. The practical benefit is flexibility—being able to swap components or extend capabilities without major refactoring. This seems like a useful foundation for experimentation.
My OpenCode Configuration: A Technical Overview
This section describes the actual configuration I've been developing. I want to focus on the technical aspects rather than presenting this as an ideal solution. Your needs may differ, and I hope the patterns shown here can inspire your own experiments.
Configuration Structure
All OpenCode configurations are stored in a Nix repository, organized by function:
configs/
├── opencode.json # Main configuration file
├── agent/ # Individual agent definitions
│ ├── ask.md
│ ├── build.md
│ ├── plan.md
│ ├── mentor.md
│ ├── orchestrator.md
│ ├── unittest.md
│ ├── code-review.md
│ ├── debugger.md
│ ├── security.md
│ ├── refactor.md
│ ├── frontend.md
│ ├── backend.md
│ ├── client.md
│ ├── ops.md
│ ├── docs.md
│ ├── explore.md
│ └── migration.md
└── prompts/ # System prompts for core agents
├── ask.txt
├── build.txt
    └── plan.txt

This structure emerged through iteration. Early versions were simpler, and I added agents as specific needs arose. The key insight here is that the configuration is always evolving; there is no "final" state.
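For orientation, here is a minimal sketch of how opencode.json can tie these directories together, using the same {file:...} prompt reference that appears in the extension example later in this article. The specific agent shown and the $schema line are illustrative, not a complete configuration:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "mentor": {
      "mode": "subagent",
      "prompt": "{file:./agent/mentor.md}"
    }
  },
  "mcp": {}
}
```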
Platform Synchronization
I work across NixOS and macOS, which presented a practical challenge: keeping configurations consistent. The sync script handles this automatically:
#!/usr/bin/env bash
# Synchronize OpenCode configuration from Nix repo to system
set -e
# Detect platform and locate Nix repository
if [ -d "/Users/eric/nix" ]; then
  NIX_REPO="/Users/eric/nix"
elif [ -d "/home/eric/nix" ]; then
  NIX_REPO="/home/eric/nix"
else
  echo "Error: Nix repository not found" >&2
  exit 1
fi
CONFIG_SRC_DIR="$NIX_REPO/configs"
CONFIG_DIR="$HOME/.config/opencode"
mkdir -p "$CONFIG_DIR"
cp -r "$CONFIG_SRC_DIR/"* "$CONFIG_DIR/"
echo "OpenCode configuration synchronized"This script runs as part of my regular workflow, ensuring both environments use the same configuration without manual intervention.
The Agent System: Design and Implementation
This section describes the multi-agent architecture I've been exploring. I want to be clear about the design philosophy and practical considerations—not to claim this is optimal, but to share what has worked in my experience.
Core Design Philosophy
The system is built around three principles that emerged from experimentation:
- Specialization: Each agent focuses on a specific domain with clearly defined responsibilities
- Permission boundaries: Different agents have different capability levels, preventing unintended actions
- Delegation pathways: Clear rules for when to use which agent
These principles developed through trial and error. Early versions had agents with overlapping responsibilities, which led to confusion. The current design emerged from observing where collaboration broke down and refining accordingly.
Agent Ecosystem Overview
The system comprises 17 agents in total—3 core agents and 14 specialized sub-agents:
Core Agents
The three core agents serve as entry points for different task types:
ask - Read-only consultant for conceptual questions:
- Permissions: read, grep, ls (no write, edit, or bash)
- Best for: Explaining concepts, reviewing without modification, general guidance
- Behavior: Provides information and explanations, delegates action-oriented tasks
plan - Architectural planning for complex tasks:
- Permissions: read-only (never modifies files)
- Best for: Designing implementation approaches, creating architectural roadmaps
- Behavior: Analyzes requirements, considers trade-offs, produces structured plans
build - Primary implementation agent:
- Permissions: Full access (read, write, edit, bash)
- Best for: Writing new code, fixing bugs, implementing features
- Behavior: Analyzes requirements, generates code, validates results
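To make the permission differences concrete, here is a hedged sketch of how build might be registered in opencode.json, mirroring the registration example from the extension section later in this article. The "primary" mode value and the exact tool flags are my assumptions for illustration, not confirmed settings:

```json
{
  "agent": {
    "build": {
      "description": "Primary implementation agent",
      "mode": "primary",
      "prompt": "{file:./prompts/build.txt}",
      "tools": {
        "read": true,
        "write": true,
        "edit": true,
        "bash": true
      }
    }
  }
}
```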
Specialized Sub-Agents
The 14 specialized agents cover specific domains:
Architecture & Quality (4 agents):
- `mentor`: Provides architectural and performance guidance without writing code
- `code-review`: Performs static analysis, style checks, and maintainability audits
- `refactor`: Restructures code while preserving functionality
- `security`: Scans for vulnerabilities and compliance issues
Development (4 agents):
- `frontend`: Specializes in React, Vue, CSS frameworks, and accessibility
- `backend`: Handles API design, business logic, and database schemas
- `client`: Focuses on SDK development and Kotlin/Kotlin Multiplatform
- `ops`: Manages Docker, CI/CD, Nix, and deployment configurations
Operations & Testing (3 agents):
- `unittest`: Creates and optimizes test suites
- `debugger`: Diagnoses runtime issues and logic bugs
- `migration`: Handles framework upgrades and platform shifts
Discovery & Documentation (2 agents):
- `explore`: Navigates codebases to identify patterns and dependencies
- `docs`: Creates documentation, READMEs, and API specifications
Coordination (1 agent):
- `orchestrator`: Coordinates multiple agents for complex workflows
Permission System
The permission model implements graduated access levels:
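As a sketch of what graduated access can look like, here is a hypothetical read-mostly agent in the same frontmatter format used in the extension section. The article's own examples only show the allow level, so the ask and deny values here are assumptions for illustration:

```yaml
---
description: Read-mostly reviewer (hypothetical example)
mode: subagent
permission:
  edit: deny   # never modifies existing files
  write: deny  # never creates new files
  bash: ask    # shell commands require explicit confirmation
---
```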
This model developed from practical experience. Early versions had binary permissions (can/cannot edit), which led to situations where agents couldn't perform necessary actions. The graduated model provides more flexibility while maintaining safety boundaries.
Delegation and Workflow Patterns
Effective use of the multi-agent system requires understanding when to use which agent and how to coordinate them for complex tasks.
Delegation Framework
| Task Type | Complexity | Recommended Agent |
|---|---|---|
| Concept explanation | Low | ask |
| Simple code changes | Low-Medium | build |
| Domain-specific tasks | Medium-High | Specialized agent |
| Cross-domain projects | High | orchestrator |
This framework is a starting point, not a strict rule. The appropriate agent depends on the specific context and your assessment of the situation.
Delegation Decision Flow

In words: if the task only needs explanation or analysis, stay with `ask` (or `explore` for codebase questions); if it requires changes confined to a single domain, use `build` or the matching specialist; if it spans several domains or needs multiple agents in sequence, hand it to `orchestrator`.
Common Workflow Patterns
Through experimentation, I've identified several useful patterns:
Pattern 1: Feature Implementation Chain
This simple chain works well for straightforward feature implementation: `plan` designs the approach, `build` implements it, `unittest` adds coverage, and `code-review` signs off. Each agent validates the previous agent's output.
Pattern 2: Bug Resolution Flow
For bug fixes, starting with diagnosis helps ensure the actual problem is understood before a fix is implemented: `debugger` isolates the cause, `build` applies the change, and `unittest` guards against regressions.
Pattern 3: Architecture Optimization Flow
Performance optimization benefits from explicit architectural evaluation before implementation: `mentor` assesses the current architecture, `refactor` restructures the code, and `unittest` confirms behavior is preserved.
Pattern 4: Cross-Platform Development
For projects with independent components, parallel execution can significantly reduce development time; the `frontend`, `backend`, and `client` agents can each work on their own layer while `orchestrator` coordinates the results.
MCP Integration
Model Context Protocol (MCP) enables agents to interact with external services. Here's how I've configured this in practice.
MCP Services Overview

Three services are currently configured: Jina for live web search and content extraction, Context7 for library documentation, and GitHub for version-control workflows.
Service Descriptions
Jina MCP Server: Provides real-time web search, content extraction, and academic paper search capabilities. Useful when current information is needed beyond the codebase.
Context7: Offers up-to-date documentation for libraries and frameworks. Helps agents access accurate API references without relying on potentially outdated training data.
GitHub MCP: Integrates with version control workflows, enabling agents to create branches, manage issues, and handle pull requests.
Configuration Example
{
"mcp": {
"jina-mcp-server": {
"url": "https://mcp.jina.ai/v1",
"type": "remote",
"enabled": true
},
"context7": {
"type": "remote",
"url": "https://mcp.context7.com/mcp",
"enabled": true
},
"github": {
"type": "local",
"command": ["npx", "-y", "@modelcontextprotocol/server-github"],
"environment": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
},
"enabled": true
}
}
}

Practical Examples
This section shares some real-world examples from my development practice. The goal is to illustrate how the multi-agent system works in practice, including both successes and limitations.
Example 1: Building a Task Management Application
Context: Developing a task management application with React frontend and FastAPI backend.
Approach:
- Planning Phase: Used `plan` to design the overall architecture, `mentor` to review key decisions
- Development Phase: Ran the `frontend`, `backend`, and `docs` agents in parallel
- Integration Phase: Used `unittest` for test generation, `code-review` for consistency checks
- Deployment Phase: Used `ops` for CI/CD configuration
Observations:
- Parallel execution reduced development time significantly
- Cross-agent consistency checks caught several issues
- The orchestrator handled coordination effectively
Metrics (from this specific project):
| Metric | Value |
|---|---|
| Development time | 1.2 weeks |
| Test coverage | 89% |
| Critical bugs in production | 1 |
Example 2: Debugging a Concurrency Issue
Context: Investigating intermittent data race conditions in a production service.
Approach:
- Exploration: Used `explore` with different MCP services (Context7 for similar issues, Jina for research papers)
- Diagnosis: Used `debugger` for log analysis, `mentor` for architectural review
- Implementation: Used `refactor` for the fix, `unittest` for stress testing
- Validation: Used `code-review` and `security` for final checks
Observations:
- Multiple perspectives (code analysis, community knowledge, academic research) provided a more complete picture
- The layered validation approach caught edge cases
- Confidence in the fix improved through comprehensive stress testing
Key Learnings
Through these examples and others, several patterns have emerged:
- Multiple perspectives help: Combining code analysis with external research often reveals insights single agents might miss
- Validation layers matter: Each phase having specialized validation improves overall quality
- Parallel execution efficiency: Independent components benefit significantly from parallel execution
- Coordination is key: The orchestrator's role in managing complex workflows is crucial
Best Practices
Based on my experience, here are some practices that have proven useful:
1. Agent Selection
- Use read-only agents (`mentor`, `explore`) for analysis and guidance
- Use full-access agents (`build`, `refactor`) only when modification is needed
- Always follow changes with `unittest`
- Use `security` for any code handling sensitive data
2. Workflow Design
- Start complex tasks with `ask` or `plan` for direction
- Use parallel execution for independent components (see the sketch after this list)
- Use serial execution for dependent tasks
- End with `code-review` for quality assurance
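To illustrate serial versus parallel execution from a shell, here is a rough sketch. It assumes an `opencode run` command that accepts an `--agent` flag; check your OpenCode version for the exact invocation:

```bash
# Serial: each step depends on the previous one
opencode run --agent plan "Design the caching layer"
opencode run --agent build "Implement the plan above"
opencode run --agent code-review "Review the new caching code"

# Parallel: independent components proceed simultaneously
opencode run --agent frontend "Build the settings page" &
opencode run --agent backend "Add the settings API" &
wait
```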
3. Permission Management
- Review agent permissions before use (a quick check is shown below)
- Understand capability boundaries
- Use `orchestrator` for cross-domain coordination
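One quick way to review permissions, assuming the synced layout described earlier (agent definitions copied under ~/.config/opencode/agent/):

```bash
# Print each agent's permission block before running it
grep -n -A 3 '^permission:' ~/.config/opencode/agent/*.md
```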
4. Version Control
- Commit configuration changes to version control
- Document the rationale for significant modifications
- Track which workflow patterns work well
5. Cross-Platform Considerations
- Test configurations on all target platforms
- Automate synchronization where possible
- Keep documentation updated
Extending the System
The configuration is designed to be extensible. Here are practical examples of common extensions.
Adding a New Agent
- Create the agent definition file:
# configs/agent/my-custom-agent.md
---
description: Custom agent for specific tasks
mode: subagent
model: opencode/gemini-3-flash
temperature: 0.5
permission:
edit: allow
write: allow
bash: allow
---
# Role: Custom Agent
You are a specialized agent for [specific purpose].
## Core Directives
1. [Directive 1]
2. [Directive 2]
## Constraints
- [Constraint 1]
- [Constraint 2]

- Register in `opencode.json`:
{
"agent": {
"my-custom-agent": {
"description": "Custom agent description",
"mode": "subagent",
"model": "opencode/gemini-3-flash",
"prompt": "{file:./agent/my-custom-agent.md}",
"tools": {
"write": true,
"edit": true,
"bash": true
}
}
}
}

- Sync and restart OpenCode:
~/nix/scripts/sync-opencode.sh

Modifying System Prompts
Edit prompts in configs/prompts/:
- `ask.txt`: Adjust consultant behavior
- `plan.txt`: Modify planning approach
- `build.txt`: Change implementation methodology
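If a core agent should pick up one of these files, the same {file:...} reference from the registration example works. A sketch, assuming `ask` is declared (or overridden) in opencode.json:

```json
{
  "agent": {
    "ask": {
      "prompt": "{file:./prompts/ask.txt}"
    }
  }
}
```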
Adding MCP Services
{
"mcp": {
"my-service": {
"type": "remote",
"url": "https://example.com/mcp",
"enabled": true
}
}
}

Closing Thoughts
This exploration of OpenCode's multi-agent system has been a learning experience. A little meta-joke: this article was itself drafted with a multi-agent system, so in a sense it wrote itself. The technology is still evolving, and there's much to discover about effective patterns and practices.
What I've shared here represents my current understanding: things that work in my context and might be useful starting points for others. The field is moving quickly, and I expect many of these approaches will evolve as the ecosystem matures.
Some observations from this journey:
- Multi-agent systems are tools, not solutions: Their value depends on how they're applied to real problems
- Experimentation is essential: What works in one context may not work in another
- The ecosystem is evolving rapidly: New protocols, patterns, and capabilities are emerging regularly
- Community knowledge is valuable: Sharing experiences helps everyone improve
I hope this article provides useful insights for those exploring similar systems. The goal isn't to prescribe a particular approach, but to contribute to the collective understanding of how multi-agent architectures can support development work.