
Exploring Multi-Agent Systems with OpenCode: A Practical Journey


Tags: OpenCode, Multi-Agent System, AI Development, Cross-Platform

2026-01-10

Introduction

The landscape of software development is evolving. AI assistants that once operated as single-purpose tools are beginning to work together as collaborative systems—each handling specific tasks while coordinating toward larger goals. This shift toward multi-agent architectures opens up new possibilities for how we approach development work.

In this article, I want to share my experience exploring OpenCode's multi-agent system. Rather than presenting a "solution," this is a practical journey of experimentation and observation. I'll discuss what I've learned about designing agent configurations, how I approach workflow orchestration, and share some real examples from my daily development practice. The goal is to offer insights that might be useful to others exploring similar systems.

The Multi-Agent Landscape

Before diving into specific configurations, I think it's valuable to understand the broader context. Several industry reports and surveys provide useful perspective on where this technology is heading.

Market Observations

Recent industry analysis suggests significant growth in AI agent adoption. According to Deloitte's 2025 Tech Predictions Report, the autonomous AI agent market is projected to reach substantial figures in the coming years. Enterprise adoption is accelerating, with a notable percentage of organizations experimenting with agentic AI pilots.

Stack Overflow's 2025 Developer Survey offers complementary insights, indicating that a majority of agent users report time savings and productivity improvements. Research papers have also documented efficiency gains when AI coding agents become part of regular workflows.

These trends suggest that multi-agent systems will become increasingly relevant in development practices. For those of us experimenting with these tools now, there's an opportunity to understand their strengths and limitations before they become widespread.

Protocol Developments

One area I've been following with interest is the standardization of communication protocols for AI agents. Just as the Language Server Protocol (LSP) created a common interface between editors and language tools, new protocols are emerging to standardize how agents interact with development environments and external services.

The Agent Client Protocol (ACP), introduced by Zed Editor in 2025, provides a standardized way for editors to connect with different agents. This addresses a practical challenge: when agents are tied to specific tools, switching between environments becomes cumbersome. ACP enables a more modular approach, where the agent and editor can evolve independently.

The Model Context Protocol (MCP), proposed by Anthropic in late 2024, has gained traction as a standard for connecting AI models with external data sources and tools. Its adoption by major platforms like VS Code and Cursor indicates a growing consensus around certain interface patterns.

I've integrated both protocols into my OpenCode configuration. The practical benefit is flexibility—being able to swap components or extend capabilities without major refactoring. This seems like a useful foundation for experimentation.

My OpenCode Configuration: A Technical Overview

This section describes the actual configuration I've been developing. I want to focus on the technical aspects rather than presenting this as an ideal solution. Your needs may differ, and I hope the patterns shown here can inspire your own experiments.

Configuration Structure

All OpenCode configurations are stored in a Nix repository, organized by function:

configs/
├── opencode.json          # Main configuration file
├── agent/                 # Individual agent definitions
│   ├── ask.md
│   ├── build.md
│   ├── plan.md
│   ├── mentor.md
│   ├── orchestrator.md
│   ├── unittest.md
│   ├── code-review.md
│   ├── debugger.md
│   ├── security.md
│   ├── refactor.md
│   ├── frontend.md
│   ├── backend.md
│   ├── client.md
│   ├── ops.md
│   ├── docs.md
│   ├── explore.md
│   └── migration.md
└── prompts/               # System prompts for core agents
    ├── ask.txt
    ├── build.txt
    └── plan.txt

This structure emerged through iteration. Early versions were simpler, and I added agents as specific needs arose. The key insight here is that the configuration is always evolving—there's no "final" state.

Platform Synchronization

I work across NixOS and macOS, which presented a practical challenge: keeping configurations consistent. The sync script handles this automatically:

#!/usr/bin/env bash
# Synchronize OpenCode configuration from Nix repo to system

set -e

# Detect platform and locate Nix repository
if [ -d "/Users/eric/nix" ]; then
  NIX_REPO="/Users/eric/nix"
elif [ -d "/home/eric/nix" ]; then
  NIX_REPO="/home/eric/nix"
else
  echo "Error: Nix repository not found" >&2
  exit 1
fi

CONFIG_SRC_DIR="$NIX_REPO/configs"
CONFIG_DIR="$HOME/.config/opencode"

mkdir -p "$CONFIG_DIR"
cp -r "$CONFIG_SRC_DIR/"* "$CONFIG_DIR/"

echo "OpenCode configuration synchronized"

This script runs as part of my regular workflow, ensuring both environments use the same configuration without manual intervention.
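
One way to make the run automatic is a guard in your shell init, so every new shell picks up the latest configuration. This is just one convention, not part of the repo:

# ~/.bashrc or ~/.zshrc — refresh OpenCode config on shell startup, quietly
[ -x "$HOME/nix/scripts/sync-opencode.sh" ] && "$HOME/nix/scripts/sync-opencode.sh" >/dev/null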

The Agent System: Design and Implementation

This section describes the multi-agent architecture I've been exploring. I want to be clear about the design philosophy and practical considerations—not to claim this is optimal, but to share what has worked in my experience.

Core Design Philosophy

The system is built around three principles that emerged from experimentation:

  1. Specialization: Each agent focuses on a specific domain with clearly defined responsibilities
  2. Permission boundaries: Different agents have different capability levels, preventing unintended actions
  3. Delegation pathways: Clear rules for when to use which agent

These principles developed through trial and error. Early versions had agents with overlapping responsibilities, which led to confusion. The current design emerged from observing where collaboration broke down and refining accordingly.

Agent Ecosystem Overview

The system comprises 17 agents in total—3 core agents and 14 specialized sub-agents:

Core Agents

The three core agents serve as entry points for different task types:

ask - Read-only consultant for conceptual questions:

  • Permissions: read, grep, ls (no write, edit, or bash)
  • Best for: Explaining concepts, reviewing without modification, general guidance
  • Behavior: Provides information and explanations, delegates action-oriented tasks

plan - Architectural planning for complex tasks:

  • Permissions: read-only (never modifies files)
  • Best for: Designing implementation approaches, creating architectural roadmaps
  • Behavior: Analyzes requirements, considers trade-offs, produces structured plans

build - Primary implementation agent:

  • Permissions: Full access (read, write, edit, bash)
  • Best for: Writing new code, fixing bugs, implementing features
  • Behavior: Analyzes requirements, generates code, validates results
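
For reference, these boundaries reduce to the tools map in opencode.json, using the same registration format shown under "Extending the System" below. A trimmed sketch (description, prompt, and model fields omitted; marking the entry points as primary mode is my assumption):

{
  "agent": {
    "ask":   { "mode": "primary", "tools": { "write": false, "edit": false, "bash": false } },
    "plan":  { "mode": "primary", "tools": { "write": false, "edit": false, "bash": false } },
    "build": { "mode": "primary", "tools": { "write": true, "edit": true, "bash": true } }
  }
}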

Specialized Sub-Agents

The 14 specialized agents cover specific domains:

Architecture & Quality (4 agents):

  • mentor: Provides architectural and performance guidance without writing code
  • code-review: Performs static analysis, style checks, and maintainability audits
  • refactor: Restructures code while preserving functionality
  • security: Scans for vulnerabilities and compliance issues

Development (4 agents):

  • frontend: Specializes in React, Vue, CSS frameworks, and accessibility
  • backend: Handles API design, business logic, and database schemas
  • client: Focuses on SDK development and Kotlin/Kotlin Multiplatform
  • ops: Manages Docker, CI/CD, Nix, and deployment configurations

Operations & Testing (3 agents):

  • unittest: Creates and optimizes test suites
  • debugger: Diagnoses runtime issues and logic bugs
  • migration: Handles framework upgrades and platform shifts

Discovery & Documentation (2 agents):

  • explore: Navigates codebases to identify patterns and dependencies
  • docs: Creates documentation, READMEs, and API specifications

Coordination (1 agent):

  • orchestrator: Coordinates multiple agents for complex workflows

Permission System

The permission model implements graduated access levels:
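
Concretely, the levels are set per tool in each agent's frontmatter. A sketch, assuming the allow / ask / deny values (allow appears in the agent definitions later in this article; ask and deny are the other two levels I mean by "graduated"):

# Excerpt from an agent definition — per-tool, graduated access
permission:
  edit: allow   # apply edits to existing files without prompting
  write: ask    # prompt before creating new files
  bash: deny    # no shell access at all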

This model developed from practical experience. Early versions had binary permissions (can/cannot edit), which led to situations where agents couldn't perform necessary actions. The graduated model provides more flexibility while maintaining safety boundaries.

Delegation and Workflow Patterns

Effective use of the multi-agent system requires understanding when to use which agent and how to coordinate them for complex tasks.

Delegation Framework

| Task Type | Complexity | Recommended Agent |
| --- | --- | --- |
| Concept explanation | Low | ask |
| Simple code changes | Low-Medium | build |
| Domain-specific tasks | Medium-High | Specialized agent |
| Cross-domain projects | High | orchestrator |

This framework is a starting point, not a strict rule. The appropriate agent depends on the specific context and your assessment of the situation.

Delegation Decision Flow
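
Reduced to a few questions, the flow follows directly from the table above:

Needs no file changes?              → ask (questions, reviews) or plan (designs)
One domain, changes required?       → build, or the matching specialist
Spans multiple domains or agents?   → orchestrator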

Common Workflow Patterns

Through experimentation, I've identified several useful patterns:

Pattern 1: Feature Implementation Chain

This simple chain works well for straightforward feature implementation, with each agent validating the output of the one before it.
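
A representative sequence, using the ordering from the best practices later in this article (one reasonable choice, not the only one):

plan → build → unittest → code-review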

Pattern 2: Bug Resolution Flow

For bug fixes, starting with diagnosis helps ensure the actual problem is understood before implementing a fix.
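
For instance, diagnosis first, then the fix and its verification:

debugger → build → unittest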

Pattern 3: Architecture Optimization Flow

Performance optimization benefits from explicit architectural evaluation before implementation.
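
For example, evaluation before restructuring:

mentor → refactor → unittest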

Pattern 4: Cross-Platform Development

For projects with independent components, parallel execution can significantly reduce development time.
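
Using Example 1 below as the template, the shape is a fan-out joined by a review step:

orchestrator → (frontend ∥ backend ∥ docs in parallel) → code-review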

MCP Integration

Model Context Protocol (MCP) enables agents to interact with external services. Here's how I've configured this in practice.

MCP Services

Jina MCP Server: Provides real-time web search, content extraction, and academic paper search capabilities. Useful when current information is needed beyond the codebase.

Context7: Offers up-to-date documentation for libraries and frameworks. Helps agents access accurate API references without relying on potentially outdated training data.

GitHub MCP: Integrates with version control workflows, enabling agents to create branches, manage issues, and handle pull requests.

Configuration Example

{
  "mcp": {
    "jina-mcp-server": {
      "url": "https://mcp.jina.ai/v1",
      "type": "remote",
      "enabled": true
    },
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "enabled": true
    },
    "github": {
      "type": "local",
      "command": ["npx", "-y", "@modelcontextprotocol/server-github"],
      "environment": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      },
      "enabled": true
    }
  }
}

Practical Examples

This section shares some real-world examples from my development practice. The goal is to illustrate how the multi-agent system works in practice, including both successes and limitations.

Example 1: Building a Task Management Application

Context: Developing a task management application with React frontend and FastAPI backend.

Approach:

  1. Planning Phase: Used plan to design the overall architecture, mentor to review key decisions
  2. Development Phase: Ran frontend, backend, and docs agents in parallel
  3. Integration Phase: Used unittest for test generation, code-review for consistency checks
  4. Deployment Phase: Used ops for CI/CD configuration

Observations:

  • Parallel execution reduced development time significantly
  • Cross-agent consistency checks caught several issues
  • The orchestrator handled coordination effectively

Metrics (from this specific project):

| Metric | Value |
| --- | --- |
| Development time | 1.2 weeks |
| Test coverage | 89% |
| Critical bugs in production | 1 |

Example 2: Debugging a Concurrency Issue

Context: Investigating intermittent data race conditions in a production service.

Approach:

  1. Exploration: Used explore with different MCP services (Context7 for similar issues, Jina for research papers)
  2. Diagnosis: Used debugger for log analysis, mentor for architectural review
  3. Implementation: Used refactor for the fix, unittest for stress testing
  4. Validation: Used code-review and security for validation

Observations:

  • Multiple perspectives (code analysis, community knowledge, academic research) provided a more complete picture
  • The layered validation approach caught edge cases
  • Comprehensive stress testing increased confidence that the fix actually resolved the race

Key Learnings

Through these examples and others, several patterns have emerged:

  1. Multiple perspectives help: Combining code analysis with external research often reveals insights single agents might miss
  2. Validation layers matter: Each phase having specialized validation improves overall quality
  3. Parallel execution efficiency: Independent components benefit significantly from parallel execution
  4. Coordination is key: The orchestrator's role in managing complex workflows is crucial

Best Practices

Based on my experience, here are some practices that have proven useful:

1. Agent Selection

  • Use read-only agents (mentor, explore) for analysis and guidance
  • Use full-access agents (build, refactor) only when modification is needed
  • Always follow changes with unittest
  • Use security for any code handling sensitive data

2. Workflow Design

  • Start complex tasks with ask or plan for direction
  • Use parallel execution for independent components
  • Use serial execution for dependent tasks
  • End with code-review for quality assurance

3. Permission Management

  • Review agent permissions before use
  • Understand capability boundaries
  • Use orchestrator for cross-domain coordination

4. Version Control

  • Commit configuration changes to version control
  • Document the rationale for significant modifications
  • Track which workflow patterns work well

5. Cross-Platform Considerations

  • Test configurations on all target platforms
  • Automate synchronization where possible
  • Keep documentation updated

Extending the System

The configuration is designed to be extensible. Here are practical examples of common extensions.

Adding a New Agent

  1. Create the agent definition file at configs/agent/my-custom-agent.md (the frontmatter must be the very first thing in the file):
---
description: Custom agent for specific tasks
mode: subagent
model: opencode/gemini-3-flash
temperature: 0.5
permission:
  edit: allow
  write: allow
  bash: allow
---

# Role: Custom Agent

You are a specialized agent for [specific purpose].

## Core Directives
1. [Directive 1]
2. [Directive 2]

## Constraints
- [Constraint 1]
- [Constraint 2]
  2. Register in opencode.json:
{
  "agent": {
    "my-custom-agent": {
      "description": "Custom agent description",
      "mode": "subagent",
      "model": "opencode/gemini-3-flash",
      "prompt": "{file:./agent/my-custom-agent.md}",
      "tools": {
        "write": true,
        "edit": true,
        "bash": true
      }
    }
  }
}
  3. Sync and restart OpenCode:
~/nix/scripts/sync-opencode.sh

Modifying System Prompts

Edit prompts in configs/prompts/:

  • ask.txt: Adjust consultant behavior
  • plan.txt: Modify planning approach
  • build.txt: Change implementation methodology

Adding MCP Services

{
  "mcp": {
    "my-service": {
      "type": "remote",
      "url": "https://example.com/mcp",
      "enabled": true
    }
  }
}

Closing Thoughts

This exploration of OpenCode's multi-agent system has been a learning experience.

A little meta-joke: This article was actually written by a multi-agent system—in a way, it kind of wrote itself. The technology is still evolving, and there's much to discover about effective patterns and practices.

What I've shared here represents my current understanding—things that work in my context and might be useful starting points for others. The field is moving quickly, and I expect many of these approaches will evolve as the ecosystem matures.

Some observations from this journey:

  1. Multi-agent systems are tools, not solutions: Their value depends on how they're applied to real problems
  2. Experimentation is essential: What works in one context may not work in another
  3. The ecosystem is evolving rapidly: New protocols, patterns, and capabilities are emerging regularly
  4. Community knowledge is valuable: Sharing experiences helps everyone improve

I hope this article provides useful insights for those exploring similar systems. The goal isn't to prescribe a particular approach, but to contribute to the collective understanding of how multi-agent architectures can support development work.

