Anthropic Unveils Agent Teams for Claude Code, Enabling Multi-Agent Collaboration with Peer-to-Peer Messaging

Feb 7, 2026

Anthropic has introduced agent teams, a significant new capability for its Claude Code development tool that allows multiple AI instances to work together as coordinated units with direct peer-to-peer communication. The experimental feature, announced this week, represents the company's official entry into multi-agent orchestration for software development workflows.

How Agent Teams Differ from Sub-Agents


The new functionality marks a departure from Claude Code's existing sub-agent system, which has been available for some time. Sub-agents operate as quick, focused workers within a single session—receiving tasks, completing them, and reporting back to the main agent in a simple hub-and-spoke model.

Agent teams introduce a fundamentally different architecture. In this setup:

  • One session serves as the team lead that spawns additional teammates
  • Each teammate operates as an independent Claude Code instance with its own context window
  • Teammates can message each other directly, challenge findings, and self-coordinate
  • All agents share a unified task list with dependency tracking
  • Work progresses through peer-to-peer communication rather than centralized delegation

The separation of context windows is particularly significant. By distributing work across multiple agents, each instance maintains a narrow, focused context—addressing the performance degradation that occurs when large amounts of information crowd a single context window.
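The shared task list with dependency tracking described above can be sketched as a simple scheduler that groups tasks into parallel "waves." This is an illustrative model, not Anthropic's implementation; the task names are hypothetical.

```python
from collections import defaultdict

def parallel_waves(tasks: dict[str, list[str]]) -> list[list[str]]:
    """Group tasks into waves: every task in a wave has all of its
    dependencies satisfied by earlier waves, so a wave can run in parallel."""
    indegree = {t: len(deps) for t, deps in tasks.items()}
    dependents = defaultdict(list)
    for t, deps in tasks.items():
        for d in deps:
            dependents[d].append(t)
    wave = [t for t, n in indegree.items() if n == 0]
    waves = []
    while wave:
        waves.append(sorted(wave))
        nxt = []
        for t in wave:
            for child in dependents[t]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        wave = nxt
    return waves

# Hypothetical team task list: API and frontend work are independent,
# while tests depend on both.
tasks = {"api": [], "frontend": [], "tests": ["api", "frontend"]}
print(parallel_waves(tasks))  # [['api', 'frontend'], ['tests']]
```

Independent tasks land in the same wave and can be handed to separate teammates; dependent work is sequenced behind them.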

Setup and Activation


Agent teams ship as an opt-in experimental feature disabled by default. Users must enable it through one of two methods:

  • Set the environment variable `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1`
  • Add `"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"` to the `env` section of `settings.json`

Once activated, teams are created through natural language instructions rather than APIs or configuration files. A user might simply request: "Create an agent team to review this PR—spawn three reviewers, one focused on security, one on performance, and one on test coverage."
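For the `settings.json` route, the edit amounts to merging one key into the file's `env` section. A minimal sketch, assuming a writable settings file at a path you supply (the helper name and path are hypothetical):

```python
import json
import pathlib

def enable_agent_teams(settings_path: str) -> dict:
    """Add the experimental agent-teams flag to the `env` section of a
    Claude Code settings.json file, preserving any existing settings."""
    path = pathlib.Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("env", {})["CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"] = "1"
    path.write_text(json.dumps(settings, indent=2))
    return settings

settings = enable_agent_teams("settings.json")
print(settings["env"]["CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"])  # prints: 1
```

Using the environment variable instead avoids touching the file at all and scopes the flag to a single shell session.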

Technical Architecture


Under the hood, the team lead spawns teammates as complete, independent Claude Code instances. These teammates inherit project context including:

  • The project's `CLAUDE.md` file
  • MCP (Model Context Protocol) servers
  • Defined skills

However, teammates do not inherit the lead's conversation history—they begin fresh with only the project context and their assigned task description.

Task coordination operates through dependency tracking, allowing parallel execution of independent tasks while sequencing dependent work. An inbox-based messaging system enables:

  • Lead-to-teammate direct messages
  • Lead broadcast messages to all teammates
  • Peer-to-peer messaging between any teammates
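The three message flows above can be modeled with a single inbox structure. This is a toy illustration under stated assumptions, not Anthropic's implementation; agent names and messages are hypothetical.

```python
from collections import defaultdict

class Inbox:
    """Toy model of inbox-based team messaging: each agent has a mailbox,
    and direct, broadcast, and peer-to-peer sends all deposit into it."""

    def __init__(self):
        self.boxes: dict[str, list[tuple[str, str]]] = defaultdict(list)
        self.members: set[str] = set()

    def join(self, agent: str) -> None:
        self.members.add(agent)

    def send(self, sender: str, recipient: str, text: str) -> None:
        # Direct message: lead-to-teammate and peer-to-peer look identical.
        self.boxes[recipient].append((sender, text))

    def broadcast(self, sender: str, text: str) -> None:
        # Lead broadcast: deliver to every member except the sender.
        for agent in self.members - {sender}:
            self.send(sender, agent, text)

    def read(self, agent: str) -> list[tuple[str, str]]:
        # Drain the agent's mailbox.
        messages, self.boxes[agent] = self.boxes[agent], []
        return messages

team = Inbox()
for name in ["lead", "security", "perf"]:
    team.join(name)
team.broadcast("lead", "review the PR")
team.send("security", "perf", "hot path also lacks input validation")
print(team.read("perf"))
```

The key property is that teammates can message each other without routing through the lead, which is what distinguishes this from a hub-and-spoke sub-agent model.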

Visibility and Backend Options


Two backend modes control how teammates execute:

| Mode | Description | Requirements |
|------|-------------|--------------|
| In-process (default) | Runs in any terminal; no visibility into individual agent activity | Any terminal |
| tmux | Split-pane view showing real-time activity from all teammates | tmux or iTerm2 |

The tmux mode presents the team lead in the left pane with workers stacked on the right, offering visual monitoring of parallel operations. This mode does not function in VS Code's integrated terminal, Windows Terminal, or Ghostty.

Primary Use Cases


Anthropic has identified three strong scenarios for agent teams:

Code Review and Analysis

Multiple reviewers with specialized focuses—security, performance, test coverage—can examine code simultaneously through different lenses. This addresses a known limitation where single agents reviewing multiple aspects tend to degrade in performance and miss issues on later passes.

New Feature Development

Teammates can own separate components—frontend, API, tests—working in parallel without file conflicts. The shared task list and messaging system coordinate handoffs and dependencies.

Debugging with Competing Hypotheses

Perhaps the most distinctive application involves assigning teammates to investigate different explanations for the same bug simultaneously. One agent might examine database issues while another checks caching and a third investigates the API layer. Direct messaging allows agents to share evidence that eliminates competing hypotheses, converging on solutions faster than sequential investigation.
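The hypothesis-elimination pattern can be sketched as a filter over shared evidence: each "agent" owns one hypothesis, and findings posted by any agent prune the set. Hypothesis names and evidence strings below are invented for illustration.

```python
# Each hypothesis is paired with a predicate saying whether it remains
# consistent with the pooled evidence collected so far.
hypotheses = {
    "database": lambda evidence: "slow query log empty" not in evidence,
    "caching": lambda evidence: "cache hit rate normal" not in evidence,
    "api_layer": lambda evidence: "API latency spike at deploy" in evidence,
}

def converge(evidence: set[str]) -> list[str]:
    """Keep only the hypotheses still consistent with all shared evidence."""
    return [name for name, consistent in hypotheses.items() if consistent(evidence)]

# One agent rules out the database, another rules out caching, and the
# API investigator's positive finding survives.
evidence = {
    "slow query log empty",
    "cache hit rate normal",
    "API latency spike at deploy",
}
print(converge(evidence))  # ['api_layer']
```

The speedup over sequential investigation comes from evidence gathered for one hypothesis simultaneously eliminating others, rather than each being tested in turn.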

Cost and Performance Considerations


The multi-agent approach carries significant token usage implications. With each teammate maintaining an independent context window, costs scale directly with the number of active agents. A configuration with three teammates plus a lead effectively runs four simultaneous Claude Code sessions.
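A back-of-the-envelope model of that scaling, using illustrative per-agent token counts (not Anthropic's published numbers):

```python
def team_tokens(agents: int, tokens_per_agent: int) -> int:
    """Each teammate keeps its own context window, so usage scales
    linearly with team size."""
    return agents * tokens_per_agent

single = team_tokens(1, 50_000)  # one session
team = team_tokens(4, 50_000)    # lead plus three teammates
print(team // single)  # 4: a three-teammate team costs ~4x a single session
```

The linear factor is why the feature pays off mainly when parallelism buys real speed or quality, and not for small single-session tasks.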

For research, comprehensive reviews, and complex feature development, the trade-off of higher token costs for improved speed and quality may prove worthwhile. For routine tasks such as single-function refactoring, single-session operation remains more economically efficient.

Current Limitations


As an experimental feature, agent teams carry several documented constraints:

  • No session resumption for in-process teammates—using `/resume` or `claude --resume` will not restore teammate sessions, creating friction for longer-running work
  • Lead overreach—the team lead occasionally implements work itself rather than delegating to teammates, requiring explicit instructions to wait or use of delegate mode (Shift+Tab) to restrict the lead to coordination-only tools
  • Uniform spawn permissions—all teammates inherit the lead's permission mode at creation; per-teammate permission levels must be adjusted manually after spawning
  • Task definition sensitivity—poorly structured tasks with heavy interdependencies cause failures when teammates attempt parallel work on code that doesn't yet exist

Competitive Landscape


Agent teams enter a field where some parallel execution capabilities already exist. Community tools like Ralphie have offered multi-agent workflows using git worktrees, with additional features including branch-per-task isolation, automatic PR creation, and support for multiple AI backends including OpenCode and Codex CLI.

Ralphie retains advantages for users requiring those specific capabilities or alternative AI models. However, agent teams distinguish themselves through native peer-to-peer messaging and built-in task coordination without external configuration.

Major competitors Cursor and GitHub Copilot currently lack comparable multi-agent orchestration features.

Evolution of Claude Code's Task Management


The release continues a clear progression in Claude Code's capabilities:

1. Todos—simple in-memory checklists
2. Tasks—persistent file-based task management with dependency tracking
3. Agent teams—multi-agent orchestration with peer-to-peer communication

This trajectory suggests a shift in how developers interact with AI coding tools—from pair programming with a single assistant toward managing teams of specialized agents that coordinate independently while pursuing defined objectives.

The experimental status, token costs, and occasional coordination issues indicate the feature remains a work in progress. For complex research, reviews, multi-module development, and adversarial debugging scenarios, however, early indications suggest agent teams can identify issues and solutions that single-agent approaches miss.