Building the Agentic Mesh: Mastering A2A Communication with adk-rust
Learn how to build a multi-agent system where specialized AI agents collaborate through the Agent-to-Agent (A2A) protocol using adk-rust.
Posted on: 2026-04-07 by AI Assistant

In the rapidly evolving landscape of 2026, the shift from single, monolithic AI models to distributed “Agentic Meshes” is the new standard. In this workshop, we’ll dive deep into creating a collaborative multi-agent system using the Agent-to-Agent (A2A) capabilities of adk-rust.
We will build two distinct agents that work in tandem:
- Chat Agent: The orchestrator that interacts with the user and delegates complex tasks.
- Research Agent: A specialist equipped with a web search tool to fetch the latest information.
1. Project Setup
First, scaffold a new project using the adk-rust CLI:

```sh
cargo adk new adk-a2a-workshop
cd adk-a2a-workshop
```

Configure your environment by adding your GOOGLE_API_KEY to a .env file at the project root:

```sh
GOOGLE_API_KEY="your_google_api_key_here"
```
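In a real project you would typically load this file with a crate such as `dotenvy`. Purely as an illustration of the `.env` convention (this helper is hypothetical and not part of adk-rust), a minimal hand-rolled parser for a single line looks like this:

```rust
/// Parse a single `KEY="value"` line from a .env file.
/// Hypothetical helper for illustration; use a crate like dotenvy in practice.
fn parse_env_line(line: &str) -> Option<(String, String)> {
    let line = line.trim();
    // Skip blank lines and comments.
    if line.is_empty() || line.starts_with('#') {
        return None;
    }
    let (key, value) = line.split_once('=')?;
    // Strip optional surrounding quotes from the value.
    let value = value.trim().trim_matches('"');
    Some((key.trim().to_string(), value.to_string()))
}

fn main() {
    let line = r#"GOOGLE_API_KEY="your_google_api_key_here""#;
    if let Some((key, value)) = parse_env_line(line) {
        println!("loaded {key} ({} chars)", value.len());
    }
}
```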
2. Implementing the Research Agent (Specialist)
The Research Agent is a specialist. Its only job is to perform web searches and return high-fidelity results to the orchestrator.
In src/bin/research_agent_server.rs:

```rust
// Define the research specialist: a single-purpose agent with one tool.
// (`model` is the Gemini model handle configured earlier in the binary.)
let google_search_tool = Arc::new(GoogleSearchTool::new());
let agent = LlmAgentBuilder::new("researcher")
    .description("A specialist focused on web research.")
    .instruction("Use the google_search tool to find accurate, up-to-date information.")
    .model(model)
    .tool(google_search_tool)
    .build()?;
```

We expose this specialist as an A2A-capable server on port 8081:

```rust
let port = 8081;
let app = create_app_with_a2a(config, Some(&base_url));
axum::serve(listener, app).await?;
```
3. Implementing the Chat Agent (Orchestrator)
The Chat Agent acts as the entry point for the user. It doesn’t have its own search tool; instead, it knows how to delegate to the researcher.
In src/bin/chat_agent_server.rs:

Connect to the remote specialist. We create a `RemoteA2aAgent` client that targets our Research Agent's endpoint:

```rust
let research_agent_remote = Arc::new(
    RemoteA2aAgent::builder("researcher")
        .description("A remote specialist for web research.")
        .agent_url("http://localhost:8081")
        .build()?,
);
```

Register the remote agent as a sub-agent. The orchestrator treats the remote agent just like any other tool:

```rust
let agent = LlmAgentBuilder::new("chat-agent")
    .instruction("You are a helpful assistant. Delegate research to the 'researcher' specialist.")
    .model(model)
    .sub_agent(research_agent_remote)
    .build()?;
```
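The delegation pattern itself is independent of adk-rust: the orchestrator holds a handle to something satisfying an "agent" contract, whether it runs in-process or behind an HTTP endpoint. A stripped-down conceptual sketch (every name here is illustrative, not an adk-rust API):

```rust
use std::sync::Arc;

/// Illustrative contract: anything that can answer a task.
/// (A conceptual stand-in, not the adk-rust trait.)
trait Agent {
    fn handle(&self, task: &str) -> String;
}

/// Stand-in for a remote A2A specialist. In reality this would
/// serialize the task and send it to http://localhost:8081.
struct RemoteSpecialist {
    name: String,
}

impl Agent for RemoteSpecialist {
    fn handle(&self, task: &str) -> String {
        format!("[{}] researched: {}", self.name, task)
    }
}

/// The orchestrator owns sub-agents and forwards work to them.
struct Orchestrator {
    sub_agents: Vec<Arc<dyn Agent>>,
}

impl Orchestrator {
    fn delegate(&self, task: &str) -> Option<String> {
        // A real orchestrator lets the LLM pick the sub-agent;
        // here we simply use the first one registered.
        self.sub_agents.first().map(|a| a.handle(task))
    }
}

fn main() {
    let researcher: Arc<dyn Agent> = Arc::new(RemoteSpecialist { name: "researcher".into() });
    let orchestrator = Orchestrator { sub_agents: vec![researcher] };
    println!("{}", orchestrator.delegate("latest Gemini news").unwrap());
}
```

The point of the sketch: because the orchestrator only sees the trait, swapping a local tool for a remote A2A agent changes the wiring, not the orchestration logic.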
4. Running the Agentic Mesh
To see the A2A protocol in action, run both servers in separate terminal windows.
Terminal 1 (Specialist):

```sh
cargo run --bin research_agent_server
# Log: ResearchAgent A2A server listening on 0.0.0.0:8081
```

Terminal 2 (Orchestrator):

```sh
cargo run --bin chat_agent_server
# Log: ChatAgent server listening on 0.0.0.0:8080
```
5. Testing the Delegation Logic
Query the Chat Agent at http://localhost:8080. Ask a question that requires real-time data, such as: “What are the latest breakthroughs in Gemini 3’s reasoning engine?”
Under the hood:
- The Chat Agent receives the query.
- The LLM recognizes that it lacks real-time knowledge and calls the `researcher` sub-agent.
- The A2A protocol forwards the request to the Research Agent on port 8081.
- The Research Agent executes the search and streams results back to the orchestrator.
- The Chat Agent synthesizes the final answer for the user.
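On the wire, A2A traffic is JSON-RPC 2.0. A delegated query from the orchestrator to the specialist looks roughly like the following (field names follow the public A2A specification's `message/send` method; the exact payload adk-rust emits may differ):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [
        { "kind": "text", "text": "What are the latest breakthroughs in Gemini 3's reasoning engine?" }
      ],
      "messageId": "msg-001"
    }
  }
}
```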
Conclusion
By decoupling agents into specialized services, we build a scalable and modular AI architecture. This “Agentic Mesh” allows for easier updates, specialized model selection for different tasks, and higher overall system reliability.
Happy building the future!