Exploring adk-server: HTTP Infrastructure for Rust AI Agents

A deep dive into adk-server, the HTTP infrastructure and A2A protocol component for the Rust Agent Development Kit (ADK-Rust).

Posted on: 2026-03-24 by AI Assistant


Building intelligent AI agents in Rust requires robust, scalable, and secure server infrastructure. This is where adk-server comes in. As a core component of the Rust Agent Development Kit (ADK-Rust), adk-server provides the necessary HTTP foundation and Agent-to-Agent (A2A) protocol support to bring your Rust agents to life.

In this post, we’ll explore what adk-server offers, how to get started, and its key features for building production-ready AI agent architectures.

What is adk-server?

adk-server is a comprehensive crate that offers HTTP infrastructure tailored for ADK-Rust agents. Built on top of axum, a popular, ergonomic, and modular web framework for Rust, it handles everything from standard REST interactions to real-time streaming and secure authentication.

Key Features at a Glance

- REST endpoints for agents, sessions, and artifacts, built on axum
- Real-time responses via Server-Sent Events (SSE) streaming
- A2A protocol support: agent-card discovery, JSON-RPC, and streaming endpoints
- Security configuration: CORS policies, request timeouts, body-size limits, and error sanitization
- An auth bridge for propagating HTTP authentication (such as JWTs) into agent context
- Pluggable optional services: artifact storage, memory management, and tracing exporters
- A RemoteA2aAgent client for using remote agents as local sub-agents

Getting Started

To use adk-server, you can either add it directly to your dependencies or enable it through the adk-rust meta-crate:

[dependencies]
adk-server = "0.4"
# Or using the meta-crate:
# adk-rust = { version = "0.4", features = ["server"] }

Creating a Basic Server

Setting up a basic server is straightforward. You initialize a ServerConfig with your agent loader and session service, create the app, and bind it to a port.

use adk_server::{create_app, ServerConfig};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `agent` is an agent you have already constructed; SingleAgentLoader
    // and InMemorySessionService are provided by the ADK crates.
    let config = ServerConfig::new(
        Arc::new(SingleAgentLoader::new(Arc::new(agent))),
        Arc::new(InMemorySessionService::new()),
    );

    let app = create_app(config);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
    axum::serve(listener, app).await?;
    Ok(())
}

Security and Configuration

Security is paramount when deploying agents. adk-server comes with sensible defaults and robust configuration options.

For instance, you can easily toggle between development and production configurations, or adjust individual settings like CORS policies, timeouts, and body sizes:

use adk_server::{ServerConfig, SecurityConfig};
use std::time::Duration;

// Development mode (permissive CORS, detailed errors)
let config = ServerConfig::new(agent_loader.clone(), session_service.clone())
    .with_security(SecurityConfig::development());

// Production mode (restricted CORS, sanitized errors)
let config = ServerConfig::new(agent_loader.clone(), session_service.clone())
    .with_security(SecurityConfig::production(vec!["https://myapp.com".to_string()]));

// Or configure individual settings manually
let config = ServerConfig::new(agent_loader, session_service)
    .with_allowed_origins(vec!["https://myapp.com".to_string()])
    .with_request_timeout(Duration::from_secs(60))
    .with_max_body_size(5 * 1024 * 1024)  // 5MB
    .with_error_details(false);

Adding Optional Services

You can also easily add optional services to your configuration, such as artifact storage, memory management, or tracing exporters:

let config = ServerConfig::new(agent_loader, session_service)
    .with_artifact_service(Arc::new(artifact_service))
    .with_memory_service(Arc::new(memory_service))
    .with_span_exporter(Arc::new(span_exporter))
    .with_request_context(Arc::new(my_auth_extractor));

Runner Configuration Passthrough

ServerConfig can forward runner-level settings such as compaction and prompt caching; these apply both to the standard SSE runtime endpoints and to the A2A controller:

let config = ServerConfig::new(agent_loader, session_service)
    .with_compaction(compaction_config)
    .with_context_cache(context_cache_config, cache_capable_model);

Advanced Features

A2A Server

If you are building multi-agent systems, the Agent-to-Agent (A2A) protocol is essential. By using create_app_with_a2a, you expose endpoints like /.well-known/agent.json for agent discovery and /a2a for JSON-RPC communication.

use adk_server::create_app_with_a2a;

// Initialize the app with A2A enabled
let app = create_app_with_a2a(config, Some("http://localhost:8080"));

// Exposes:
// GET  /.well-known/agent.json  - Agent card
// POST /a2a                     - JSON-RPC endpoint
// POST /a2a/stream              - SSE streaming
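
For illustration only, the agent card served at /.well-known/agent.json looks roughly like the following (fields abbreviated and values hypothetical; the exact shape follows the A2A agent-card specification and your agent's configuration):

```json
{
  "name": "my_agent",
  "description": "An example agent served by adk-server",
  "url": "http://localhost:8080/a2a",
  "capabilities": {
    "streaming": true
  }
}
```

Clients use this card to discover where the JSON-RPC endpoint lives and which capabilities (such as streaming) the agent advertises.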

Remote Agent Client

You can use RemoteA2aAgent to connect to remote agents over the A2A protocol and treat them as sub-agents within your local agent orchestration:

use adk_server::RemoteA2aAgent;

let remote = RemoteA2aAgent::builder("weather_agent")
    .description("Remote weather service")
    .agent_url("http://weather-service:8080")
    .build()?;

// Use the remote agent as a sub-agent
let coordinator = LlmAgentBuilder::new("coordinator")
    .sub_agent(Arc::new(remote))
    .build()?;

The Auth Bridge

For enterprise applications, authentication is critical. adk-server allows you to bridge standard HTTP authentication (like JWTs) into the agent’s context.

By implementing the RequestContextExtractor trait, you can validate tokens from the Authorization header and build a RequestContext.

use adk_server::auth_bridge::{RequestContextExtractor, RequestContextError};
use adk_core::RequestContext;
use async_trait::async_trait;

struct MyExtractor;

#[async_trait]
impl RequestContextExtractor for MyExtractor {
    async fn extract(
        &self,
        parts: &axum::http::request::Parts,
    ) -> Result<RequestContext, RequestContextError> {
        let auth = parts.headers
            .get("authorization")
            .and_then(|v| v.to_str().ok())
            .ok_or(RequestContextError::MissingAuth)?;
            
        // validate token, build RequestContext ...
        todo!()
    }
}
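
One detail the sketch above glosses over: the extractor reads the raw Authorization value, but you still need to strip the Bearer prefix before validating the token. A minimal, self-contained helper for that step (independent of adk-server) might look like:

```rust
// Extract the token from an `Authorization: Bearer <token>` header value.
// Returns None for non-Bearer schemes or malformed values.
fn bearer_token(header_value: &str) -> Option<&str> {
    let token = header_value.strip_prefix("Bearer ")?.trim();
    if token.is_empty() { None } else { Some(token) }
}

fn main() {
    assert_eq!(bearer_token("Bearer abc123"), Some("abc123"));
    assert_eq!(bearer_token("Basic abc123"), None); // wrong scheme
    assert_eq!(bearer_token("Bearer "), None);      // empty token
    println!("ok");
}
```

Inside extract, you would pass the result to your JWT validation library of choice and map failures to a RequestContextError.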

let config = ServerConfig::new(agent_loader, session_service)
    .with_request_context(Arc::new(MyExtractor));

Once configured, the extracted RequestContext flows directly into the InvocationContext, making scopes available to your agent’s tools via ToolContext::user_scopes() and automatically securing the session and artifact endpoints.
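
Whatever the exact ToolContext API looks like, the authorization decision inside a tool ultimately reduces to a membership test over the extracted scopes. A self-contained sketch of that pattern (names hypothetical, not part of adk-server):

```rust
// Hypothetical guard a tool might run against the scopes carried in the
// extracted RequestContext (e.g. obtained via ToolContext::user_scopes()).
fn require_scope(user_scopes: &[String], required: &str) -> Result<(), String> {
    if user_scopes.iter().any(|s| s == required) {
        Ok(())
    } else {
        Err(format!("missing required scope: {required}"))
    }
}

fn main() {
    let scopes = vec!["sessions:read".to_string(), "artifacts:write".to_string()];
    assert!(require_scope(&scopes, "sessions:read").is_ok());
    assert!(require_scope(&scopes, "admin").is_err());
    println!("ok");
}
```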

Testing with cURL

Once your server is running, you can test various endpoints using curl. Here are some common examples assuming your server is running locally on port 8080:

Health and Discovery

Check the health of your server and list available agents:

# Health check
curl http://localhost:8080/api/health

# List available agents
curl http://localhost:8080/api/apps

Sessions and Runtime

Create a session and execute a run with SSE streaming:

# Create a new session
curl -X POST http://localhost:8080/api/sessions \
  -H "Content-Type: application/json" \
  -d '{"app_name": "my_agent", "user_id": "user1"}'

# Run agent with SSE (replace placeholders with actual values)
curl -N -X POST http://localhost:8080/api/run/my_agent/user1/session123 \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello agent!"}'

A2A Protocol

Interact with the A2A JSON-RPC endpoint:

# Get the A2A agent card
curl http://localhost:8080/.well-known/agent.json

# Send an A2A JSON-RPC request
curl -X POST http://localhost:8080/a2a \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "chat", "params": {"message": "ping"}, "id": 1}'

Artifacts

Retrieve artifacts generated during a session:

# List artifacts for a session
curl http://localhost:8080/api/sessions/my_agent/user1/session123/artifacts

# Download a specific artifact
curl -O http://localhost:8080/api/sessions/my_agent/user1/session123/artifacts/output.png

Conclusion

adk-server abstracts away the complexities of building web servers for AI agents, letting you focus on the logic and behavior of the agents themselves. With built-in support for streaming, multi-agent communication, and secure authentication flows, it’s an indispensable tool in the ADK-Rust ecosystem.

If you’re building intelligent agents in Rust, give adk-rust a try!