
a2a-protocol

Builds Agent-to-Agent (A2A) servers and clients following Google's open protocol for agent interoperability. Use when the user wants to create an A2A-compliant agent, build an Agent Card, implement task management, connect agents across frameworks, set up agent discovery, handle streaming responses, implement push notifications, or orchestrate multi-agent workflows. Trigger words: a2a, agent to agent, agent2agent, a2a protocol, a2a server, a2a client, agent card, agent interoperability, agent collaboration, multi-agent, agent discovery, a2a sdk, a2a task.

#a2a #agents #interoperability #protocol
terminal-skills v1.0.0
Works with: claude-code, openai-codex, gemini-cli, cursor
Source

Usage

✓ Installed a2a-protocol v1.0.0

Getting Started

  1. Install the skill using the command above
  2. Open your AI coding agent (Claude Code, Codex, Gemini CLI, or Cursor)
  3. Reference the skill in your prompt
  4. The AI will use the skill's capabilities automatically

Example Prompts

  • "Build an A2A server with an Agent Card for a research assistant agent"
  • "Create an A2A client that discovers an agent via its Agent Card and streams its responses"

Documentation

Overview

Implements the Agent2Agent (A2A) open protocol for communication between AI agents built on different frameworks. A2A enables agents to discover each other via Agent Cards, negotiate interaction modalities, manage collaborative tasks, and exchange data — all without exposing internal state, memory, or tools. Supports JSON-RPC 2.0 over HTTP(S), streaming via SSE, gRPC, and async push notifications.
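On the wire, an A2A request is plain JSON-RPC 2.0. A minimal sketch of framing a message/send call follows; the method name and field shapes track the published A2A spec, but field names have shifted across protocol versions, so verify against the version your SDK pins:

```python
import json
import uuid

def build_message_send(text: str) -> str:
    """Frame a JSON-RPC 2.0 request for the A2A message/send method.

    Field names here follow the A2A spec; check them against your
    SDK's protocol version before relying on this shape.
    """
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }
    return json.dumps(request)

payload = json.loads(build_message_send("Hello, agent"))
```

The same envelope carries message/stream, tasks/get, and tasks/cancel; only the method and params change.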

Instructions

1. Core Concepts

  • A2A Client: Initiates requests to an A2A Server (on behalf of a user or another agent)
  • A2A Server (Remote Agent): Exposes an A2A-compliant endpoint, processes tasks
  • Agent Card: JSON metadata at /.well-known/agent.json describing identity, capabilities, skills, endpoint, auth
  • Task: Unit of work with lifecycle (submitted → working → input-required → completed/failed/canceled/rejected)
  • Message: Communication turn (role: "user" or "agent") containing Parts (text, file, or JSON)
  • Artifact: Output generated by the agent (documents, images, structured data)
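The task lifecycle above can be sketched as a small state machine. The transition table here is an illustrative reading of the states listed, not something exported by the SDK:

```python
# Terminal states end the task; no further transitions are allowed from them.
TERMINAL = {"completed", "failed", "canceled", "rejected"}

# Illustrative transition table for the lifecycle listed above.
TRANSITIONS = {
    "submitted": {"working", "rejected"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
}

def can_transition(current: str, target: str) -> bool:
    """Return True if moving from current to target is a legal step."""
    return target in TRANSITIONS.get(current, set())
```

Enforcing a table like this in your server catches bugs such as emitting a working update after the task has already completed.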

2. Python SDK Setup

bash
pip install a2a-sdk              # Core
pip install "a2a-sdk[http-server]" # With FastAPI/Starlette
pip install "a2a-sdk[grpc]"      # With gRPC

3. Building an A2A Server (Python)

python
from a2a.types import AgentCard, AgentSkill, AgentCapabilities
from a2a.types import Message, TextPart, TaskState, TaskStatus
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore
import uvicorn

agent_card = AgentCard(
    name="Research Assistant",
    description="Searches the web and answers questions with citations.",
    url="https://research-agent.example.com",
    version="1.0.0",
    capabilities=AgentCapabilities(streaming=True, pushNotifications=True),
    skills=[AgentSkill(
        id="web-search", name="Web Search",
        description="Search the web for current information",
        tags=["search", "research"], examples=["Find the latest news about AI regulation"],
    )],
    defaultInputModes=["text/plain"],
    defaultOutputModes=["text/plain", "application/json"],
)

class ResearchAgentExecutor(AgentExecutor):
    async def execute(self, context: RequestContext, event_queue: EventQueue):
        query = context.get_user_input()  # concatenated text from the user's message parts
        await event_queue.enqueue_event(
            TaskStatus(state=TaskState.working, message=Message(
                role="agent", parts=[TextPart(text="Searching...")]
            ))
        )
        result = await self._research(query)
        await event_queue.enqueue_event(
            TaskStatus(state=TaskState.completed, message=Message(
                role="agent", parts=[TextPart(text=result)]
            ))
        )

    async def cancel(self, context: RequestContext, event_queue: EventQueue):
        await event_queue.enqueue_event(TaskStatus(state=TaskState.canceled))

    async def _research(self, query: str) -> str:
        return f"Research results for: {query}"

# Start server — Agent Card auto-served at /.well-known/agent.json
agent_executor = ResearchAgentExecutor()
request_handler = DefaultRequestHandler(agent_executor=agent_executor, task_store=InMemoryTaskStore())
app = A2AStarletteApplication(agent_card=agent_card, http_handler=request_handler)
uvicorn.run(app.build(), host="0.0.0.0", port=8000)

4. Building an A2A Client (Python)

python
from a2a.client import A2AClient
from a2a.types import MessageSendParams, SendMessageRequest, Message, TextPart

client = await A2AClient.get_client_from_agent_card_url(
    "https://research-agent.example.com/.well-known/agent.json"
)

# Synchronous request
request = SendMessageRequest(params=MessageSendParams(
    message=Message(role="user", parts=[TextPart(text="Latest quantum computing developments?")])
))
response = await client.send_message(request)

if hasattr(response, 'status'):
    print(f"Task {response.id}: {response.status.state}")
    if response.status.message:
        print(response.status.message.parts[0].text)

# Streaming response
async for event in client.send_message_streaming(request):
    if hasattr(event, 'status') and event.status.message:
        for part in event.status.message.parts:
            if hasattr(part, 'text'):
                print(part.text, end="", flush=True)

5. Node.js SDK

bash
npm install @a2a-js/sdk
javascript
import { A2AServer, A2AClient, TaskState } from '@a2a-js/sdk';

// Server
const server = new A2AServer({
  agentCard: {
    name: 'Code Reviewer', description: 'Reviews code for bugs and best practices',
    url: 'https://code-reviewer.example.com', version: '1.0.0',
    capabilities: { streaming: true },
    skills: [{ id: 'review', name: 'Code Review', description: 'Analyze code for issues', tags: ['code', 'review'] }],
    defaultInputModes: ['text/plain'], defaultOutputModes: ['text/plain'],
  },
  async onMessage(context, eventQueue) {
    const userText = context.getUserMessage().parts[0].text;
    await eventQueue.enqueue({ status: { state: TaskState.WORKING, message: { role: 'agent', parts: [{ text: 'Reviewing...' }] } } });
    const review = await reviewCode(userText);
    await eventQueue.enqueue({ status: { state: TaskState.COMPLETED, message: { role: 'agent', parts: [{ text: review }] } } });
  },
});
server.listen(8000);

// Client
const client = await A2AClient.fromAgentCardUrl('https://code-reviewer.example.com/.well-known/agent.json');
const response = await client.sendMessage({
  message: { role: 'user', parts: [{ text: 'Review: function add(a,b) { return a + b; }' }] },
});

6. Multi-Agent Orchestration

python
# Sequential: research → write → review
research_agent = await A2AClient.get_client_from_agent_card_url("https://research-agent.example.com/.well-known/agent.json")
writer_agent = await A2AClient.get_client_from_agent_card_url("https://writer-agent.example.com/.well-known/agent.json")

research_result = await research_agent.send_message(SendMessageRequest(
    params=MessageSendParams(message=Message(role="user", parts=[TextPart(text="Research quantum computing breakthroughs 2025")]))
))
article = await writer_agent.send_message(SendMessageRequest(
    params=MessageSendParams(message=Message(role="user", parts=[TextPart(text=f"Write blog post: {research_result.status.message.parts[0].text}")]))
))

# Parallel fan-out
import asyncio
results = await asyncio.gather(
    query_agent(agent_a, "Analyze market trends"),
    query_agent(agent_b, "Analyze competitor products"),
    query_agent(agent_c, "Analyze customer feedback"),
)
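The query_agent helper used in the fan-out above is not defined by the snippet. One way to sketch it, with stub clients standing in for real A2AClient instances so the pattern runs without a live server:

```python
import asyncio

class StubAgent:
    """Stand-in for an A2AClient; replies with a canned string."""
    def __init__(self, name: str):
        self.name = name

    async def send_message(self, text: str) -> str:
        await asyncio.sleep(0)  # simulate network I/O
        return f"{self.name}: done ({text})"

async def query_agent(agent, text: str) -> str:
    # With the real SDK this would wrap SendMessageRequest / MessageSendParams
    # as in the sequential example above and unpack response.status.message.
    return await agent.send_message(text)

async def fan_out():
    a, b, c = StubAgent("market"), StubAgent("competitor"), StubAgent("feedback")
    return await asyncio.gather(
        query_agent(a, "Analyze market trends"),
        query_agent(b, "Analyze competitor products"),
        query_agent(c, "Analyze customer feedback"),
    )

results = asyncio.run(fan_out())
```

asyncio.gather preserves argument order, so results line up with the agents regardless of which one finishes first.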

7. A2A vs MCP

  • Purpose: A2A is agent-to-agent communication; MCP is agent-to-tool communication
  • Actors: A2A connects Agent ↔ Agent; MCP connects Agent ↔ Tool/Data source
  • Tasks: A2A tasks are stateful, long-running, and async; MCP calls are stateless function calls
  • Use when: delegate to another autonomous agent over A2A; call a specific tool/API over MCP

Examples

Example 1: Customer Support Router

Input: "Build an A2A server that acts as a customer support router. It receives customer queries and delegates to specialized agents: billing-agent, technical-agent, and sales-agent based on the query content."

Output: A2A server with Agent Card listing routing as its primary skill, message handler that classifies queries, A2A client connections to 3 downstream agents, task forwarding with context preservation, aggregated response, and fallback to human handoff.
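The classification step in such a router can start as simple keyword matching before graduating to an LLM classifier. A sketch, with agent names and keyword lists that are purely illustrative:

```python
# Illustrative keyword routing; a production router would likely classify with an LLM.
ROUTES = {
    "billing-agent": ["invoice", "refund", "charge", "payment", "billing"],
    "technical-agent": ["error", "crash", "bug", "install", "not working"],
    "sales-agent": ["pricing", "upgrade", "demo", "plan", "quote"],
}

def route_query(query: str, fallback: str = "human-handoff") -> str:
    """Pick the downstream agent whose keywords best match the query."""
    q = query.lower()
    scores = {agent: sum(kw in q for kw in kws) for agent, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else fallback
```

The fallback branch is where the human-handoff path from the example plugs in: anything no agent matches goes to a person.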

Example 2: Code Pipeline Agents

Input: "Create a multi-agent code pipeline: code-writer generates code, test-writer creates tests, code-reviewer reviews both. Each is an independent A2A server. Build an orchestrator."

Output: 3 A2A server implementations each with Agent Card and execution logic, orchestrator client with sequential pipeline (write → test → review), streaming updates, and error handling with feedback loops on rejection.

Guidelines

  • Serve the Agent Card at /.well-known/agent.json — this is the standard discovery endpoint
  • Use descriptive skill definitions — other agents use these to decide whether to delegate to you
  • Always handle the input-required state for human-in-the-loop scenarios
  • Use streaming for tasks that take more than a few seconds
  • Implement task cancellation — long-running tasks must be cancellable
  • Use push notifications for tasks that may take minutes or hours
  • Keep agents focused — one agent, one capability domain
  • Use structured data (JSON Parts) for agent-to-agent, text Parts for human-readable responses
  • Implement authentication on your A2A endpoint — declare the scheme in your Agent Card
  • A2A is for agent collaboration; use MCP for tool integration within a single agent
  • Pin SDK versions — the protocol is evolving (currently v0.3.0)
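The input-required guideline above implies a client-side loop: keep answering the agent's clarifying questions until the task reaches a terminal state. A minimal sketch with a stub agent (with the real SDK, each turn would be a send_message carrying the same task id):

```python
class StubTask:
    """Minimal stand-in for a task status: a state plus message text."""
    def __init__(self, state: str, text: str):
        self.state, self.text = state, text

class StubAgent:
    """Asks one clarifying question, then completes."""
    def __init__(self):
        self._asked = False

    def send(self, text: str) -> StubTask:
        if not self._asked:
            self._asked = True
            return StubTask("input-required", "Which region?")
        return StubTask("completed", f"Report for {text}")

def run_with_human_in_loop(agent, prompt: str, answer_fn) -> str:
    """Drive a task to completion, answering input-required turns via answer_fn."""
    task = agent.send(prompt)
    while task.state == "input-required":
        task = agent.send(answer_fn(task.text))
    return task.text

result = run_with_human_in_loop(StubAgent(), "Sales report", lambda q: "EMEA")
```

In production, answer_fn is wherever the human lives: a chat UI, a ticket queue, or an approval dialog.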

Information

Version
1.0.0
Author
terminal-skills
Category
Development
License
Apache-2.0