A2A Adapter

Convert any AI agent into an A2A Protocol server in 3 lines.
A Python SDK that makes any agent framework (n8n, LangGraph, CrewAI, LangChain, OpenClaw, Hermes Agent, Claude Code, Codex, Ollama, or a plain function) compatible with the A2A (Agent-to-Agent) Protocol.
from a2a_adapter import N8nAdapter, serve_agent
adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/agent")
serve_agent(adapter, port=9000)
That's it. Your agent is now A2A-compatible with auto-generated AgentCard, task management, and streaming support — all handled by the A2A SDK.
Features
- 3-line setup — import, create, serve
- Built-in adapters — n8n, LangChain, LangGraph, CrewAI, OpenClaw, Hermes, Claude Code, Codex, Ollama, and more
- Streaming — auto-detected for LangChain and LangGraph
- Auto AgentCard — generated from adapter metadata, served at /.well-known/agent-card.json (and legacy /.well-known/agent.json)
- SDK-First — delegates task management, SSE, and push notifications to the A2A SDK
- Extensible — register_adapter() for third-party frameworks
- Minimal surface — implement invoke(), get a full A2A server
Installation
pip install a2a-adapter               # Core (includes n8n, callable)
pip install "a2a-adapter[crewai]"     # + CrewAI
pip install "a2a-adapter[langchain]"  # + LangChain
pip install "a2a-adapter[langgraph]"  # + LangGraph
pip install "a2a-adapter[all]"        # Everything
Quick Start
n8n Workflow
from a2a_adapter import N8nAdapter, serve_agent
adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/agent")
serve_agent(adapter, port=9000)
LangChain (with streaming)
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from a2a_adapter import LangChainAdapter, serve_agent
chain = ChatPromptTemplate.from_template("Answer: {input}") | ChatOpenAI(model="gpt-4o-mini")
adapter = LangChainAdapter(runnable=chain, input_key="input")
serve_agent(adapter, port=8002) # Streaming auto-detected!
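Streaming auto-detection is duck-typed: as the Adapter Support table notes, it checks whether the runnable exposes an astream method. A minimal sketch of that check:

```python
def supports_streaming(runnable) -> bool:
    """Duck-typed check: streaming is available when the runnable
    exposes an async `astream` method (as LangChain runnables do)."""
    return hasattr(runnable, "astream")

class FakeStreamingRunnable:
    # Stand-in for a LangChain runnable that can stream.
    async def astream(self, inputs):
        yield "partial "
        yield "answer"

print(supports_streaming(FakeStreamingRunnable()))  # True
print(supports_streaming(object()))                 # False
```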
LangGraph (with streaming)
from a2a_adapter import LangGraphAdapter, serve_agent
graph = builder.compile() # Your LangGraph workflow
adapter = LangGraphAdapter(graph=graph)
serve_agent(adapter, port=9002)
CrewAI
from a2a_adapter import CrewAIAdapter, serve_agent
adapter = CrewAIAdapter(crew=your_crew, timeout=600)
serve_agent(adapter, port=8001)
OpenClaw
from a2a_adapter import OpenClawAdapter, serve_agent
adapter = OpenClawAdapter(thinking="low", agent_id="main")
serve_agent(adapter, port=9008)
Ollama (local LLM)
from a2a_adapter import OllamaAdapter, serve_agent
from a2a_adapter.integrations.ollama import OllamaClient
client = OllamaClient(model="llama3.2")
adapter = OllamaAdapter(client=client, name="My Local LLM")
serve_agent(adapter, port=10010)
Hermes Agent
Hermes is not installed by this package. Clone the Hermes repo and put it on your PYTHONPATH, run hermes setup to create credentials and ~/.hermes/config.yaml, then:

Note: model strings use Hermes's usual provider/model form (the same form as hermes config set model), not necessarily the bare model field from Anthropic's HTTP API.
from a2a_adapter import HermesAdapter, serve_agent
adapter = HermesAdapter(
model="anthropic/claude-sonnet-4",
enabled_toolsets=["hermes-cli"],
name="Hermes",
description="Tool use, persistent memory, streaming",
)
serve_agent(adapter, port=9010)
See examples/hermes_agent.py for a runnable template (including PYTHONPATH setup).
Claude Code
from a2a_adapter import ClaudeCodeAdapter, serve_agent
adapter = ClaudeCodeAdapter(working_dir="/path/to/project")
serve_agent(adapter, port=9010)
Security note: By default, skip_permissions is False. Without it (and without a pre-configured Claude Code permissions file), tool-use calls may not proceed in unattended mode. For trusted, sandboxed environments only:

adapter = ClaudeCodeAdapter(working_dir="...", skip_permissions=True)

Or via env var: A2A_CLAUDE_SKIP_PERMISSIONS=1
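A sketch of how such an env-var toggle is typically read (illustrative only; the adapter's actual parsing may accept a different set of truthy values):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    # Treat "1", "true", "yes" (case-insensitive) as enabled;
    # fall back to the default when the variable is unset.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes"}

os.environ["A2A_CLAUDE_SKIP_PERMISSIONS"] = "1"
print(env_flag("A2A_CLAUDE_SKIP_PERMISSIONS"))  # True
```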
Codex
from a2a_adapter import CodexAdapter, serve_agent
adapter = CodexAdapter(working_dir="/path/to/project")
serve_agent(adapter, port=9011)
Security note: By default, bypass_approvals and skip_git_check are False. For trusted, sandboxed environments only:

adapter = CodexAdapter(working_dir="...", bypass_approvals=True, skip_git_check=True)

Or via env vars: A2A_CODEX_BYPASS_APPROVALS=1 and A2A_CODEX_SKIP_GIT_CHECK=1
Custom Function
from a2a_adapter import CallableAdapter, serve_agent
async def my_agent(inputs):
return f"Echo: {inputs['message']}"
adapter = CallableAdapter(func=my_agent, name="Echo Agent")
serve_agent(adapter, port=9005)
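The function above is async. Assuming CallableAdapter also normalizes plain (sync) functions (an assumption, not a documented guarantee), the usual dual-dispatch pattern looks like this:

```python
import asyncio
import inspect

async def call_any(func, inputs):
    # Await coroutine functions directly; call plain functions as-is.
    if inspect.iscoroutinefunction(func):
        return await func(inputs)
    return func(inputs)

async def async_agent(inputs):
    return f"Echo: {inputs['message']}"

def sync_agent(inputs):
    return f"Echo: {inputs['message']}"

print(asyncio.run(call_any(async_agent, {"message": "hi"})))  # Echo: hi
print(asyncio.run(call_any(sync_agent, {"message": "hi"})))   # Echo: hi
```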
Custom Adapter Class
For full control, subclass BaseA2AAdapter:
from a2a_adapter import BaseA2AAdapter, serve_agent
class MyAdapter(BaseA2AAdapter):
async def invoke(self, user_input: str, context_id: str | None = None, **kwargs) -> str:
return f"You said: {user_input}"
serve_agent(MyAdapter(), port=8003)
Architecture
A2A Caller (other agents)
│ A2A Protocol (HTTP + JSON-RPC 2.0 / SSE)
▼
┌──────────────────────────────────────────────┐
│ A2A SDK (DefaultRequestHandler, TaskStore) │ ← handles protocol
├──────────────────────────────────────────────┤
│ AdapterAgentExecutor (bridge layer) │ ← adapts interface
├──────────────────────────────────────────────┤
│ Your Adapter (invoke / stream) │ ← YOUR CODE HERE
├──────────────────────────────────────────────┤
│ Framework (n8n / LangChain / CrewAI / ...) │
└──────────────────────────────────────────────┘
Design principle: Adapters answer ONE question — "given text, return text." Everything else (task management, SSE streaming, push notifications, AgentCard serving) is handled by the A2A SDK.
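The layering can be illustrated with a toy bridge (names here are illustrative, not the SDK's actual classes): the executor prefers stream() when the adapter provides it and falls back to invoke():

```python
import asyncio

class EchoAdapter:
    # Toy adapter: the only contract is "given text, return text".
    async def invoke(self, user_input: str) -> str:
        return f"echo: {user_input}"

    async def stream(self, user_input: str):
        # Yield the reply word by word to simulate streaming.
        for word in user_input.split():
            yield word + " "

async def run_task(adapter, user_input: str) -> str:
    # Bridge logic: prefer streaming when the adapter supports it.
    if hasattr(adapter, "stream"):
        chunks = [chunk async for chunk in adapter.stream(user_input)]
        return "".join(chunks)
    return await adapter.invoke(user_input)

print(asyncio.run(run_task(EchoAdapter(), "hello world")))  # "hello world "
```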
See ARCHITECTURE.md for detailed design documentation, and DESIGN_V0.2.md for the v0.2 design rationale.
API Reference
Core
| Function | Description |
|---|---|
| serve_agent(adapter, port=9000) | One-line server startup |
| to_a2a(adapter) | Convert adapter to ASGI app |
| build_agent_card(adapter) | Auto-generate AgentCard from metadata |
| load_adapter(config) | Factory: create adapter from config dict |
| register_adapter(name) | Decorator: register third-party adapters |
BaseA2AAdapter (implement this)
| Method | Required | Description |
|---|---|---|
| invoke(user_input, context_id, **kwargs) | Yes | Execute agent, return text |
| stream(user_input, context_id, **kwargs) | No | Yield text chunks (streaming) |
| cancel() | No | Cancel current execution |
| close() | No | Release resources |
| get_metadata() | No | Return AdapterMetadata for AgentCard |
Adapter Support
| Framework | Adapter | Streaming | Auto-detected |
|---|---|---|---|
| n8n | N8nAdapter | - | - |
| LangChain | LangChainAdapter | Yes | hasattr(runnable, "astream") |
| LangGraph | LangGraphAdapter | Yes | hasattr(graph, "astream") |
| CrewAI | CrewAIAdapter | - | - |
| OpenClaw | OpenClawAdapter | - | - |
| Hermes | HermesAdapter | Yes | Always |
| Ollama | OllamaAdapter | Yes | Always |
| Claude Code | ClaudeCodeAdapter | Yes | Always |
| Codex | CodexAdapter | - | - |
| Callable | CallableAdapter | Optional | streaming=True param |
Input Handling
All adapters support a 3-priority input pipeline:
- input_mapper (highest) — custom function (raw_input, context_id) -> dict
- parse_json_input — auto-parse JSON strings to dicts
- input_key (fallback) — wrap plain text as {input_key: text}
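A sketch of that resolution order (illustrative, not the library's actual code):

```python
import json

def resolve_input(raw_input: str, context_id=None,
                  input_mapper=None, parse_json_input=False,
                  input_key="input") -> dict:
    # Priority 1: a custom mapper wins outright.
    if input_mapper is not None:
        return input_mapper(raw_input, context_id)
    # Priority 2: optionally parse JSON strings into dicts.
    if parse_json_input:
        try:
            parsed = json.loads(raw_input)
            if isinstance(parsed, dict):
                return parsed
        except json.JSONDecodeError:
            pass  # not JSON; fall through to the input_key fallback
    # Priority 3: wrap plain text under input_key.
    return {input_key: raw_input}

print(resolve_input('{"q": "hi"}', parse_json_input=True))  # {'q': 'hi'}
print(resolve_input("hi", input_key="message"))             # {'message': 'hi'}
```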
Config-driven Loading
from a2a_adapter import load_adapter
adapter = load_adapter({
"adapter": "n8n",
"webhook_url": "http://localhost:5678/webhook/agent",
"timeout": 60,
})
Third-party Adapters
from a2a_adapter import register_adapter, BaseA2AAdapter
@register_adapter("my_framework")
class MyFrameworkAdapter(BaseA2AAdapter):
async def invoke(self, user_input, context_id=None, **kwargs):
return "Hello from my framework!"
# Now loadable via config:
adapter = load_adapter({"adapter": "my_framework"})
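Under the hood, register_adapter and load_adapter presumably pair a name-to-class registry with a config factory. A toy version of that pattern (not the library's actual internals):

```python
_REGISTRY = {}

def register(name):
    # Decorator: map a config name to an adapter class.
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

def load(config: dict):
    # Pop the adapter name; forward remaining keys as constructor kwargs.
    cfg = dict(config)
    cls = _REGISTRY[cfg.pop("adapter")]
    return cls(**cfg)

@register("echo")
class EchoAdapter:
    def __init__(self, prefix="echo: "):
        self.prefix = prefix

adapter = load({"adapter": "echo", "prefix": ">> "})
print(adapter.prefix)  # >> 
```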
Advanced: ASGI Deployment
For production deployments with Gunicorn/Hypercorn:
from a2a_adapter import N8nAdapter, to_a2a
adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/agent")
app = to_a2a(adapter) # Returns Starlette ASGI app
# Deploy with: gunicorn app:app -k uvicorn.workers.UvicornWorker
Migration from v0.1
v0.2 is backwards compatible — v0.1 code still works but emits deprecation warnings.
| v0.1 (deprecated) | v0.2 (recommended) |
|---|---|
| BaseAgentAdapter | BaseA2AAdapter |
| load_a2a_agent(config) | load_adapter(config) |
| build_agent_app(card, adapter) | to_a2a(adapter) |
| serve_agent(card, adapter) | serve_agent(adapter) |
| N8nAgentAdapter | N8nAdapter |
| 3-method override (to_framework + call_framework + from_framework) | Single invoke() method |
Examples
The examples/ directory contains working examples for each adapter:
python examples/n8n_agent.py # n8n
python examples/langchain_agent.py # LangChain (streaming)
python examples/langgraph_server.py # LangGraph (streaming)
python examples/crewai_agent.py # CrewAI
python examples/openclaw_agent.py # OpenClaw
python examples/ollama_agent.py # Ollama (local LLM)
python examples/claude_code_agent.py # Claude Code
python examples/codex_agent.py # Codex
python examples/custom_adapter.py # Custom BaseA2AAdapter
python examples/single_agent_client.py # Test any running agent
See examples/README.md for details.
Testing
pip install "a2a-adapter[dev]"
pytest # All tests
pytest tests/unit/ # Unit tests only
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick start:
- Fork & clone
- pip install -e ".[dev]"
- Make changes + add tests
- Run pytest to verify
- Submit a PR
License
Apache-2.0 — see LICENSE.
Built with care by HYBRO AI. Powered by the A2A Protocol.