
Every Agent, One Governance Layer

Introducing Lemma A2A

Sam Liu - Software Engineer at Thread AI

April 20, 2026

Governing a growing fleet of cross-platform enterprise AI agents.

Agents no longer just assist. They execute: writing to financial systems, processing compliance filings, orchestrating operations across dozens of integrations. At scale, that execution requires coordination across agents that were never designed to work together.

The A2A protocol solves one important piece of this. It gives agents a common language to discover each other, delegate tasks, and exchange results across frameworks. But a protocol defines how agents talk. It doesn't define whether they should, under what constraints, with what audit trail, or what happens when something goes wrong mid-execution.

That gap is where production deployments stall. It is why we are natively integrating A2A into Lemma, and why we believe the orchestration layer is what makes multi-agent interoperability safe for production.

Lemma A2A
Safely onboard agents from any platform by leveraging the key pillars of controlled autonomy: Control, Governance, and Reliability
A quick primer: A2A and MCP

Two open protocols are shaping how enterprise AI systems connect. Understanding the distinction between them is essential context for how Lemma operates as an enterprise-grade orchestrator.

A2A: Governed by the Linux Foundation, A2A is the "common language" of the AI world. It standardizes how autonomous agents, regardless of who built them, discover each other's skills via a JSON manifest called an AgentCard, then delegate tasks and exchange results across frameworks.
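As a concrete illustration, here is roughly what a minimal AgentCard might look like, built and inspected in Python. The field names follow the published A2A specification; the agent and skill themselves are invented examples:

```python
# A minimal AgentCard sketch. Field names follow the public A2A
# specification; this agent and its skill are invented for illustration.
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts and validates fields from invoice documents.",
    "url": "https://agents.example.com/invoice-processor",
    "version": "1.0.0",
    "capabilities": {"streaming": False, "pushNotifications": True},
    "defaultInputModes": ["application/json"],
    "defaultOutputModes": ["application/json"],
    "skills": [
        {
            "id": "extract-invoice",
            "name": "Extract invoice fields",
            "description": "Returns structured line items from an invoice.",
            "tags": ["finance", "extraction"],
        }
    ],
}

def skill_ids(card: dict) -> list[str]:
    """List the skill ids a client could discover from the card."""
    return [skill["id"] for skill in card.get("skills", [])]

print(skill_ids(agent_card))  # ['extract-invoice']
```

A client resolves this manifest first, then decides which skill to delegate a task to.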

Model Context Protocol (MCP): Introduced by Anthropic, MCP addresses a complementary problem: agent-to-tool communication. It standardizes how AI agents connect to external data sources, APIs, databases, and tools. Lemma already supports MCP natively for tool connectivity.

The Lemma Strategy: Use MCP for tools and A2A for agents. Lemma integrates both natively, allowing you to build a modular AI stack where you can swap out third-party specialists without breaking your core business logic.

Lemma AI Stack
Lemma provides the orchestration and governance layers to manage your enterprise agent registry
Protocols Define Communication. They Don't Define Safety.

A compliance officer does not just want agents that can communicate across platforms. They want to know exactly which agent accessed which data, why a handoff occurred, and that every step is traceable back to an approved workflow.

A2A makes interoperability possible. It does not make it safe.

The governance requirements for multi-agent systems in production are inseparable from the orchestration layer. Scoped permissions, audit trails, fault-tolerant execution, human review checkpoints: all these are essential in production-grade orchestration infrastructure.

Lemma Workers: Governed Agents by Design

Lemma's core primitive is the Worker, a versioned, observable unit of execution that combines deterministic logic, AI reasoning, API calls, and human-in-the-loop checkpoints into a single governed workflow.

Every Worker is built on three properties:

1. State-driven logic: Each State represents a single step, whether an action, a pause for review, or a branching decision.

2. Durable context: Every execution maintains its own JSON-structured context, preserving state across the full lifecycle of a complex operation.

3. Governed transitions: Every state change, from an AI-generated decision to a human approval, is captured in a versioned execution trace, providing the auditability required for enterprise-grade compliance.
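The three properties can be sketched together as a toy state machine; the names here are illustrative, not Lemma's actual API:

```python
from dataclasses import dataclass, field

# A toy sketch of the three Worker properties: state-driven logic,
# a durable JSON-style context, and a recorded trace of every
# transition. Names are illustrative, not Lemma's real API.

@dataclass
class WorkerRun:
    state: str = "start"
    context: dict = field(default_factory=dict)   # durable context
    trace: list = field(default_factory=list)     # governed transitions

    def transition(self, next_state: str, actor: str, **updates):
        self.context.update(updates)              # preserve state across steps
        self.trace.append({"from": self.state, "to": next_state, "actor": actor})
        self.state = next_state

run = WorkerRun()
run.transition("classified", actor="ai", doc_type="invoice")
run.transition("awaiting_review", actor="system")   # pause for a human
run.transition("approved", actor="human:analyst")

print(run.state)        # approved
print(len(run.trace))   # 3 auditable transitions
```

Every transition, whether triggered by the model or by a reviewer, lands in the same trace.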

A Lemma Worker already has everything it needs to be a well-governed A2A agent: defined capabilities, scoped permissions, full observability, and a robust execution model. What it hasn't had, until now, is the ability to advertise those capabilities via a standard protocol and to discover and delegate to agents outside the Lemma ecosystem.

A2A as a Configuration Layer, Not a Rebuild

We are treating A2A as a configuration layer on top of existing Workers, not a new framework.

Publishing Workers as A2A agents

Any Lemma Worker can be made A2A-compatible through an opt-in configuration. When enabled, the platform auto-generates an AgentCard, the A2A discovery manifest, from the Worker's existing metadata: its name, description, input/output schemas, supported authentication, and workflow capabilities. There's no need to rebuild the Worker or wrap it in a new framework.
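A rough sketch of that derivation, with hypothetical metadata and card shapes (Lemma's real schemas may differ):

```python
# Sketch of the opt-in mapping described above: deriving an A2A
# AgentCard from a Worker's existing metadata. Both shapes are
# hypothetical; Lemma's real schemas may differ.

worker_meta = {
    "name": "kyc-check",
    "description": "Runs a KYC verification workflow.",
    "workflows": [
        {"id": "verify-identity", "summary": "Verify a customer identity."}
    ],
}

def to_agent_card(meta: dict, base_url: str) -> dict:
    return {
        "name": meta["name"],
        "description": meta["description"],
        "url": f"{base_url}/{meta['name']}",
        # Each Worker workflow is exposed as an A2A skill.
        "skills": [
            {"id": wf["id"], "description": wf["summary"]}
            for wf in meta["workflows"]
        ],
    }

card = to_agent_card(worker_meta, "https://a2a.example.com")
print(card["url"])              # https://a2a.example.com/kyc-check
print(card["skills"][0]["id"])  # verify-identity
```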

The mapping is natural:

A2A Protocol Entity → Lemma Primitive

Agent → Worker
Agent Card → Worker Metadata
Skill → Workflow
Task → Worker Run
Artifact → Worker Run Output
input_required → Handoff (Human-in-the-Loop)

This last mapping is particularly significant. A2A's input_required state was designed for scenarios where an agent needs human input to proceed, but few platforms have a mature mechanism for actually handling that pause. Lemma's Handoff architecture, built as a first-class primitive for compliance-critical workflows, provides exactly this: structured pause points that surface context to domain experts, capture approvals, and resume execution with full audit trails.

When an external system calls a Lemma Worker via A2A and the workflow reaches a human review step, that interaction flows through the same governed Handoff pipeline used by every other Lemma workflow.
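A client-side sketch of that loop, with an invented transport and task shape: the task pauses in input_required, a human supplies input, and execution resumes:

```python
# Sketch of a client loop that handles A2A's input_required task state
# by routing it through a human-review handoff, then resuming. The
# task shape and helper functions are invented for illustration.

def run_task(send, get_human_input, payload):
    task = send(payload)                           # initial A2A request
    while task["status"] == "input_required":
        answer = get_human_input(task["prompt"])   # structured pause point
        task = send({"task_id": task["id"], "input": answer})
    return task

# Fake transport: the first call asks for approval, the second completes.
calls = []
def fake_send(payload):
    calls.append(payload)
    if len(calls) == 1:
        return {"id": "t1", "status": "input_required", "prompt": "Approve?"}
    return {"id": "t1", "status": "completed", "artifact": {"ok": True}}

result = run_task(fake_send, lambda prompt: "yes", {"skill": "review-filing"})
print(result["status"])  # completed
```

In the real pipeline, the `get_human_input` step would be Lemma's Handoff surface, with the approval captured in the execution trace.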

Consuming external A2A agents

On the client side, we're adding A2A as a new type (a2a_agent) alongside Lemma's existing integration options (openapi, grpc, mcpserver). Workflow builders will be able to incorporate external A2A agents into Lemma workflows without custom integration code. Credentials are managed through Lemma's existing credential system, and long-running agent tasks are handled via the same durable execution model that powers all Lemma workflows.
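A hypothetical workflow-definition fragment showing how an a2a_agent entry might sit alongside the existing integration types; all field names here are illustrative, not Lemma's actual configuration schema:

```python
# Hypothetical workflow-definition fragment: an external A2A agent
# declared with the same shape as existing integration types
# (openapi, grpc, mcpserver). Field names are invented.

integrations = [
    {"type": "openapi", "name": "billing-api",
     "spec_url": "https://example.com/openapi.json"},
    {"type": "mcpserver", "name": "docs-tools",
     "endpoint": "https://mcp.example.com"},
    {"type": "a2a_agent", "name": "risk-scorer",
     "agent_card_url": "https://agents.example.com/risk-scorer/card.json",
     "credential_ref": "vault://a2a/risk-scorer"},  # existing credential system
]

SUPPORTED = {"openapi", "grpc", "mcpserver", "a2a_agent"}

def validate(defs):
    bad = [d["name"] for d in defs if d["type"] not in SUPPORTED]
    if bad:
        raise ValueError(f"unsupported integration types: {bad}")
    return True

print(validate(integrations))  # True
```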

This means a single Lemma workflow can orchestrate an ADK-built agent running on Agent Engine, any third-party agent from an AI Agent Marketplace, and a Lemma Worker performing internal processing, all within the same governed execution trace.

A2A and MCP
Orchestrate several agents on a single Lemma workflow with full traceability
An Agent Registry Built for Trust, Not Just Discovery

We are building an Agent Registry that treats internal Workers and external A2A agents as peers. But history shows that simple discovery is not enough.

UDDI failed in the early 2000s because it was a static directory, not a trust-validated ecosystem. MCP marketplaces face a similar problem: no centralized authority for security and identity.

Lemma's registry moves beyond passive discovery to an active trust architecture:

Verified Identity

Lemma uses the A2A AgentCard to resolve an agent's skills and auth requirements before it ever appears in a workflow.

Reputation through Traceability

Lemma builds a living record of every external agent's reliability and compliance history based on past performance.

Convergent Governance

Discovery, quality assessment, and enterprise governance converge in a single pane of glass.
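The "reputation through traceability" idea above can be sketched as a score derived from an agent's recorded runs; the record shape and scoring rule are invented for illustration:

```python
# Sketch of deriving a reliability score for a registry entry from
# its recorded runs. The record shape and scoring rule are invented.

runs = [
    {"agent": "risk-scorer", "ok": True},
    {"agent": "risk-scorer", "ok": True},
    {"agent": "risk-scorer", "ok": False},
    {"agent": "risk-scorer", "ok": True},
]

def reliability(history):
    """Fraction of successful runs, or None if there is no history."""
    if not history:
        return None
    return sum(1 for r in history if r["ok"]) / len(history)

print(f"{reliability(runs):.2f}")  # 0.75
```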

Agent Registry
Verify permissions, past versions, and the activity of every agent.
Why governance must be built into the orchestration layer

There's a reason we're implementing A2A at the platform level rather than building a standalone A2A adapter. The governance requirements for multi-agent systems in production are inseparable from the orchestration layer.

Consider what happens when a Lemma workflow delegates a subtask to an external A2A agent:

1. Control

The workflow defines exactly which agents can be invoked, with what inputs, and under what authentication context. Lemma's scoped, time-bound access model means the calling workflow receives only the permissions required for the interaction, and those permissions expire when the task completes. The external agent never gains access to data or systems beyond the scope of the specific request.
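A minimal sketch of scoped, time-bound access as described here; the grant shape and helpers are invented for illustration:

```python
import time

# Sketch of scoped, time-bound access: a grant carries only the
# permissions needed for one interaction and expires when the task
# window closes. Names and shapes are invented for illustration.

def issue_grant(agent, scopes, ttl_seconds, now=None):
    now = time.time() if now is None else now
    return {"agent": agent, "scopes": set(scopes),
            "expires_at": now + ttl_seconds}

def allowed(grant, scope, now=None):
    now = time.time() if now is None else now
    return scope in grant["scopes"] and now < grant["expires_at"]

grant = issue_grant("risk-scorer", ["read:transactions"],
                    ttl_seconds=60, now=1000.0)
print(allowed(grant, "read:transactions", now=1030.0))  # True: in scope, in window
print(allowed(grant, "write:ledger", now=1030.0))       # False: never granted
print(allowed(grant, "read:transactions", now=2000.0))  # False: expired
```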

2. Governance

Every A2A interaction (the request, the response, any intermediate handoffs) is captured in the same execution trace as the rest of the workflow. Compliance teams get a single pane of glass, not a fragmented view across multiple systems. When an auditor asks how a decision was made, the answer includes the full chain: which agent was called, what data was sent, what came back, and whether a human reviewed the result.
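A sketch of answering that auditor's question from a single trace; the event shapes are invented for illustration:

```python
# Sketch of answering an auditor's question from one execution trace:
# which agent was called, what was exchanged, and whether a human
# reviewed the result. Event shapes are invented for illustration.

trace = [
    {"kind": "a2a_request", "agent": "risk-scorer", "payload": {"txn": "T-91"}},
    {"kind": "a2a_response", "agent": "risk-scorer", "result": {"score": 0.87}},
    {"kind": "handoff", "reviewer": "analyst@example.com", "decision": "approved"},
]

def audit_summary(events):
    return {
        "agents_called": sorted({e["agent"] for e in events
                                 if e["kind"] == "a2a_request"}),
        "human_reviewed": any(e["kind"] == "handoff" for e in events),
    }

print(audit_summary(trace))
```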

3. Reliability

If an external A2A agent fails or times out mid-task, Lemma's fault-tolerant execution model handles it with intelligent retry policies, state preservation, and graceful degradation, rather than silently dropping the interaction or cascading the failure through downstream steps.
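A sketch of that failure handling, retry with exponential backoff plus a graceful-degradation fallback; this is illustrative, not Lemma's actual retry engine:

```python
import time

# Sketch of fault tolerance for an external agent call: retry with
# exponential backoff, then fall back to graceful degradation instead
# of cascading the failure. Illustrative only, not Lemma's engine.

def call_with_retries(call, attempts=3, base_delay=0.0, fallback=None):
    for i in range(attempts):
        try:
            return call()
        except TimeoutError:
            if i == attempts - 1:
                return fallback               # graceful degradation
            time.sleep(base_delay * (2 ** i))  # exponential backoff

failures = {"n": 0}
def flaky():
    """Fake external agent: times out twice, then succeeds."""
    failures["n"] += 1
    if failures["n"] < 3:
        raise TimeoutError("agent timed out")
    return {"score": 0.87}

print(call_with_retries(flaky))  # succeeds on the third attempt
```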

These aren't add-ons. They're architectural properties of the Lemma platform that extend naturally to A2A interactions because A2A is integrated at the infrastructure layer.

Control, Governance, Reliability
Thread AI pillars that enable Controlled Autonomy
The bigger picture: orchestration as the enterprise agent governance layer

The agent landscape is converging on a layered architecture. MCP handles the tool layer. A2A handles agent-to-agent communication. Between "agents can talk" and "agents can safely execute critical business processes together" sits the orchestration and governance layer.

As enterprises move from tens to hundreds of agents, the questions that matter most are not protocol questions. They are governance questions: Who approved this agent to operate? What data can it access? What happens when an automated decision needs to be explained to a regulator?

These are the questions Lemma was built to answer. A2A integration means those answers now extend to every agent in the enterprise ecosystem, regardless of where it was built or what framework it runs on.

The A2A protocol gives agents a common language. Lemma gives enterprises the governance infrastructure to deploy that language safely in production. Reach out to our team to discuss how Lemma can address your agent interoperability requirements.
