
Beyond MCP: Why Agent Protocols Need to Be as Fluid as Agents Themselves

OSSA Team


In early 2026, Anthropic donated the Model Context Protocol (MCP) to the Linux Foundation's AI & Agents Interoperability Framework (AAIF), joined by OpenAI, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare. MCP has been adopted across 60,000+ AGENTS.md files and is supported by every major IDE and agent framework.

This is genuinely good news. MCP solved a real problem: giving LLMs a standardized way to connect to external tools. The "USB-C for AI" metaphor is apt. Before MCP, every tool integration was a custom wiring job.

But here is the uncomfortable truth: USB-C is not the internet. A connector protocol is not an agent protocol. And the gap between what MCP provides and what autonomous agents actually need is growing wider every month.


What MCP Does Well

Credit where it is due. MCP established:

  • A standard interface for tool invocation (functions, parameters, return types)
  • A server/client architecture that decouples tools from models
  • A registry pattern that makes tools discoverable within a session
  • A transport layer that works across local and remote connections

For human-in-the-loop workflows — where a developer uses an IDE and the LLM calls tools on their behalf — MCP is excellent. It is the right abstraction at the right level.
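
To make the abstraction concrete, here is a minimal sketch of what an MCP tool invocation looks like on the wire. The method name `tools/call` and the JSON-RPC 2.0 framing follow the MCP specification; the tool name and arguments are illustrative.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request invoking a tool on an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Illustrative call: the tool "get_weather" is a placeholder, not a real server.
request = make_tool_call(1, "get_weather", {"city": "Berlin"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # get_weather
```

Note what the request carries: a method, a tool name, and arguments. Nothing about who is calling, who is being called, or why either side should be trusted.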


Where MCP Stops

The problem emerges when agents become autonomous. When Agent A needs to discover, evaluate, trust, and collaborate with Agent B — without a human mediating — MCP has no answers for:

Identity: Who is this agent? MCP has no concept of agent identity. A tool server is anonymous. There is no GAID, no verifiable credential, no trust chain.

Discovery: How do I find agents with capabilities I need? MCP discovery is session-scoped. There is no federated registry, no cross-platform search, no DNS-resolvable identity.

Trust: Should I trust this agent's outputs? MCP has no attestation model. No compliance verification. No audit trail.

Governance: What is this agent allowed to do? MCP defines what tools can do, not what they should do. There are no governance constraints, no policy enforcement, no boundary definitions.

Lifecycle: What happens when an agent version changes? MCP has no versioning semantics for agent capabilities. No deprecation model. No migration path.

MCP treats agents like power tools. The future needs agents treated like colleagues — with identities, reputations, contracts, and accountability.
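
A hypothetical sketch of the metadata those five gaps call for. None of these field names come from the MCP spec; `gaid` follows the GAID identifier that OSSA proposes, and the rest are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    gaid: str                                          # globally unique agent identifier
    version: str                                       # capability version (MCP has no semantics for this)
    attested_by: list = field(default_factory=list)    # trust chain of attesting organizations
    allowed_actions: set = field(default_factory=set)  # governance boundary

def is_trusted(agent: AgentIdentity, trusted_roots: set) -> bool:
    """An agent is trusted only if at least one attestor is a known trust root."""
    return any(org in trusted_roots for org in agent.attested_by)

# Illustrative identity record; the identifiers are invented.
agent = AgentIdentity(
    gaid="gaid:example:invoice-bot",
    version="2.1.0",
    attested_by=["org:acme-audit"],
    allowed_actions={"read_invoices"},
)
print(is_trusted(agent, {"org:acme-audit"}))  # True
```

Even this toy record answers questions a bare MCP tool server cannot: who the agent is, which version it speaks, and who vouches for it.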


The Security Problem Is Already Here

The gap is not theoretical. Research from arXiv:2506.13538 found that 7.2% of MCP servers have known security vulnerabilities. A separate study (arXiv:2603.00195) identified 6,487 malicious tools across MCP registries — tools that exfiltrate data, inject prompts, or escalate privileges.

This is what happens when you have a connector protocol without an identity and trust layer. MCP tells you how to call a tool. It does not tell you whether you should.

In an autonomous multi-agent system, where agents discover and invoke tools without human approval, this is not a minor gap. It is an existential risk. One compromised MCP server in a chain of 10 agents can poison every downstream decision.
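
A back-of-envelope calculation makes the chain risk concrete. Taking the 7.2% vulnerability rate cited above at face value and assuming each server in a chain is sampled independently (a simplification for illustration), the probability that an entire chain is clean decays geometrically with its length.

```python
def chain_clean_probability(p_vulnerable: float, n_servers: int) -> float:
    """Probability that every server in an n-server chain is free of
    known vulnerabilities, assuming independent sampling."""
    return (1.0 - p_vulnerable) ** n_servers

for n in (1, 5, 10):
    print(n, round(chain_clean_probability(0.072, n), 3))
# 1 0.928
# 5 0.688
# 10 0.474
```

Under these assumptions, a 10-agent chain has barely even odds of touching no vulnerable server at all.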


Knowledge Graphs + LLMs > Static Tool Registries

The deeper architectural issue is that MCP's tool registry model is fundamentally static. You define tools. You register them. Agents consume them. The registry is a list.

But agent capabilities are not lists. They are graphs. An agent that can "process invoices" actually has a web of related capabilities: OCR, data extraction, validation against schemas, currency conversion, compliance checking, ERP integration. These capabilities have dependencies, version constraints, governance rules, and trust requirements.

Knowledge graphs model this naturally. Instead of a flat registry of tool definitions, a graph encodes:

  • Capability relationships (tool A depends on tool B)
  • Trust chains (tool A is attested by organization C)
  • Governance constraints (tool A cannot be used in jurisdiction D)
  • Performance history (tool A has 99.7% success rate for task type E)

Combined with LLM-powered semantic reasoning, an agent can navigate this graph dynamically — finding the right capabilities for the current context, evaluating trust, respecting governance — instead of iterating through a static tool list.
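
A toy version of that traversal, using the invoice example above. The graph shape (dependencies plus attestations) and all names are illustrative, not from any published OSSA or MCP schema.

```python
# Each capability carries its dependencies and the organization attesting it.
graph = {
    "process_invoices":  {"depends_on": ["ocr", "schema_validation"], "attested_by": "org:c"},
    "ocr":               {"depends_on": [],                           "attested_by": "org:c"},
    "schema_validation": {"depends_on": [],                           "attested_by": "org:d"},
}

def resolve(capability: str, trusted: set, seen=None) -> list:
    """Depth-first resolution of a capability and its dependencies,
    refusing anything without a trusted attestation."""
    seen = [] if seen is None else seen
    node = graph[capability]
    if node["attested_by"] not in trusted:
        raise ValueError(f"untrusted capability: {capability}")
    for dep in node["depends_on"]:
        if dep not in seen:
            resolve(dep, trusted, seen)
    seen.append(capability)
    return seen

print(resolve("process_invoices", {"org:c", "org:d"}))
# ['ocr', 'schema_validation', 'process_invoices']

try:
    resolve("process_invoices", {"org:c"})  # org:d is not trusted here
except ValueError as e:
    print(e)  # untrusted capability: schema_validation
```

The same query succeeds or fails depending on the caller's trust roots, which is exactly the context-sensitivity a flat tool list cannot express.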

This is what fluid agent protocols look like. Not rigid tool registries, but dynamic capability graphs that agents can reason over.


OSSA + DUADP: The Contract and Discovery Layers

OSSA and DUADP are not replacements for MCP. They are the layers that MCP is missing.

OSSA provides the contract layer. The OSSA manifest defines agent identity, capabilities, governance, and interoperability in a structured, validatable format. It answers the questions MCP cannot: who is this agent, what is it allowed to do, and who vouches for it.
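
A hypothetical OSSA-style manifest, sketched as a plain dict with a trivial validator. The actual OSSA schema may differ; the field names here are assumptions for illustration.

```python
# Illustrative manifest: identifiers and governance rules are invented.
manifest = {
    "gaid": "gaid:example:invoice-bot",
    "capabilities": ["process_invoices"],
    "governance": {"jurisdictions_denied": ["D"], "requires_human_review": False},
    "attestations": [{"org": "org:c", "claim": "soc2"}],
}

REQUIRED = ("gaid", "capabilities", "governance", "attestations")

def validate(m: dict) -> list:
    """Return the list of missing required top-level fields."""
    return [k for k in REQUIRED if k not in m]

print(validate(manifest))  # []
```

The point is that the manifest is structured and machine-validatable, so trust and governance checks can run before any tool is ever invoked.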

DUADP provides the discovery layer. The Universal Agent Discovery Protocol enables federated, cross-platform agent discovery with trust verification. It answers the question MCP cannot: how do I find the right agent for this task, and how do I know I can trust it.
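
A sketch of what a DUADP-style discovery query could look like against a federated index. The registry entries and the query function are illustrative; DUADP's actual wire format is not specified here.

```python
# Toy federated index: each entry is an agent advertisement with a trust flag.
registry = [
    {"gaid": "gaid:a", "capability": "process_invoices", "attested": True},
    {"gaid": "gaid:b", "capability": "process_invoices", "attested": False},
    {"gaid": "gaid:c", "capability": "translate",        "attested": True},
]

def discover(capability: str, require_attestation: bool = True) -> list:
    """Return agent identifiers matching a capability, optionally
    filtered to attested agents only."""
    return [e["gaid"] for e in registry
            if e["capability"] == capability
            and (e["attested"] or not require_attestation)]

print(discover("process_invoices"))  # ['gaid:a']
```

Discovery and trust verification happen in the same query: the unattested agent is invisible unless the caller explicitly opts in.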

The architecture is complementary:

MCP   → Tool connectivity  (HOW to call)
OSSA  → Agent contracts    (WHAT and WHO)
DUADP → Agent discovery    (WHERE and WHETHER)

MCP remains the transport. OSSA wraps it with identity and governance. DUADP makes it discoverable and verifiable. Together, they form a complete agent protocol stack.


What the AAIF Needs to Address

The formation of AAIF is the right move. Having OpenAI, Anthropic, Google, Microsoft, AWS, Bloomberg, Block, and Cloudflare in one room is exactly what the ecosystem needs. But the agenda must go beyond connector standardization:

  1. Agent identity — A universal identifier scheme (OSSA proposes the GAID)
  2. Trust verification — Attestation models that work across organizational boundaries
  3. Governance enforcement — Policy languages that travel with the agent, not the platform
  4. Discovery federation — Cross-platform agent discovery that does not require a centralized registry
  5. Lifecycle management — Versioning, deprecation, and migration for agent capabilities
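
To illustrate item 3, a policy that travels with the agent can be plain data, evaluated locally by whichever platform hosts the agent. The policy shape below is an assumption for illustration, not a proposed standard.

```python
# Portable policy carried in the agent's manifest; values are invented.
policy = {
    "allow": {"read_invoices", "extract_fields"},
    "deny_jurisdictions": {"D"},
}

def permitted(action: str, jurisdiction: str, p: dict) -> bool:
    """Allow an action only if it is whitelisted and the jurisdiction
    is not denied."""
    return action in p["allow"] and jurisdiction not in p["deny_jurisdictions"]

print(permitted("read_invoices", "A", policy))  # True
print(permitted("read_invoices", "D", policy))  # False
```

Because the policy is data rather than platform configuration, it enforces the same boundaries wherever the agent runs.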

These are hard problems. They are also solved problems in other domains — PKI, DNS, OAuth, and OpenAPI all tackled analogous challenges for their respective layers of the web. The patterns exist. They need to be adapted for agents.


The Fluidity Imperative

Agents are fluid. They adapt, learn, evolve, and compose dynamically. Their protocols must match.

A protocol that requires static registration, manual configuration, and session-scoped discovery is a protocol designed for tools, not agents. The next generation of agent protocols needs to be as dynamic, discoverable, and trust-aware as the agents themselves.

MCP was the beginning. The OSSA specification and DUADP are the next layers. The research continues.

Tags: MCP, protocols, AAIF, agent-security, DUADP, interoperability