Imagine needing a different power adapter for every room in your house. You plug your phone into the kitchen wall, and it fries. You take your laptop into the bedroom, and nothing works. Every outlet runs on a slightly different voltage. Every device has its own plug.
That’s what large language model (LLM) development feels like right now.
As an industry, we want, and are perhaps too slowly building toward, a future where AI agents talk to each other, models can be swapped in and out, and developers don’t need to rebuild their stack every time a better model shows up.
Instead, we’re stuck with bespoke APIs, proprietary function-calling formats, incompatible memory layers, and no shared safety or evaluation standards.
If the future of AI is truly composable and ecosystem-driven, then interoperability isn’t a nice-to-have. It’s foundational. And it needs to be open source.
Most people won’t complain about “interoperability” by name. They’ll just suffer through the lack of it.
Imagine this:
—Developers rewriting app logic for every model switch. Testing GPT-4 vs. Claude? Start over. Curious about that new open source model? Hope you have three weeks.
—Product teams flying blind on model comparisons. Want to A/B test latency vs. cost vs. safety? Good luck getting apples-to-apples data when every API speaks its own dialect.
—Enterprise buyers trapped by integration debt. Sure, a different vendor’s model is 40 percent cheaper and twice as fast, but migration would take six months and break everything.
—Users stuck with whatever their app developer could afford to integrate, not what’s best for their needs.
This isn’t just friction. It’s an innovation tax.
I’ve seen both sides of this equation. Platforms are incredibly powerful for rapid innovation and user experience. But when they become the only way to do something, innovation stagnates and users lose choice.
The healthiest ecosystems balance platform innovation with open protocols. AWS, Google Cloud, and Azure differentiate on services, but they all implement open APIs and standards.
These ecosystems run on a three-layer pact:
1. Open protocols that set the rules of the road.
2. Public, stable APIs that let each cloud race ahead on features without breaking callers.
3. A common open-source engine room (Linux + Kubernetes + container standards) that lets skills, tools and even full workloads move faster and more easily.
That’s why a micro-service built on AWS can usually land on GCP or Azure after a weekend of Terraform tweaks.
LLMs need the same balance. Let model providers compete on speed, accuracy, safety and price. But don’t let them compete on incompatibility.
There are early efforts to solve this problem. One promising development is Anthropic’s Model Context Protocol, which creates a unified standard for how AI applications pull in external data and connect to tools. This means frameworks like LangChain can work with different models without needing custom integrations for each one.
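To make that concrete, here is a rough sketch of what the client side of MCP looks like with the official Python SDK. The server script and tool name below (weather_server.py, get_forecast) are hypothetical placeholders, and exact SDK details may differ between versions; the point is that the application speaks one protocol rather than one vendor’s API.

```python
# Minimal MCP client sketch using the official Python SDK ("pip install mcp").
# The server command and tool name are illustrative placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",             # how to launch the MCP server process
    args=["weather_server.py"],   # hypothetical server script exposing tools
)

async def main() -> None:
    # Spawn the server over stdio and open a client session against it.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover whatever tools the server advertises, independent of
            # which LLM or framework will eventually call them.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke a tool by name with structured arguments.
            result = await session.call_tool(
                "get_forecast", arguments={"city": "Lisbon"}
            )
            print(result)

asyncio.run(main())
```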
MCP has seen the broadest community adoption so far, but we need more initiatives like it, not fewer.
Google’s A2A protocol (recently donated to the Linux Foundation) is another example. AGNTCY, started by Cisco and now also under the Linux Foundation, operates at an even higher level, working to put together an open reference architecture for the agentic web. These are valuable signals that interop is being taken seriously, but they aren’t enough.
While MCP shows how open interop can work for data connections, we still need ecosystem-wide efforts to define shared expectations across LLM behavior, safety evaluation and composition patterns.
Mozilla.ai is tackling this challenge with initiatives like mcpd (which simplifies MCP server deployment) and its “any-*” fabric: tools like “any-llm” and “any-agent” that provide unified access to different models and systems. Like MCP and A2A, these efforts focus on standardizing interactions across the AI stack rather than forcing developers to build custom integrations for each component.
The widespread adoption of MCP also introduces real security challenges that the AI community can’t afford to ignore. When AI agents gain standardized access to external data sources, APIs, and system tools, they inherit all the attack vectors that come with those integrations. A compromised agent could potentially access sensitive databases, execute unauthorized API calls, or manipulate connected systems at scale.
Unlike traditional software, where security boundaries are explicit, AI agents operate on natural-language instructions that can be manipulated through prompt injection, social engineering, or adversarial inputs. This means we need to apply rigorous engineering practices from day one to prevent security nightmares in the future.
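One such practice is making the boundary explicit in code rather than in the prompt. The sketch below is illustrative rather than prescriptive, and the names in it (ToolCall, ALLOWED_TOOLS, execute) are hypothetical, not from any SDK: every tool call an agent proposes passes through an allowlist and an argument check before anything runs, so an injected instruction to call an unapproved tool never reaches a live system.

```python
# Illustrative sketch: gate agent-proposed tool calls behind an explicit policy.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    arguments: dict

# The agent may only invoke tools we have explicitly allowed, with per-tool
# argument checks enforced outside the model's natural-language reasoning.
ALLOWED_TOOLS = {
    "search_docs": lambda args: isinstance(args.get("query"), str),
    "read_ticket": lambda args: str(args.get("ticket_id", "")).isdigit(),
}

def execute(call: ToolCall) -> str:
    check = ALLOWED_TOOLS.get(call.name)
    if check is None:
        return f"refused: tool '{call.name}' is not on the allowlist"
    if not check(call.arguments):
        return f"refused: arguments for '{call.name}' failed validation"
    # Only now hand off to the real integration (database, API, etc.).
    return f"executing {call.name} with {call.arguments}"

# A prompt-injected request for an unapproved tool stops at the boundary.
print(execute(ToolCall(name="delete_user", arguments={"id": 42})))
print(execute(ToolCall(name="search_docs", arguments={"query": "refund policy"})))
```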
Addressing these security risks requires the same foundation that solves the interoperability problem: open, inspectable standards that the entire community can scrutinize and improve.
The modern web succeeded because its foundational standards — HTML, CSS, HTTPS, REST — weren’t just agreed upon. They were:
—Open: Anyone could read, implement, and extend them.
—Neutral: No single vendor controlled the specification.
—Inspectable: You could see exactly how things worked.
—Composable: Pieces fit together in unexpected ways.
That same openness needs to exist across the entire LLM stack, from function-calling formats to prompt passing to memory management to safety auditing.
Because interoperability without openness in those other layers just becomes a more polished form of lock-in.
This issue is primarily about who gets to define how AI systems work together. Today’s LLM interop initiatives are dominated by the companies with the most to gain from controlling standards. That’s natural, but insufficient. We need diverse voices (researchers, developers, civil society, academics) shaping these interfaces.
The standards that stick aren’t the most technically perfect ones. They’re the ones that reflect the values and needs of the people who actually use them.
Broader open-source efforts like ONNX facilitate model portability across traditional ML frameworks, while newer initiatives are tackling LLM-specific challenges. Open-source agentic frameworks like CrewAI are simplifying how multi-agent systems coordinate LLM-powered tasks, and libraries such as LiteLLM are emerging as thin, developer-friendly layers that provide a unified API for querying multiple LLMs.
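As a rough illustration of what such a unified layer buys you, here is a sketch using LiteLLM’s completion() call. The model identifiers are examples and each provider still requires its own API key, but the application code stays identical across vendors, which is exactly the property that makes switching and apples-to-apples comparison cheap.

```python
# Sketch: one call shape, several providers. Requires provider credentials in
# environment variables (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY) or a local
# Ollama install for the open-weights example.
from litellm import completion

MODELS = [
    "gpt-4o-mini",                  # OpenAI
    "claude-3-5-sonnet-20240620",   # Anthropic
    "ollama/llama3",                # a local open-weights model via Ollama
]

prompt = [{"role": "user", "content": "Summarize why open standards matter."}]

for model in MODELS:
    # Same request format in, same OpenAI-style response format out,
    # which is what makes apples-to-apples evaluation possible.
    response = completion(model=model, messages=prompt)
    print(model, "->", response.choices[0].message.content[:80])
```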
Open source isn’t just a licensing decision. It’s what enables a community to define how tools work together. It’s what allows multiple model providers to compete on quality, not lock-in.
And it’s what gives developers and users freedom of choice.
Open abstractions make room for new ideas. They let small teams build great tools without needing insider access to private APIs. They unlock real evaluation. They make safety auditable.
Most important, they make it possible to switch without starting over.
If we don’t solve for open interoperability now, we’ll build the next generation of the internet on closed, brittle systems. Instead of a thriving ecosystem of AI models, tools, and agents working together, we’ll end up with walled gardens, opaque safety logic, and innovation bottlenecks.
We’ve been here before with browsers, operating systems and platforms that resisted standardization until developers pushed back hard enough. The web didn’t become usable because one company made it so. It became usable because the connective tissue was open.
We need to recognize that interop isn’t just a technical problem. It’s a governance problem. A design problem. A power problem. It’s about making it possible to switch, compose, and extend without being trapped.
The interfaces we define today will shape the AI ecosystem for years to come. And the way we define them will determine who benefits. So yes, interoperability matters. But open source interoperability is what will make this infrastructure trustworthy.
And if AI really is the next layer of our digital world, then we owe it the same foundation we gave the web: shared, open, and built to last.
To make this real, we need the open-source ecosystem to engage now. We need diverse actors defining interoperability layers that reflect public values, safety insights, and usability needs, not just commercial interests.
The builders showing up today get to write the rules. If you’re working with AI, interop is your problem too. Because the alternative isn’t just technical debt. It’s digital feudalism.