Part IV: Practice And Reflection
Chapter 17
Crystal Ball: Where Standards and Open Source Are Headed
A word of warning before we begin: this chapter is unlikely to age well. Predictions about technology, markets, and governance models have a poor track record, and there's no reason to believe these will be different. What follows is an honest assessment of where the forces seem to be pointing as of this writing — informed by two decades of pattern recognition, but subject to all the limitations of trying to see around corners.
Read it as a framework for thinking about the future, not as a forecast you should bet on.
17.1 Five Threads That Got Me Thinking
Five seemingly unrelated developments, when pulled together, paint a picture of where standards may be headed.
Code-first standards. Open source code is moving from being an implementation of a standard to being the standard itself. This has been happening for a while, but it's accelerating. The launch of the Agentic AI Foundation at the Linux Foundation is a recent example. By standards best practices, its work on the Model Context Protocol (MCP) should have been developed as a specification. Instead, it's using a traditional open source approach, and it's one of the fastest-growing projects at the Linux Foundation. The tension isn't about legal structure or development model; it's that too many companies want a seat on the steering committee.
AI-generated code is production-ready. We've crossed a threshold. With the release of recent large language models, we've moved from "AI slop" to high-quality, production-ready code. One major side effect: maintainers are being overwhelmed by AI-generated pull requests — particularly around security — at a volume that human-centric governance wasn't designed to handle.
Machines are reading documentation, not people. The CEO of a company in the API space told me that traffic to his docs site is down more than 90% in recent months, and that he's hearing the same from across the industry. Developers are no longer reading documentation. They're asking AI to implement code to interact with the API, and the AI figures it out.
Reverse-engineering without specs. A developer wanted to control his robot vacuum with a video game controller. There was no documented API. He used AI to figure out the interface, build the code, and enable the controller. In the process, he discovered a security vulnerability that gave him access to the cameras and sensors of 25,000 vacuums globally. What does this mean for standards if an AI can figure out interoperability on its own?
From configuration files to self-healing code. This was an eye-opener. A developer created a system where instead of changing variables in configuration files to customize a setup, you tell the AI what you want — "switch my chat interface from WhatsApp to Slack" — and the AI figures out how to interoperate with Slack, writes new code, replaces the old code, and deploys automatically. And it's self-healing: if interop breaks, the AI detects it, figures out why, writes a fix, and redeploys. The code is automatically rewriting itself and adapting to changes.
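To make that concrete, here is a minimal sketch of what such a loop might look like. It is an illustration only, not any vendor's product: the generate_adapter call stands in for whatever code-generation model the system uses, and the health probe, the send_message entry point, and the probe interval are all assumptions made up for the example.

```python
# Minimal sketch of a self-healing integration loop (illustrative only).
import time


def generate_adapter(failure_report: str) -> str:
    """Stand-in for a model call that writes replacement integration code."""
    # A real system would pass the failure report and the target API's
    # observed behavior to a code-generation model and get back new source.
    return (
        "def send_message(channel, text):\n"
        "    return {'channel': channel, 'text': text, 'ok': True}\n"
    )


def load_adapter(source: str):
    """'Deploy' the generated code by executing it and returning its entry point."""
    namespace: dict = {}
    exec(source, namespace)
    return namespace["send_message"]


def healthy(send_message) -> bool:
    """Probe the integration; an exception or a bad response counts as broken."""
    try:
        return bool(send_message("#ops", "health check").get("ok"))
    except Exception:
        return False


def run(send_message, probe_interval: float = 60.0):
    """Detect breakage, regenerate the adapter, redeploy, and keep watching."""
    while True:
        if not healthy(send_message):
            report = "send_message failed its health probe"
            send_message = load_adapter(generate_adapter(report))
        time.sleep(probe_interval)
```

The notable property is that the "deployment" step is just loading freshly generated source: there is no configuration file for a human to edit, which is exactly the shift described above.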
17.2 Pulling the Threads Together
Traditionally, standards development was slow, and the goal of most interoperability standards was to achieve long-term stability. That stability was necessary to support large-scale implementations, encourage adoption, and avoid breaking changes.
That approach will likely remain valid for large-scale infrastructure — telecom protocols, power grid standards, automotive safety communications. The cost of getting those wrong is measured in lives and dollars. You're not going to let an AI dynamically rewrite a 5G base station's protocol stack in production.
But what about more specialized or application-level areas? What happens when the "standard" is open source code, the docs are generated by AI, read by AI, implemented by AI, deployed by AI, and fixed by AI when things break or change? If the implementation layer is fluid enough to adapt to breaking changes in near real-time, do you need stable, slow-moving standards?
The large-scale plumbing — protocols, transport mechanisms, languages — will likely continue to follow traditional models. It's the service level where things fragment. The roads are going to look pretty much the same. The vehicles on the road are going to be very different. It might be more like an automated middleware system, where AI figures out how to connect various endpoints dynamically.
17.3 Where the Lines Might Be
The question isn't whether AI changes standards. It's where the lines fall between what still needs human-governed standards and what doesn't.
Behavioral and regulatory standards — ISO 9000-type management standards, safety standards, AI governance frameworks, codes of practice — are fundamentally about human judgment, societal values, and regulatory intent. AI can help draft them, but the substance requires human deliberation. It's hard to automate "what level of risk is acceptable" or "what does fairness mean in this context." These aren't interoperability problems. They're policy problems. They're durable.
Large-scale infrastructure standards — telecom, power grids, automotive safety — need stability and the governance mechanisms that come with formal standards processes. The cost of failure is too high for dynamic adaptation. These are durable too.
Software and application-level interface standards — this is where AI hits hardest. If an AI can read your API, write an adapter, test it, deploy it, and fix it when it breaks, what exactly is the standard adding? For this category, the value may shift from how to interoperate to what you can rely on when you do. The AI can figure out the interface. It can't figure out the SLA, the patent commitment, or the liability allocation.
The more a standard is about technical interfaces, the less durable it may be in an AI world. The more it's about governance, liability, and rights, the more durable it is likely to be.
17.4 The IP Question Gets Harder
Here's something I've struggled with for a while. Conceptually, standards should be the patent-safe zone. They have patent commitments covering the entire standard, and they deliberately don't get into implementation details. Open source should be the patent-dangerous zone — the code is visible, it's all about implementation, and patent commitments are generally more limited.
But the reality is almost exactly backwards. There's significant patent litigation around standards — at least in wireless and codecs — and next to none around open source. Patent attorneys prefer looking at specs because specs describe what is being done in human-readable terms. Code tells you how, and mapping that to patent claims is painstaking work.
Does AI change that equation? Maybe, in two directions.
First, AI might make finding and proving infringement in code — whether open source or AI-generated — significantly easier. If an AI can read a codebase and map functionality to patent claims faster than a team of attorneys, the practical shield that open source has enjoyed starts to erode.
Second, AI might enable systematic design-around. If you can rapidly generate and test thousands of alternative implementations, you could systematically avoid patented claims. That changes the economics of design-around from expensive and slow to cheap and fast.
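A purely illustrative sketch of that generate-and-test loop follows. The functions for the code-generation model, the project's test suite, and the claim-mapping analysis are all stand-ins invented for the example, and none of this is legal advice.

```python
# Illustrative design-around loop: generate many candidates, keep the ones
# that pass functional tests and do not map onto the asserted claims.
# generate_candidate(), passes_tests(), and maps_to_claims() are stand-ins.

def generate_candidate(spec: str, seed: int) -> str:
    """Stand-in for a model producing one alternative implementation."""
    strategies = ["hash table", "sorted list", "trie"]
    return f"# implements '{spec}' using a {strategies[seed % len(strategies)]}"


def passes_tests(source: str) -> bool:
    """Stand-in for running the project's real test suite on the candidate."""
    return True


def maps_to_claims(source: str, claims: list[str]) -> bool:
    """Stand-in for claim analysis; a real tool would compare the candidate's
    behavior against each element of each asserted claim."""
    return False


def design_around(spec: str, claims: list[str], attempts: int = 1000) -> list[str]:
    """Return candidates that work but avoid the asserted claims."""
    return [
        candidate
        for candidate in (generate_candidate(spec, s) for s in range(attempts))
        if passes_tests(candidate) and not maps_to_claims(candidate, claims)
    ]
```

The economics shift because each pass through that loop costs compute rather than engineering and attorney time.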
But there's a wrinkle. We typically advise engineers not to review patents because knowledge of a patent can increase the risk of a willful infringement finding and enhanced damages. The standard for willfulness has evolved — the Supreme Court's 2016 decision in Halo Electronics v. Pulse Electronics moved away from a rigid test toward a more flexible, conduct-based inquiry — but the practical advice remains cautious. If an AI is trained on patent data and generates code with awareness of what to avoid — is that willful infringement? Does the AI's "knowledge" get imputed to the user? The answer isn't clear, but the question is going to land on someone's desk sooner than we think.
17.5 The Governance Question
If AI removes the need for some categories of interoperability standards, where does governance go?
One possibility: the governance moves to the AI model itself. Whoever controls the dominant model controls the interface. Does it work better with its favored partners? Are we going from standards-based due process to open source benevolent dictators to an AI ghost in the machine? At least the benevolent dictator had a name and a mailing list.
Another possibility: standards come back in a different form. Not as interface specifications, but as behavioral standards for AI-generated systems. Think of building codes. A building code doesn't tell you where to put the kitchen. It tells you the load-bearing walls need to hold a certain weight and the electrical needs to meet certain specifications. Within those parameters, you can do whatever you want. The inspector doesn't care about your floor plan. They care about the parameters.
A model where AI generates whatever custom code it wants — but has to stay within defined parameters like API contracts, security baselines, data handling rules, and performance thresholds — might be more enforceable than what we have now. The AI can continuously validate compliance rather than relying on a human to read a 400-page spec and hope they got it right.
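As a thought experiment, here is a minimal sketch of what machine-checkable "building code" parameters might look like. The endpoint, thresholds, and field names are invented for the example; the point is only that each rule is something an automated inspector can verify continuously, regardless of how the underlying code was generated.

```python
# Illustrative "building code" for a generated service: every parameter is
# something an automated inspector can check, not a prose requirement.
import json
import time
import urllib.request

PARAMETERS = {
    "max_latency_ms": 250,                          # performance threshold
    "require_tls": True,                            # security baseline
    "forbidden_fields": {"email", "ssn", "phone"},  # data handling rule
}


def inspect(base_url: str) -> dict:
    """Probe the service and report pass/fail for each parameter."""
    results = {}

    # Security baseline: the service must be reachable only over TLS.
    results["require_tls"] = base_url.startswith("https://")

    # Performance threshold: one probe here; a real inspector would aggregate
    # many samples rather than trust a single request.
    start = time.monotonic()
    with urllib.request.urlopen(f"{base_url}/status", timeout=5) as resp:
        body = json.load(resp)
    elapsed_ms = (time.monotonic() - start) * 1000
    results["max_latency_ms"] = elapsed_ms <= PARAMETERS["max_latency_ms"]

    # Data handling rule: no personal data fields in the status response.
    results["forbidden_fields"] = not (PARAMETERS["forbidden_fields"] & body.keys())

    return results


if __name__ == "__main__":
    report = inspect("https://example.internal/api")  # hypothetical endpoint
    print(json.dumps(report, indent=2))
    if not all(report.values()):
        raise SystemExit(f"compliance failure: {report}")
```

Whether a committee could ever agree on the numbers is a separate question; the structural point is that compliance becomes something software can verify on every deployment rather than something a human audits once.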
17.6 What Companies Will Do
Companies will always defend their business models. The power utility companies fought for years to keep Internet Protocol companies off electricity meters through standards. Phone companies fought to keep their equipment rental business. But few companies navigate major technology transitions well.
Standards organizations also need to examine their processes. If the pace of AI-driven development continues to accelerate, the traditional multi-year standards development cycle may be too slow for some categories of work. Organizations that can't adapt their processes risk becoming irrelevant — not because standards don't matter, but because the market moved while the committee was still debating scope.
17.7 What This Means for Practitioners
The IP frameworks discussed in this book remain foundational. RAND, royalty-free, necessary claims, exclusions, non-asserts — these concepts don't become obsolete because AI changes the landscape. The new artifacts may require new applications of these frameworks, but the principles endure.
Governance complexity will increase, not decrease. Adding AI governance concerns to existing standards and open source governance creates new layers of decision-making, new stakeholder dynamics, and new regulatory pressures.
The definitional battles are coming. What counts as "open" for AI? What constitutes a "standard" when the code is the spec? What are the patent implications of AI-generated implementations? Standards practitioners will be asked to weigh in on these questions, and the answers will have commercial and regulatory consequences.
But just because there's not a traditional project to control doesn't mean there isn't control. Someone trains the model. Someone curates the training data. Someone decides what "good interop" looks like when the model generates an adapter. The control doesn't disappear — it just gets laundered through a layer of abstraction that makes it harder to see. And harder to see means harder to govern.
The future is uncertain. The principles in this book are not. Apply them with judgment, adapt them as the landscape changes, and don't be surprised when the details turn out differently than anyone — including the author — predicted.