When Intelligence Becomes Water: Claude Mythos and the Future of Software
What moats look like in a world where foundational models are omnipresent
For those of you who are new, I am Andrej, CEO and co-founder of Lumos, and I send out thoughts about AI, Agents & Cybersecurity to amazing people (that’s you!).
What a week in cyber. Anthropic accidentally shipped the 512,000-line source code of Claude Code to npm because someone forgot to exclude a debug file. Days earlier, they had leaked their unreleased Mythos model through a misconfigured CMS. Mercor, a $10 billion AI startup that trains models for OpenAI and Anthropic, confirmed that hackers stole 4 terabytes of data through a compromised open-source library. And the axios npm package, downloaded over 100 million times a week, was found carrying a Remote Access Trojan.
The Mythos leak hit especially hard. CrowdStrike dropped 7%. Palo Alto Networks fell 6%. Okta tumbled 7%. Stifel’s Adam Borg said the model could “elevate any ordinary hacker into a nation-state adversary,” which, as we’ve recently seen with the Stryker hack, is a serious claim.
I think the market reaction, while directionally interesting, missed the actual point. The question isn’t whether cybersecurity software will become irrelevant when foundational models get this powerful. It won’t. The question is what kind of software remains valuable in a world where intelligence is omnipresent.
My thesis: the market correction is warranted, but not because Mythos will swallow software. It’s warranted because most platforms that exist today are designed for a world of scarce intelligence, and it’s genuinely unclear which incumbents will own the future. The companies that redesign around abundant intelligence will thrive. The ones that bolt AI features onto yesterday’s architecture will not.
Two weeks before any of this happened, Sam Altman said: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” That framing has a historical parallel that I think explains a lot. It involves a shaft, the electrical revolution, and a thirty-year mistake.
The Factory with the Shaft

In the 1890s, a typical factory was organized around a single massive steam engine in the basement. That engine powered one enormous drive shaft that ran through the building, with horizontal shafts branching off on every floor. Every machine in the factory connected to these shafts via leather belts and pulleys. The entire physical structure of the building, its shape, its layout, its multi-story design, the placement of every machine, was dictated by one constraint: proximity to the power source. A single factory could have a mile or more of shafting running through its ceilings, losing much of its energy to friction and consuming more floor space than the machines it powered.
Then electricity arrived. And here is what’s fascinating about what happened next.
Phase 1: The Swap. The factory owner ripped out the steam engine and dropped in one big electric motor in the same spot. The shaft, the belts, the dark narrow multi-story building, all of it stayed the same. He swapped the energy source and called it progress. By 1900, nearly two decades after electricity was commercially available, less than 5% of American factory power came from electric motors, and the factories that had adopted it saw almost no productivity gains because they were still organized around the shaft.
Phase 2: Motors on Each Machine. It took a full generation before someone asked the obvious question: why is there a shaft at all? When you can put a small motor on each machine, the machine doesn’t need to be near the shaft. It doesn’t need to be on a particular floor. It can go anywhere. The belts disappeared. Individual machines gained autonomy. Productivity started to improve because each station could operate independently. But the building itself, its multi-story layout, its narrow floor plan, its basic workflow, was still the same structure that had been designed for steam.
Phase 3: The New Factory. The real transformation came when a new generation of architects threw out the building entirely. Albert Kahn designed Ford’s Highland Park plant as a sprawling, single-story, half-million-square-foot structure encased in glass. Layout followed workflow. Machines were arranged by the sequence of production, not by proximity to a power source. Productivity growth jumped from roughly 1.5% to over 5% per year through the 1920s. The University of Brighton described the shift well: electrification replaced “gears, shafts and belts” with “more abstract ordering principles such as the ‘sequence of work’ and the ‘route of manufacture.’”
I’d say the revolution wasn’t electricity. It was rebuilding the entire factory, free from the confines of the shaft.
Three Eras of AI, Mapped
Now map this onto what we’re living through. The cost of querying a GPT-3.5-level model fell from $20.00 to $0.07 per million tokens in under two years, a roughly 285x cost reduction. Intelligence is rapidly becoming a utility input.
The question that actually matters is whether you’re swapping the engine or redesigning the factory.
Era 1: One big motor (2022 to 2024). ChatGPT arrives. Everyone bolts an AI chatbot onto their existing workflows. You copy-paste text into a chat box, get a better draft, copy-paste it back into your doc. The underlying work doesn’t change. This is dropping a new energy source into the same factory, but keeping all the workflows that came with the shaft.
Era 2: A motor on each machine (2025 to 2026). Claude Code, Manus, and their peers put agents inside your actual tools, and those agents complete entire tasks end-to-end: writing and shipping code, reviewing pull requests, debugging production issues, drafting and sending documents. Gartner projects 40% of enterprise apps will embed agents by end of 2026, up from less than 5% in 2025. The belts are gone, and the gains are real. But the factory layout, the way work flows through your organization, is still the old one.
Era 3: The new factory (2027 and beyond). Fleets of specialized agents working in parallel, coordinating with each other, with humans directing at a strategic level. The human’s job shifts from doing the work to designing the system, setting constraints, and handling exceptions. This is the single-story factory floor organized around workflows. We haven’t seen this yet. While it’s clear this is the future of software, it’s unclear which companies will get there in each software category.
Two Moats That Endure, and Everything Else
When intelligence is water (cheap, abundant, and metered), what creates lasting value? Here is the question I keep coming back to: in which world would even an omnipotent foundational model, one that can reason, code, and hack better than any human, still not be enough on its own? I’ve been thinking about this constantly, and I think it comes down to two durable moats and a set of advantages that are rapidly commoditizing.
Moat 1: Context (The Organization’s Operating Knowledge)
Drop the world’s smartest engineer into a company they’ve never seen before. They don’t know which team owns which system, or why a particular configuration exists, or what happened last time someone changed it. They don’t know that this database has a fragile dependency nobody documented, and that the last person who touched this workflow caused a two-day outage. That engineer isn’t just unproductive in their first weeks. They’re actively dangerous if they start making changes without that context.
This is exactly what happens with powerful AI models operating on generic information. A model that can reason brilliantly but doesn’t understand how your specific organization works will produce output that sounds right, but is wrong in ways that matter. HBR found earlier this year that AI ROI is determined not by model capability alone, but by how precisely intelligence is grounded in the organization’s operating context. A developer on Hacker News captured this nicely: “Swapping between models takes 5 minutes. Getting the context right is where all my time goes.”
Foundation Capital recently wrote about an idea they call the “context graph,” and I think it captures something important. Most enterprise software today stores current state: here is what this customer record or this codebase looks like right now. But agents need more than that. They need to know why things look the way they do. Why was this exception granted? Who approved that discount, and based on what precedent? Why does this team handle approvals differently from that team? That reasoning is the connective tissue between data and decisions, and today it almost never gets captured in any system. It lives in Slack threads, in people’s heads, in the institutional memory that walks out the door when someone leaves.
A context graph makes that reasoning durable and searchable. Instead of just storing what happened, it stores why it happened: what inputs were considered, what rules applied, what precedents existed, and who made the call. When an agent needs to act, it can look at how similar situations were handled before. And because every new decision adds another trace to the graph, the system compounds. It gets smarter with every interaction, not just with every model upgrade. The moat shifts from “having data” to structuring the decision history that no foundational model could ever contain on its own.
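To make the idea concrete, here is a minimal sketch of what a context graph could look like in Python. All names here (`Decision`, `ContextGraph`, `precedents_for`) are hypothetical, and a real implementation would use embeddings or graph queries rather than exact matching; the point is that each record stores the reasoning and precedents, not just the outcome.

```python
from dataclasses import dataclass, field


@dataclass
class Decision:
    """One trace in the context graph: not just what happened, but why."""
    action: str        # what was done, e.g. "grant_discount"
    subject: str       # what it applied to
    rationale: str     # why it was done
    approver: str      # who made the call
    precedents: list[str] = field(default_factory=list)  # ids of earlier decisions relied on


class ContextGraph:
    """Append-only store of decisions, searchable by a new situation's shape."""

    def __init__(self) -> None:
        self.decisions: dict[str, Decision] = {}

    def record(self, decision_id: str, decision: Decision) -> None:
        self.decisions[decision_id] = decision

    def precedents_for(self, action: str) -> list[Decision]:
        """Before acting, an agent asks: how were similar situations handled?"""
        return [d for d in self.decisions.values() if d.action == action]


graph = ContextGraph()
graph.record("d1", Decision(
    action="grant_discount",
    subject="customer:acme",
    rationale="Renewal at risk; 15% matches the FY24 enterprise retention policy",
    approver="vp_sales",
))

# An agent handling a new discount request can now retrieve the reasoning,
# not just the record, behind past decisions.
for d in graph.precedents_for("grant_discount"):
    print(d.approver, "->", d.rationale)
```

Note the compounding property: every decision an agent or human makes adds another trace, so the graph gets more useful with each interaction, independent of model upgrades.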
Moat 2: The New UX (Orchestrating Fleets of Intelligence)
Today’s AI interaction is like writing instructions on a piece of paper, sliding it under the door, and waiting. The chat interface is the command line of the agent era. We haven’t invented the GUI yet.
As we move from Era 2 to Era 3, the fundamental design challenge shifts. It’s no longer about directing a single agent. It’s about orchestrating a fleet: which agents handle what, how they share state, how you decompose complex goals into agent-sized tasks, and critically, how you calibrate trust. When do you let the agent run autonomously and when do you intervene? How do you set boundaries? How do you review exceptions? Gartner saw a 1,445% surge in multi-agent system inquiries for a reason: this is the problem everyone is running into.
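The trust-calibration part can be sketched simply: decompose a goal into agent-sized tasks, then route each one based on its estimated blast radius. This is an illustrative toy, assuming a single scalar risk score and a fixed threshold; real orchestration would weigh reversibility, scope, and the agent’s track record.

```python
from dataclasses import dataclass


@dataclass
class AgentTask:
    goal: str
    risk: float    # estimated blast radius of the task, 0.0..1.0
    agent: str     # which specialist agent would handle it


def route(task: AgentTask, trust_threshold: float = 0.3) -> str:
    """Calibrate trust: run low-risk tasks autonomously, escalate the rest."""
    if task.risk <= trust_threshold:
        return f"auto: {task.agent} runs '{task.goal}' unattended"
    return f"review: '{task.goal}' queued for human approval"


# One goal, decomposed into agent-sized tasks, each routed by risk.
tasks = [
    AgentTask("summarize last week's alerts", risk=0.1, agent="triage-agent"),
    AgentTask("rotate production API keys", risk=0.8, agent="ops-agent"),
]
for t in tasks:
    print(route(t))
```

The interesting design question is where that threshold lives and who moves it: a good orchestration UX makes the trust boundary visible and adjustable, instead of burying it in code.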
The companies that figure out how to make humans and fleets of agents work together effectively will own the next era of enterprise software. It’s a narrower surface than today’s sprawling dashboards, but it’s where all the value concentrates.
What’s Fleeting
Everything else that feels like a moat today is rapidly commoditizing. Integrations are being swallowed by protocol standards (Anthropic’s MCP, Google’s A2A, the Linux Foundation’s AAIF). Memory is being given away free by foundation model providers. Classic UX patterns are losing relevance as agents handle more work in the background. Individual AI features like summarization and risk scoring are table stakes the moment a better model ships.
Any software product whose primary moat is one of these is in a race it will lose. The platforms that endure will be the ones that build deep context layers and design the new orchestration UX, then plug in whatever foundational intelligence is cheapest and most capable at any given moment.
Identity for Humans and Machines
Let me apply this to the domain I spend every day in: identity.
Identity has always been the connective tissue of the enterprise. It answers the most basic question in security: who has access to what, and should they? But the scope of that question just expanded by an order of magnitude, because it’s no longer just humans who need identity governance. AI agents need it too, and they might need it more urgently.
When a human has overly broad admin permissions they never actually use, the risk just sits there. The access exists on paper, but in practice nothing bad happens because that person never exercises those permissions. When an AI agent inherits those same permissions through an MCP integration or an API key and operates autonomously at machine speed, that changes completely. An over-privileged agent can read, write, and delete across systems faster than any human could intervene. Figuring out what agents are allowed to do, what permissions they actually need, and preventing catastrophic errors at machine speed is the single biggest security challenge of the next two years, and it’s fundamentally an identity problem.
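The principle here is deny-by-default: an agent may only perform actions that were explicitly granted, regardless of what credentials it happens to inherit. A minimal sketch, with hypothetical names (`AGENT_SCOPES`, `authorize`), might look like this:

```python
# Deny-by-default authorization for an agent's tool calls: the agent's scope
# is an explicit allowlist of (resource, action) pairs, checked on every call.

AGENT_SCOPES = {
    "support-agent": {("tickets", "read"), ("tickets", "write")},
    # Note: no ("database", "delete") anywhere. Destructive actions must be
    # granted deliberately, never inherited from a human's broad permissions.
}


class PermissionDenied(Exception):
    pass


def authorize(agent: str, resource: str, action: str) -> None:
    """Raise unless the (resource, action) pair was explicitly granted."""
    if (resource, action) not in AGENT_SCOPES.get(agent, set()):
        raise PermissionDenied(f"{agent} may not {action} {resource}")


authorize("support-agent", "tickets", "read")  # allowed, returns silently
try:
    authorize("support-agent", "database", "delete")
except PermissionDenied as e:
    print("blocked:", e)
```

The contrast with human access is the whole point: a dormant admin permission on a person is latent risk, but the same permission on an autonomous agent is live risk, so the allowlist has to be narrow and enforced at machine speed.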
And there’s a beautiful recursive property here. The better you solve identity for AI agents, the more confidently organizations can deploy those agents, which accelerates AI adoption, which creates more demand for identity management. Identity isn’t just a security problem in the agentic era. It’s an enablement problem. The companies that make enterprises safe deploying fleets of agents will unlock the entire Era 3 transformation.
Are You Swapping the Engine, or Redesigning the Factory?
If you are evaluating security platforms today (or any enterprise platform, really), the question to ask is whether you’re buying a better motor or a fundamentally different factory.
The electricity story tells us something important. The revolution was never about power. It was about what became possible when you stopped organizing everything around the constraints of power delivery. The factory that reorganized around workflow didn’t just get more productive. It became a fundamentally different kind of place: brighter, safer, and built around the people who worked there instead of the machine that powered it.
Intelligence is becoming water. The companies that redesign their architecture around this reality will define the next decade. The ones that keep the shaft will wonder why the new motor didn’t change anything.
In some cases, the incumbents will adapt and own this future. In others, new players will emerge. At Lumos, we’re building the new factory for identity. Let me know if you want to chat about that!
With positive vibes,
Andrej
www.lumos.com


