OpenAI, Anthropic, and the Pentagon

Wars at home and abroad

  • Thomas Arnett
  • March 19, 2026

As the war in Iran unfolds, an adjacent battle of corporate destinies is playing out on the US home front. 

For eight months, the AI company Anthropic provided its technology to the Pentagon under a contract that included two restrictions: no mass surveillance of Americans and no fully autonomous weapons. Then, in late February, the Trump administration demanded that those restrictions be removed. Anthropic refused. The Pentagon canceled the contract. 

But the Pentagon didn’t stop there. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk”—a label previously reserved for foreign adversaries like China’s Huawei—and ordered every military contractor to cut commercial ties with the firm. Hours later, OpenAI announced it had signed a deal to take Anthropic’s place. Three days later, Sam Altman sent a memo to his employees acknowledging the deal might have been hasty. 

As this drama played out, more than 30 OpenAI and Google employees filed a brief in support of Anthropic. Meanwhile, consumers also sent their own signal: more than 1.5 million users left ChatGPT in less than 48 hours following the announcement, many turning instead to Claude, Anthropic’s AI assistant, and catapulting it to number one on the iPhone App Store for the first time.

Much of the debate has focused on who made the right call. Was Anthropic right to hold the line on restrictions the Pentagon had originally accepted? Or does a private company overstep its role when it tries to constrain the federal government on matters of national security? Should OpenAI have been more cautious about what it was enabling? Is the government overreaching when it tries to destroy an American company that refuses to accept the administration’s terms?

But those questions miss the deeper story: Why did Anthropic refuse when OpenAI agreed? The answer reveals forces shaping AI’s future that most analyses overlook. This isn’t about Dario Amodei’s values versus Sam Altman’s values. It’s about two companies with nearly identical technology and shared roots diverging into fundamentally different organisms. And that divergence was predictable.

A tale of two companies

Two years ago, Anthropic and OpenAI looked nearly identical. Both were building frontier AI models with similar architectures. Both had raised billions from similar investors. Both drew their talent from Silicon Valley. Both spoke publicly about the societal risks of their technologies and proclaimed commitments to AI safety and responsible development. 

If you had asked observers in 2023 which company would accept Pentagon terms without explicit safeguards and which would refuse, predictions would likely have been all over the map. 

But the outcome that played out was far from random.

To understand why, it helps to revisit an idea from Clayton Christensen, the Harvard Business School professor whose theory of Disruptive Innovation helped shape Silicon Valley’s thinking about technological change. Christensen argued that companies don’t fail or thrive simply because of business know-how or technical capability. They either sink or swim, in large part, because of the value networks they operate within: the customers they serve, the investors who fund them, the revenue models they depend on, the competitors they respond to, and the regulators that constrain them. These forces quietly shape what organizations prioritize, even when they have the capacity to do something different.

A classic example comes from the 1980s, when Digital Equipment Corporation dominated computing by selling powerful machines to corporate customers. When personal computers emerged, DEC had the technical know-how to compete. But its customers expected powerful machines for professional applications; its sales force was built for enterprise deals; and its investors demanded returns that PCs couldn’t initially deliver. DEC didn’t fail because it misunderstood the future. It failed because its value network made pursuing that future economically irrational.

Fast forward forty years—the companies have changed, the stakes have changed, but the patterns in the underlying forces are remarkably similar. If you want to understand why Anthropic and OpenAI made different choices when facing the same Pentagon ultimatum, don’t look just at the capabilities of their latest models or the statements from their CEOs. Look at the underlying value networks.

Value networks explain the Pentagon standoff

Once you map the value networks of these AI companies, their Pentagon responses make sense. 

OpenAI currently operates within a value network built for hypergrowth at consumer scale: over $150 billion in venture capital, commitments for over $1.4 trillion in computing infrastructure, over 900 million users (but with only about 1% paying for subscriptions), and $17 billion in annual losses. 

That value network initially created pressure to monetize the massive user base, compete visibly with Google and Meta, and deliver returns to investors betting on dominant consumer AI. 

But the consumer model is failing. OpenAI can’t monetize free users effectively, and its paid subscribers—the ones who justified the expense because ChatGPT helped with work—are jumping to Anthropic’s superior business productivity tools. The company is now pivoting desperately toward enterprise and coding markets where Anthropic has already established dominance. 

The Pentagon contract represents the convergence of two pressures. OpenAI’s investors, who’ve committed more than a trillion dollars to computing infrastructure, can’t afford to see the company blacklisted from government work and locked out of an entire category of high-value contracts. And OpenAI can’t afford to pass up a critical foothold in the exact market where it’s behind and bleeding customers to its rival.

Meanwhile, Anthropic operates within a fundamentally different value network. The company has raised over $65 billion, but from a mix of venture capital and growth equity under different terms. Roughly 80% of revenue comes from enterprise contracts and API partnerships, with only 20% from consumer subscriptions. The company loses $3 billion annually—still unsustainable, but a third of OpenAI’s burn rate. 

Most critically, Anthropic built mission-protective governance from the beginning: a Long-Term Benefit Trust with authority to override commercial pressures. Its enterprise customers—technology companies, consulting firms, government contractors—pay premium prices specifically because Anthropic positions itself as the careful, principled alternative. Those customers might revolt if the company abandoned those principles in its highest-profile contract.

The divergent outcomes weren’t about Dario Amodei’s personal convictions versus Sam Altman’s pragmatism. Anthropic’s governance structure gave leadership the authority to refuse terms that a conventional venture-backed company couldn’t afford to reject. The Long-Term Benefit Trust exists precisely to enable decisions where commercial pressure points one direction, but the mission points another. 

OpenAI’s employee revolt and Altman’s hasty memo reveal genuine internal tension. But when a company is losing $17 billion annually and has over 900 million users to monetize, it can’t sustain the positions that mission-protective governance might make possible. The value network pressures were overwhelming.

We’re watching corporate evolution in real time.

The path forward

The Pentagon standoff is just the first major signpost in a broader story of divergent evolution.

As AI systems become more powerful and more deeply integrated into critical infrastructure, the differences in value networks will compound. Companies serving consumer markets at a massive scale will continue making different choices than companies serving enterprise customers paying premium prices for safety guarantees. Companies with mission-protective governance will diverge further from those operating under conventional venture capital pressures. 

These aren’t random variations in corporate philosophy. They’re predictable outcomes of the forces shaping what each organization can actually do.

If you want to understand where AI is headed, the model releases and CEO statements will mislead you. The real story is in the value networks. They’re determining which futures are even possible—which applications get built, which safeguards get maintained, which compromises get made. The question isn’t which company has the right values, but rather, which value networks create the conditions for the outcomes we need.

This analysis draws on Thomas Arnett’s recent paper: What actually determines AI’s impact on humanity? Incentives, value networks, and the forces shaping AI’s future.

Author

  • Thomas Arnett

    Thomas Arnett is a senior research fellow for the Clayton Christensen Institute. His work focuses on using the Theory of Disruptive Innovation to study innovative instructional models and their potential to scale student-centered learning in K–12 education. He also studies demand for innovative resources and practices across the K–12 education system using the Jobs to Be Done Theory.