Will artificial intelligence be good or bad for humanity? It’s one of the most common—and most misleading—questions of the AI age.
The future of AI won’t be determined primarily by how powerful the technology becomes, or by the intentions its leaders profess. It will be determined by incentives: who funds AI companies, who their customers are, how competition works, and which trade-offs organizations are rewarded or punished for making.
In this report, senior research fellow Thomas Arnett applies Clayton Christensen’s theory of value networks to today’s leading AI labs. The analysis maps how capital markets, revenue models, governance structures, competitive pressures, and regulation shape the priorities of companies such as OpenAI, Anthropic, Google, Meta, and xAI.
The key finding is counterintuitive but essential: many of the risks people fear most from AI don’t stem from negligence or malicious intent. They arise when companies behave rationally inside systems that reward speed, scale, and dominance over caution and long-term alignment.
Rather than asking whether AI will “save” or “destroy” humanity, this report argues we should ask a more grounded question: what forces are steering its development right now, and how might those forces change? Understanding those often-invisible pressures is the most reliable way to anticipate where AI is actually headed.
