The debate around artificial intelligence has moved beyond hype into something far more serious, and Sam Altman is now saying it plainly: the United States is not prepared for what is coming.
The OpenAI chief has urged policymakers to urgently prepare for the arrival of “superintelligence,” a form of AI that could outperform humans across most domains, warning that the technology will reshape economies, labour markets and national security in ways that current systems are not designed to handle.
At the centre of this push is a detailed policy blueprint released by OpenAI, which frames the transition to advanced AI as a shift comparable to the Industrial Revolution or the New Deal era. The document does not just highlight risks. It openly acknowledges that AI could generate enormous wealth while simultaneously destabilising jobs, industries and existing economic structures.

Altman’s position is not subtle. He is effectively arguing that the current economic model, built around human labour, may not survive intact in an AI-driven world. As automation scales, traditional sources of income tax could shrink, forcing governments to rethink how wealth is generated and distributed.
To address this, OpenAI is proposing a set of radical interventions. These include taxing companies that replace human workers with AI systems, creating a publicly owned investment fund that distributes AI-generated profits to citizens, and even exploring shorter workweeks without reduced pay. The underlying idea is simple but disruptive: if machines generate the wealth, society must redesign how that wealth reaches people.
Beyond economics, the warning extends into security. Altman has identified cyberattacks and biological threats as immediate risks in a world where powerful AI tools can be misused by bad actors. The concern is not theoretical: advanced systems could lower the barrier to executing sophisticated attacks, rendering existing defences inadequate.
There is also a deeper governance problem. Current regulatory frameworks are fragmented and largely reactive, while AI development is accelerating rapidly across private companies and global competitors. OpenAI itself has previously called for international oversight mechanisms similar to nuclear watchdogs, recognising that superintelligent systems cannot be managed by individual countries alone.
At the same time, the company’s proposals reveal a tension that is hard to ignore. OpenAI is both a leading developer of advanced AI and a vocal advocate for regulating it. Critics argue that this dual role raises questions about whether such policy frameworks are designed purely for public interest or also to shape the competitive landscape in favour of early movers.
Still, the urgency behind the message is difficult to dismiss. AI is no longer just automating routine tasks. It is beginning to encroach on cognitive work, decision-making and creative processes. Experts warn that this could lead to widespread job displacement, particularly in knowledge-based industries that were once considered secure.

The economic implications are significant. If AI drives productivity sharply upward while employment declines, inequality could widen unless new systems redistribute the gains effectively. This is why OpenAI’s proposal places heavy emphasis on social safety nets, public investment mechanisms and broader access to AI tools, aiming to prevent wealth from concentrating within a small group of companies.
Ultimately, Altman’s warning is less about distant science fiction and more about immediate policy failure. The technology is advancing faster than institutions can adapt, and the gap between capability and governance is widening.
The message to Washington is clear: prepare now or react later under pressure.