At a time when OpenAI is seeking substantial funding, reportedly between $5 billion and $7 billion, to expand its capabilities, CEO Sam Altman proclaims that superintelligent artificial intelligence (AI) could emerge in as little as "a few thousand days." That timeline, which works out to somewhere between several years and roughly a decade, has sparked skepticism among experts and the public alike.
"Deep learning worked," Altman wrote in a blog post titled "The Intelligence Age," published on September 23.
Altman argued that the evolution of AI will not merely enhance human capabilities but fundamentally transform society. He envisions a future in which individuals have access to their own "personal AI teams," capable of assisting with everything from education to healthcare. This shift, he claims, will boost productivity and creativity, allowing people to accomplish tasks that once seemed "impossible."
He suggests that, just as society adapted in the wake of the Industrial Revolution and the Information Age, it will similarly navigate the challenges posed by AI. He believes that progress is driven not just by genetic evolution but by the intelligence embedded in our societal systems.
Altman also acknowledges fears that AI will bring job losses and deepen inequality, predicting that everyone's work will involve more coordination and decision-making at a higher level of abstraction. He states, "My role is to figure out what we're going to do… We will all have access to a lot more capability… but we'll make decisions about what should happen in the world."
Balancing Optimism with Real-World Challenges
Altman's optimism, however, contrasts sharply with critics' warnings that superintelligent AI could worsen existing inequalities and disrupt job markets.
Gary Marcus, an AI researcher and prominent critic, posted an annotated excerpt of the essay on X (formerly Twitter), describing Altman's vision as empty and more promotional than practical. He countered that deep learning has "worked" only to a degree, since AI systems continue to suffer from hallucinations and failures in reasoning.
“This is all empty promise—we can’t improve unless we actually address the real challenges,” he stated.
Critics also point out that while AI has the potential to solve complex problems, it may deepen reliance on technology and erode essential skills. The prospect of personal AI assistants managing daily tasks in an automated world also raises concerns about privacy and the loss of human interaction.
Software engineer Grady Booch also voiced his frustration in a post on X, stating, "I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garner headlines, and distract from the real work going on in computing."
Navigating the Impacts of AI Advancements
Booch's criticism resonates with ongoing discussions about the current state of computing infrastructure. The rapid push for advanced AI often overlooks significant challenges, such as the high infrastructure costs of training large models.
There is also a looming threat to economic and environmental sustainability as new AI chips grow more power-hungry and contribute to heat pollution. The energy demands of AI systems are substantial; for instance, training a single AI model can emit over 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of an average American car.
Moreover, some observers speculate that Altman's optimistic predictions are aimed at attracting investors by painting a bright picture of his company's future.