Intelligence Too Cheap to Meter: Sam Altman’s Vision for the AI-Powered Future
Sam Altman’s recent town hall with AI builders offered more than just product updates and roadmap hints—it provided a window into how OpenAI is thinking about the fundamental reshaping of work, creativity, and human capacity in an age of abundant intelligence. As someone who has spent decades exploring how organizations prepare for uncertainty through scenario planning, I find Altman’s framing particularly instructive, not for what it predicts, but for the uncertainties it reveals and the strategic choices it forces us to confront.
The Jevons Paradox and the Shape-Shifting Engineer
Altman opened by addressing a question that haunts every software engineer watching AI capabilities accelerate: Will cheaper code creation destroy demand for engineers, or will it unleash unprecedented demand? His answer invokes the Jevons paradox, the economic principle that increased efficiency often increases total consumption rather than reducing it.
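The Jevons dynamic can be made concrete with a toy demand model. The sketch below assumes a constant-elasticity demand curve with illustrative numbers (the elasticity value and scale are my assumptions, not figures from Altman's remarks): when demand for code is highly price-elastic, halving the unit cost of producing it *increases* total spending on software rather than shrinking it.

```python
# Toy illustration of the Jevons paradox with constant-elasticity demand:
# quantity = scale * price ** (-elasticity), so spend = price * quantity.
# When elasticity > 1, a price cut raises total spend. Numbers are
# illustrative only, not empirical estimates.

def total_spend(price: float, elasticity: float, scale: float = 100.0) -> float:
    """Total spending on a resource under constant-elasticity demand."""
    quantity = scale * price ** (-elasticity)
    return price * quantity

ELASTICITY = 1.5  # assumed: demand for software is highly price-elastic

before = total_spend(price=1.0, elasticity=ELASTICITY)
after = total_spend(price=0.5, elasticity=ELASTICITY)  # cost of code halves

print(f"spend at full price: {before:.1f}")   # 100.0
print(f"spend at half price: {after:.1f}")    # ~141.4 -- cheaper code, more total demand
```

The inversion is entirely in the elasticity assumption: run the same halving with an elasticity of 0.5 and total spend falls to roughly 70.7. Whether AI-era code demand sits above or below 1 is exactly the uncertainty Altman's answer turns on.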
But the more revealing insight came in his acknowledgment that “what it means to be an engineer is going to super change”. This isn’t about job titles or headcounts; it’s about the fundamental nature of the work. Engineers will spend less time typing and debugging code and more time “getting computers to do what they want” and “figuring out ways to make these useful experiences for others”.
This transformation mirrors previous technological shifts in software development, from assembly language to high-level languages to modern frameworks. Each abstraction layer expanded who could participate while simultaneously increasing the complexity of what could be built. The difference now lies in the pace and scope of change, and in the fact that the abstraction layer itself reasons and creates.
The Persistent Problem of Go-to-Market
One of the most honest moments in the conversation came when a builder asked about the new bottleneck: not building products, but finding people who care about them. Altman’s response was refreshingly direct—this has always been hard, and AI making building easier just makes the contrast more stark.
This observation deserves emphasis. In my work with organizations developing AI strategies, I consistently encounter the assumption that technological capability automatically translates to business value. It doesn’t. The hard work of understanding customer needs, building distribution channels, creating differentiated value, and capturing attention remains fundamentally human work—even as AI tools begin to augment sales and marketing functions.
Altman framed this reality in terms that should resonate with anyone building in the AI space: “Even in a world of incredible abundance, human attention remains like this very limited thing”. This is the binding constraint in an age of computational abundance, and it won’t yield to better language models alone.

The Architecture of Agency
When asked about OpenAI’s product vision for multi-agent orchestration, Altman’s answer revealed a healthy humility: “We don’t know what the right interface for all of this is going to be”. Some users will want complex multi-agent setups with “30 computer screens”; others will prefer calm, voice-based interactions with minimal intervention.
This uncertainty about interface and interaction design represents one of the most significant open questions in enterprise AI deployment. Organizations rushing to implement agent-based systems should recognize that we’re still in an exploratory phase. The “overhang of what these models are capable of relative to what most people can figure out how to get out of them is like huge and growing”.
For builders, this suggests opportunity. Someone will create the tools that bridge this capability gap—that help knowledge workers become genuinely productive with frontier models. OpenAI will try, but different users will have different preferences, and the solution space is vast enough to accommodate multiple approaches.
Deflationary Intelligence and Economic Restructuring
Altman made a provocative claim that deserves sustained attention: AI will be “massively deflationary”. By the end of 2026, he suggested, $100 to $1,000 of inference cost could produce software that previously required teams working for a year. This isn’t incremental improvement; it’s a phase change in the economic structure of knowledge work.
The implications extend beyond software. With progress in robotics and other domains, we’re facing “massively deflationary pressure in the economy”—except in areas where social or governmental policy prevents it (Altman’s example: building houses in San Francisco).
This deflationary scenario raises profound questions about wealth distribution, labor markets, and social structure. Altman acknowledged the risk: “You can imagine worlds in which AI really concentrates power and wealth”. Preventing this outcome “feels like it needs to be one of the main goals of policy”.
As scenario planners, we should model multiple futures here: one in which abundant intelligence creates broadly distributed prosperity, and one in which it accelerates concentration and inequality. The difference won’t be determined by the technology alone, but by the policy frameworks and institutional responses we construct around it.
The Evolution of Software as a Living System
One of Altman’s most intriguing observations came from his personal experience with code-generation tools: “I no longer think of software as this static thing.” When he encounters a problem, he expects the computer to write code immediately to solve it. This expectation—that software continuously evolves and customizes itself to individual users—points toward a fundamentally different relationship with our tools.
We may continue using familiar interfaces (word processors, for example) because “we get like very used to our interfaces and it’s very important that like that button is in the same place”. But those interfaces will increasingly adapt to how each person uses them, creating personalized versions that diverge from any standard configuration.
This vision of “constantly evolving” software that “converges just for us” has profound implications for enterprise software strategy, knowledge management, and organizational learning. Static, one-size-fits-all enterprise systems may become as obsolete as mainframe terminals.
The Durability Question and Competitive Differentiators
For builders wondering how to create durable businesses when model updates can replace startup features overnight, Altman offered a useful heuristic: “Will your company be happy or sad if GPT-6 is like a wildly impressive update?”
Build things where you’re hoping the model gets better, not where you’re praying it doesn’t. This seems obvious in retrospect, but it represents a fundamental inversion of traditional startup strategy, in which defensive moats typically involve proprietary capabilities. In the AI era, the moat may be your ability to leverage continuously improving foundational models faster and more effectively than competitors.
The traditional rules of building successful startups—distribution, network effects, sticky user experiences—haven’t changed. What’s changed is the speed at which you can build and iterate, and the nature of the technical capabilities that constitute differentiation.
The Scientific Frontier and the Limits of Automation
Toward the end of the conversation, a scientist asked whether AI would eventually take over “the full research process”. Altman’s answer was measured: a version of GPT-5 used internally is producing scientific progress that researchers describe as “no longer super trivial”, but we’re still a “reasonably long way away” from full automation of research.
The comparison to chess is instructive. First, AI beat humans. Then human-plus-AI beat AI alone. Then AI alone became dominant again. Altman expects a similar trajectory for research, though he acknowledged that “there seems to be something about creativity, intuition” that remains distinctively human—at least for now.
This raises a central question for knowledge work strategy: In which domains and for how long will human-AI collaboration outperform AI alone? The answer will vary by field, task type, and the extent to which we can structure verification and feedback loops.

Strategic Implications for Organizations
Several strategic insights emerge from Altman’s remarks that organizations should incorporate into their AI planning:
- Cost and Speed Tradeoffs: OpenAI has focused intensely on reducing cost, but Altman acknowledged they haven’t prioritized speed equally. As agent-based workflows become more common, some users will pay a premium for 100x faster output. Organizations should consider whether their use cases optimize for cost or latency.
- The Ideas Bottleneck: As creation costs plummet, idea quality becomes the binding constraint. Tools that help generate, evaluate, and refine ideas—not just execute them—will be critical. Organizations should invest in cultivating idea generation capabilities, not just execution infrastructure.
- Interface Diversity: There won’t be one “right” way to interact with AI systems. Organizations should experiment with multiple interface paradigms rather than standardizing prematurely.
- Adaptive Foundations: Despite concerns about “lock-in,” Altman believes systems will increasingly function as general-purpose reasoning engines that can quickly adapt to new tools and environments. This suggests that flexibility and adaptability should outweigh premature optimization for current model capabilities.
The Uncertainties That Matter
What strikes me most about this conversation is not the confident predictions but the honest acknowledgment of uncertainty. OpenAI doesn’t know the right interface. They’re still figuring out how to balance writing quality against reasoning capability. They’re uncertain about the timeline for autonomous long-running workflows.
These uncertainties are not weaknesses; they are critical decision points where different futures diverge. Organizations building strategies around AI should explicitly map these uncertainties: Will intelligence continue to get cheaper? Will speed or cost matter more for your use cases? Will human creativity remain distinctive, or is it just another capability awaiting sufficient scale?
The answers will determine not just which products succeed, but how work itself is organized, how value is created and distributed, and what it means to build and create in an age of abundant artificial intelligence.
All images via ChatGPT from a prompt by the author.
