
The MIT NANDA Report Challenge: AI’s ROI Problems Call for a Revisit of Solow’s Productivity Paradox, The Serendipity Economy, and Finding Value Beyond Productivity
Technological revolutions rarely arrive on schedule. They come with fanfare, bold promises, and heavy investment, but their impact often hides in the shadows before it bursts into the open. Artificial intelligence now finds itself in this awkward in-between space. Boardrooms are abuzz with talk of generative AI strategies; enterprises are pouring billions into pilots and partnerships; employees are experimenting with chatbots and copilots. Yet in measurable business terms, most of these efforts have yet to deliver a return on the investment.
Is AI Creating a New Productivity Paradox? Or is Productivity Still Too Narrow a Definition of Value?

A recent report from MIT’s NANDA initiative, The GenAI Divide: State of AI in Business 2025 (Challapally, Pease, Raskar, & Chari, 2025)[i], found that only 5% of generative AI projects are producing tangible business results, while the other 95% linger in pilot purgatory or collapse without ever touching the balance sheet. That number is as shocking as it is sobering. But for those who recall the debates around information technology in the late twentieth century, it should also feel familiar.
In 1987, Robert Solow observed, “You can see the computer age everywhere but in the productivity statistics” (Solow, 1987)[ii]. At the time, firms had invested for two decades in computers and networks, but national productivity growth had slowed to a crawl. It seemed absurd: how could a technology so visible and powerful leave so little trace in the economic data? That paradox, named for Solow, persisted for years before finally resolving in the late 1990s, when IT-enabled processes and the internet combined to fuel a surge in productivity.
There is another frame to apply as well, one born in the digital era: the Serendipity Economy (Rasmus, 2011)[iii]. There, I argued that much of the value created in networked systems is not immediate, predictable, or easily measured. Instead, value emerges over time, often in unexpected ways, as connections multiply and opportunities arise. Combining Solow’s paradox with the Serendipity Economy offers a powerful way to understand why AI adoption looks like failure today—and why, in fact, it is more likely sowing the seeds of transformation.
Ten Issues with the MIT NANDA Initiative Report
The GenAI Divide: State of AI in Business 2025
- Exaggerated zero-return claim – The “95% of organizations get no ROI” figure rests on a small, biased sample and too short a timeframe, overstating failure.
- Weak disruption index – Industry “disruption” is measured with loose proxies (market volatility, executive turnover) that may not be caused by AI.
- Adoption vs. transformation confusion – Individual productivity gains through shadow AI are dismissed, contradicting claims of low transformation.
- Simplistic barrier framing – The report blames lack of learning/memory as the barrier, ignoring compliance, governance, and legacy integration issues.
- Generic vs. enterprise tool contradiction – Consumer LLMs are described as both outperforming enterprise tools and being inadequate for critical tasks.
- Build vs. buy causation error – Claims that buying succeeds twice as often rest on correlation from interviews, not robust market evidence.
- Speculative investment data – Budget allocation estimates come from hypothetical survey exercises, not actual financial data.
- Inconsistent job impact narrative – The report alternately minimizes layoffs while citing trillions in latent automation exposure.
- Inflated Agentic Web vision – Predictions of autonomous systems negotiating contracts and restructuring workflows lack grounding in present capabilities.
- Timeframe too short – Six months is insufficient to evaluate enterprise AI ROI, leading to premature judgments of failure.
The Solow Productivity Paradox: Computers Without Productivity
In the decades after World War II, U.S. productivity soared, driven by industrial retooling, suburban expansion, and rising education levels. But by the early 1970s, growth slowed dramatically. Between 1973 and 1995, labor productivity grew at roughly 1.4% per year, down from the postwar average of 2.8%[iv]. The decline perplexed economists, especially as firms were spending more and more on information technology.
Corporate America had embraced computers with zeal. Mainframes automated payroll. PCs arrived on desks. Databases replaced filing cabinets. IT budgets ballooned. If any technology should have propelled productivity, it was computing. Yet the numbers stubbornly refused to show it.
Solow’s remark crystallized the paradox. It wasn’t that computers weren’t everywhere—they were. The problem was that their presence did not translate into measurable economic gains. Analysts offered explanations. Perhaps the improvements were mismeasured; service-sector efficiencies, for example, were notoriously hard to capture. Perhaps there was a lag; new technology takes time to diffuse, and organizations need years to adapt their processes. Or perhaps firms were misusing IT, simply digitizing existing routines rather than redesigning them.
All three explanations proved true. By the mid-1990s, as organizations reengineered workflows, deployed enterprise software, and harnessed the internet, productivity growth accelerated. From 1995 to 2005, nonfarm business productivity in the U.S. grew at 2.5% annually[v]. The paradox had not been permanent. It was a stage in the adoption cycle of a general-purpose technology, one that requires complementary innovation before its power becomes visible.
MIT NANDA Report Findings: The AI Divide
The MIT NANDA report suggests AI now occupies the same paradoxical stage. The study documents an enterprise landscape saturated with experimentation but thin on results.
Executives report near-universal trials of tools like ChatGPT and Microsoft Copilot. Employees across industries are experimenting with generative assistants. But when asked about measurable business impact, the story changes. Only 5% of custom enterprise AI solutions reach production, while 95% stall at the pilot stage (Challapally et al., 2025)[vi].
The divide is stark. Technology and media firms, whose products or content can easily incorporate generative AI, show structural disruption. Other sectors, from healthcare to manufacturing, remain in what MIT calls an “experimentation trap.” Projects proliferate, but few touch costs, revenues, or customer behavior in ways that matter.
Interviews in the report reveal familiar frustrations. AI tools forget context, fail to improve over time, and bolt awkwardly onto legacy workflows. Employees often reject official platforms, turning instead to personal accounts of consumer AI services—what MIT terms a “shadow AI economy.” Firms chase glamorous customer-facing applications, but the clearest returns so far come from back-office automation, the unglamorous work of processing documents and streamlining support.
The result is a sobering picture: high expectations, heavy spending, low returns. In numbers, AI looks less like a revolution and more like a disappointment.
Why History Rhymes
The echoes of the Solow paradox are unmistakable. Then as now, a general-purpose technology promised transformation but underwhelmed in the short term.
Computers automated old processes before they enabled new ones. AI today drafts emails and summarizes documents, but has yet to reconfigure the structure of industries. Firms once digitized paper forms without changing workflows; firms now deploy AI pilots without rethinking operations. In both cases, the productivity payoff requires more than new tools—it requires organizational adaptation.
The productivity curve for such technologies often takes the shape of a J: initial stagnation or even decline as investments rise and organizations adapt, followed by eventual acceleration once complementarities lock in (Brynjolfsson, Rock, & Syverson, 2021)[vii]. It can be argued that AI remains in the dip of that curve: rapid technical innovation has not yet been matched by equivalent changes within organizations and their leaders and employees.
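The J-curve intuition can be made concrete. The sketch below is a stylized rendering of the mismeasurement argument, in illustrative notation rather than the exact formulation of Brynjolfsson, Rock, and Syverson’s paper: measured productivity growth understates true growth while unmeasured intangible complements are being built, and overstates it once they begin to pay off.

\[
g_{\text{measured}} \;\approx\; g_{\text{true}} \;-\; s_I\, g_I \;+\; s_K\, g_K
\]

Here \(s_I\) and \(g_I\) are the output share and growth rate of unmeasured intangible investment (training, process redesign, new workflows), while \(s_K\) and \(g_K\) are the share and growth rate of the services that accumulated intangible capital later provides. Early in adoption, \(g_I\) is large and \(g_K\) is near zero, so measured growth dips below true growth; later the terms reverse, and measured growth accelerates into the upswing of the J.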
But there is another way to interpret this gap, one that goes beyond lag or measurement error. The Serendipity Economy suggests that the value of AI may already be accruing, just not in the ways or places traditional metrics expect. AI is embedded across most enterprise apps, enabling a wide range of copilot and assistive scenarios, and much of that value is lost in the noise. The failure of large, custom applications does indeed have much to do with organizational structure and human adoption of new workflows. But AI, like the PC, is more subtle.
The daily value of AI is hiding in plain sight. Enterprise applications, such as Microsoft 365 and Salesforce, now embed AI to assist with writing, scheduling, and analysis. Employees supplement these with “bring your own AI” on phones and tablets or with side-loaded assistants that help co-create content and crunch data. Yet these gains rarely surface in traditional accounting. The ledgers capture the cost of acquiring AI, not the benefits. The instruments simply aren’t tuned to measure distributed wins: an autocomplete that saves seconds for millions of workers; a rewritten paragraph that prevents confusion and clarifies intent; a meeting summary that not only avoids duplication but sparks an idea for a colleague returning from vacation. These small increments compound, but they disappear into the noise of quarterly reports.
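To see how such distributed wins vanish from the ledger, consider a back-of-envelope calculation. Every figure below is a hypothetical assumption chosen for illustration, not data from the NANDA report or any survey:

```python
# Back-of-envelope illustration only: all figures are hypothetical
# assumptions, not data from the NANDA report or any survey.
workers = 10_000               # knowledge workers with embedded AI assistance
seconds_saved_per_day = 90     # assumed small, distributed wins per worker
working_days = 230             # assumed working days per year
loaded_hourly_cost = 75.0      # assumed fully loaded labor cost per hour, USD

hours_saved = workers * seconds_saved_per_day * working_days / 3600
implied_value = hours_saved * loaded_hourly_cost

print(f"Hours reclaimed per year: {hours_saved:,.0f}")    # ~57,500 hours
print(f"Implied annual value:     ${implied_value:,.0f}")  # ~$4.3 million
```

Tens of thousands of hours and millions of dollars in implied value, yet no accounting system books “minutes reclaimed by autocomplete” as revenue or avoided cost. Only the license fee shows up, which is precisely why the instruments read “no ROI.”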
The authors of the MIT NANDA report place the blame, on the front page of the report, on the failure of AI to learn. Considering the massive investments in AI training to date, we should not assume that tools built in a vacuum will perform well in every real-world situation. Unlike deterministic systems, which must generate a precise answer and can therefore be tested (and yet bugs still persist), a generalized LLM can be applied to an enormous number of use cases that outstrips any builder’s ability to test. In fact, AI vendor “acceptance” criteria for a new release are tested against expected outputs; LLMs are released knowing they cannot be tested against every possible use case.
Companies are not piloting or beta testing AI applications; they are piloting and beta testing AI. Just as personal computers struggled to find their purpose before the advent of spreadsheets and desktop publishing, the failures of today are not lost on the makers of AI. Yes, these experiences should be viewed as a challenge and as valuable feedback on our training approaches. Significantly, however, AI vendors are not themselves addressing back-office accounting problems; businesses are learning together how best to utilize AI. AI’s failures in business environments are as much a failure of imagination in how to employ a new tool as in how to apply AI’s non-deterministic responses to systems that were previously completely deterministic.
Few IT employees, and even fewer IT leaders, were trained to incorporate AI in its current incarnation into their mental models. Learning to use a tool that can, in many ways, be any tool is, and should be, daunting. Organizations should not step back from AI, but they should reconsider how they introduce it, looking beyond its relatively inviting chat front-ends to probe its capabilities. In my experience, good results come not from getting an AI interaction right quickly, but from arguing, reframing, restating, feeding content back to multiple models, and iterating on responses. That is not “productive” in the traditional sense, but it is useful in arriving at better answers, and that is perhaps where AI should be applied first and best: not at making us faster at the mundane, but at making us better at the profound.
The NANDA team and businesses working to incorporate AI into their workflows need to recognize that they often direct AI at areas that have proven intractable to traditional technology, and the documentation of those historical lessons forms the foundation of AI’s training set. While AI can synthesize novel patterns, it should not only be asked which tools it has available through an MCP (Model Context Protocol) call; it should also be given the goals and context of a business problem and asked to suggest ways to automate it more effectively.
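A minimal sketch of that difference in framing appears below. The tool names, goal, and context are invented for the example, and this is prompt construction only, not real MCP client code:

```python
# Hypothetical illustration: pairing a model's tool inventory with the
# goals and context of a business problem, rather than tools alone.
# All names and values below are invented for the example.

tools = ["extract_invoice_fields", "post_to_ledger", "flag_exceptions"]

goal = "Cut month-end close from ten days to five."
context = (
    "Invoices arrive as PDFs from 40 suppliers; 15% fail automated "
    "matching and are reconciled by hand, which is where the delay lives."
)

prompt = (
    f"Available tools: {', '.join(tools)}.\n"
    f"Business goal: {goal}\n"
    f"Context: {context}\n"
    "Given the goal and context, propose how to automate this process "
    "more effectively, including where the listed tools fall short."
)
# The prompt asks the model to reason about the problem, not merely to
# select a tool, which is the shift argued for above.
```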
AI excels at document and form processing, summarization, transcription, translation, and managing many of the autonomic functions of PCs, resulting in lower energy costs and extended hardware longevity. Yes, AI can be fragile in some situations, often because we are putting it into edge cases, hoping it will reason its way out of the corners we have painted it into rather than asking it to rearchitect the space.
As much as the current issues with AI rhyme with the rise of general-purpose computing, AI has, unlike computer hardware, the ability to help us solve the problems it exposes, which can accelerate solutions once we learn how to work effectively with it.
Productivity and AI: Seeing Patterns in the Past
- Promises exceed outcomes. IT was once supposed to revolutionize white-collar work overnight; AI is now posited to reinvent knowledge work in real time. Both eras began with hype and underdelivered on early ROI.
- The need for organizational adaptation. IT’s productivity returns came only after firms redesigned supply chains, merged data, and retrained staff. Similarly, AI requires orchestration—data pipelines, governance, change management—for value to surface.
- Cultural resistance hampers adoption. Early IT adoption faced user reluctance; today’s AI adoption falters among stakeholders wary of usability, transparency, or job threats.
- Traditional metrics fail to capture digital work. Productivity data undercounted IT-driven improvements in services. Likewise, AI’s impact—on creativity, problem-solving, customer insight—remains hard to quantify.
The Serendipity Economy: Hidden Value in Plain Sight
The Serendipity Economy argues that in a networked, digital world, value often arises in indirect, unpredictable ways (Rasmus, 2011)[viii]. Applied to AI, this means that 95% of projects labeled as failures may in fact be laying the groundwork for unmeasured or future gains.
An AI experiment may not yield immediate cost savings, but it may generate insights into customer behavior that can later inform a profitable strategy. A chatbot might flop in service delivery but succeed in producing training data for future models. A failed pilot might teach employees how to work with AI, knowledge that proves crucial in a subsequent, more successful initiative.

Serendipity reframes the productivity paradox. Where Solow saw computers without measurable productivity, serendipity sees hidden value accumulating beneath the surface. What is invisible to quarterly reports may later reveal itself as transformative.
An Overview of The Serendipity Economy
- Realization may far outpace creation. Implementing an AI model now may not yield ROI—but it may enable new thinking or insights later.
- Unplanned outcomes matter. A shallow chatbot pilot could highlight patterns in customer inquiries that inspire product innovation.
- New value surfaces in external contexts. Internally, a pilot may underperform; externally, ecosystem partners may unlock revenue or collaboration.
- Networks amplify value. AI deployed across teams may spur unexpected cross-pollination of ideas.
- Ecosystems reconfigure unpredictably. Shifts in regulation, market behaviors, or technologies may suddenly make early pilots look prescient.
Read Welcome to the Serendipity Economy
In essence, 95% of AI projects labeled “failures” may actually be silent incubators of future advantage. Like seeds buried underground, they store learning, data, and infrastructure. That latent potential could be realized when conditions align and opportunities emerge, similar to how electricity took decades to transform manufacturing and services.
Managing the Paradox with Serendipity in Mind
For enterprises, the implication is clear: success with AI is not only about driving immediate ROI but about cultivating conditions where serendipitous value can emerge.
That means broadening measurement beyond efficiency. It involves tracking signals such as employee engagement, new ideas, and cross-functional collaboration sparked by the use of AI. It means encouraging experimentation across networks of people and systems, not confining pilots to narrow silos. It means designing AI investments as infrastructure for learning, not just as tools for cost-cutting.
Patience is part of the strategy, but not passive patience. Serendipity requires vigilance. Firms must watch for unintended outcomes, capture them, and build on them. They must notice when a tool built for one purpose is repurposed for another, when employees invent unexpected uses, or when customer responses suggest new opportunities.
This is how the paradox resolves: not by waiting passively for productivity to show up, but by actively managing the messy middle where serendipity flourishes.
For enterprise leaders, this insight shifts strategy:
- Widen performance metrics. Track not just ROI but emergent signals—employee experiments, idea generation, reuse of systems. Look for value over time, not just at moments. Get finance involved early to deploy novel ways to capture value creation in traditional organizations.
- Build networks, not silos. Spread pilot use across teams, share learnings, and connect innovation across functions. Watch how these connections interact to create value.
- Persist with purpose. Long-shot pilots shouldn’t be automatically abandoned. They may reveal high-value insights in time.
- Favor ecosystems over one-offs. The MIT NANDA report shows external partnerships yield higher deployment success than internal builds (Challapally et al., 2025). Build deep, rich connections that increase the opportunity for serendipity.
- Capture serendipity intentionally. Set up systems—like pilot tracking, cross-functional forums, reuse logs—to capture surprising outcomes and build on them; a minimal sketch of such a log follows this list.
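As one sketch of what capturing serendipity intentionally might look like in practice, the record below is a minimal, assumed schema; the field names and the example entry are invented for illustration, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SerendipityRecord:
    """One observed, unplanned outcome from an AI pilot."""
    pilot: str            # which pilot produced the outcome
    observed: date        # when it was noticed
    intended_use: str     # what the pilot was built to do
    emergent_use: str     # what actually happened
    follow_up: str = ""   # who is building on it, if anyone
    tags: list[str] = field(default_factory=list)

# Hypothetical entry: a support chatbot pilot that underperformed at
# ticket deflection but surfaced a recurring product complaint.
record = SerendipityRecord(
    pilot="support-chatbot-v1",
    observed=date(2025, 9, 12),
    intended_use="Deflect tier-1 support tickets",
    emergent_use="Clustered inquiries exposed a confusing billing flow",
    follow_up="Billing team redesigning the invoice page",
    tags=["reuse", "product-insight"],
)
```

Even a lightweight log like this turns accidental discoveries into assets that can be searched, shared, and built upon.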
The paradox resolves when organizations treat AI not just as fuel for efficiency, but as generative support for learning and innovation.
Conclusion: The Long Game, Reframed
MIT’s NANDA report indicates AI is underperforming—95% of pilots show no measurable impact, and billions have been spent with little to show for it. But Solow’s paradox reminds us that this is often how revolutions start. Computers used to be everywhere except in productivity data. Eventually, they transformed the economy.
The Serendipity Economy explains why. Value isn’t always immediate or predictable. It emerges through networks, over time, through accidental use, and moments of insight. If organizations understand this, they can see today’s AI paradox not as a failure but as a stage in the journey.
The lesson is clear: for AI to succeed, to realize its potential value, enterprises must adopt a long-term perspective. They must invest in AI with patience and imagination, encourage serendipity, and expand their approach to tracking financial returns. With the vast amount of resources dedicated to training AI, the returns can’t just be about productivity; they need to find patterns that unleash the emergent capabilities of humans and machines―and most importantly, the two of them together.
Scenario Thinking: For AI to realize its full value, people must adopt and leverage it, seeing beyond productivity to how AI can reframe old problems with perhaps even radical approaches that we have a hard time imagining, but for which we are well suited to articulate the questions. Because AI relies on people and businesses to adopt it, we may be witnessing the early stages of pushback driven by fear, misunderstanding, or ideology. That, however, is the topic of a post that explores a different scenario, one in which the uncertainties we see today align in a different way to create another future.
References with Links
[i] Challapally, Aditya, Chris Pease, Ramesh Raskar, and Pradyumna Chari. 2025. The GenAI Divide: State of AI in Business 2025. MIT NANDA Project Report. https://nanda.media.mit.edu
[ii] Solow, Robert M. 1987. “We’d Better Watch Out.” New York Times Book Review, July 12, 1987. https://www.standupeconomist.com/pdf/misc/solow-computer-productivity.pdf
[iii] Rasmus, Daniel W. 2011. “Welcome to the Serendipity Economy.” SeriousInsights.net.
[iv] Baily, Martin N., and Robert J. Gordon. 1989. “Measurement Issues, the Productivity Slowdown and the Explosion of Computer Power.” NBER Reprint Series 1199. https://www.brookings.edu/wp-content/uploads/1988/06/1988b_bpea_baily_gordon_nordhaus_romer.pdf
[v] Oliner, Stephen D., and Daniel E. Sichel. 2000. “The Resurgence of Growth in the Late 1990s: Is Information Technology the Story?” Journal of Economic Perspectives 14 (4): 3–22. https://www.aeaweb.org/articles?id=10.1257/jep.14.4.3
[vi] Challapally, Aditya, Chris Pease, Ramesh Raskar, and Pradyumna Chari. 2025. The GenAI Divide: State of AI in Business 2025. MIT NANDA Project Report.
[vii] Brynjolfsson, Erik, Daniel Rock, and Chad Syverson. 2021. “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies.” American Economic Journal: Macroeconomics 13 (1): 333–72. https://www.nber.org/system/files/working_papers/w25148/w25148.pdf
[viii] Rasmus, Daniel W. 2011. “Welcome to the Serendipity Economy.” SeriousInsights.net.
For more serious insights on AI, click here.
For more serious insights on management, click here.