“The Future of AI Isn’t What You Think” from Foxit Featuring Daniel W. Rasmus
I had an opportunity to chat with Charles Verhey, Group Manager of Digital Media Production & Web Content Marketing at Foxit, to explore the future of AI. Here’s our YouTube conversation, followed by an overview.

Foxit Daniel W. Rasmus Interview Overview
AI keeps getting sold as if it has one destiny: an inevitable curve up and to the right. That story may be comforting, but it is also wrong. The futures of AI are multiple, contested, and highly dependent on forces beyond the technology itself—governance, energy, geopolitics, economics, and culture.
My conversation with Charles pushed hard against the single-line narrative and leaned into scenario thinking as a better frame for understanding what happens next.
From straight lines to branching futures
Forecasts love certainty. Slide decks still describe AI adoption as a smooth progression from pilots to production to transformation. The reality is closer to a set of branching paths.
Some futures see AI as mundane infrastructure, embedded quietly in workflows. Others see it constrained by regulation, energy limits, or public backlash. In darker variants, AI becomes an instrument of social control inside highly centralized regimes. In more open ones, it supports pluralism and experimentation.
All of those futures are plausible. Treating only one of them as “the” future is a choice, not an inevitability.
Scenario planning as discipline, not theater
Scenario planning was developed to think about nuclear strategy and long-range uncertainty. It does not try to predict “the right future.” It identifies major drivers and unresolved uncertainties, then turns them into a handful of contrasting worlds. Strategies, investments, and operating models are tested across those worlds to see where they hold and where they break.
That is the critical distinction: scenarios are not oracles. They are stress tests.
For AI, the uncertainties are obvious: compute costs, regulatory regimes, public trust, data access, concentration of power in a few vendors, energy availability, and more. Combine those into coherent worlds, and very different AI trajectories appear:
- AI as a regulated utility.
- AI as a semi-chaotic commercial platform economy.
- AI as state-controlled infrastructure.
- AI as a stalled project, limited by energy or capital.
Organizations that behave as if only one trajectory exists will prove less resilient than those that design for several.
From hype to portfolios of AI bets
Marketing already uses AI as garnish. If a toothbrush or razor can be labeled “AI-powered,” it will be. That doesn’t mean the capability is meaningful or economically defensible.
A more useful view treats AI not as a monolith but as a portfolio of use cases with different levels of evidence:
- Domains with proven ROI and operational learning.
- Domains where AI “should” work, but the evidence is thin or anecdotal.
- Frontier experiments where first movers might gain an advantage but face higher risk and cost.
Sorting initiatives into those buckets changes the conversation. Instead of a vague mandate to “do more with AI,” decision-makers can ask which bets deserve scaling, which should remain exploratory, and which have slipped into theater.
The danger is linear thinking: assuming every proof-of-concept inevitably becomes transformative, when many should be quietly retired.
Regimes shape AI
AI does not land in a neutral environment. Political systems shape its role.
In more autocratic contexts, AI naturally gravitates toward surveillance, behavioral scoring, information control, and narrative management. It becomes infrastructure for compliance and coercion.
In more open contexts, the same technical capabilities can support adversarial debate, policy analysis, pluralistic media, and broad experimentation. Those uses are not guaranteed; they depend on governance choices, regulatory frameworks, and institutional culture.
Debates about AI ethics often focus on model behavior, but the larger questions sit at the level of regime and institution: who owns the systems, who sets the objectives, who can contest their decisions, and under what rules.
The invisible power of background technologies
The technology that changes daily life most dramatically often becomes nearly invisible. Microwaves. GPS corrections derived from relativity. Water and sewage systems. These background utilities quietly compress time, distance, and disease into something more manageable.
AI may follow that path. The most significant contributions could arrive not as headline-grabbing assistants but as:
- More reliable logistics.
- Less wasteful energy management.
- Better triage in healthcare and social services.
- Smarter, more adaptive infrastructure.
This framing also highlights neglected priorities. For all the talk about AI-generated content and productivity hacks, basic questions like global access to clean water, resilient grids, and food security remain underexplored relative to their importance. AI applied to those structural issues matters more than another round of marketing optimization.
Culture, fiction, and feedback loops
Popular culture has already shaped AI. Voice assistants, tablets, and ambient computing owe as much to Star Trek and other speculative futures as they do to whiteboard exercises in product teams.
That influence now moves in both directions. The lived experience of AI—bugs, hallucinations, delightful moments, intrusive misfires—feeds back into the next generation of stories. Science fiction will increasingly grapple with AI not as an abstract superintelligence but as a messy, occasionally useful, occasionally frustrating part of daily experience.
Those stories in turn will influence expectations, regulations, and product roadmaps. Culture and infrastructure are entangled.
Leadership work in an AI-saturated landscape
Taken together, several implications stand out.
First, AI is better understood as an uncertainty field than as a destiny. The question is not “What will AI do?” but “What range of AI-shaped futures should an organization be ready for?”
Second, the language of “deployment” is no longer adequate. AI systems are becoming part of infrastructure. That shift demands governance: continuous monitoring, incident response, ethics embedded in operations, and a portfolio view of risk and reward across systems and vendors.
Third, the bias toward linear narratives is itself a risk. When forecasts assume smooth compounding benefits from AI, plans become brittle. Every resolved uncertainty generates new ones—on regulation, energy, labor, markets. Planning that ignores that dynamic is optimistic, not strategic.
Finally, value judgments matter. AI can help answer big, structural questions, or it can be pointed at trivial or harmful objectives. It can optimize resource allocation, or it can amplify polarization and manipulation. Those choices are not made by models; they are made by people deciding what to fund, what to measure, and what to reward.
AI will not hand over a single clear future. It will expose organizations to more futures, faster. Scenario thinking, disciplined portfolios of use cases, and serious governance are the tools that keep those futures from becoming purely accidental.
