
Enterprise AI Insights from the Field: Success Factors and Waiting for ROI
I had the pleasure of listening to leaders discuss their firsthand experiences with AI deployments and how organizations are implementing AI. I learned from Saanya Ojha of Bain Capital Ventures, who spoke with Moveworks President Varun Singh, and from several thought leaders during a KMWorld–hosted webinar that brought together TigerGraph (Victor Lee, PhD), Semedy (Charles Lagor), and TopQuadrant (Steve Hedden).
I’ll keep the takeaways simple. They all align with my experience: design for success, including how to measure ROI. If you don’t think about what success may look like, you will likely miss it.
The MIT NANDA report continues to reverberate, though it commits a sin that AI is often accused of: not understanding context. Even though most pilots never make it into production and only a handful generate measurable ROI, that doesn’t mean organizations aren’t learning or gaining value. We remain early in the AI experiment. The lesson is not that AI has failed, but that too many organizations are approaching AI without grounding projects in business reality. Just because AI appears new doesn’t mean IT should abandon practices like design, change management, and deployment onboarding.
Success for now requires narrow scopes, integrating AI into workflows, strengthening data foundations, and adopting design principles that sustain accuracy and governance at scale.
Knowledge graphs—combined with AI through techniques like GraphRAG—add another critical dimension: the ability to contextualize, govern, and scale AI so that outputs align with enterprise needs. Together, these insights offer a design playbook for organizations seeking durable returns from AI.
Enterprise AI Insights from the Field

Narrow Problems, Measurable Impact
Enterprises that succeed with AI begin by targeting tightly scoped problems. Blackstone’s automation project, which saved investors one to two hours of work per day, generated ongoing operational value. In contrast, high-visibility projects, such as an AI-powered Olympic sneaker or an AI-inspired Coke flavor, generated buzz but had little lasting impact.
The enterprise AI insights from the field included the observation that incrementalism compounds. Minor improvements should be codified, integrated, and scaled. Building on small wins is more likely to generate ROI than big projects aimed at transformation. Large projects can succeed, but they demand the same discipline and patience as any other large undertaking, and without grounding they won’t deliver. Organizations that seek big, quick wins will likely be disappointed more often, and at a much higher cost, than those that focus on well-scoped, quick wins they can build upon, even if those smaller projects don’t produce transformational savings.
Integration, Not Sideloads
AI that requires employees to “go elsewhere” rarely sticks. Chatbots that bolt onto systems without integration become curiosities rather than tools. The principle is simple: embed AI into the systems and workflows people already use.
AI integration requires mapping current processes, codifying organizational knowledge, and identifying where AI can remove friction. A procurement desk that once required a staffer to manually look up laptop order status in an ERP system can be automated, freeing up resources for higher-value work. However, this only happens when AI is integrated into existing workflows, not when it sits as an optional sidebar.
Data Foundations: From Vitamins to Painkillers
AI does not compensate for poor data. If anything, it magnifies deficiencies. Organizations that skipped investment in data governance and integration are finding their cracks exposed. Data, once seen as a hygiene factor, “vitamins” with no obvious ROI, has become essential, a “painkiller” without which AI fails.
Unstructured content, siloed CMS repositories, and incomplete metadata block effective AI. Knowledge graphs provide a structure for connecting content, metadata, and governance, transforming unstructured data into usable inputs.
Knowledge Graphs: Safe, Accurate, Scalable
AI struggles with context. Embeddings capture statistical similarity, but they do not reason about relationships. Knowledge graphs bring the missing dimension: connections. By modeling entities and their relationships, graphs give AI a way to retrieve information in context, reduce hallucinations, and deliver more relevant answers.
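The contrast between similarity search and relationship-aware retrieval can be sketched in a few lines. This is an illustrative GraphRAG-style pattern, not any specific vendor’s API; the graph, entities, and function names are assumptions made up for the example. The idea is simply that retrieval returns an entity plus its modeled relationships, so the prompt carries explicit facts rather than statistically similar text.

```python
# Minimal GraphRAG-style retrieval sketch (illustrative; entity and relation
# names are invented for this example, not drawn from a real system).
# The graph maps each entity to a list of (relation, neighbor) edges.

GRAPH = {
    "Acme Corp": [("subsidiary_of", "Globex Holdings"), ("supplies", "Widget X")],
    "Widget X": [("supplied_by", "Acme Corp"), ("used_in", "Assembly Line 3")],
}

def graph_context(entity: str, graph: dict) -> str:
    """Render an entity's relationships as plain-text facts for prompt grounding."""
    edges = graph.get(entity, [])
    facts = [f"{entity} {relation.replace('_', ' ')} {neighbor}."
             for relation, neighbor in edges]
    return "\n".join(facts)

def build_prompt(question: str, entity: str, graph: dict) -> str:
    """Prepend graph-derived facts so the model answers in context."""
    return f"Facts:\n{graph_context(entity, graph)}\n\nQuestion: {question}"

print(build_prompt("Who supplies Widget X?", "Widget X", GRAPH))
```

Because the facts come from modeled relationships rather than embedding similarity, the model is grounded in connections the enterprise has deliberately curated.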
Designers should treat knowledge graphs not as a one-off artifact but as part of a knowledge ecosystem. That ecosystem must:
- Support multiple roles—providers, builders, auditors, and consumers.
- Manage lifecycle states for entities (draft, review, publish).
- Enforce ontologies and constraints so data complies with governance.
- Maintain interoperability across terminologies and taxonomies.
- Track provenance, versions, and dependencies.
- Scale to millions of nodes without sacrificing transparency.
This design discipline ensures that AI outputs remain trustworthy, regulated, and contextually appropriate.
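The lifecycle-state requirement above can be made concrete with a small sketch. The states, transitions, and entity names here are illustrative assumptions, not a reference to any particular product; the point is that entities move draft → review → published under enforced rules, with provenance recorded along the way.

```python
# Sketch of lifecycle management for knowledge-graph entities.
# States, transitions, and the entity name are illustrative assumptions.

ALLOWED = {
    "draft": {"review"},
    "review": {"published", "draft"},  # reviewers approve or send back
    "published": set(),                # published entities are versioned, not mutated
}

class Entity:
    def __init__(self, name: str):
        self.name = name
        self.state = "draft"
        self.history = ["draft"]  # provenance: every state the entity has held

    def transition(self, new_state: str) -> None:
        """Move to new_state only if the governance rules allow it."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.name}: cannot move {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

e = Entity("ClinicalGuideline:42")
e.transition("review")
e.transition("published")
print(e.state, e.history)
```

A draft entity that tries to jump straight to published raises an error, which is exactly the kind of constraint enforcement the ecosystem requirements call for.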
Governance and Guardrails
Accuracy and safety are not optional. Without governance, chatbots can pull from repositories like SharePoint and reveal sensitive salary data or provide inappropriate advice. AI without constraints becomes a liability. Embedding governance metadata and policies into content ensures that guardrails follow the data downstream into every AI application.
This isn’t just compliance—it’s operational discipline. Guardrails keep AI aligned with business objectives and regulatory boundaries, reducing risk while enabling scale.
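One way to picture guardrails traveling with the data is a sensitivity label attached to each document, checked at retrieval time. The labels, roles, and documents below are invented for illustration; the pattern is simply that the filter runs before anything reaches an AI application, so a chatbot never sees what its caller isn’t cleared for.

```python
# Sketch: governance metadata that travels with content (labels and roles
# are illustrative assumptions, not a real access-control scheme).

DOCS = [
    {"id": 1, "text": "Q3 onboarding checklist", "sensitivity": "public"},
    {"id": 2, "text": "Salary bands by level",   "sensitivity": "restricted"},
]

CLEARANCE = {"employee": {"public"}, "hr_admin": {"public", "restricted"}}

def retrieve(role: str, docs: list) -> list:
    """Return only documents the role's clearance permits the AI to surface."""
    allowed = CLEARANCE.get(role, set())
    return [d for d in docs if d["sensitivity"] in allowed]

print([d["id"] for d in retrieve("employee", DOCS)])
```

Because the policy check sits in the retrieval path rather than in the chatbot, the guardrail follows the data into every downstream AI application.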
ROI Beyond Revenue
Leaders often pursue AI use cases in sales and marketing because the ROI is easy to articulate in terms of revenue. Yet, the real leverage often lies in back-office and mid-office functions, such as HR, claims processing, customer support, procurement, and internal operations. These areas are characterized by redundancy and fragmentation that AI can help eliminate.
ROI should not be framed narrowly as revenue growth. It must also include cost reduction, cycle-time improvement, error avoidance, and employee capacity expansion.
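A broader ROI framing can be expressed as a simple calculation that sums several value streams against project cost. The figures below are made-up illustrations, not benchmarks; the structure is what matters, with labor savings and error avoidance counted alongside whatever revenue effects exist.

```python
# Sketch: ROI framed beyond revenue (all figures are illustrative assumptions).
# Annualized value = labor hours saved + errors avoided, against project cost.

def broad_roi(hours_saved_per_week: float, hourly_rate: float,
              errors_avoided: int, cost_per_error: float,
              annual_project_cost: float) -> float:
    """Return ROI as (value - cost) / cost across multiple value streams."""
    labor_value = hours_saved_per_week * 52 * hourly_rate
    error_value = errors_avoided * cost_per_error
    total_value = labor_value + error_value
    return (total_value - annual_project_cost) / annual_project_cost

# Hypothetical example: 40 hours/week saved at $60/hr, 120 errors avoided
# at $500 each, against a $150,000 annual project cost.
print(round(broad_roi(40, 60, 120, 500, 150_000), 2))
```

A project that looks marginal on revenue alone can clear the bar once cycle-time and error-avoidance value is counted, which is why back-office use cases so often outperform flashier ones.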
Key Design Recommendations
- Scope narrowly, scale deliberately: Start with bounded problems that deliver incremental value and expand from there.
- Integrate into workflows: AI must live where work is done, not as an add-on.
- Invest in data as infrastructure: Treat governance and metadata as essential design elements, not afterthoughts.
- Adopt knowledge graphs for context: Use graphs to align data, enforce ontologies, and ground AI outputs.
- Embed governance: Ensure policies, constraints, and provenance travel with the data.
- Measure ROI broadly: Look beyond revenue; target efficiency, reliability, and employee value.
Enterprise AI Insights from the Field: Final Thought
AI does not fail in enterprises because it lacks capability. It fails because organizations often misdesign the environment into which AI is introduced, prioritizing vendor promises over hard-learned lessons from other technologies. A narrow scope, workflow integration, strong data foundations, and knowledge graph ecosystems are not optional; they are the conditions under which organizations realize ROI. Enterprises that adopt these design principles will find success, even if it takes time, and ultimately sustain value as AI continues to mature.
For more serious insights on AI, click here.
For more serious insights on management, click here.
Did you enjoy Enterprise AI Insights from the Field? If so, like, share or comment. Thank you!