

The interview with Sean Iannuzzi reinforces the shift I’m seeing in the AI market from abstract promises of “AI transformation” to the gritty, operational realities of deploying intelligent systems in real enterprises. Over the last year, I’ve explored governance, orchestration, and institutional readiness as the true bottlenecks to AI value; Sean’s perspective from NewRocket’s Global Agentic AI Center of Excellence grounds that story in field evidence from incident triage, integration patterns, and agent operations. Rather than another speculative conversation about agents, this interview sharpens the focus on minimum viable workflows, bounded autonomy, and the organizational changes required to make “AI as digital workforce” both dependable and accountable.
In that sense, this interview extends the Serious Insights on AI series from “why” to “how” and “how fast.” It connects earlier discussions of AI strategy and governance to concrete patterns: unified operational data models that make agentic reasoning safe, orchestration layers that coordinate specialized agents much as air traffic control coordinates aircraft, and change programs that acknowledge that adoption will progress at the pace of the most risk-sensitive functions. Readers looking for a pragmatic lens on agents in 2026 (what to automate first, how to instrument trust, and where organizations will actually stall) will find this conversation a useful bridge between vision decks and production reality.
Top takeaways from the interview:
- The primary constraints on agent adoption are organizational, not technical. Ambiguity in ownership and process, uneven maturity across functions, and deterministic expectations of probabilistic systems slow progress more than model limits; that is why sequencing change, strengthening controls, and treating agents as managed, evolving production systems are non-negotiable.
- The minimum viable “agentic” workflow in 2026 is not about maximum autonomy; it is about a tightly scoped loop where AI interprets a signal, assembles context, acts within policy, and closes the loop with auditable, measurable impact—often delivering 20–30 percent effort reduction on high-volume work like Tier 1 incident handling.
- Enterprise AI advantage increasingly comes from unified data and orchestration rather than any single model or agent: trusted, consolidated operational context and a governed orchestration layer enable safe, explainable agents, reduce integration brittleness, and turn domain knowledge into reusable institutional assets.
The Sean Iannuzzi interview
What’s the minimum viable end-to-end workflow in 2026 that proves the thesis? Pick one concrete process (e.g., ITSM incident triage to remediation, order-to-cash, HR onboarding) and walk the steps.
From an enterprise leadership perspective, the objective is not maximum autonomy. The objective is dependable, governed outcomes.
The minimum viable workflow is one where AI can interpret a signal, assemble the necessary operational context, reason through possible actions, take a bounded step within policy, and close the loop with clear auditability and measurable impact. A practical example is incident triage-to-remediation: assess the issue, determine likely causes, recommend or execute an approved action, and automatically document the result. Human teams focus on exceptions and higher-judgment decisions rather than routine coordination.
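As a rough illustration, here is a minimal Python sketch of that loop, assuming hypothetical names throughout (Incident, assemble_context, the AUTO_APPROVED policy table). It is not a ServiceNow or NewRocket API, just the shape of the pattern:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Incident:
    id: str
    description: str
    severity: str  # e.g. "low", "medium", "high"


# Actions the agent may take autonomously under policy, keyed by (action, severity).
AUTO_APPROVED = {("restart_service", "low"), ("clear_cache", "low")}

audit_log: list[dict] = []


def assemble_context(incident: Incident) -> dict:
    # Stand-in for querying a unified operational data model (owners, CIs, changes).
    return {"owner": "platform-team", "related_changes": []}


def propose_action(incident: Incident, context: dict) -> str:
    # Stand-in for model-driven reasoning over the incident and its context.
    return "restart_service" if "timeout" in incident.description else "escalate"


def triage(incident: Incident) -> str:
    context = assemble_context(incident)              # 1. assemble operational context
    action = propose_action(incident, context)        # 2. reason about possible actions
    if (action, incident.severity) in AUTO_APPROVED:  # 3. bounded step within policy
        result = f"executed:{action}"
    else:
        result = f"escalated_to:{context['owner']}"   #    exceptions go to humans
    audit_log.append({                                # 4. close the loop, auditably
        "incident": incident.id,
        "action": action,
        "result": result,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return result


print(triage(Incident("INC-1001", "API timeout on checkout", "low")))
```

The important property is that every path, whether executed or escalated, lands in the audit log.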
The key is sequencing. Organizations benefit from selecting one high-impact, repeatable workflow, proving measurable value quickly, and then expanding deliberately. In practice, we commonly see 20 to 30 percent reductions in overall handling effort, with significantly larger gains on Tier 1 work, when these loops are executed consistently.
A practical starting point is to choose workflows where exceptions are already well documented, volumes are high, and impact can be measured within the first 60 to 90 days. Early success builds both trust and momentum.

Where do current automations stall most often: identity, permissions, data quality, process ambiguity, exceptions, or organizational politics?
Technical capability is rarely the primary constraint. Operating model readiness tends to matter more.
Automation surfaces ambiguity quickly. As soon as systems begin acting autonomously, gaps in ownership, permissions, and process clarity become visible. Fragmented processes and unclear accountability slow progress more than model sophistication.
Data quality and completeness are foundational. AI requires trusted and complete context to act safely. This is why unified operational data models provide structural advantages. When configuration items, relationships, and ownership exist in a single system of record rather than scattered across tools, AI can reason safely and completely. Many organizations underestimate this advantage until they attempt agents across fragmented architectures.
Successful AI programs are therefore as much organizational and architectural as they are technical.
What architecture actually works in enterprises: a single “boss agent,” a multi-agent swarm, an orchestration layer over deterministic workflows, or a contextual mix, and why?
Architectures that perform reliably in enterprise environments balance adaptive intelligence with clear governance.
Rather than concentrating intelligence into a single coordinating agent or relying on loosely structured swarms, organizations generally see more consistent results with specialized, context-aware agents coordinated through an orchestration layer with defined guardrails and policy enforcement.
A single “boss agent” often becomes a bottleneck and a single point of failure. Unconstrained swarms introduce coordination risk and unpredictable behavior. Orchestration provides modularity, accountability, and failure isolation while still allowing agents to reason and adapt dynamically.
A useful analogy is air traffic control. Individual pilots operate their aircraft independently, but a centralized control layer ensures safe coordination at scale. Enterprise AI benefits from the same pattern: specialized capability with shared governance.
This approach enables non-deterministic reasoning while preserving predictability, auditability, and risk control.
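To make the contrast concrete, here is a hedged sketch of the orchestration pattern: specialized agents registered under a coordinating layer that enforces policy centrally. All names (network_agent, policy_allows) are illustrative, not a particular product’s API:

```python
from typing import Callable

# Specialized agents: each handles one domain (stubs standing in for model-backed agents).
def network_agent(task: str) -> str:
    return f"network diagnosis: {task}"

def identity_agent(task: str) -> str:
    return f"access review: {task}"

AGENTS: dict[str, Callable[[str], str]] = {
    "network": network_agent,
    "identity": identity_agent,
}

def policy_allows(domain: str, task: str) -> bool:
    # Shared guardrail: identity changes always require prior human approval.
    return domain != "identity" or "approved" in task

def orchestrate(domain: str, task: str) -> str:
    agent = AGENTS.get(domain)
    if agent is None:                       # failure isolation: unknown work escalates
        return "escalate: no agent registered for this domain"
    if not policy_allows(domain, task):     # policy enforced centrally, not per agent
        return "blocked: pending human approval"
    return agent(task)                      # specialized capability, shared governance

print(orchestrate("network", "latency spike in region-1"))
print(orchestrate("identity", "grant admin to contractor"))
```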
What are the non-negotiable controls: approval gates, audit logs, policy-as-code, model monitoring, segregation of duties, rollback, and “kill switch” design?
If AI participates in operational execution, it should meet the same standards as any other enterprise system.
This includes explicit identity, role-based permissions, approval gates for higher-risk actions, policy enforcement, comprehensive audit trails, rollback capability, and the ability to disable or adjust behavior immediately as needed. Actions must be explainable and attributable.
Separation of duties is also critical. Design, approval, and execution should not reside with the same system or team. That structure mirrors how regulated environments already manage financial and operational risk.
These controls are not barriers to innovation. They are what make responsible scale possible.
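A minimal sketch of what policy-as-code plus a kill switch can look like, using an illustrative rule format rather than any specific policy engine:

```python
# Policy-as-code: each action has explicit autonomy and severity bounds.
RULES = {
    "restart_service": {"auto_ok": True, "max_severity": "low"},
    "modify_firewall": {"auto_ok": False},  # always behind an approval gate
}
SEVERITY = ["low", "medium", "high"]

KILL_SWITCH = False  # flip to True to halt all autonomous execution immediately


def evaluate(action: str, severity: str) -> str:
    if KILL_SWITCH:
        return "halted"             # kill switch takes precedence over everything
    rule = RULES.get(action)
    if rule is None:
        return "denied"             # default-deny for unlisted actions
    if not rule["auto_ok"]:
        return "needs_approval"     # approval gate for higher-risk actions
    if SEVERITY.index(severity) > SEVERITY.index(rule["max_severity"]):
        return "needs_approval"
    return "allowed"


assert evaluate("restart_service", "low") == "allowed"
assert evaluate("restart_service", "high") == "needs_approval"
assert evaluate("modify_firewall", "low") == "needs_approval"
assert evaluate("drop_database", "low") == "denied"
```

Note the default-deny posture: an action the policy has never seen is refused, not guessed at.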
How does an agent earn trust over time? What telemetry and metrics show it’s safe and correct (task success rate, exception rate, time-to-resolution, compliance drift, cost per outcome)?
Trust develops through evidence and consistency rather than design alone.
Agents should be measured in the same way operational teams are measured: task success rate, exception rate, time-to-resolution, cost per outcome, compliance adherence, and frequency of human intervention. Stable and predictable performance allows autonomy to expand. Increased variance signals the need for oversight.
Trust is also tightly linked to data integrity and completeness. Inconsistent context leads to inconsistent outcomes. For that reason, data quality functions as a safety system for AI.
CFOs often focus on cost per transaction, CISOs on auditability and risk posture, and COOs on throughput and exception rates. With proper instrumentation, agents make all of these visible and measurable.
The objective is not static automation but continuous performance improvement within governance boundaries.
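A brief sketch of that telemetry, computed from hypothetical task records (the field names are assumptions, not a standard schema):

```python
from statistics import mean

# Each record is one completed task handled by the agent (fields illustrative).
tasks = [
    {"success": True,  "escalated": False, "minutes": 4.2,  "cost": 0.11},
    {"success": True,  "escalated": True,  "minutes": 18.0, "cost": 0.42},
    {"success": False, "escalated": True,  "minutes": 31.5, "cost": 0.58},
]

metrics = {
    "task_success_rate": mean(t["success"] for t in tasks),
    "exception_rate": mean(t["escalated"] for t in tasks),
    "avg_time_to_resolution_min": mean(t["minutes"] for t in tasks),
    "cost_per_outcome": mean(t["cost"] for t in tasks),
}

# Stable numbers justify expanding autonomy; rising variance signals more oversight.
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```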
What’s the integration reality: APIs vs. RPA, event-driven vs. batch, and how do agents handle brittle systems and vendor rate limits without collapsing into retries and chaos?
Enterprise environments are rarely uniform. Most operate a mix of modern platforms, legacy systems, and specialized tools that have accumulated over many years.
Traditional integration methods such as APIs, events, and workflow automation remain important foundations, but they are no longer sufficient on their own. Agentic systems introduce a new requirement. Integration is no longer only about moving data between systems. It is about sharing trusted context and coordinating decisions safely across systems.
As a result, integration increasingly shifts from simple system-to-system connectivity toward context-aware and agent-to-agent collaboration. Emerging patterns such as Model Context Protocol (MCP) and agent-to-agent communication allow platforms to provide structured context to agents, including identity, permissions, operational state, and approved tools. Instead of repeatedly reconstructing context through brittle calls and scripts, agents receive governed context directly and can act with greater reliability and accountability.
This reduces coupling, limits failure propagation, and improves resilience at scale.
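The sketch below shows the general shape of such a governed context bundle. It is inspired by the idea behind MCP-style structured context, not the Model Context Protocol specification itself, and every field name here is illustrative:

```python
from dataclasses import dataclass, field


@dataclass
class AgentContext:
    agent_id: str
    permissions: list[str]                                    # what this agent may do
    operational_state: dict = field(default_factory=dict)     # trusted system state
    approved_tools: list[str] = field(default_factory=list)   # bounded capabilities


ctx = AgentContext(
    agent_id="triage-agent-01",
    permissions=["read:incidents", "execute:restart_service"],
    operational_state={"service": "checkout-api", "status": "degraded"},
    approved_tools=["itsm.lookup", "runbook.execute"],
)

# The agent receives this governed bundle directly, instead of reconstructing
# context through brittle per-system calls and scripts.
print(ctx)
```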
Organizations with consolidated platforms also experience fewer integration surfaces and fewer points of failure. When workflows, operational data, relationships, and governance reside within a unified system of record, agents require less stitching and fewer custom connectors. That simplicity often produces more stability than best-of-breed architectures that depend on numerous integrations to approximate the same visibility.
In practice, resilience and observability matter more than architectural purity. The objective is not to connect everything. The objective is to reduce complexity while enabling intelligence to operate safely and consistently.
Finally, the first step is often normalization rather than automation. Reconciling duplicates, clarifying ownership, repairing relationships, and improving data completeness create the trusted context agents need to reason correctly. Without that foundation, even sophisticated integration strategies become fragile.
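As a small illustration of that normalization step, here is a hedged Python sketch that reconciles case-variant duplicates and flags records with no owner; the record fields are hypothetical:

```python
# Raw configuration items with a case-variant duplicate and a missing owner.
records = [
    {"ci": "checkout-api", "owner": "platform-team", "env": "prod"},
    {"ci": "Checkout-API", "owner": None,            "env": "prod"},
    {"ci": "billing-db",   "owner": None,            "env": "prod"},
]

normalized: dict[str, dict] = {}
for rec in records:
    key = rec["ci"].lower()                            # reconcile case-variant duplicates
    merged = normalized.setdefault(key, rec)
    merged["owner"] = merged["owner"] or rec["owner"]  # keep any known owner

missing_owner = [k for k, r in normalized.items() if not r["owner"]]
print(f"{len(records) - len(normalized)} duplicate(s) merged; "
      f"still lacking an owner: {missing_owner}")
```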
What are the top failure modes you’ve already seen or expect: hallucinated actions, misaligned incentives, cascading errors, prompt injection, data leakage, privilege escalation?
Common risks include actions based on incomplete context, excessive permissions, cascading effects across dependent systems, and insufficient oversight.
Another frequent issue arises when governance or ownership is unclear. When reasoning systems act without shared context or clear accountability, outcomes become inconsistent. Misaligned incentives between teams can also create friction where local optimization undermines system-wide performance.
A more subtle but equally common failure mode is expectation mismatch. Many organizations approach large language models and agentic systems with a deterministic mindset, expecting perfect, identical outcomes every time. These systems are probabilistic by nature. They behave more like skilled operators who improve with feedback rather than static automation that never varies. When teams expect zero variance, even small edge cases can erode trust and stall adoption.
Successful programs recognize this distinction early. They design for measurable reliability and continuous improvement rather than perfection. The objective is consistent, governed performance at scale, not flawless behavior on every transaction.
These risks are best mitigated through bounded agency, strong controls, clear accountability, and incremental rollout rather than broad, immediate autonomy.
“Reduce headcount” is a big part of the discussion. Where do roles actually disappear, where do they shift, and what new work shows up (agent ops, governance, process redesign, data stewardship)?
In practice, the change is typically a shift in work rather than a broad reduction in roles.
Repetitive coordination tasks such as routing, documentation, and manual validation are well-suited for digital contributors. This allows people to focus on higher-value responsibilities, including oversight, exception handling, process design, and strategic decision-making.
At the same time, new responsibilities emerge around governance, data stewardship, and managing the AI workforce itself. The overall effect is usually an elevation of human contribution rather than replacement.
If you had to bet on one blocker that makes 2026 slower than predicted, be it regulation, security, change management, procurement, or model reliability, which one, and what mitigations are realistic?
The primary constraint is not model capability. It is organizational adaptability.
AI capabilities are advancing very quickly, but enterprises require time to responsibly absorb that change. Ownership must be clarified, risk and compliance expectations aligned, processes redesigned, and governance embedded. These adjustments take time because they affect how work is performed, how decisions are made, and how accountability is structured.
Another reality is that maturity is rarely uniform across an organization. Different departments often progress at different speeds. Technology teams may be ready to operationalize agents quickly, while legal, compliance, HR, or operational functions may require additional controls and confidence before expanding autonomy. In practice, enterprise adoption tends to move at the pace of the most risk-sensitive or least-prepared domain.
Many organizations also discover that their processes are less standardized than assumed. Variability that was manageable for humans becomes a blocker for safe automation. Addressing that variability requires deliberate simplification and normalization.
Technology can evolve in months. Institutional alignment and operational readiness evolve in quarters. The organizations that move fastest are not those with the most advanced models, but those that sequence change carefully, prove value early, and build trust step by step.
What are your thoughts about agents and IP?
Models and individual agents are becoming increasingly interchangeable. As foundation capabilities mature, the durable advantage rarely comes from any single model or standalone agent.
Domain-specific agents absolutely matter. They encode subject-matter expertise, workflows, and operational nuance that make intelligence useful in practice. However, from a defensibility standpoint, the greater value typically lies not in the individual agent itself, but in how agents are assembled, coordinated, and governed as part of a broader system.
In enterprise environments, the real differentiation emerges at the composition layer. The platform that brings agents together, assigns roles, connects skills and tools, enforces policy, and orchestrates dynamic workflows is what consistently drives outcomes. Specialized agents are the contributors. The orchestration layer is the system.
In practice, this means leveraging platforms that provide both general AI capabilities and enterprise-grade orchestration, then investing in domain-specific configuration, operational knowledge, and reusable assets such as prompt libraries, skill templates, and agent team blueprints. These artifacts capture institutional learning and can be applied repeatedly across use cases, which makes them far more durable than any single implementation.
Organizations that attempt to build orchestration infrastructure from scratch often underestimate the ongoing maintenance and risk burden. The more defensible intellectual property comes from process design, governance discipline, and accumulated operational context rather than from building foundational plumbing.
Over time, competitive advantage accrues to how effectively intelligence is composed and embedded into the enterprise, not to any one agent in isolation.
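As one illustration of those reusable assets, here is a hypothetical “agent team blueprint”: roles, skills, policies, and a shared, versioned prompt library composed into a unit that can be reapplied across use cases. Every name is invented for the example:

```python
# A reusable "agent team blueprint": composition is the durable asset,
# not any individual agent. All names are hypothetical.
incident_team_blueprint = {
    "name": "incident-response-team",
    "agents": [
        {"role": "triage",      "skills": ["classify", "assemble_context"]},
        {"role": "remediation", "skills": ["runbook.execute"],
         "policy": "auto_low_risk_only"},
        {"role": "scribe",      "skills": ["summarize", "audit.write"]},
    ],
    "prompts": "prompt-library/itsm/v3",   # versioned, shared prompt assets
    "escalation": "human_on_exception",
}

print(incident_team_blueprint["name"], "->",
      [a["role"] for a in incident_team_blueprint["agents"]])
```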
How do you think about agent change/knowledge management?
Agents should be managed as adaptive production systems rather than one-time implementations.
They require versioning, testing, staged releases, monitoring, and clear ownership. Structured feedback loops should continuously evaluate outcomes so behavior can be refined safely over time.
Knowledge, policies, and operational context must be treated as living assets. As environments change, agents learn and adapt while remaining fully auditable and controlled.
This ensures intelligence steadily improves performance without sacrificing reliability or trust.
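A compact sketch of that discipline, with an illustrative (not vendor-specific) agent manifest: a pinned version and model, a named owner, a staged rollout with a rollback target, and evaluation gates that decide promotion:

```python
# An agent treated as a versioned production artifact. Manifest fields are illustrative.
agent_manifest = {
    "name": "triage-agent",
    "version": "1.4.0",                    # versioned like any other deployable
    "model": "example-llm-2025-10",        # pinned, not floating
    "prompt_revision": "rev-27",
    "owner": "it-operations",              # clear ownership
    "rollout": {"stage": "canary", "traffic_percent": 10, "rollback_to": "1.3.2"},
    "evaluation": {"min_task_success_rate": 0.95, "max_exception_rate": 0.10},
}


def can_promote(manifest: dict, observed: dict) -> bool:
    # Structured feedback loop: promotion is gated on measured outcomes.
    gates = manifest["evaluation"]
    return (observed["task_success_rate"] >= gates["min_task_success_rate"]
            and observed["exception_rate"] <= gates["max_exception_rate"])


print(can_promote(agent_manifest, {"task_success_rate": 0.97, "exception_rate": 0.06}))
```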
About Sean Iannuzzi, Global Agentic AI Center of Excellence Practice Lead for NewRocket

Sean Iannuzzi is the Global AI Center of Excellence Practice Lead at NewRocket, where he is redefining the future of enterprise AI within the ServiceNow ecosystem. He leads NewRocket’s global AI strategy, driving innovation in generative and agentic AI through modular architectures, data-aware governance, retrieval-augmented intelligence, and multi-agent orchestration. Sean architected NewRocket’s Agentic AI Reference Architecture, the Value Realization Framework & Dashboard, and the company’s expanding portfolio of intelligent modular agents, including Elara, Ariel, Phoebe, Miles, Heidi, and others, transforming next-generation AI into measurable outcomes for enterprise clients worldwide.
For more serious insights on AI, click here.
For more serious insights on learning, click here.
Did you enjoy the Sean Iannuzzi interview? If so, like, share or comment. Thank you!
