Aravind Parthasarathy, Client Partner for the Telecom and Technology sector at NewRocket, on why autonomous AI is an organizational problem

Autonomous AI will not fail in the enterprise because of model benchmarks alone; it will fail, or succeed, based on whether organizations are willing to re‑engineer the messy, human‑dependent systems, approvals, and trust gaps that actually run their businesses today. In this interview, Aravind argues that most blockers to agentic AI at scale are organizational: brittle integrations, legacy budgeting and risk models, opaque accountability, and institutional distrust of core systems that years of workarounds have papered over but never resolved.
Top 3 Takeaways
- Autonomous AI is primarily an organizational problem: even with near‑perfect models, enterprises cannot scale agents when workflows assume humans as implicit exception‑handlers, governance moves at human latency, and no one has clearly defined what agents may do, under which conditions, and who owns their mistakes.
- Legacy processes, policies, and budgeting models silently sabotage AI beyond the pilot stage, from approval chains that reveal a much lower real autonomy threshold than leadership statements, to IT spend structures that reward maintenance over reinvention, while knowledge loss from retiring engineers erodes already fragile system understanding.
- The clearest sign an enterprise is not ready for autonomous AI is its inability to name a concrete accountability structure for bad agent decisions and to redesign risk, data access, and cross‑boundary workflows accordingly, regardless of how impressive vendor demos look in controlled environments.
The Aravind Parthasarathy Interview
Are most enterprises failing at autonomous AI because the technology is immature, or because their own environments are a mess?
The models are still imperfect (hallucinations, brittle tool use, and occasional multi-step reasoning failures), but they’re improving quickly. What isn’t improving quickly is the set of enterprise processes and functions we’re deploying the AI on. Tightly coupled apps, patchwork integrations, and workflows that still depend on humans as the “glue” are problems at every enterprise.
Even if you assume a perfect model today, most enterprises still can’t deploy autonomous agents at scale because the failure points are organizational: brittle integrations, inconsistent permissions, data no one fully trusts, and governance processes that move in days while agents operate in milliseconds. A common pattern is a pilot that works in a sandbox but breaks the moment it needs real entitlements (for example, creating a customer credit memo, changing an order, or initiating a refund) because no one has defined up front what the agent is allowed to do, how it’s audited, and who is accountable when it’s wrong.
Workforce skepticism compounds all of this. When employees perceive autonomous AI as a job reduction threat rather than a force multiplier, the organizational immune system activates before the technology even gets a fair test — and agents get confined to read-only mode indefinitely.
What are CIOs still underestimating about the gap between deploying AI features and supporting true autonomy?
Governance velocity is still underestimated; governance tends to be bolted on as an after-the-fact retrofit. Shipping an AI feature, such as a copilot, summarizer, or chatbot, doesn’t require the company to change how decisions get made. True autonomy does. You have to re-engineer the decision: who (or what) can authorize an action, under what conditions, with what controls, and with what audit trail.
Most enterprises still run approval chains designed for human judgment, with latency measured in hours or days. Agents operate in milliseconds. You can’t bolt a governance model built for human review onto such a system – by the time any issues are identified, it will be too late. CIOs often underestimate how much process and accountability redesign is required up front.
Before any agent can be granted meaningful authority, the organization must have already answered questions it has likely never formally asked: who can authorize what, at what threshold, with what stop conditions, and through what audit trail.
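To make that concrete, here is a minimal sketch of what re-engineering the decision can look like in code. It is illustrative only: the action names, thresholds, and field names are assumptions, not any specific platform’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionPolicy:
    action: str                  # e.g. "issue_refund"
    max_amount: float            # above this, a named human must authorize
    stop_conditions: list[str]   # flags that halt the agent outright

@dataclass
class AuditRecord:
    action: str
    amount: float
    decision: str                # "auto_approved" | "escalated" | "stopped"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def authorize(policy: ActionPolicy, amount: float, flags: set[str]) -> AuditRecord:
    """Decide at machine speed what a human approval chain decided
    implicitly, and leave an audit record either way."""
    if flags & set(policy.stop_conditions):
        decision = "stopped"          # hard guardrail: never proceed
    elif amount <= policy.max_amount:
        decision = "auto_approved"    # within the agent's delegated authority
    else:
        decision = "escalated"        # route to the accountable human owner
    return AuditRecord(policy.action, amount, decision)

# Usage: a refund agent bounded at $500, stopped on suspected fraud.
policy = ActionPolicy("issue_refund", max_amount=500.0,
                      stop_conditions=["suspected_fraud"])
print(authorize(policy, 120.0, set()).decision)                # auto_approved
print(authorize(policy, 9_000.0, set()).decision)              # escalated
print(authorize(policy, 120.0, {"suspected_fraud"}).decision)  # stopped
```

The point is that the authority boundary, the stop conditions, and the audit trail become explicit and machine-checkable, rather than implicit in a human review chain.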
Which legacy processes, policies, or practices most often kill AI initiatives after the pilot phase?
Three patterns dominate.
First: exception-handling by human escalation. The moment an agent hits an edge case, the workflow assumes “a person will catch it.” Enterprises have thousands of these silent human checkpoints, especially in finance, customer ops, and supply chain. Unless those checkpoints are engineered to feed the agent’s learning so that exceptions continually shrink, the AI success rate will never scale to a sustainable level of benefit.
Second: data and access policies built for system-to-system integration, not agent-to-system reasoning. Agents need contextual data across domains. For instance, an agent troubleshooting a dispute over international roaming charges on a customer’s wireless phone bill will need access to customer data; order or contract data, including any special rates applied through factors such as corporate discounts; past customer support records; usage data; billing data; and inter-operator transaction records. However, most permission models were written for fixed applications, not dynamic tool-using actors. AI pilots often work with a curated dataset; production fails when real entitlements and audit requirements show up.
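As a hedged sketch of what agent-oriented entitlements for the second pattern might look like, in contrast to standing application-level grants: the agent receives a short-lived, task-scoped grant covering only the domains this troubleshooting task needs. The domain and task names here are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The data domains this one troubleshooting task needs (illustrative names).
ROAMING_DISPUTE_DOMAINS = {
    "customer_profile", "contracts", "support_history",
    "usage", "billing", "inter_operator_settlements",
}

@dataclass(frozen=True)
class Entitlement:
    task_id: str
    domains: frozenset     # what this grant covers
    expires_at: datetime   # grants are time-boxed, never standing

    def allows(self, domain: str) -> bool:
        return domain in self.domains and datetime.now(timezone.utc) < self.expires_at

def grant_for_task(task_id: str, domains: set, ttl_minutes: int = 30) -> Entitlement:
    """Mint a task-scoped grant. In production, this call itself would be
    policy-checked and written to the audit log."""
    expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    return Entitlement(task_id, frozenset(domains), expires)

grant = grant_for_task("dispute-4711", ROAMING_DISPUTE_DOMAINS)
print(grant.allows("billing"))      # True: in scope for this task
print(grant.allows("hr_records"))   # False: never granted, even transiently
```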
Third: change and risk processes that treat any production change as a multi-week, multi-stakeholder event. Agentic systems inevitably surface new edge cases and require frequent updates to prompts, tools, policies, and guardrails. The AI pilot passes these gates because it’s supervised and contained. Scaling triggers the enterprise “immune system”: security, risk, audit, and ops controls that were never designed for high-frequency change. Governing that function is where enterprises need to adopt policy-as-code.
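To illustrate the policy-as-code idea, here is a minimal sketch in which change rules live in version control as data and are evaluated automatically, so routine agent updates don’t wait on a multi-week review board. The rule set and change categories are assumptions for illustration, not a specific framework.

```python
# Change rules as data, kept under version control alongside the agent.
CHANGE_POLICY = {
    "auto_approve": {"prompt_update", "guardrail_tuning"},
    "requires_review": {"new_tool", "entitlement_change"},
    "must_pass_evals": True,   # every change re-runs the regression evals first
}

def gate_change(change_type: str, evals_passed: bool) -> str:
    """Evaluate a proposed agent change against the coded policy."""
    if CHANGE_POLICY["must_pass_evals"] and not evals_passed:
        return "blocked: regression evals failed"
    if change_type in CHANGE_POLICY["auto_approve"]:
        return "auto-approved: deploy"
    if change_type in CHANGE_POLICY["requires_review"]:
        return "queued: human risk review"
    return "blocked: unclassified change type"

print(gate_change("prompt_update", evals_passed=True))   # auto-approved: deploy
print(gate_change("new_tool", evals_passed=True))        # queued: human risk review
print(gate_change("prompt_update", evals_passed=False))  # blocked: evals failed
```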
Is “bad data” the real problem, or is the deeper issue that nobody trusts the institution’s own systems enough to automate decisions?
“Bad data” is often a symptom of a deeper trust deficit, one that predates AI by decades. That deficit has its own origin: when organizations stopped trusting a system, they built another one to compensate. Over time, this created redundant systems with overlapping functions, divergent data stores, and no single source of truth. The sprawl is the artifact of distrust, not the cause of it.
What filled the gap was human judgment. People learned which reports to discount, which fields were unreliable, and which numbers needed triangulating before anyone acted. That institutional compensation is invisible in any data catalog and often varies by team or department. When you remove the human from this process and ask an agent to reason over the raw data, you expose every trust gap simultaneously, at machine speed and at production scale.
Organizations are not refusing to automate because their data is bad. They are refusing because automating would force them to confront, at scale and at speed, how little confidence they actually have in their own systems. That is a fundamentally different problem than a data quality initiative can solve.
Where do approval-heavy workflows expose that a company does not actually want autonomy, even when leadership says it does?
Approval processes codify an organization’s actual risk tolerance, power distribution, and accountability structure – and they are far more telling about all three than any AI strategy document.
The most telling part of an approval-heavy workflow is its exception clauses. Ask any leadership team: should the agent approve a vendor payment under $10,000 without human review? Most say yes. Under $100,000? Hesitation. What if the vendor is new? More hesitation. Cross-border? The answer becomes no. Trace the boundary where they stop saying yes, and you have the organization’s real autonomy threshold. It is almost always a fraction of the stated AI ambition.
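That traced boundary can be written down directly. A small sketch, using the illustrative figures from the conversation above:

```python
def within_autonomy_threshold(amount: float, vendor_is_new: bool,
                              cross_border: bool) -> bool:
    """True only where leadership says yes without hesitation."""
    if cross_border:        # "the answer becomes no"
        return False
    if vendor_is_new:       # "more hesitation" -> default to human review
        return False
    return amount < 10_000  # the unambiguous yes stops here

# The traced boundary is a fraction of the stated ambition:
print(within_autonomy_threshold(8_000, vendor_is_new=False, cross_border=False))   # True
print(within_autonomy_threshold(80_000, vendor_is_new=False, cross_border=False))  # False
print(within_autonomy_threshold(8_000, vendor_is_new=True, cross_border=False))    # False
```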
If an enterprise has dense, multilayered approval workflows and leadership has not explicitly committed to re-examining the risk tolerance and control assumptions that built them, they will not get past the pilot phase. Autonomy requires a deliberate decision to redesign those structures and not a workaround that leaves them intact.
Are CIOs trapped by technical debt, or are they also trapped by budgeting models that reward maintenance over reinvention?
Gartner and Deloitte research consistently reports that almost 70% of an enterprise’s IT budget goes toward maintaining current operations. That factor, combined with technical debt accumulated over time, leaves CIOs with little room to maneuver.
In this landscape, legacy operating spend is predictable, auditable, and easy to justify every budget cycle. Justifying multi-year reinvention programs, especially when they carry a high risk of failure, is much harder in this budgeting model.
Until organizations treat core AI platforms, controls, and data foundations as durable infrastructure necessary for delivering on basic business goals (not as experiments), CIOs will remain trapped in this cycle.
How serious is the knowledge-loss problem as legacy engineers retire, and are enterprises moving fast enough to capture their institutional memory?
Most enterprises are already late in controlling knowledge loss. Every enterprise carries tribal knowledge – the workarounds, undocumented dependencies, and informal rules that keep systems stable. Process mining and AI-assisted capture can surface some of it, but only where there’s a digital trace. A lot of legacy-system knowledge exists only in people’s heads.
Some enterprises resort to outsourcing entire functions, transferring the risk to a vendor and essentially paying that vendor to re-capture and document the legacy knowledge. In many cases, the impact only becomes visible after the legacy engineers retire, in the form of production incidents, data loss, or an inability to respond to specific requests.
When business units resist consolidation, how much of that is legitimate operational concern versus empire preservation?
There are certainly legitimate concerns: transition risk, SLA degradation, loss of configuration depth that took years to build, and uncertainty about how the future state will serve vertical-specific requirements. These deserve structured evaluation, and the business units need to be involved in designing both the future state and the transition model that addresses these and any other key concerns.
Empire preservation looks different: requirements that expand until no alternative solution is viable, no clarity about which specific business metrics would be affected or why, and no meaningful alternatives under discussion.
Such consolidation needs strong executive sponsors who are willing to do the work of separating substantive objections from structural delay, and to act on that evaluation.
What’s the clearest sign that an enterprise is nowhere near ready for autonomous AI, no matter how strong its vendor demos look?
The enterprise cannot answer one question: who is accountable when the agent is wrong? Not who reviews it after the fact, and not who the vendor escalation contact is, but who in the organization owns a bad autonomous decision, and what happens to them?
If the answer is vague, distributed, or deflected to the vendor, they are not ready. The governance gap shows up in follow-on questions too: How will real-time agent behavior be monitored in production? How will human operators feed structured learning back so exceptions shrink over time rather than accumulate? How will the organization break the departmental boundaries an agent inevitably has to cross?
Vendor demos sidestep all of this. They are optimized for clean data, crisp boundaries, and best-case sequences. Production autonomous AI operates in none of those conditions. An enterprise that cannot name its accountability structure has not yet decided whether it actually wants autonomy, regardless of what the strategy deck says.
About Aravind Parthasarathy, Client Partner for the Telecom and Technology sector at NewRocket

Aravind Parthasarathy is the Client Partner for the Telecom and Technology sector at NewRocket, an AI‑first ServiceNow Elite Partner. His work focuses on moving agentic AI from experimentation into production workflows, drawing on extensive experience implementing emerging technologies at enterprise scale and advising senior executives on practical, outcome‑driven adoption.
For more serious insights on AI, click here.
Did you find this interview with Aravind Parthasarathy useful? If so, please like, share or comment. Thank you!
The cover image is AI-generated from the author’s prompt and Aravind’s source photos.
