
How Agents Can Go Wrong
As organizations embrace the democratization of AI, they often overlook the messy realities lurking just beyond the pilot projects and training sessions. Giving end users the tools to build and deploy their own AI agents sounds empowering—and in many ways it is—but without careful design, clear governance, and shared intent, it also invites a wave of unintended consequences. Before celebrating the creativity unleashed by user-developed agents, it’s worth stepping back and asking: how might this new autonomy create conflict, confusion, and risk inside the enterprise? Here’s a closer look at how agents can go wrong.
1. Conflicting Objectives and Outcomes
Different users will inevitably program agents to optimize for personal or departmental goals rather than organizational strategy. One agent prioritizes speed over quality; another enforces rigid compliance at the expense of agility. The two work at cross-purposes without anyone immediately realizing it.
2. Data Fragmentation and Silos
Agents may be trained or tuned on isolated datasets. Instead of a unified knowledge base, you’ll end up with pockets of “agent intelligence” that reinforce tribalism, mistrust, and divergent versions of “truth.”

3. Uncoordinated Decision-Making
Agents making independent decisions without a common governance model will start issuing contradictory recommendations and approvals, or even executing tasks that interfere with one another. People will quickly lose faith in automated outputs.
4. Attribution and Accountability Gaps
When an agent makes a mistake, who is accountable: the user who created it, the team it served, or the platform that enabled it? Without clear attribution, conflicts will escalate rather than resolve.
5. Undermining Hierarchies and Authority
Junior employees could deploy agents that outperform or second-guess senior leadership’s decisions. Traditional lines of authority erode as algorithmic actions become more trusted than human management. And that’s before considering the hierarchies of agents proposed in agent governance models. Who will build the governance into those models, and how does that get tested? Where does the ultimate authority (and accountability) for agent orchestration lie?
6. Security and Compliance Violations
End-user agents might bypass security protocols or regulatory guidelines, whether out of ignorance or willful disregard. One rogue agent could leak confidential information or execute tasks that expose the company (or government agency, or not-for-profit) to massive liabilities.
7. Conflicting Workflows and Processes
Without integration standards, agents will automate different slices of workflows in ways that don’t align. Handoffs will fail. Process bottlenecks will multiply. People will waste hours trying to debug problems created by opaque automation.
8. Intellectual Property and Ownership Disputes
Agents generating content, code, designs, or decisions will spark debates about ownership. Does the output belong to the agent’s “owner,” the team, or the organization? Messy fights will follow, especially during performance reviews or in IP-sensitive industries.
9. Bias Amplification
Agents often reflect the biases of their creators. If individual biases get coded into agents at scale, expect systemic discrimination, microaggressions, and culture clashes to rise rather than fall.
10. Information Overload and Noise Pollution
If agents relentlessly produce insights, reports, or notifications without coordination (or the ability to personalize communications), employees will drown in low-signal, high-noise communication streams. People will spend more time sorting through clutter than acting meaningfully.
11. Passive-Aggressive Sabotage
Some employees might weaponize their agents—slow-walking processes they disagree with, subtly reshaping narratives through algorithmic influence, or creating artificial bottlenecks to advance personal agendas.
12. Emotional Disconnect and Human Alienation
When agents handle more interpersonal tasks—sending feedback, negotiating deliverables, escalating issues—the human element erodes. Relationships built on trust and empathy could degrade into sterile, mechanical interactions.
How Agents Can Go Wrong: Tread carefully
Most organizations are not preparing for these realities. Vendors like Microsoft and Salesforce tout democratized AI development, suggesting that user agents will somehow align naturally with enterprise goals. Unlike previous “democratization” technologies such as PCs, spreadsheets, local databases, or expert systems, agents aren’t localized or easily contained. They can, and likely will, operate on enterprise systems, making recommendations or taking actions that don’t account for the issues listed above. A close look at those issues also makes clear that the future failure of agents is, for the most part, not a technological problem; it arises from the people who use them, whether unwittingly, unintentionally, or maliciously.
The issues listed here aren’t new. Anyone who has managed distributed computing research and implementations knows how hard it is to break up logic and distribute computation and decision-making. What has changed, as with much of modern AI, is access, scale, and reach. These are no longer academic questions confined to agent-based research projects; they are issues likely already being faced by those deploying AI agent technology, even if they don’t know it yet.
For more serious insights on AI, click here.
Did you find this post on How Agents Can Go Wrong useful? If so, like it, subscribe, leave a comment, or just share it!