The AI Wild West is Over — Why IRM Must Now Govern the Frontier

When John A. Wheeler and Avivah Litan collaborated as colleagues at Gartner, they shared a simple but powerful conviction: technology without governance invites risk, and risk without context invites disaster. That belief feels more urgent than ever in the age of generative AI.

This month, Avivah returned to the spotlight with a compelling Gartner webinar titled “A Partner Framework to Manage AI Governance, Trust, Risk and Security.” It laid out a comprehensive vision for AI Trust, Risk, and Security Management (AI TRiSM), exposing the vulnerabilities of current AI adoption strategies and presenting a future where organizations no longer treat AI oversight as optional.

But here’s the problem: most companies are still stuck in a fractured model of Governance, Risk, and Compliance (GRC). And the rise of autonomous, agentic AI systems is about to make that dysfunction terminal.

A GRC and Information Security Wake-Up Call

Gartner’s data speaks volumes:

  • Nearly 30% of enterprises deploying AI have already experienced a data compromise.

  • 47% of leading firms are now shifting toward centralized AI teams, abandoning older decentralized models.

  • 99.5% of organizations still lack full classification and permissioning of data before it’s used in AI models.

This isn’t just a governance issue. It’s a failure of GRC to evolve at the pace of the systems it claims to oversee.

Traditionally, risks have been fragmented and isolated in separate areas such as GRC and Information Security. Disjointed efforts focused on areas like endpoint protection, access controls, and patch management depending on team priorities, not business priorities. Those measures are wholly inadequate in the digital era.

Source: IRM Navigator™ Framework by Wheelhouse Advisors

Enter IRM Navigator™: GRC and Information Security Integrated for the Age of Intelligence

Integrated Risk Management (IRM), as defined in the IRM Navigator™ Framework, focuses on identifying, monitoring, and mitigating risks related to the risk domains of enterprise, operational, technology and compliance risks.

  • From static controls to dynamic monitoring of AI applications, models, and agents

  • From rule-based enforcement to contextual, AI-powered policy engines

  • From siloed security tools to real-time, cross-functional governance systems

Take the example of a hospital using a large language model (LLM) chatbot. One physician might ask a medical question and get blocked for seeking excessive opioid dosage information. A compliance officer, with higher permissions, gets the same response approved. These policies aren’t enforced by firewalls or static rules—they’re enforced by LLMs trained on the organization’s acceptable use policies.

IRM must now account for:

  • Prompt injection threats

  • Agentic misalignment and intent drift

  • Contaminated retrieval-augmented generation (RAG) data

  • Shadow AI and orphaned agents

Gartner’s Litan spotlighted vendors like Zenity and Bosch AI Shield that monitor agent workflows in real time, map intent, and auto-remediate deviations. These aren’t security features—they are core autonomous IRM functions in the AI age.

A New Sheriff in Town — IRM

For years, enterprise technology leaders have operated under the illusion that AI adoption could be sandboxed. Let the data scientists experiment. Let marketing play with copy tools. Let compliance worry about it later. That era is over.

The Frontier Isn’t Friendly Anymore

For years, enterprise technology leaders have operated under the illusion that AI adoption could be sandboxed. Let the data scientists experiment. Let marketing play with copy tools. Let compliance worry about it later.

That era is over. Autonomous agents can now write code, send emails, and interact with customers and regulators in real time. And unlike traditional applications, these agents don’t follow playbooks—they learn, evolve, and occasionally hallucinate. What do we call that? Risk.

Strategic Guidance for IRM Leaders

To align Gartner’s AI TRiSM with IRM, organizations must:

  1. Inventory all AI use cases, models, and agents

  2. Define enforceable acceptable use policies

  3. Deploy runtime inspection systems capable of anomaly detection, data redaction, and intent mapping

  4. Map TRiSM responsibilities into existing IRM governance structures

  5. Require independence from frontier model providers to avoid vendor lock-in and ensure auditability

The shared responsibility model is a trap. As Avivah noted, even when SaaS vendors provide AI features, the enterprise is still responsible for its outcomes. If your chatbot goes rogue, no one will blame OpenAI.

Final Word

The IRM Navigator™ Framework makes clear that GRC and Information Security must evolve beyond monitoring compliance and patching vulnerabilities. It must now interrogate digital intent. IRM is not a sidebar to cybersecurity. It is the main act. GRC and Information Security must rise to meet it—or risk being replaced by something smarter, faster, and far more dangerous.

Samantha "Sam" Jones

Samantha “Sam” Jones is a seasoned technology market analyst, specializing in integrated risk management and adept at uncovering market insights through advanced analytical tools. Passionate about sustainable business practices and emerging technologies, she enjoys staying at the forefront of the industry by participating in community tech events and exploring new trends.

Previous
Previous

When Robots Walk, Risk Converges - Humanoids and the Future of Integrated Risk Management

Next
Next

The Risk Ignored — Part 1: Revisiting the Origin Story of a Software Industry