As AI introduces speed, scale and uncertainty that traditional ERM was never built to handle, risk leaders are being forced to rethink resilience from the ground up. In this article, Markus Krebsz sets out how EW-AiRM© reframes enterprise risk management for the age of self-learning systems — a shift in mindset that mirrors the conversations emerging at the #RISK Executive Forums, where boards and executives are grappling with how to future-proof governance, value creation and survival in an AI-driven world.

Prof. Markus Krebsz

UN AI Lead and Risk Specialist Prof. Markus Krebsz: Synthesising 30 years of financial expertise into the future of ethical, AI-driven enterprise resilience.

Artificial intelligence (AI) is rapidly transforming the business landscape, yet boards, executives and regulators are only gradually discovering an uncomfortable truth:

traditional Enterprise Risk Management (ERM) was never designed to cater for AI risks.

It was built for slow-moving, well-understood risks, not for self-learning systems that can degrade in performance, generate anomalous outputs, deviate from expected behaviour in ways that produce unintended and often non-compliant actions, drift literally overnight, or trigger AI Black Swan events with no historical precedent.

In short, it is uncertainty galore, at massive scale with potentially devastating consequences.

ERM framework shortcomings within an AI context

Most organisations, even those deploying AI heavily, still rely on traditional ERM frameworks (such as COSO ERM or ISO 31000) to manage their risks, including those arising from innovative technologies and cybersecurity. They classify AI as just another “emerging technology risk” inside existing divisional buckets or organisational silos (e.g. IT, data science, compliance, legal, risk).

Traditionally, ERM risk assessments are point-in-time rather than continuous. Moreover, second- and third-order AI failures (e.g. model collapse, systemic bias or autonomous escalation) rarely feature explicitly in firms’ risk taxonomies and risk registers.

Although several dedicated AI risk frameworks (e.g. NIST AI RMF, ISO/IEC 42001, EU AI Act RMF) have emerged over the last few years, each of these naturally has gaps and limitations.

In contrast, EW-AiRM© (Enterprise-wide AI Risk Management©) is the first framework and approach to integrate AI-specific risk management into holistic enterprise oversight and governance:

Unlike traditional ERM approaches or standalone AI risk frameworks, EW-AiRM© bridges divisional boundaries: it is designed to sit above and augment these frameworks, thereby delivering measurable business value.

EW-AiRM©

Picture credit: Enterprise-wide AI Risk Management© (EW-AiRM©)

The EW-AiRM© multi-layered approach 

Its multi-layered methodology towards managing AI risk across the full system life cycle includes:

  • The Strategic layer, linking AI risks to enterprises’ key foundational pillars, including Strategic alignment & Necessity assessment, Organisational readiness, Data & Technological maturity, Risk Tolerance & Prioritisation, Governance & Accountability, and Through-the-lifecycle Monitoring & Adaptability. This is where AI risk is expressed in terms that boards and C-suites understand: risk-adjusted ROI, risk appetite, competitive positioning, regulatory exposure and reputational implications. Instead of treating AI as an IT sub-topic, EW-AiRM© proactively pushes AI risk into existing board committees and risk appetite statements, alongside the more traditional enterprise risk domains. Ideally, all of this happens before any AI system is deployed and then continuously throughout its lifecycle until full decommissioning (although in practice many of these pillars are addressed only as an afterthought, after the systems have already gone live).
  • The Operational layer deploys and operationalises the thoroughly researched, peer-reviewed and widely regarded MIT AI Domain Risk Taxonomy and its associated controls. This is where EW-AiRM© augments firms’ existing ERM frameworks: the taxonomy’s seven AI risk domains cover Discrimination & Toxicity, Privacy & Security, Misinformation, Malicious actors & Misuse, Human-Computer interaction, Socio-economic & Environmental harms, and AI System safety, Failures & Limitations. It then maps these to hundreds of mitigations/control measures distributed across Governance & Oversight, Technical & Security, Operational Process, and Transparency & Accountability controls. This layer is EW-AiRM©’s “engine room”: inventorying AI risks and systems, mapping them to risk categories, defining specific controls, and embedding them in firms’ policies, processes and tooling across the full system lifecycle.
  • The Resilience layer aims to substantially increase enterprises’ AI Black Swan preparedness by addressing new dimensions of unpredictability caused by AI’s emergent (mis)behaviours, compounding unknown unknowns and accelerated cascading effects. EW-AiRM© bakes all of this into scenario planning, circuit-breaker mechanisms (e.g. automatic rollback or kill-switches triggered by breaches of predefined thresholds), stress-testing, and incident response playbooks designed specifically for large-scale AI system outages, not just generic IT failures.
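To make the circuit-breaker idea concrete, the sketch below shows a minimal threshold-based kill-switch in Python. All names, metrics and thresholds here are hypothetical illustrations, not part of EW-AiRM© itself; a production implementation would sit inside the firm’s monitoring and MLOps tooling.

```python
from dataclasses import dataclass


@dataclass
class CircuitBreaker:
    """Hypothetical sketch of a threshold-based kill-switch for an AI system.

    Trips (and stays tripped) once a monitored metric breaches its
    predefined threshold - e.g. an output anomaly rate or a drift score.
    """
    metric_name: str
    threshold: float
    tripped: bool = False

    def check(self, observed_value: float) -> bool:
        """Record one monitoring observation; return True if the breaker has tripped."""
        if observed_value > self.threshold:
            self.tripped = True
        return self.tripped


# Example: halt the model if the share of anomalous outputs exceeds 5%
breaker = CircuitBreaker(metric_name="anomalous_output_rate", threshold=0.05)

for rate in [0.01, 0.02, 0.08]:  # simulated monitoring readings
    if breaker.check(rate):
        print(f"Circuit breaker tripped on {breaker.metric_name}={rate} "
              f"- rolling back to last known-good model version")
        break
```

Note that the breaker latches: once tripped it stays tripped until a human deliberately resets it, which mirrors the governance intent that recovery from an AI incident is an explicit, accountable decision rather than an automatic retry.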


#RISK Expo Europe, 10-11 November 2026, Excel London - Europe’s leading Risk, GRC, Security & RegTech Expo.

The EW-AiRM© origin story

Perhaps somewhat surprisingly, the EW-AiRM© framework was initially a by-product of our participatory policy development work, a multi-year project for the United Nations, which ultimately led to the creation of the UN ECE WP.6 AI Common regulatory arrangement (CRA) and the Declaration on the regulatory compliance of services with embedded AI systems and other digital technologies, as well as HAiPECR (The Human-Ai Paradigm for Ethics, Conduct and Risk, available in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI).

Whilst some of the 193 UN member states have now become signatories to the AI CRA and Declaration, we realised that there is another, much more practical and enterprise-focussed dimension to AI risk management. Hence, EW-AiRM© was born.

Future-proofing, not reinventing the ERM wheel

When we first publicly introduced EW-AiRM© at our GRC #RiskEurope2025 keynote, the urgent need for such a risk practitioner-focused approach quickly became clear:

An audience survey suggested that whilst more than 95% of attendees’ firms are already using AI or have been considering it, none (yes, zero!) had a robust, reliable and resilient AI risk management approach of any shape or form in place.

Urgent change is needed if enterprises want to meet the two most important objectives of risk management: i) survival (i.e. the firm’s going concern) combined with ii) long-term sustainable profit/benefit (we use “profit” for corporate/commercial enterprises and “benefit” for not-for-profits/NGOs).

In conclusion, EW-AiRM©’s aim is not to reinvent the wheel: instead, it taps into existing ERM frameworks and augments them with the thoroughly researched, detailed MIT Risk Domain Taxonomy and controls, thereby future-proofing firms by embedding AI risk management throughout the lifecycle and building robust AI Black Swan resilience.

Next stops on your EW-AiRM© implementation journey

If you want to upskill quickly, please check out the 1-day virtual EW-AiRM© public course offered in partnership with the Institute of Risk Management – all details/dates are here.

Also, please look out for the forthcoming EW-AiRM© book, scheduled for release in 2H26.

Finally, if your firm needs tailored help with implementing EW-AiRM©, you can contact us either at RiskAi.Ai or Enterprise-wide AI Risk Management© (EW-AiRM©).

Prof. Markus Krebsz

Written by Markus Krebsz, creator and author of the EW-AiRM© framework. Follow us at the Human-Ai Institute©, Enterprise-wide AI Risk Management© (EW-AiRM©)