Businesses everywhere want to move forward with artificial intelligence, weaving AI into a host of business processes. There’s nothing inherently wrong with that — but our push to weave AI into business processes should not end up with human employees stuck outside those processes.

Matt Kelly, CEO, Radical Compliance

This is likely to be a significant challenge for AI governance, and for AI adoption overall. Companies will need to decide where “the human point” should be in their business processes — that is, where AI processing ends and human oversight begins. If you get that determination wrong, any number of unwanted outcomes might arise, from greater regulatory enforcement risk to AI investments not meeting your desired objectives. 

For example, say you’re a large IT services firm and you want to hire a handful of software engineers. You post those job openings online, and receive 10,000 applications. How should you use AI to assess those applications and winnow them down to a smaller number? Exactly how small should that smaller number be, anyway? 

If you use AI to whittle applicants from 10,000 down to 5,000, you might not be using AI to its fullest potential. On the other hand, if you use AI to go from 10,000 down to 10 finalists (eliminating 99.9 percent of all applicants), you're placing enormous trust in AI to make crucial decisions for you. How will you ensure that the AI isn't engaging in algorithmic discrimination or making some other costly mistake?

In truth, there is no "correct" answer to our hypothetical use case above, and many other AI use cases, from customer due diligence to dynamic product pricing to automated software development, face similar questions. AI can help to improve those processes, but only when companies know where and how to keep human judgment in place.
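To make the idea concrete, here is one minimal sketch of where the "human point" might sit in a hiring workflow. Everything in it is hypothetical and purely illustrative (the `ai_score` field, the `triage` function, and the threshold values are assumptions, not any real screening product): an AI model scores each application, a configurable cutoff lets the AI filter out only the clear non-matches, and every surviving application is routed to a human reviewer, so no hiring decision is ever automated.

```python
from dataclasses import dataclass

@dataclass
class Application:
    """A job application with a hypothetical AI-assigned match score."""
    applicant_id: str
    ai_score: float  # 0.0 (weak match) to 1.0 (strong match)

def triage(applications, reject_below=0.3):
    """Split applications into AI-rejected and human-review pools.

    The `reject_below` threshold is the "human point": the AI only
    removes applications scoring below it, and humans decide
    everything else. Raising the threshold hands more of the
    decision to the AI; lowering it keeps more work with people.
    """
    rejected, human_review = [], []
    for app in applications:
        if app.ai_score < reject_below:
            rejected.append(app)       # AI handles clear non-matches
        else:
            human_review.append(app)   # humans make the real decisions
    return rejected, human_review
```

The design choice the article describes lives entirely in that one parameter: a cautious firm might set `reject_below` low and accept a larger human workload, while a firm that trusts its model might set it high, accepting the discrimination and error risks discussed above.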

That’s the AI governance challenge we need to solve.


Three Steps for Better AI Oversight

If companies want to strike the right balance, harnessing AI to its fullest potential without its many regulatory, operational, and reputational risks derailing their plans, they need to keep three points top of mind as they develop AI governance plans.

First, educate everyone from the board on down about the risks and opportunities that artificial intelligence poses for your business. 

Different groups will need different education, depending on their role. The board will need to understand what your organization’s AI strategy is (“We will be adopting AI for this use, but not that one, and here’s why”), what types of risk your AI strategy will bring, and how you plan to keep those risks in check. Leaders of business operations teams that want to adopt AI, meanwhile, will need to know how senior management and GRC teams will evaluate those AI use-cases. 

More broadly, remember that the EU AI Act requires all companies using artificial intelligence “to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems,” and many other laws impose a duty of care for how your company uses technology. So educating employees isn’t just good business sense; in many instances it will be a regulatory obligation.

Second, collaborate across business functions to develop the right AI governance policies, procedures, and controls. 

No single business function will be able to govern AI from its own silo. Successful governance, where you can identify and evaluate all risks and opportunities and then proceed with appropriate controls that everyone will follow, depends on active collaboration among IT, cybersecurity, compliance, legal, and finance functions, as well as the first- and second-line operating functions that will be using AI systems on a daily basis.

Each of those functions brings valuable perspective. Ideally, you would establish a standing AI governance committee that includes all those voices, meeting regularly to review proposed uses of AI, the risks that would need to be addressed, and any performance metrics you've developed to keep your AI adoption on the right path.

Third, clearly define the role that human oversight will play in your adoption and use of AI. 

Human oversight of artificial intelligence is a central element of the EU AI Act. The ability to appeal AI-made decisions to a human also appears in legislative and regulatory proposals across multiple U.S. states — and it’s just good business sense, too. Companies need to be sure that all critical decisions are made by humans, and that all AI-made decisions align with your business objectives, regulatory obligations, and ethical priorities. 

That brings us back to the challenge we mentioned above: defining the human point in your business processes. That point is where human oversight enters the picture; everything before it is handled by artificial intelligence, with all the risks and rewards AI brings.

Get Started Now

Where should that point of human oversight be placed? Every company will need to answer that question on its own, in accordance with its own risk assessments, business objectives, and ethical values. But you won't be able to answer that question well without an AI-aware workforce, working across business functions to analyze AI risks carefully.

So start building that culture of AI awareness now. Otherwise, the risks will race ahead of your ability to keep up.



#RISK New York

This dynamic environment will be a central focus at #RISK New York (July 9-10, Fordham Law School), where I will be speaking. The conference provides a vital platform for leaders to discuss these evolving challenges, share strategies, and gain the insights needed to navigate the future of risk and compliance.

My session, “AI Governance for Modern Risks” (Day 1, 11:45 AM - 12:15 PM), will specifically explore how AI is challenging traditional GRC concepts and the new principles needed for effective AI governance.

You can register here.