We are delighted to confirm that AI governance leader Dr Cari Miller will speak at #RISK A.I. Digital this month.

Livestreaming March 20 and 21, #RISK A.I. Digital examines how organisations can harness AI technologies responsibly, ethically and safely.

Across two days, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI.

Event speaker Dr Cari Miller is Head of the AI Governance Practice at The Center for Inclusive Change and has been recognised as one of the 100 Brilliant Women in AI Ethics.

Dr Miller will be at #RISK A.I. Digital to discuss frameworks for procuring AI systems, revealing what best practice looks like across the entire procurement lifecycle in the AI age.

Below, she discusses her professional journey and explores the key issues.

A Framework for procuring AI Systems 

  • Time: 16:45 – 17:35 GMT
  • Date: Thursday 21st March (Day 2)

Click here to register for free for #RISK A.I. Digital

Could you briefly outline your career so far?

I started as a corporate strategist right at the cusp of the digital explosion around 1998/99, so I had a front row seat at the digital transformation frenzy. As a corporate strategist, I found myself doing things that people didn’t really get to do – leveraging big data to shape organisational structures, market penetration strategies, and more.

Going into e-commerce, I was introduced to the movement of supply chains, inventories, and customer service activities, and this eventually led me to delve into the realms of Google and Facebook advertising. Witnessing the rise of algorithms and their ubiquitous influence, I noticed those same algorithms being used to sell all sorts of things, from mundane products such as shoes to critical areas like housing and employment. It raised ethical concerns that struck a chord with me; the format didn’t seem very fair.

Feeling constrained within the confines of that role, I quit and pursued a passion project: attaining a doctorate in responsible AI research.

This pivot towards ethical considerations became a defining shift in my career. Along the way, my interests shifted towards workplace tech and ethical tech, with procurement emerging as a recurring theme. Through this journey, I kept returning to the notion of procurement as the gatekeeper, influencing decisions with ethical implications. And that’s how I arrived at where I am today.

How do an organisation’s procurement processes impact the risks associated with AI technology?

I used to think HR was the real unsung hero of any organisation, because your people are your best assets. But with the rise of AI and its pervasive influence on the workplace, I’ve had a change of heart. It’s not that HR isn’t crucial; they absolutely are. But now, it’s procurement that has become the gatekeeper, deciding what gets in and what stays out.

When suppliers come knocking, they don’t always tell you what’s under the hood of their solutions. The challenge is that some of those solutions are “high risk”, and you have to know which ones are which. With legacy IT systems, you have to know what technical debt the solutions are passing on to buyers. But when you buy an AI solution, you also need to know what “ethical debt” is being passed on too.

So, we’ve got a checklist of questions that we’ve always asked: What are the security risks? What functionalities are missing that I might need? And with AI, we’ve got to factor in questions to get to the ethical debt: What ethical choices were made in the development of this solution? Can I live with those decisions? What did they overlook, and why did they make those choices?
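To make that concrete, here is a minimal illustrative sketch in Python of how a buyer might encode such a checklist. The question wording follows the interview, but the data structure, field names and flagging logic are our own assumptions for illustration, not Dr Miller’s published framework:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChecklistItem:
        question: str
        category: str            # "security", "functionality", or "ethical_debt"
        answer: Optional[str] = None

    # The questions buyers have always asked...
    checklist = [
        ChecklistItem("What are the security risks?", "security"),
        ChecklistItem("What functionalities are missing that we might need?", "functionality"),
        # ...plus the AI-era questions that surface the "ethical debt".
        ChecklistItem("What ethical choices were made in developing this solution?", "ethical_debt"),
        ChecklistItem("Can we live with those decisions?", "ethical_debt"),
        ChecklistItem("What did the vendor overlook, and why did they make those choices?", "ethical_debt"),
    ]

    # Flag unanswered ethical-debt questions before a purchase is approved.
    open_items = [item.question for item in checklist
                  if item.category == "ethical_debt" and item.answer is None]
    if open_items:
        print("Unresolved ethical-debt questions:")
        for q in open_items:
            print("-", q)

The point of a structure like this is simply that ethical-debt questions become first-class, trackable items in the same workflow as security and functionality checks, rather than an afterthought.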

This isn’t just about ticking boxes; it’s about managing liability. Because if we drop the ball, it’s not just our reputation on the line—it’s our brand, our workflows, our operations, and most importantly, it can have consequential impacts on our employees and our customers. So, dealing with AI adds a whole new layer of complexity to the procurement process.

Could you outline the two key risk indicators that are most prevalent in AI systems developed for high-risk domains?

When we discuss risk assessment, we often focus on identifying key risk indicators. With AI, the complexity can be overwhelming. In developing my risk framework, I drew from traditional models like COSO and ISO, which follow distinct patterns.

Typically, you’ll find risk likelihood and impact matrices, often arranged in a three-by-three grid. However, I streamlined this framework into a simpler two-by-two matrix, which represents the two overarching risk categories that stood out as paramount. These two categories encompass various sub-categories, which we’ll explore further during my session at #RISK A.I. Digital.

From a procurement standpoint, especially in the UK, we’re well aware of high-risk domains such as education, healthcare, and more. However, within these domains, the level of risk can vary significantly between different systems. 

For instance, in the medical field, one system might deal with critical diagnostics like cancer detection, posing a high level of risk. Meanwhile, another system might manage medical supply inventory logistics, which, although operating within the medical domain, presents a lower risk profile.

Therefore, our risk management practices must consider these nuances when navigating the procurement lifecycle.

The first key risk indicator revolves around the impact on humans, whether direct or indirect, encompassing a range of factors. The second focuses on the complexity of the system, contrasting black-box AI with deterministic systems, for example. These distinctions will be explored in detail during my #RISK A.I. Digital session, where I’ll present a comprehensive breakdown of these crucial risk indicators.
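As a rough illustration of the idea, here is a minimal Python sketch of a two-by-two matrix built on those two indicators. The axis values and tier labels are placeholder assumptions of ours, not Dr Miller’s terminology; her sub-categories are reserved for the session:

    from enum import Enum

    class HumanImpact(Enum):
        LOW = "low"    # e.g. medical supply inventory logistics
        HIGH = "high"  # e.g. critical diagnostics like cancer detection

    class Complexity(Enum):
        DETERMINISTIC = "deterministic"  # rule-based, auditable behaviour
        BLACK_BOX = "black_box"          # opaque model, hard to explain

    # The two-by-two matrix: (human impact, system complexity) -> risk tier.
    # Tier labels are illustrative placeholders.
    RISK_MATRIX = {
        (HumanImpact.HIGH, Complexity.BLACK_BOX): "critical",
        (HumanImpact.HIGH, Complexity.DETERMINISTIC): "elevated",
        (HumanImpact.LOW, Complexity.BLACK_BOX): "elevated",
        (HumanImpact.LOW, Complexity.DETERMINISTIC): "routine",
    }

    def risk_tier(impact: HumanImpact, complexity: Complexity) -> str:
        return RISK_MATRIX[(impact, complexity)]

    # Two systems in the same "medical" domain land in very different cells:
    print(risk_tier(HumanImpact.HIGH, Complexity.BLACK_BOX))     # critical
    print(risk_tier(HumanImpact.LOW, Complexity.DETERMINISTIC))  # routine

Note how this captures the cancer-detection versus inventory-logistics contrast from the interview: the domain is the same, but the cell of the matrix, and therefore the procurement scrutiny warranted, is very different.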

Don’t miss Dr Cari Miller exploring these issues in depth at #RISK A.I. Digital in the session:

A Framework for procuring AI Systems

One of the best opportunities for organisations to control AI risks is to leverage their procurement processes.

In this session, participants will learn about the two key risk indicators most prevalent in AI systems developed for high-risk domains (e.g., employment, health/medical, education, housing, finance, public assistance, critical infrastructure, safety features). Buyers who master these risk indicators will gain substantial advantages in achieving ROI objectives.

The session will provide an overview of best practices across the entire procurement lifecycle to identify risks, develop practical mitigation strategies, and implement reasonable control mechanisms in order to establish a reliable risk tolerance posture.

Details

A Framework for procuring AI Systems

  • Time: 16:45 – 17:35 GMT
  • Date: Thursday 21st March (Day 2)

The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming March 20 and 21.

Discover more at #RISK A.I. Digital

AI is a game changer, but the risks of generative AI models are significant and consequential.

The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.

Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.

Click here to register for free for #RISK A.I. Digital