We are very happy to announce that international data and technology regulation leader Caro Robson will speak at #RISK Digital this month.

Caro Robson, Digital Legal, Risk and Compliance

Livestreaming on 13 February 2024, #RISK Digital examines the changing risk landscape in a content-rich, knowledge-sharing environment. The one-day event sees over thirty expert speakers provide insight and guidance on how organisations can mitigate risk, reduce compliance breaches and improve business performance in the digital age.

Leveraging over 15 years’ sector experience, Caro Robson has worked with governments, international organisations and multinational businesses on data and technology regulation.

She most recently served as Director of Regulatory Strategy for the Jersey Office of the Information Commissioner.

Caro Robson will be at #RISK Digital to discuss the potential impact of AI on Third-Party Risk Management (TPRM) strategies.

Below, Caro elaborates on her professional journey and answers questions on the key issues of her #RISK Digital session.


  • Harmonising Progress and Security: Exploring the Confluence of AI and Third-Party Risk Management

     Tuesday, 13th February 2024, 13:00 – 13:30 GMT




Could you outline your career pathway so far?

I’ve worked in privacy, technology and digital regulation for over 15 years now. I started in the field working for part of the UK Government, and since then I’ve worked with the international community, built the global privacy programme for an international aviation group in Abu Dhabi, and led the privacy function in the Middle East and Africa for a multinational payments company.

More recently I led the data protection and digital practice for a public policy consultancy in Brussels, consulting with the EU institutions on some really cutting-edge privacy and digital issues, and have just completed my time as Director of Regulatory Strategy for the information regulator in Jersey. So, a “technology globe-trotter” as a colleague once said!

How are evolving technologies such as AI adding complexity to the data protection and privacy risks within Third Party Risk Management (TPRM)?

Third-party risk management with technology suppliers is nothing new. Any time an organisation relies on software or infrastructure from another company to handle its data, there are risks.

Those of us who’ve spent years negotiating tech contracts are used to protecting our organisations from information security risks, data location issues, system malfunctions, data availability, data protection issues, third party access to data… there’s a long list that organisations are used to managing!

But AI is different. Because AI requires input data to build, train and maintain its models, the use of any AI-powered system brings with it a new set of risks that are unique to AI. Any organisation using AI tools must be aware of how the data they put into the system will be used by the AI provider, in particular if it will be used for training (or “improving”) the underlying models.

This could have implications for the end user organisation, particularly if sensitive commercial information or personal data is involved. How do you communicate that to your customers in a privacy notice? Does it align with your existing business-to-business contracts? Does your cyber insurance cover it?

Organisations should also think about the output from AI tools, in particular whether they could be biased in their results or use “automated decision-making” to make significant decisions about individuals. Companies should also think about whether they can claim IP rights or commercial protection for anything generated using AI systems provided by a third party.

The law is still developing in this area, and it’s not clear whether the company using an AI tool will be able to claim IP rights over anything produced by their employees using models developed by another company.

There’s an additional risk if the provider developed the underlying AI model using data obtained contrary to data protection, IP or sectoral regulations. The legal risk may sit with the developer/provider, but the law is evolving and companies should be mindful of this when negotiating contracts with their AI suppliers.

The flip side of this risk is that AI systems built without a sufficiently large dataset can be biased in their results, which may lead to outputs that could damage the end user’s reputation or commercial bottom line.

That’s before we even begin to think about new AI-specific regulations being developed all over the world, in particular the US and EU! So, there’s a lot to consider when using any AI tools.

In what ways are TPRM strategies changing so that organisations can stay innovative while optimising privacy and security?

I think it starts with an awareness of the risks. (Hopefully our session will give organisations a good overview!) Compliance and legal teams should be mindful of these areas when negotiating contracts for products that use AI, but before a project even gets to that stage, boards and executives should make commercial decisions with these issues in mind.

There’s increasing litigation in the US and Australia to hold boards accountable if their AI-based systems give biased or unfair results. So, board members and C-suite executives need to be thinking of risk when deploying AI in their organisations.

To stay innovative whilst optimising privacy and security, organisations can think about training for all members of their workforce, including anyone using AI-based systems on a daily basis (this could include AI-powered web browsers or ChatGPT).

An AI Policy is a good place to start, setting out when AI tools can be used, what the limitations for their use are, what data can and cannot be input, and how the output results will be used. Making sure every member of the organisation is aware of the policy is important.
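For illustration only, the policy elements described above could even be captured in a machine-readable form so that proposed AI tool usage can be checked automatically. Everything in this sketch (the field names, tool names and data categories) is hypothetical, not part of any real framework discussed in the session:

```python
# Hypothetical sketch: an AI usage policy expressed as structured data,
# so approved tools and input restrictions can be checked programmatically.
# All names and values below are illustrative assumptions.

AI_POLICY = {
    # Tools the organisation has vetted and approved
    "approved_tools": {"internal-chatbot", "vendor-copilot"},
    # Data categories that must never be entered into third-party AI tools
    "prohibited_inputs": {"personal_data", "commercial_secrets"},
    # How output may be used
    "output_rules": {"human_review_required": True},
}

def check_usage(tool: str, input_categories: set[str]) -> list[str]:
    """Return a list of policy violations for a proposed AI tool usage."""
    violations = []
    if tool not in AI_POLICY["approved_tools"]:
        violations.append(f"tool '{tool}' is not on the approved list")
    for category in sorted(input_categories & AI_POLICY["prohibited_inputs"]):
        violations.append(f"input category '{category}' is prohibited")
    return violations
```

For example, `check_usage("vendor-copilot", {"personal_data"})` would flag one violation (the prohibited data category), while an unapproved tool would add a second.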

For vendor management, AI can of course provide the solution too.

There are a number of great TPRM systems available that help companies manage their vendors and supply chains. Looking for products that integrate checks around data usage and location, how underlying models make decisions, and what rights the end user organisation has over the output, should mean that AI can help to solve some of the issues it can cause for a company’s TPRM strategy.
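As a rough illustration of the kinds of AI-specific checks such a questionnaire might capture per vendor, here is a minimal sketch; the field names and flagged risk areas are assumptions for illustration, not features of any specific TPRM product:

```python
# Hypothetical sketch: AI-specific fields a TPRM vendor assessment
# might record, mirroring the checks mentioned above (data usage and
# location, decision transparency, rights over output).

from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    vendor: str
    data_used_for_training: bool      # will our inputs train their models?
    data_locations: list[str]         # where is data processed and stored?
    decision_logic_documented: bool   # can the vendor explain model decisions?
    customer_owns_outputs: bool       # rights over generated output

    def flags(self) -> list[str]:
        """Return the AI risk areas this vendor raises."""
        issues = []
        if self.data_used_for_training:
            issues.append("inputs reused for model training")
        if not self.decision_logic_documented:
            issues.append("opaque decision-making")
        if not self.customer_owns_outputs:
            issues.append("no rights over outputs")
        return issues
```

A vendor that trains on customer inputs and cannot document its decision logic would surface both issues via `flags()`, giving the compliance team a concrete list to take into contract negotiations.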

Don’t miss Caro Robson hosting a panel debate on this subject at #RISK Digital in the session: Harmonising Progress and Security: Exploring the Confluence of AI and Third-Party Risk Management

Third-Party Risk Management (TPRM) intersects with the transformative realm of Artificial Intelligence (AI). Our panel of experts will unravel the complexities involved in seamlessly integrating AI technologies while safeguarding against potential risks from external partnerships.

From evaluating vendors utilising AI, to crafting effective risk mitigation strategies, our discussion will provide invaluable insights, practical advice and real-world anecdotes to assist organisations in navigating the delicate balance between innovation and security. Join us for a compelling conversation that delves into the shifting dynamics of TPRM amid the rising influence of artificial intelligence.

Also on the panel…


Harmonising Progress and Security: Exploring the Confluence of AI and Third-Party Risk Management

Time: 13:00 – 13:30 GMT

Date: Tuesday 13 February 2024

The session sits within a packed agenda of insight and guidance at #RISK Digital, taking place on 13 February.

Discover more at #RISK Digital

#RISK Digital will examine the changing risk landscape in a content-rich, knowledge-sharing environment. Attendees will be able to learn and better understand how to mitigate risk, reduce compliance breaches, and improve business performance.

Risk is now everyone’s business. Enterprise chiefs need to be tech-savvy, understanding how GRC technology fits into strategy and how to solve regulatory challenges.



Click here to register for free for #RISK Digital