We are very happy to announce that International Data and Technology Regulation Leader Caro Robson will speak at #RISK London this October.

Caro Robson, International Data and Technology Regulation Leader

Taking place on October 18 and 19 at ExCeL London, #RISK London addresses the issues impacting organisational risk today, from Governance, Risk and Compliance (GRC), to Environmental, Social and Governance (ESG), organisational culture, and much more.

The event builds on the success of #RISK 2022, allowing organisations to examine the cumulative nature of risk, unite GRC specialities and share views with subject-matter experts.

Leveraging over 14 years’ sector experience, Caro Robson has worked with governments, international organisations and multinational businesses on data and technology regulation. Most recently, she served as Director of Regulatory Strategy for the Jersey Office of the Information Commissioner.

Caro will appear exclusively at #RISK London to discuss the growing role of AI in the workplace, and how DPOs can help to mitigate risk.

  • AI in the workplace – the DPO framework and roadmap to avoid chaos - Thursday 19 October, 15:00 - 16:00 - Data Protection & Privacy Theatre
  • Inspiring Women: Supporting Female Representation is Critical for the Future - Wednesday 18 October, 15:00 - 15:30 - ESG, Workplace & People Theatre

BOOK YOUR PLACE AT #RISK LONDON

We spoke with Caro to learn more about her professional journey, and for an introduction to the key themes in her #RISK London session.

Could you outline your career pathway so far?

I’ve worked in privacy, technology and digital regulation for over 14 years now. I started in the field working for part of the UK Government, and since then I’ve worked with the international community, built the global privacy programme for an international aviation group in Abu Dhabi, and led the privacy function in the Middle East and Africa for a multinational payments company. More recently I led the data protection and digital practice for a public policy consultancy in Brussels, consulting with the EU institutions on some really cutting-edge privacy and digital issues, and have just completed my time as Director of Regulatory Strategy for the information regulator in Jersey. So, a “privacy globe-trotter” as a colleague once said!

In your sector, what are organisations’ chief concerns as they bid to balance increased adoption of AI with compliant data handling practices?

Well, I’ve worked across several sectors, and I think that the challenges are surprisingly similar across all organisations. For example, regulators are under the same pressure as any public sector entity to maximise efficiency and effectiveness, especially where public money is concerned. Adopting new technology has to be part of that; in fact, many Data Protection Authorities are leading the way in adopting technology for improved efficiency. For any legal or compliance consultancy, the main concerns will always be the need to ensure that clients deal with regulatory matters fairly and transparently, following “proper administration.”

Whilst both private and public sectors are governed by broad data protection legislation, the public sector is often under increased regulation due to human rights law, whilst large corporations have additional pressures under company law. So it’s always important to understand the regulatory environment in which an organisation operates; AI is a field that cuts across several areas of regulation. For example, there’s been a huge increase in the use of corporate law to litigate AI issues, such as algorithmic bias, and so boards have to be very aware of the implications of any technology their organisation uses. Any solution involving AI must be used in a way that is transparent, auditable, fully tested and risk-assessed, and of course secure. Public confidence will be essential for any organisation.

What compliance benefits will an “AI revolution” bring to data protection and security professionals?

It may be a cliché, but I think the benefits of AI will only outweigh its risks if it emerges through evolution rather than revolution. We need to understand the technology, including its limitations, as it develops. This is a particular risk with AI, because the tools being released often use models that were built long before the business or individual user adopts them, and as such the user has had no input in their development. But if we can manage these risks, I think many AI-based tools have the potential to take on high-volume, low-risk tasks that might otherwise take up a huge amount of DP and security professionals’ time or be missed by them altogether.

I don’t see AI replacing DP and security professionals (who are often under-resourced to begin with), but I think AI has the potential to triage high-volume tasks, or risk-assess complex systems to highlight issues for the human professionals to focus upon. Hopefully this will free up the capacity of those teams to dedicate their time to the most complex and high-risk areas (which may include the development of AI technology itself). Examples would be the use of NLP tools to check whether large volumes of contracts contain standard data protection terms, or scanning enterprise architecture for cybersecurity risk indicators. However, the key will always be to ensure that those using the tools – up to and including the board – understand them, including their limitations. A major part of this is knowing how to ask the right questions of any AI model.

I think the growth in litigation against organisations for discrimination caused by algorithmic bias shows that boards are coming under increased pressure to scrutinise the models used by their business operations. So I think the “AI revolution” is likely to result in DP and security professionals’ roles changing to become more prominent at board level and more focused on scrutinising and explaining AI models. This should include the models’ limitations and biases; but if these are understood and managed, then hopefully the benefits will far outweigh the risks of using AI for compliance. But I don’t think it will replace DP and security professionals: at least not for a long time!

Don’t miss Caro Robson exploring these issues in depth at #RISK London, in the panel debate: “AI in the workplace – the DPO framework and roadmap to avoid chaos”

This interactive session will provide a practical roadmap for DPOs to get organised with AI, work to overcome challenges and pitfalls, and build a responsible AI strategy in the workplace.

Also on the panel:

The session sits within a packed two-day agenda of insight and guidance at #RISK London, taking place on October 18 and 19 at ExCeL London.

The event unites thought leaders and subject matter experts for a deep-dive into organisational approaches to handling risk. Content is delivered through keynotes, presentations and panel discussions.

Details

  • Session: Day 2, AI in the workplace – the DPO framework and roadmap to avoid chaos
  • Theatre: Privacy & Data Protection
  • Time: 15:00 – 16:00 BST
  • Date: Thursday 19 October 2023

#RISK London is also available on-demand for global viewing. 

Book Your Place at #RISK London
