We are delighted to confirm that Regulatory Innovations Practitioner and Consultant Kate Shcheglova-Goldfinch will speak at #RISK A.I. Digital.

Livestreaming March 20 and 21, #RISK A.I. Digital examines how organisations can harness AI technologies responsibly, ethically and safely.

Across two days at #RISK A.I. Digital, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI. 

Event speaker Kate Shcheglova-Goldfinch is Programme Lead on AI governance in finance at the Cambridge Centre for Finance, Technology and Regulation (CJBS), where her work includes the development of AI ethical principles.

Kate brings more than four years’ experience as a senior EBRD project manager working on fintech strategy, digital technologies and the development of regulatory platforms serving national authorities (including central banks).

Kate will be at #RISK A.I. Digital to discuss the setting of boundaries in AI. Below, she goes over her professional journey and introduces the key issues.

Setting AI Boundaries, Risk tolerance, Permissible-use case 

  • Time: 14:30 – 15:00 GMT
  • Date: Thursday 21 March (Day 2)

Click here to register for free for #RISK A.I. Digital

Could you briefly outline your career so far?

I have devoted most of my career to financial journalism. Almost five years ago, I was invited to the role of external senior PM at the EBRD to advise the Ukrainian central bank (the National Bank of Ukraine) on the development of an innovative regulatory platform.

I am very grateful for the unique experience in designing innovative regulatory acts that I acquired as a team lead and am very proud that last year, together with my team, I completed an almost three-year project to launch a regulatory sandbox for the NBU.

It was an extraordinary test of resilience: for the first time, a regulatory sandbox was launched during a full-scale war.

Last year I completed an AI programme at Oxford’s Saïd Business School and delved into the topic of AI governance, realising that it will be key for both regulators and society in the coming years.

Today I am studying country-level and corporate AI governance cases, and I am also researching AI ethics in the financial sector as Programme Lead on Regtech and AI governance in finance at the Cambridge Centre for Finance, Technology and Regulation (CJBS).

I was recently selected as a UN Women UK delegate to CSW68, and I see it as my mission to launch an AI programme for women to make the AI landscape more diverse and equitable.

What major factors should be considered when setting boundaries for the use of AI technologies?

According to “A Pro-Innovation Approach to AI Regulation”, published by the Department for Science, Innovation & Technology and the Office for Artificial Intelligence, the most effective way to drive progress is to build on shared values and priorities.

Such values and priorities include, in particular, better public services, high-quality jobs and opportunities to learn the skills that will power our future. Advancing these priorities through modern technologies should be at the top of the agenda for the development of AI.

Is it possible for businesses to remain innovative while fully respecting these boundaries and using AI ethically and compliantly?

Of course, it is possible to be innovative and fully compliant, and here it is worth emphasising Responsible AI (RAI) implementation.

RAI is generally understood to require transparency, accountability and fair (unbiased) systems from development to deployment, but I would argue that the priority should be ethics: a human-centric approach to RAI implementation and its positive social effect.

This requires not only meeting technical standards, but also implementing modern, high-quality management systems and building a diverse team. That team must reflect the company’s values and broad expertise, and be founded on gender and multicultural diversity.

Kate Shcheglova-Goldfinch explores these issues in depth at #RISK A.I. Digital when she moderates the panel: Setting AI Boundaries, Risk tolerance, Permissible-use case

As artificial intelligence (AI) continues to permeate various aspects of society, organisations are faced with the critical task of defining clear boundaries, establishing risk tolerance levels and delineating permissible use cases for AI-driven applications.

This panel will explore the complex considerations involved in setting AI boundaries and navigating the ethical and regulatory landscape.

On the panel:

Details

Setting AI Boundaries, Risk tolerance, Permissible-use case  

  • Time: 14:30 – 15:00 GMT
  • Date: Thursday 21 March (Day 2)

The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming through March 20 and 21.

Discover more at #RISK A.I. Digital

AI is a game changer, but the risks of generative AI models are significant and consequential.

The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.

Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.

Click here to register for free for #RISK A.I. Digital