The UK has laid out a new set of AI use-principles designed to reduce risk, safeguard consumers and promote fairer competition between developers.

UK’s AI principles aim for accountability and transparency

Focusing on transparency and accountability, the principles’ creators hope to strike a balance between innovation and regulation while preventing the tech giants from achieving dominance over the rapidly evolving AI landscape.

The UK’s Competition and Markets Authority (CMA), the country’s antitrust watchdog, has taken the lead in shaping the principles, recognising the need to harness AI’s potential while avoiding the pitfalls of unchecked power in the hands of a few tech companies. The seven principles cover various aspects of AI regulation, targeting foundational models like ChatGPT and addressing concerns about market monopolisation and anti-competitive behaviour.

Sarah Cardell, Chief Executive of the CMA, highlighted the transformative potential of AI in enhancing productivity and simplifying everyday tasks. However, she also underscored the importance of preventing AI from becoming a tool for a select few to wield disproportionate power, ultimately hindering broader economic benefits.

“That’s why we have today proposed these new principles and launched a broad program of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers,” Cardell stated.

The principles will underpin the CMA’s approach to AI as it prepares to assume new powers in the coming months for overseeing digital markets. The regulator’s goal is to assemble a comprehensive framework for AI governance that takes input from key stakeholders, such as Meta, OpenAI, Microsoft, Google and other leading developers, as well as from subject-matter experts and lawmakers.

In their current state, the principles address critical aspects of AI, such as access to essential resources, diversity of business models and flexibility for businesses to use multiple AI models. 

Rather than creating a new AI regulator, the UK has opted to distribute regulatory responsibilities for AI between the CMA and other bodies overseeing human rights and health and safety. This approach aligns with global efforts, with the G7 agreeing to adopt “risk-based” regulation that maintains an open environment for AI innovation.

The AI conundrum

As the UK gears up to host a global AI safety summit in six weeks, these principles signal the country’s commitment to ensuring a balanced, competitive, and transparent AI landscape that benefits consumers and businesses alike.

The issues take centre stage at #RISK Amsterdam on September 27 and 28, where experts will debate the key issues surrounding AI, machine learning and their application in global societies.

Click here for the full #RISK Amsterdam agenda


#RISK Amsterdam is here to empower you with the knowledge, insights, and connections you need to survive and thrive in a fast-changing world of risk.

Don’t miss out on this opportunity to learn from the best and network with the brightest minds in risk.

REGISTER FOR RISK AMSTERDAM TODAY!

Key sessions taking place at #RISK Amsterdam:

A New Era of AI Governance

  • Date: Wednesday 27 September, 2023
  • Location: GRC & Financial Risk Theatre
  • Time: 12:00 – 13:00 (CET)

More than ever before, everyone’s talking about AI, from image generation to task automation to chatbots that come close to passing the Turing test.

This session will explore the challenges and opportunities associated with AI governance, and propose strategies for addressing them in the coming years. 

You’ll learn ideas and strategies for promoting responsible and ethical use of AI technology, and for addressing potential challenges in the governance of AI.

Also not to be missed:

The EU’s Game-Changing AI Act: What it Means and Where it’ll Take Us

  • Date: Thursday 28 September, 2023
  • Location: Privacy, Security & ESG Theatre
  • Time: 11:00 – 12:00 (CET)

Recently passed by the European Parliament, the AI Act is designed to shape the future of artificial intelligence. It addresses critical aspects of AI technology, including ethics, safety, transparency, and accountability.

The session aims to shed light on the act’s implications and the profound impact it will have on AI development and adoption within the EU and beyond.

Discover more at #RISK Amsterdam

With over 50 exhibitors, keynote presentations from over 100 experts and thought leaders, panel discussions, and breakout sessions, #RISK Amsterdam 2023 is the perfect place to learn about the present and future risk landscape.

Going live September 27 and 28, 2023 at RAI Amsterdam.

Click here to register for #RISK Amsterdam