AI needs to be regulated, but its swift evolution is posing challenges to the creation and application of new legal frameworks. Below, we look at steps that governments are taking to harness the emerging technology in a bid to promote its safe and effective use.

Nations push forward with AI regulation

During their May gathering in Hiroshima, Japan, G7 leaders recognised the urgent need for governance of AI and immersive technologies. They committed to ministerial-level talks on the matter, referred to as the “Hiroshima AI process,” with outcomes expected to be reported by the end of the year.

Japan itself is pushing to get new laws in place by the end of 2023, and these are expected to align more closely with the United States’ approach than with the stricter stance taken by the EU. In June, Japan’s privacy watchdog issued a warning to OpenAI, cautioning against the harvesting of sensitive data without individuals’ explicit consent.

In China, temporary regulations came into effect on August 15, requiring service providers to undergo security assessments and obtain clearance before launching mass-market AI products. Having secured government approval, four prominent Chinese tech companies unveiled their AI chatbots to the public at the end of August in compliance with the new rules.

Europe and the UK

EU lawmakers have agreed changes to the bloc’s draft AI laws, and are currently working through the particulars with member states to turn the drafts into official legislation. In parallel, on September 13, European Commission President Ursula von der Leyen proposed setting up a global panel to assess the risks and opportunities stemming from AI.

On September 21, Brando Benifei, the EU lawmaker leading negotiations on the bloc’s AI Act, urged member countries to find common ground on key points so that a deal can be reached before the end of 2023.

September 21 also saw Poland’s Personal Data Protection Office (UODO) announce that it would investigate OpenAI, following a complaint alleging that ChatGPT violates EU data protection rules. The complainant said OpenAI had failed to correct false information about them generated by the chatbot.

In the UK, the Competition and Markets Authority (CMA) published seven principles on September 18 aimed at making AI developers more accountable. The measures are designed to prevent dominant tech companies from monopolising the sector and to curb anti-competitive practices such as product bundling.

These proposals, brought in six weeks ahead of Britain’s hosting of a global AI safety summit, will form the foundation of the nation’s approach to AI regulation as it prepares to assume new authority to oversee digital markets.

In France, privacy watchdog CNIL said in April that it would investigate potential violations involving ChatGPT, following the tool’s temporary ban in Italy.

In May, Italy’s data protection authority said it would bring in AI experts to deepen its understanding of the technology. The authority had initially banned ChatGPT, before the chatbot was made available to users in the country again in April.

April also saw Spain’s data protection agency launch a preliminary inquiry into potential data breaches associated with ChatGPT.

Australia and the US

In Australia, the country’s internet regulator announced plans earlier in September to introduce rules requiring search engines to draft new codes aimed at preventing the sharing of AI-generated child sexual abuse material, as well as the production of deepfake versions of such material.

There have been significant regulatory developments in the US, meanwhile. In mid-September, Congress held hearings on AI and hosted a forum featuring Mark Zuckerberg and Elon Musk. More than 60 senators took part, with Musk calling for the appointment of an American “referee” for AI amid widespread acknowledgement of the need for new laws governing the technology.

On September 12, the White House announced that seven companies, including Adobe, IBM and Nvidia, had signed President Joe Biden’s voluntary commitments on AI governance, which include watermarking AI-generated content.

In a significant legal ruling on August 21, US District Judge Beryl Howell in Washington, DC determined that artworks created solely by AI do not qualify for copyright protection under US law.

Also over the summer, the Federal Trade Commission opened an investigation into OpenAI over claims that the company violated consumer protection laws.

In July, the United Nations began to sketch out its approach to AI regulation. During initial talks, the Security Council examined military and non-military uses of AI, recognising the technology’s capacity to promote global peace and bolster security.

UN Secretary-General António Guterres also unveiled plans to establish a high-level AI advisory body by the end of the year to scrutinise AI governance frameworks.

Know the rules

As momentum gathers in the race to manage AI, it is crucial that business leaders understand what is happening so that their organisations can use emerging technologies ethically, responsibly, and to best effect.

Get to the heart of the conversation at #RISK Amsterdam on September 27 and 28, where experts will debate the key issues surrounding AI, machine learning and their application in global societies.

Click here for the full #RISK Amsterdam agenda


#RISK Amsterdam is here to empower you with the knowledge, insights, and connections you need to survive and thrive in a fast-changing world of risk.

Don’t miss out on this opportunity to learn from the best and network with the brightest minds in risk.

REGISTER FOR RISK AMSTERDAM TODAY!

Key sessions taking place at #RISK Amsterdam:

A New Era of AI Governance

  • Date: Wednesday 27 September, 2023
  • Location: GRC & Financial Risk Theatre
  • Time: 12:00 – 13:00 (CET)

More than ever before, everyone is talking about AI: from image generation to task automation to chatbots that come close to passing the “Turing test”.

This session will explore the challenges and opportunities associated with AI governance, and propose strategies for addressing them in the coming years. 

You’ll learn ideas and strategies for promoting responsible and ethical use of AI technology, and for addressing potential challenges in the governance of AI.

Also not to be missed:

The EU’s Game-Changing AI Act: What it Means and Where it’ll Take Us

  • Date: Thursday 28 September, 2023
  • Location: Privacy, Security & ESG Theatre
  • Time: 11:00 – 12:00 (CET)

Recently approved by the European Parliament and now being finalised with EU member states, the AI Act is designed to shape the future of artificial intelligence. It addresses critical aspects of AI technology, including ethics, safety, transparency, and accountability.

The session aims to shed light on the act’s implications and the profound impact it will have on AI development and adoption within the EU and beyond.

Discover more at #RISK Amsterdam

With over 50 exhibitors, keynote presentations from over 100 experts and thought leaders, panel discussions, and breakout sessions, #RISK Amsterdam 2023 is the perfect place to learn about the present and future risk landscape.

Going live on September 27 and 28, 2023, at RAI Amsterdam.

Click here to register for #RISK Amsterdam