The government claims its “innovation-friendly and flexible” plans for AI regulation could help businesses avoid the “uncertainty that comes with regulatory compliance”. But could lighter-touch regulation increase risks for UK businesses and individuals alike?


Artificial intelligence is driving advancements in several important fields. But there is increasing concern about the risks associated with AI and its potential to cause social and economic harm.

As many countries attempt to mitigate or prevent these risks through tougher regulation, the UK government has published a policy paper setting out a less substantial approach. 

The proposals, still in their early stages, would initially involve no new law, no new regulator, and no new powers to existing regulators.

The UK’s “pro-innovation approach to regulating AI” is, along with the recently published reforms to data protection law, ostensibly an attempt to “unleash” the potential of technology through looser regulation.

But Lilian Edwards, professor of law, innovation and society at Newcastle University, suggests that the project could fail to achieve this.

“At a time when not just the EU but Brazil, Canada, China—and probably soon the US and India—feel legislation is necessary to create useful, saleable, trustworthy AI, the UK has apparently decided none of that stuff is necessary,” Edwards said.

The UK’s proposed approach to AI regulation

According to Nadine Dorries, secretary of state for digital, culture, media and sport (DCMS), the government’s AI policy paper seeks to establish a “regulatory framework that is proportionate, light-touch and forward-looking”.

The proposed framework would consist of a set of “cross-sectoral principles” implemented by existing regulators, with a “pro-innovation” approach to enforcement.

Cross-sectoral principles

The government’s framework would establish a set of guiding principles that apply across sectors, with existing regulators responsible for governing AI among organisations within their current remit. 

The “early proposals” for the cross-sectoral principles for AI regulation would include:

  • Ensuring that AI is used safely
  • Ensuring that AI is technically secure and functions as designed
  • Making sure that AI is appropriately transparent and explainable
  • Embedding considerations of fairness into AI
  • Defining legal persons’ responsibility for AI governance
  • Clarifying routes to redress or contestability

Newcastle University’s Edwards describes the cross-sectoral principles as “pretty much mom and apple pie for AI regulation”.

“There’s nothing particularly wrong with the UK’s long-awaited AI ‘white paper’ (the real thing is still down the road),” Edwards told me.

“But there’s also little that’s very right—or, dare one say, innovative—for a document where every second word seems to be ‘pro-innovation’.”

Implementation by existing regulators

Rather than creating a new central regulator to govern AI, the proposals would see existing regulators interpret the principles in a way that is relevant to their remit.

The policy paper states that the government will consult with the Information Commissioner’s Office, the Competition and Markets Authority, Ofcom, the Medicines and Healthcare products Regulatory Agency and the Equality and Human Rights Commission on how to develop sector-specific guidance implementing the principles.

The paper acknowledges that there are “differences between the powers of regulators to address the use of AI within their remit” and that “inconsistency” is a “key challenge” to be solved in the government’s approach.

“Many regulators will have the flexibility within their regulatory powers to translate and implement our proposed principles, but not all,” the paper states.

At present, however, the government does not see an immediate need to address this inconsistency by handing regulators new powers.

“We need to consider if there is a need to update the powers and remits of some individual regulators,” the paper states. “However, we do not consider equal powers or uniformity of approach across all regulators to be necessary.”

Enforcement

With no new powers granted to regulators—and no new law to create AI-related offences—“enforcement” might not be the correct term to use in the context of the cross-sectoral principles for AI. 

Where the paper mentions enforcement, it is largely portrayed as secondary to the requirement to promote innovation.

“Through our engagement with regulators, we will seek to ensure that proportionality is at the heart of implementation and enforcement of our framework, eliminating burdensome or excessive administrative compliance obligations,” the government says.

The government would ask regulators to address “high-risk concerns rather than hypothetical or low risks associated with AI.”

“We will also seek to ensure that regulators consider the need to support innovation and competition as part of their approach to implementation and enforcement of the framework.”

Mariano delli Santi, legal and policy officer at UK campaign group Open Rights Group, believes this characterisation of innovation and regulation to be misguided.

“The UK whitepaper on AI governance builds on the misconception that loose and self-regulatory frameworks would support innovation,” delli Santi told me. 

“This is wrong: robust ethical and legal boundaries liberate and encourage innovation.”

Delli Santi argues that rather than consisting of a loose set of principles, regulatory frameworks must be “clear” and “provide guidelines and rules that organisations can rely on to navigate the complex questions that arise from innovation.”

Divergence from the EU

The UK’s AI plans appear drastically different from proposals currently under consideration in the EU.

The European Commission’s draft AI Act, published in April 2021, would require EU member states to designate a “national supervisory authority” to monitor the market for AI products.

Authorities would have the power to impose steep fines on organisations that violate the AI Act—up to 6% of global annual turnover.

The Act would apply to all providers placing AI systems on the market or putting AI systems into service in the EU, irrespective of whether those providers are established within the EU. 

The extraterritorial nature of the law means that, irrespective of any “light touch” regime that may develop in the UK, any UK business hoping to provide or distribute AI products in the EU would be bound by the bloc’s stricter rules.

The EU’s proposals have been criticised both by those who believe the law would be too arduous and would stifle innovation, and by those who argue it does not go far enough (and who sometimes claim that some forms of innovation should be stifled).

Writing in the FT, MIT research scientist Andrew McAfee criticised the EU law’s definition of “high-risk” AI systems—which, he claimed, would impose excessive obligations on some providers of seemingly benign applications.

“AI-using entrepreneurs and early-stage investors… will baulk at these expensive and time-consuming requirements and direct their energies away from high-risk application areas,” McAfee wrote.

Conversely, the US-based NGO Human Rights Watch says the EU’s draft AI Act “does not meaningfully protect people’s rights” and criticises the law’s reliance on “self-regulation.”

In April 2021, shortly after the publication of the law’s first draft, Access Now’s Daniel Leufer told me that while the European Commission had “acknowledged that some uses of AI are very problematic for fundamental rights,” it hadn’t “gone far enough” to safeguard those rights.

Regulation of AI in the GDPR

The EU General Data Protection Regulation (GDPR) already places limitations on the use of AI systems in certain circumstances, both via the rights over “automated individual decision-making” set out in Article 22 of the regulation, and via restrictions on the use of personal data.

Research from the Future of Privacy Forum cites over 70 cases from courts and data protection authorities that involve some aspect of automated decision-making.

The “UK GDPR” remains UK law for now. As such, the country’s AI rules are still largely consistent with those of the EU.

However, the UK’s Data Protection and Digital Information Bill, published on 18 July 2022, would substantially amend key provisions of the law—including Article 22’s automated decision-making rules.

With UK-based organisations continuing to be bound by EU law if they wish to offer services in the bloc, this inconsistency could create additional risks for UK businesses. 

‘Light touch’ regulation and business risk

While neither the UK’s nor the EU’s regulatory framework has yet taken effect, the differences in approach between the two jurisdictions are clear.

The EU’s draft AI Act would impose substantial compliance obligations and a risk of high fines. Businesses wishing to enter the EU market will need to consider these regulatory risks.

But there are risks associated with looser regulation, too, argues Open Rights Group’s delli Santi.

“UK businesses will adopt practices that fit these loose legal and ethical standards, only to find out that these are at odds with or outright illegal within the regulatory frameworks of foreign markets and jurisdictions,” he said.

Delli Santi also suggests that companies could face reputational risks by “being associated with a country” where “bad-faith actors look for shelter and act in defiance of everyone else’s laws and social norms.”

“This is another example of why a pro-growth regulatory regime ought to be a robust, rights-based framework that promotes public trust and, in turn, encourages everyone to embrace rather than reject innovation.”