Regulators in Europe continue to come up against challenges as they push to finalise new laws governing artificial intelligence (AI), with any agreement now looking unlikely until at least December.
The draft AI regulations, crucial for shaping the future of AI technology, are currently under discussion between the European Parliament and EU member states. Despite three prior meetings, talks are still encountering significant hurdles.
In a pivotal fourth meeting taking place this week, lawmakers will look more closely at foundation models like OpenAI’s ChatGPT and high-risk AI systems.
Spain, which currently holds the EU presidency, has been leading efforts to expedite the process and has put forward a number of proposed solutions. Among its compromises, the country has advocated a tiered approach to regulating foundation models, focusing on those with over 45 million users.
Furthermore, Spain has recommended rigorous vetting processes for Very Capable Foundation Models (VCFMs), such as ChatGPT, to uncover potential vulnerabilities.
Those in opposition argue that smaller platforms can be just as dangerous, with some emphasising the need for comprehensive regulations. While Spain has consulted with other EU nations on potential compromises, a definitive agreement is unlikely to emerge in the upcoming meeting.
If an agreement remains out of reach in December’s fifth gathering, negotiations could spill into early next year, potentially complicated by the EU parliamentary elections in June.
Despite the ongoing difficulties, EU lawmakers, including industry chief Thierry Breton and AI Act co-rapporteurs Dragoș Tudorache and Brando Benifei, have expressed optimism about approving the draft before the end of 2023.
Know the risks
Since work began in 2021, the EU's AI Act has passed several critical junctures, with significant issues arising around biometric surveillance, facial recognition, and rules for AI applications. The current proposals aim to classify AI tools by risk level, imposing varying obligations on governments and companies accordingly.
The EU now faces a complex challenge in balancing technological advancement with regulatory safeguards, shaping the future landscape of AI within its borders.
These issues fall into focus at PrivSec Global on 29 and 30 November, where experts will address AI laws, principles and ethics in the following exclusive sessions:
→ Technology: The problem with facial recognition regulation
- Day 2: Thursday 30th November 2023
- 10:00 - 10:45am GMT
Facial recognition technology is now commonly used on most personal devices and in criminal investigations, making it one of the most advanced technologies of recent years. Yet even after the European Parliament recently passed its version of the Artificial Intelligence Act, facial recognition remains one of artificial intelligence's riskiest applications.
Because facial-recognition tools are prone to bias, lawmakers are prepared to fight to avoid any risk of mass surveillance. Perhaps the solution lies not in banning facial recognition or any one type of biometric, but in regulating how we consent to the sharing and tracking of identifying data.
→ Ethical AI in principle: Innovation overtaking human rights?
- Day 2: Thursday 30th November 2023
- 12:30pm - 1:15pm GMT
The intersection of AI and privacy continues to attract attention, with a focus on ensuring that AI systems respect individual privacy rights and avoid discriminatory practices.
As technology races forward and regulatory efforts strive to catch up, the question emerges: Are we shaping a sustainable ethical future amid rapid advancement?
Discover more at PrivSec Global
As regulation gets stricter – and data and tech become more crucial – it’s increasingly clear that the skills required in each of these areas are not only connected, but inseparable.
Exclusively at PrivSec Global on 29 & 30 November 2023, industry leaders, academics and subject-matter experts unite to explore these skills and the central role they play in privacy, security and GRC.