Italy’s data privacy regulator has imposed a €50,000 fine on Trento for breaching data protection laws in the northern city’s use of artificial intelligence (AI) in urban surveillance schemes.

The penalty came with the stipulation that Trento erase the data harvested in two projects that were financially backed by the European Union, and it marks the first time a local administration has been sanctioned by the GPDP authority over its use of data from AI technologies.

Following an exhaustive examination of Trento’s projects, the GPDP identified “multiple violations of privacy regulations”, though the watchdog acknowledged the municipality had acted with good intentions. It also found that Trento did not go far enough with data anonymisation and that data was improperly disclosed to third parties.

Trento has said that it will appeal the action taken against it, stating: “The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security”. 

The case comes during Italy’s G7 presidency, and the Italian prime minister, Giorgia Meloni, is under pressure to manage the transition to AI carefully, especially when it comes to protecting data subjects’ rights and safeguarding privacy.

At the end of last year, EU authorities reached provisional terms for regulating AI platforms, inching nearer to establishing comprehensive rules for evolving technologies. A significant point of contention revolved around the deployment of AI in biometric surveillance.

Know the risks

The Italian regulator’s decision to fine Trento underscores the task that both lawmakers and organisations face in ensuring the safe use of AI for large-scale data analysis.

The topic comes under scrutiny next month at #RISK Digital, where experts will debate the risks and opportunities that come with AI integration into business operations.

Not to be missed at #RISK Digital

ChatGPT: the data privacy nightmare? Part 2

  • Date: Tuesday, 13th February 2024
  • Time: 09:30 AM - 10:00 AM GMT

Our distinguished panel of AI enthusiasts and experts delves into the intriguing world of ChatGPT in a lively discussion. This panel goes beyond the code to explore the capabilities, applications and ethical considerations surrounding ChatGPT’s language generation.

Our panellists will demystify the intricacies of ChatGPT’s training, emphasising the importance of privacy in the development and utilisation of AI language models.

Join us for an immersive discussion on the evolving role of AI language models, the ethical dimensions of language generation, and the measures taken to ensure user privacy.


“ChatGPT: the data privacy nightmare? Part 2” is just one of the exclusive sessions taking place at #RISK Digital this February.

Click here to see the full agenda

Streaming live on 13th February 2024, #RISK Digital will examine the changing risk landscape in a content-rich, knowledge-sharing environment. Attendees will learn how to mitigate risks, reduce compliance breaches and improve business performance.

Click here to register for free for #RISK Digital