According to a new report, OpenAI is still not putting enough effort into reducing the number of inaccuracies in its ChatGPT chatbot, resulting in a failure to comply with EU data rules.

The findings, made by a task force at the EU’s privacy watchdog, state:

“Although the measures taken in order to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle.”

In 2023, Europe’s collective of national privacy regulators set up the task force to bring greater scrutiny to ChatGPT. The move came after concerns surfaced within Italy’s regulatory authority about user experiences of the popular AI service.

According to the task force, investigations led by national privacy regulators in several member states are still underway. It has not yet been possible to present a comprehensive overview of the outcomes, but the findings should be seen as a ‘common denominator’ among the national authorities, experts say.

“As a matter of fact, due to the probabilistic nature of the system, the current training approach leads to a model which may also produce biased or made-up outputs,” the report continued.

“In addition, the outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy,” the report added.

Know the risks

One of the fundamental principles of the EU’s data protection regulations is data accuracy. As businesses embrace AI-driven tools and other emerging technologies, it has never been more important for data practitioners to understand how to process information compliantly and ethically.


The issues take centre stage at #RISK Digital this summer.

Not to be missed at #RISK Digital

Understanding the Business Potential for Generative AI using the new Cs framework

Time: 13:15 – 13:45 GMT

Depending on who we listen to, the world is either rushing towards or away from generative AI. Most of this seems to be based on the fear of missing out, rather than any concrete critical analysis.

A group of MSc students at the University of Derby in the UK have helped to develop a novel analytical framework that will help businesses fully understand the challenges and consequences of using generative AI in their process workflows.

It will also help businesses to develop governance and compliance policies. It is based on 16 words beginning with the letter “C”.

This new framework has similarities to the Vs of big data from the early 2000s but is specifically tailored to the generative AI context.


  • Richard Self, Senior Lecturer in Governance of Advanced and Emerging Technologies, University of Derby


Promoting a secure-by-design culture in AI

Time: 13:45 – 14:15 GMT

Unlock the secrets to fostering trust and confidence in artificial intelligence with our engaging panel discussion on promoting a secure-by-design culture. As AI permeates every facet of our lives, ensuring the security and integrity of AI systems is paramount. Join our esteemed panel as we delve into the principles, practices, and challenges of embedding security into the DNA of AI from inception.

From threat modelling and risk assessment to secure development methodologies and ongoing monitoring, we’ll explore strategies for integrating security considerations throughout the AI lifecycle.

On the panel

  • Erin Nicholson, Global Head of Data Protection & Privacy, Thoughtworks
  • Khagesh Batra, Head of Data Science, The Adecco Group
  • Dalit Ben-Israel, Partner, Head of IT and Data Protection Practice, Naschitz, Brandes, Amir & Co


These are just two of the exclusive sessions taking place at #RISK Digital this July!

Click here to see the full agenda

Discover more at #RISK Digital

Streaming live on Wednesday 3 July, #RISK Digital examines the changing risk landscape in a content-rich, knowledge-sharing environment.

Attendees will be able to learn how to mitigate risks, reduce compliance breaches, and improve business performance.

Click here to register for free for #RISK Digital