Niovi Vlachopoulou, Data Protection Consultant, Sopra Steria
Livestreamed on 25 January 2024 as part of Data Privacy Day, Global Privacy Day brings together thought leaders and senior industry professionals to discuss the current landscape of data protection and privacy, and the challenges that professionals face now and in the future.
Niovi Vlachopoulou is a Greek-qualified lawyer currently working as a Senior Data Protection Consultant at Sopra Steria Benelux. She has broad experience in providing comprehensive privacy solutions to European Commission institutions and agencies, and in supporting EU law enforcement authorities to ensure compliance with the GDPR, the EUDPR and the LED.
Below, Niovi talks about her professional journey and introduces some of the key issues arising in her Global Privacy Day session.
- Safeguarding AI Data - Thursday 25th January 2024, 13:30 – 14:00 GMT
Could you briefly outline your career so far?
I studied law at the Aristotle University of Thessaloniki and worked as a lawyer in Greece. I then pursued postgraduate studies at KU Leuven and at the University of Edinburgh, where I specialised in IT law. I have also worked as a legal analyst at Ashurst LLP in Glasgow, on projects involving banking law as well as data protection.
I then moved to Brussels to complete the Blue Book traineeship at the European Commission, in the Directorate-General for Informatics (DIGIT), in an ambitious unit that dealt with policymaking for IT laws and the European Interoperability Framework. After that, I joined the consultancy where I currently work, Sopra Steria, as a Data Protection Consultant.
In my current position, I have mainly worked with public authorities in the EU and in Belgium, as part of larger teams where I am responsible for ensuring privacy by design and assessing IT systems for compliance with the relevant legislation.
Could you summarise some of the common misconceptions surrounding AI and data protection?
As AI technology advances rapidly, people in the privacy world have expressed concerns that AI may be incompatible with established privacy regulations. I think we need to approach the subject differently.
AI should not be perceived as a data protection enemy to be defeated, but as a powerful tool that organisations can use to enhance privacy. AI-driven technologies can help identify and classify personal data, both in batches and in real time. AI can also be used to handle data subjects’ requests, enforce actions such as the deletion of personal data, and apply security measures such as access control and encryption to different sets of data.
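The identify-classify-erase workflow described above can be illustrated with a toy, rule-based sketch. Real AI-driven tools use trained models rather than hand-written patterns, and the category names and regexes here are illustrative assumptions, not any particular product's behaviour:

```python
import re

# Illustrative patterns only; a real system would use a trained classifier.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d \-]{7,}\d\b"),
}

def classify_personal_data(text: str) -> dict:
    """Identify and classify personal data found in a piece of text."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

def redact(text: str) -> str:
    """Enforce an erasure-style action by masking every detected value."""
    for rx in PATTERNS.values():
        text = rx.sub("[REDACTED]", text)
    return text

record = "Contact Maria at maria@example.org or +32 2 123 4567."
print(classify_personal_data(record))
print(redact(record))
```

The same detection step can feed other actions mentioned above, such as routing a record into an access-controlled or encrypted store instead of redacting it.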
From a completely different angle, it is also sometimes believed that AI-generated data, being synthetic, is automatically anonymised and therefore falls outside the scope of privacy regulations. This is another myth worth debunking. If developed with proper safeguards, AI-generated data can indeed be completely anonymous, but this is not always the case.
Sometimes synthetic data preserves the characteristics of the original data: if it is generated by one-to-one mapping, it preserves structural equivalence with the source records. In that case, the AI-generated data can allow re-identification of the original data subjects, making it pseudonymous rather than anonymised.
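A minimal sketch of this re-identification risk, assuming a hypothetical generator that replaces each name with a consistent token (a one-to-one mapping). Because the mapping is deterministic, anyone who can enumerate candidate names can invert it, which is why such output is pseudonymous rather than anonymous:

```python
import hashlib

def tokenise(name: str) -> str:
    # One-to-one mapping: the same input always yields the same token,
    # so structural equivalence with the original records is preserved.
    return hashlib.sha256(name.encode()).hexdigest()[:8]

patients = ["Alice", "Bob", "Alice"]
synthetic = [tokenise(p) for p in patients]

# The structure leaks: repeated originals repeat in the output, and an
# attacker who can guess candidate names can rebuild the whole mapping.
rainbow = {tokenise(n): n for n in ["Alice", "Bob", "Carol"]}
reidentified = [rainbow[t] for t in synthetic]
assert reidentified == patients  # re-identification succeeded
```

Properly anonymised synthetic data would break this link, for example by sampling from an aggregate distribution rather than transforming records one by one.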
What are some of the essential behaviours, mindsets and approaches that organisations should adopt in order to embrace AI technologies in ways that prioritise privacy?
The technology behind AI is not new: most companies have been using machine learning and data analysis in their IT systems to some extent. What is potentially more problematic is that AI relies on the collection of massive amounts of data, and it is not straightforward how that data is processed.
In that sense, organisations should return to “the basics” of privacy: the core principles, including transparency, lawful purpose, accuracy and data minimisation, should be implemented when designing an AI-driven system. The security of the original personal data used to train the AI model needs to be treated as a priority, which is why approaches such as data anonymisation and encryption are important.
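These basics can be sketched in a few lines. The example below is an assumption-laden illustration using only the standard library: it applies data minimisation (dropping fields the model does not need) and keyed pseudonymisation to a hypothetical training record. A real pipeline would use a vetted encryption library and proper key management:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # assumed to be held outside the training pipeline

# Data minimisation: keep only the fields the model actually needs.
TRAINING_FIELDS = {"age_band", "diagnosis"}

def prepare_for_training(record: dict) -> dict:
    minimal = {k: v for k, v in record.items() if k in TRAINING_FIELDS}
    # Keyed pseudonym: without SECRET_KEY the identifier cannot be rebuilt
    # by simply hashing candidate names, unlike a plain unsalted hash.
    minimal["subject_id"] = hmac.new(
        SECRET_KEY, record["name"].encode(), hashlib.sha256
    ).hexdigest()[:12]
    return minimal

raw = {"name": "Maria", "address": "Brussels", "age_band": "30-39", "diagnosis": "A12"}
print(prepare_for_training(raw))
```

Direct identifiers never reach the model, while the keyed `subject_id` still lets records belonging to the same person be linked during training.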
All of these processes need to be written down as internal policies covering everything from data collection, retention and storage to access controls and regular audits, ensuring appropriate data governance throughout the data lifecycle. Especially in the case of large-scale data processing, such policies are essential to build trust and align with society’s ethical values.
Last but not least, all teams dealing with AI technology, from HR to engineering, should have an appropriate allocation of resources, and there should always be human oversight in order to mitigate any risk of discrimination or bias.
This engaging discussion aims to debunk common misconceptions surrounding AI data protection and shed light on the facts. Participants will navigate the intricacies of safeguarding AI-generated data and gain a comprehensive understanding of essential practices.
Whether you’re an AI enthusiast, a data protection professional or simply curious about the intersection of AI and privacy, this session promises to unravel the complexities and provide practical insights for effective AI data protection.
Also on the panel…
- Kristen Pennington, Partner, Privacy Law, McMillan LLP and Member of the PrivacyRules International Alliance (Panel Host)
- Ellie Dowsett, Data Protection Officer, Best Companies