We are delighted to confirm that Data Privacy and Digital Law expert Petruta Pirvan will speak at #RISK A.I. Digital this month.

 Livestreaming March 20 and 21, #RISK A.I. Digital examines how organisations can harness AI technologies responsibly, ethically and safely.

Across two days, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI.

Event speaker Petruta Pirvan is Founder & Legal Counsel, Data Privacy and Digital Law at EU Digital Partners. A specialist in data protection implementation, management and educational programmes, Petruta draws on more than 15 years of industry experience.

Petruta will be at #RISK A.I. Digital to discuss how organisations can use AI effectively while safeguarding data privacy.

Below, she goes over her professional journey and takes a look at the key issues.

Bridging the Gap Between AI and Privacy Regulations 

  • Time: 17:30 – 18:00 GMT
  • Date: Wednesday 20th March (Day 1)

Click here to register for free for #RISK A.I. Digital

Could you outline your career pathway so far?

I’m a lawyer specialising in the interpretation of international data protection legislation, with more than 15 years of practice in my profession. I’m a member of the International Association of Privacy Professionals, a Fellow of Information Privacy (FIP), a Certified Information Privacy Manager (CIPM) and a Certified Information Privacy Professional for Europe and the US (CIPP/E & CIPP/US).

I hold course certifications in AI Ethics and Digital Policies from, among others, the University of Helsinki.

I have long acted as in-house data privacy counsel for top multinational companies such as Oracle and Accenture, focusing on conducting privacy-by-design reviews for client-facing services, offerings, projects, products and initiatives, and incorporating data privacy principles into them. While working in Denmark as a Data Privacy Compliance Officer for the shipping giant A.P. Moller-Maersk, I led the Global Data Privacy Compliance Programme, setting up the group’s data privacy practices, guidance and policies.

My work in privacy compliance has taken me to live in several countries, including Denmark, Ireland, the Netherlands and Romania. I’m currently based in Bucharest, working as Legal Counsel, Data Privacy and Digital Law for global companies under my own brand, EU Digital Partners (https://eudigitalpartners.com/).

At the same time, I act as a Data Privacy Consultant/Head of Privacy and Compliance at the Dutch company Wrangu B.V.

Can you outline some of the key challenges in reconciling AI with burgeoning AI regulations, such as the EU’s AI Act and the UK’s DPDI Bill?

In the quest to regulate AI, a multitude of laws, regulations and standards has been put forward, currently at different stages of adoption.

To give a few examples: the EU AI Act, the US NIST AI Risk Management Framework, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the FTC Compulsory Process for AI-related Products and Services, and the Chinese Interim Measures for the Management of Generative Artificial Intelligence Services.

This complex legislative environment creates a sense of untidiness and a lack of harmonisation, especially in a world intrinsically linked by economic ties and equally affected by the rapid development of AI, which is poised to be one of the most impactful technologies in decades.

One of the results, and a key challenge, is that AI impacts will vary across geographies, as these are shaped by local cultures, economic disparities, and different social norms. We want to build AI systems that bridge the digital divide and create an inclusive society, instead of making the gaps greater. 

From a legal standpoint, imagine how complicated it is for a global organisation to draft a universal AI ethics rulebook underpinning a global AI governance programme.

What steps can organisations take to remain proactive in their compliance strategies while meeting the requirements of AI regulatory frameworks?

Organisations should act now to set up AI governance and AI compliance frameworks.

A leaked version of the final AI Act is available and can be used to anticipate the legal foundations for trustworthy AI. That, in conjunction with the earlier guidelines and assessment list for trustworthy AI developed by the High-Level Expert Group (HLEG), should be of paramount help.

In addition, existing or emerging risk management frameworks such as ISO/IEC 42001:2023 or the US NIST AI Risk Management Framework can guide organisations in systematically addressing and controlling risks related to the development and deployment of AI systems, and may ultimately serve as a cornerstone for subsequent compliance work.

Finally, there are certain key characteristics which support AI trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security, and risk mitigation. I would argue, therefore, that these are the guiding principles to consider in identifying AI risks that can be addressed or mitigated while ensuring compliance with the AI regulatory frameworks.

Don’t miss Petruta Pirvan exploring these issues in depth at #RISK A.I. Digital in the session:

Bridging the Gap Between AI and Privacy Regulations


Details

Bridging the Gap Between AI and Privacy Regulations

  • Time: 17:30 – 18:00 GMT
  • Date: Wednesday 20th March (Day 1)


The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming on March 20 and 21.

Discover more at #RISK A.I. Digital

AI is a game changer, but the risks of generative AI models are significant and consequential.

The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.

Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.

Click here to register for free for #RISK A.I. Digital