Taking place at RAI Amsterdam on September 27 and 28, #RISK Amsterdam examines the trends and best practices organisations are employing to navigate today’s rapidly evolving risk landscape.
Anastasia Avramenko is Junior Ethics and Compliance Officer at Just Eat Takeaway.com. A Master's graduate in International Technology Law from VU Amsterdam, she has a passion for emerging technologies, investigations and criminal law. She will be at #RISK Amsterdam to discuss the tech phenomenon that everyone's talking about – AI – and the frameworks needed to ensure its ethical and legal use.
- A New Era of AI Governance - Wednesday 27th September, 12:00 – 13:00 (CEST) - GRC & Financial Risk Hub
We spoke with Anastasia about her professional journey and to get an introduction to the themes on the table at her #RISK Amsterdam session.
Could you outline your career pathway so far?
My career began in earnest when I joined the Investigative Committee of the Prosecutor’s Office, where I worked on Tax Crimes and Crimes in the Economic Sphere. This experience gave me the opportunity to observe how defects in the legal system impact vulnerable sections of the population.
My next step saw me working within the Legal Tech department of a bank, creating a “robot-lawyer” for financial monitoring using algorithms and machine learning. However, the project also exposed the potential for harm to customers, as it resulted in violations of the right to privacy. This experience inspired me to obtain my Master’s degree in International Technology Law.
For the last few years, I have been working in the Ethics & Compliance teams of telecom and tech companies. I believe in the spirit of the law rather than its letter, and I always advocate for doing the right thing even when nobody is watching. However, companies often forget that doing the right thing goes far beyond merely following the law. I help create robust, ethics-focused compliance programmes that benefit business, employees and society alike.
What do you see as the primary challenges and opportunities associated with AI governance?
Our world is being continuously revolutionised due to scientific developments and emerging technologies. AI is a revolutionary technology that, unfortunately, also presents a great danger to society and human rights.
To ensure that the development of AI is guided ethically into the future, we need a proper understanding of the challenges and societal consequences caused by the use of this technology. This is not a task for scientists and engineers alone.
AI has already become an inalienable part of everyone’s lives, which means it must be examined through social, political, ethical, cultural and legal perspectives. One of the problems regarding the regulation of all new technologies and AI in particular is the Collingridge dilemma: “Trying to control technologies is difficult… because in the early stages, when they can be controlled, not enough is known about the social consequences; but by the time those consequences become obvious, control becomes costly and slow”.
Another serious problem is the pace problem: technology changes exponentially, while social, economic and legal systems change gradually. The regulatory system works slowly, cannot keep up with new technologies, and therefore risks inhibiting development. This presents a major challenge for AI regulation that is very difficult to solve.
Are there stand-out strategies or approaches that organisations should be adopting as they drive responsible and ethical use of AI tech?
It’s highly recommended that companies adhere to the principles outlined in the UNESCO Recommendation on the Ethics of AI. Even though the document is not legally binding, following it will help companies significantly when implementing AI tools. It will ensure they are well placed for upcoming AI legislation, and create a foundation for the ethical and responsible use of AI that safeguards not only the company, but also its clients and society.
To make sure these principles are followed, proper oversight is crucial, and AI risk and impact assessments are important components of that oversight. Although the two are closely related, and it may seem redundant to carry out both, it is not: while a risk assessment focuses on identifying and analysing threats and vulnerabilities, an impact assessment goes much further, considering the positive and negative implications of the technology for people and their environment.
Unfortunately, there is no universally accepted model for AI risk and impact assessments. However, there are plenty of materials and recommendations that companies should consider implementing in order to achieve an ethical and responsible model for the use of AI. For example, ISO/IEC 23894 on AI and Risk Management can assist organisations with integrating risk management into their AI-related activities and functions.
But keep in mind that AI is developing very rapidly, and this topic will continue to evolve as ever more attention is paid to the accountability of businesses for the impacts of AI on human rights and society.
Don’t miss Anastasia Avramenko exploring these issues in depth in the #RISK Amsterdam panel: “A New Era of AI Governance.”
The session deep-dives into the challenges and opportunities associated with AI governance, and proposes strategies for addressing them in the coming years.
Attendees will learn ideas and strategies for promoting the responsible and ethical use of AI technology, and for addressing potential challenges in the governance of AI.
Also on the panel
- Magdalena Rzaca, GDPR & IPR Legal Advisor, GÉANT
- Georgio Mosis, Next Generation AI and Data Lead, Philips
- Udi Cohen, CEO and Co-founder, Vendict
- Session: Day 1, A New Era of AI Governance
- Theatre: GRC & Financial Risk
- Time: 12:00 – 13:00 (CEST)
- Date: Wednesday 27 September 2023
#RISK Amsterdam is also available on-demand for global viewing.