Taking place at RAI Amsterdam on September 27 and 28, #RISK Amsterdam explores the trends and best practices organisations are employing to navigate today’s rapidly evolving risk landscape.
Alexandru Gheorghe is a lawyer and Data Protection & Privacy Consultant and Cybersecurity Program Implementer at Inperspective. He leverages over 15 years’ industry experience and is passionate about online business and e-commerce. Alexandru will appear exclusively at #RISK Amsterdam to discuss the implications of the EU’s AI Act.
- The EU’s Game-Changing AI Act: What it means and Where it’ll take us - Thursday 28th September, 11:00 - 12:00pm (CEST) - Privacy & Security Hub
We spoke with Alexandru about his professional journey and to get an introduction to the themes on the table at his #RISK Amsterdam session.
Could you briefly outline your career pathway so far?
Sixteen years ago, after I finished law school and a Master’s degree, I got hired by one of the major banks in Romania – part of the French group Société Générale – as a Legal Advisor. The 2008 Financial Crisis came, and I felt like I had a front-row ticket to the show as the banks prepared for impact. The bank’s director was a very clever guy and I had direct access to him, so I observed and learned a lot from him: “When you have to make a big decision, involve several people. In case things go south following the decision, you won’t be the only person to blame.”
Then I saw the continuation of the crisis’ effects on the market from the inside of a debt recovery company, where I got hired as a corporate legal counsel. Afterwards, I joined a Swiss-owned e-commerce company named Fashion Days, when e-commerce was just emerging in South-Eastern Europe. This was in 2010. I coordinated the legal department of this company through its rapid growth: within six years, it had over 600 employees and was doing business in seven European countries. When e-commerce took off, professions changed – they became more technical, as IT developers were stepping more and more into the light.
The legal profession changed with e-commerce, so I knew I had to make a switch, but I didn’t know what the switch was. I left Bucharest and went to Dublin, Ireland, where I lived for half a year. There I met a solicitor who pointed me towards GDPR. I didn’t know what GDPR was, so I started doing some research. This was early 2017, and Ireland, just like the UK, was taking GDPR seriously. And then I knew: GDPR was the change I was looking for. I returned home to Bucharest after six months and founded a company that same year. In 2017, I was one of the first few in Romania to provide GDPR-related consultancy for companies obliged to implement the Regulation’s requirements in their data processing activity.
In 2019, I relocated to Vienna, Austria. I worked from a start-up co-working space, where I got to meet a lot of interesting people with different sets of values, and I got close to the Austrian start-up environment. I lived in Vienna for two and a half years; when the pandemic came, I was there and I stayed. Over these last three years, the privacy profession has changed a lot, through the pandemic and the war in Ukraine. The world went through a medical shock; we saw OpenAI’s release of ChatGPT, which brought with it the need to assimilate new types of risk. Cybersecurity steadily crossed into the privacy professional’s discussion, and Artificial Intelligence brought in a lot of confusion.
I had to understand all these challenges to my legal profession. Therefore, I got certified as a DPO. I was interested in understanding cybersecurity, so I got a certification as a Cybersecurity Program Implementer under ISO 27032, and I obtained an Artificial Intelligence Foundations certification. I have participated in almost 50 panels over the last three years, giving talks on Privacy, Cybersecurity and AI, and recently I was accepted as a PhD student at the Faculty of Philosophy of the University of Bucharest, where I will study Applied Ethics with a strong focus on the ethics of new technologies.
How will AI enhance due diligence processes in the financial services sector?
AI in the financial sector scares me, because AI will predict more and more what people want and offer products that stimulate people’s illusion of happiness. AI is used for task automation, for delivering personalised recommendations based on predictive analytics, and it can detect fraud much more accurately.
Traditional banking needs have transitioned to online and mobile banking, demanded more and more by millennials and the digitally native Gen Z. Financial companies will use chatbots to offer 24/7 customer interaction, guidance on platform and app functionality, and insights for wealth management solutions (people want to save or invest money), so financial investment advice as well.
This means that money decisions will become mathematical, data-based decisions – devoid of instinct and the flair for “reading the air”. And not all decisions have precise, mathematical results, so people will use AI to flip a coin. They will accept the result, and they will blame AI (not themselves) if the result is bad.
To answer the question, AI itself will enhance due diligence if the model offers advice and consultancy based on the risk profile of every user. How will the AI legal framework generate internal standards for AI developers and AI users? We will see once the AI Act is published.
In my opinion, risk profiling is key. I don’t like to gamble. Therefore, I minimise risk as much as I can, based on the data I have, my set of values, my beliefs, and my perceptions of reality. I choose to position myself towards risk in this way. But I know people who feel adrenaline when they lose. They don’t spend much time trying to understand what went wrong in the process; they flip the coin again, they place a new bet.

Therefore, the way I like to spend my money is very different from that of someone who likes to take risks. I evaluate things differently, and this gets us back to the question of the due diligence process, which is different from one person to another, from one company to another.
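The idea that due diligence should scale with a user’s risk profile can be illustrated with a minimal sketch. Everything here is a hypothetical toy model: the profile categories, thresholds, and check names are illustrative assumptions, not anything drawn from the AI Act or a real compliance framework.

```python
def risk_profile(volatility_tolerance: float, loss_reaction: str) -> str:
    """Classify a user into a coarse, illustrative risk profile.

    volatility_tolerance: assumed scale from 0.0 (risk-averse) to 1.0 (risk-seeking).
    loss_reaction: how the user responds to a loss ("analyse" or "re-bet").
    """
    if volatility_tolerance < 0.3 and loss_reaction == "analyse":
        return "conservative"
    if volatility_tolerance > 0.7 and loss_reaction == "re-bet":
        return "aggressive"
    return "balanced"


def due_diligence_checks(profile: str) -> list[str]:
    """Return an illustrative set of checks scaled to the profile."""
    base = ["identity verification", "source-of-funds check"]
    if profile == "aggressive":
        # In this toy model, risk-seeking users trigger extra scrutiny.
        base += ["affordability assessment", "enhanced monitoring"]
    return base


# A risk-seeking user who re-bets after losses gets the extended checklist.
print(due_diligence_checks(risk_profile(0.9, "re-bet")))
```

The point of the sketch is structural: the same due diligence pipeline produces different obligations for different profiles, which is what makes the process "different from one person to another".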
In what ways are companies using emerging technologies to optimise their practices and shore up security?
It is a very good question that companies should ask themselves before using emerging technologies like AI. Companies will want to use the technology to improve production processes; improve search engine optimisation; drive cybersecurity and fraud management; write content in other languages; provide services in different languages; craft internal communications; improve customer relations; build digital assistants; or offer personalised product recommendations.
But with this come different hurdles, such as poor development processes, the risk of a data breach, poor security in the AI app itself, data leaks that expose confidential corporate information, insufficient research into the company behind the app, malicious use of deepfakes, and so on.
Any interaction between the machine and the human being is risky. From what I see around me, companies are still reluctant to use AI. In most cases, a few employees in the company started to use it because they were curious, so it was not the company’s decision. Companies were “forced” to take a look at some of the popular AI platforms and services (Midjourney, ChatGPT, Stable Diffusion, DALL·E, PaLM, Bard) because the “demand” came from the inside. And this is how the security risks I’ve mentioned came into play.
Therefore, in my opinion, based on how companies are reacting to AI right now, they are risking the security of their confidential information and of the personal data they process, because there is no actual law in place to regulate the use of AI and align developers and users of new technologies to a common set of standards. Further, there is not enough understanding of AI to use it in a secure manner. This is where we are now.
Recently passed by the European Parliament, the AI Act is designed to shape the future of artificial intelligence. It addresses critical aspects of AI technology, including ethics, safety, transparency, and accountability.
The session aims to shed light on the Act’s implications and the profound impact it will have on AI development and adoption within the EU and beyond.
The event unites thought leaders and subject matter experts for a deep-dive into organisational approaches to handling risk. Content is delivered through keynotes, presentations and panel discussions.
- Session: Day 2, The EU’s Game-Changing AI Act: What it means and Where it’ll take us
- Theatre: Privacy & Security Hub
- Time: 11:00 – 12:00pm (CEST)
- Date: Thursday 28 September 2023
#RISK Amsterdam is also available on-demand for global viewing.