This week’s guests discuss abolishing Clearview AI, the paradigm shift in synthetic media, and the impact of Black Lives Matter on facial recognition technology.
Weekly news edit
Joining Joe Tidy for this episode’s weekly news segment was Stephanie Hare, author of Technology Ethics. Discussing AI and synthetic media were Nina Schick, political broadcaster and author of Deepfakes: The Coming Infocalypse, and fellow panellists Victor Riparbelli, tech entrepreneur and CEO of Synthesia, and Henry Ajder, a leading deepfake researcher.
At the top of the agenda in the PrivSec Space this week was Privacy International and Max Schrems’ complaints to the European data protection authorities regarding alleged data scraping by Clearview AI.
In support of Schrems and his other recent efforts to target cookie banners, Hare said, “I think what it shows is that we can’t really look to regulators for privacy or data protection; they are much more useful in antitrust.”
She added, “[Max Schrems] is probably our best hope because nobody seems to be doing anything.”
In what became a common thread throughout the episode, Hare argued that banning certain AI technologies altogether is certainly possible.
“It is easier to shut Clearview AI down, and anything like it, but the police probably find it a really useful tool, so you will get blowback,” Hare said.
In reference to the recent news that UK patients have until June 23 to opt out of having their health data scraped into a new database and shared with third parties, Hare said, “It’s a total pandemic reaction.” Arguing that there should have been a public consultation, she said, “deanonymisation is frighteningly easy” and that “this does not build trust.”
Max Schrems will be delivering the Keynote speech at PrivSec Global on June 24 at 4pm BST.
Register now and hear Max Schrems share his views on his work, the state of EU data protection, and whether the EU-U.S. data flows issue can ever be resolved.
In Conversation with Nina Schick
Exploring the bigger-picture issues through a lens of geopolitics and cybersecurity, Schick and her fellow panellists discussed the potential of AI and synthetic media.
“There has been a paradigm change,” argued Victor Riparbelli. “Media production is going to move from something we record with a camera in the real physical world to something we can make on a computer.” This paradigm shift will allow creators to access typically inaccessible rich media, which he argued will “unlock a wealth of creativity and opportunity.”
However, with that comes the real risk of synthetic media being weaponised by bad actors or even citizens against other citizens, argued Schick.
Aside from intimate image abuse, which Henry Ajder said does not need to be accurate to be harmful, “it is important not to overstate the sophistication of these tools,” he said. “It’s getting easier to make one but not easier to make a good one.”
Additionally, in reference to the belief that deepfake media will cause “disinformation on steroids,” Ajder argued that research has actually shown only a marginal difference in believability, at present, between showing someone a deepfake and showing them a fake article.
Making a similar point, Riparbelli said, “it is enough to cause a lot of damage,” though he is less worried about deepfakes’ impact on politics than about the real threat they pose to society through revenge porn, bullying, and blackmail.
The way forward, according to Ajder, is developing the concept of authenticity in the digital age.
Is the cat out of the bag with AI?
Joe Tidy was then joined by a panel to discuss whether the cat is out of the bag with AI: Ivana Bartoletti, founder of Women in AI and Technical Director at Deloitte; Megan Marie Butler, a subject matter expert in AI in HR and the future of work; and Rogelio Aguilar Alamilla, Deputy Head of Privacy Operations at BNP Paribas and Member of the European AI Alliance.
As Rogelio Aguilar Alamilla pointed out, “there has been a shift in mood around AI around the world.” The EU, for example, published its first attempt to regulate AI in April, which would place requirements on products built on high-risk AI technology.
“AI is far more than technology. I fear the use of facial recognition in public – that doesn’t make me a luddite […] I just want some morality,” said Bartoletti.
She added, “I do not think artificial intelligence can be regulated at a national level or even a European level […] I want to see some global movement on this as well.” She asked, “do we want to take surveillance to the next level?”
Megan Butler disagreed with the idea that you cannot or should not “uninvent a technology”, adding, “that’s innovation. […] As you go, you need to mature.”
Noting the socio-political context in which IBM’s facial recognition technology faced a moratorium, Bartoletti said, “Black Lives Matter brought to life the link between facial technology and racism.”
“It’s not only a technical issue,” said Aguilar Alamilla. “We assume that just because it was tested well in one controlled situation, it can be released out into the world.”
Episode 3: Kleptocracy and Virtual Assets will air on June 8, when Joe Tidy will be joined by Dr Justine Scerri Herrera, from the Ministry of Finance Think Tank and Partner at MK FinTech; Tiago Severo Gomes, Partner at Caputo, Bastos e Serrs Advogados; and Pamela Calaquian, Director Compliance and Controls Asset Servicing and Digital at BNY Mellon.
To register to watch GRC TV Episode 3, click here.
To watch GRC TV Episode 2 on-demand, click here.
Register for PrivSec Global 2021, here.
Register for FinCrime World Forum 2021, here.