Streaming live November 29 and 30, PrivSec Global unites experts from both Privacy and Security, providing a forum where professionals across both fields can listen, learn and debate the central role that Privacy, Security and GRC play in business today.
Emmi Bane, B.Sc., MPH, CIPP/US, is Senior Privacy Program Manager at HP. Ms. Bane earned her Bachelor of Science at Portland State University, majoring in Sociology, Psychology and Micro-Molecular Biology, and her Master’s degree in Public Health Genetics at the University of Washington.
Ms. Bane’s current research interests include the ethical, legal and social issues attendant on data privacy, international research ethics, and the intersection of technology, research and clinical care.
Below, Emmi Bane talks about her professional journey and introduces themes of her PrivSec Global session.
Ethical AI in principle: Innovation overtaking human rights? - Day 2, Thursday 30th November, 12:30 – 13:15 GMT
Could you outline your career pathway so far?
My journey to this point has been an unorthodox one! I started my career in research and medical ethics, where I studied consent and community-based participatory learning. My graduate degree is in Public Health Genetics, which is effectively the exploration of how genomic technologies impact populations.
I got my start in what might be considered a more conventional privacy career while working as a HIPAA (Health Insurance Portability and Accountability Act) compliance officer for a tech app; I became deeply concerned by the nature and sheer volume of data that are effectively unregulated by privacy laws (N.B. in the US, only certain kinds of organizations are regulated by HIPAA, leaving the protection and governance of other sensitive data to a patchwork of state and federal privacy laws).
Since then my research has focused on the sociology of privacy, and the impact that technological phenomena (such as AI) have on our collective expectations surrounding consent.
What are the primary security risks that AI systems pose to global businesses?
In my opinion, the biggest threats are our unknown unknowns, and by this I do not necessarily mean the ultimate existential meltdown of a hostile sentient robot (although panicking about that can still be on the table, if you like). My biggest concern is that we don’t know exactly what we are dealing with, nor do we fully understand how our oversights and biases might be shaping our results.
Besides the structural concern that our new “insights” will ultimately evolve into third-order simulacra, the sheer volume of data being thrown at training models is a cause for alarm simply because of the potential for incidental inclusion of damaging data.
“Dirty data” and other agnostic or unfiltered data sets can contain all types of information: trade secrets, contracts, data governed by third-party contracts (entities subject to umbrella laws like HIPAA and the GDPR are especially at risk here), restricted or confidential IP, and internal production code. Every unintended inclusion or combination is a potential security or privacy risk, and the more data you process, the greater the potential for incidental risk.
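As a minimal illustration of the kind of pre-training hygiene this implies (a hypothetical sketch with made-up patterns and records, not any particular vendor's pipeline), a simple pattern-based screen can flag candidate training records that look like credentials, identifiers, or restricted markings before they reach a model:

```python
import re

# Hypothetical patterns for data that should never reach a training set.
# Real pipelines use far richer detectors (entropy checks, classifiers, allow-lists).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marking": re.compile(r"\b(?:CONFIDENTIAL|TRADE SECRET|INTERNAL ONLY)\b"),
}

def flag_sensitive(record: str) -> list[str]:
    """Return the names of all sensitive patterns matched in a candidate record."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(record)]

records = [
    "User asked about the weather in Portland.",
    "Deploy with token sk_live9f8a7b6c5d4e3f2a1b0c",
    "CONFIDENTIAL: Q3 roadmap attached. SSN on file: 123-45-6789",
]
for r in records:
    print(flag_sensitive(r))
```

The point of the sketch is the scale problem the interview raises: each pattern is cheap to check, but no finite list of patterns can anticipate every damaging inclusion, and the false-negative rate compounds with corpus size.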
LLMs training on data originated by other LLMs can magnify bias and error in ways we may fail to detect. It has also been demonstrated that you cannot instruct a model to fully “de-learn” training data that have been processed. On top of this, we have no idea what implications or derivations may be possible from these data sets in the future, creating potential vulnerabilities and harms arising from new insights or conclusions indefinitely.
What challenges do private and public sectors face as organisations try to innovate with AI while upholding privacy?
A major challenge all organizations are likely facing to varying degrees is the mounting pressure to deliver immediate results versus the need to analyze long-term consequences and risks. It is easy to demonstrate immediate benefit from novel technology, but far more difficult to assess the longitudinal societal and existential threats of uses that have only been around a short time.
Beyond not knowing what incidental inclusions of personal data might be in training sets, we can’t always tell what data might become a privacy risk in the future. We build so many complex algorithms to predict outcomes that every part of a data constellation can be revealing.
The more information we have, the less feasible it becomes to decouple it from an individual, or from identifiable profiles of an individual. The inferences we are able to make with the addition of each data element balloon to the point that we may ultimately be unable to control the impact.
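A toy sketch of that dynamic, using entirely hypothetical "de-identified" records, shows the well-known quasi-identifier linkage problem: each attribute an observer learns shrinks the set of people a record could belong to, until it is effectively unique.

```python
# Hypothetical "de-identified" records: no names, only attributes.
people = [
    {"zip": "97201", "year": 1985, "gender": "F", "device": "tablet"},
    {"zip": "97201", "year": 1985, "gender": "F", "device": "phone"},
    {"zip": "97201", "year": 1990, "gender": "M", "device": "phone"},
    {"zip": "98101", "year": 1985, "gender": "F", "device": "phone"},
]

def matches(known: dict) -> int:
    """How many records are consistent with the attributes an observer knows?"""
    return sum(all(p[k] == v for k, v in known.items()) for p in people)

# Each additional data element narrows the candidate set.
print(matches({"zip": "97201"}))                                   # 3 candidates
print(matches({"zip": "97201", "year": 1985}))                     # 2 candidates
print(matches({"zip": "97201", "year": 1985, "device": "phone"}))  # 1 -> unique
```

At real-world scale the effect is far stronger: with enough attributes, almost every record in a large data set becomes a singleton, which is the decoupling problem the passage describes.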
The intersection of artificial intelligence and privacy continues to attract attention, with a focus on ensuring that AI systems respect individual privacy rights and avoid discriminatory practices.
As technology races forward and regulatory efforts strive to catch up, the question emerges: Are we shaping a sustainable ethical future amid rapid advancement?
Get to the heart of the matter, only at PrivSec Global.
Also on the panel:
- Jessica Figueras, CEO, Pionen
- Nicolette Nowak, Vice President Legal, AGC & DPO, Beamery
- Tainá Baylão, Senior Specialist Data Protection, Infineon Technologies
- Gal Ringel, CEO and Co-founder, MineOS
- Session: Ethical AI in principle: Innovation overtaking human rights?
- Time: 12:30 – 13:15 GMT
- Date: Day 2, Thursday 30 November 2023
Discover more at PrivSec Global
As regulation gets stricter – and data and tech become more crucial – it’s increasingly clear that the skills required in each of these areas are not only connected, but inseparable.
Exclusively at PrivSec Global on 29 & 30 November 2023, industry leaders, academics and subject-matter experts unite to explore these skills and the central role they play within privacy, security and GRC.