We are very happy to announce that “The Privacy CIO” Martin Gomberg will speak at #RISK Digital this month.

Martin Gomberg, a.k.a. “The Privacy CIO”: author of CISO Redefined, consultant, and founding member of The Privacy Panel

Livestreaming on 13 February 2024, #RISK Digital examines the changing risk landscape in a content-rich, knowledge-sharing environment. The one-day event sees over thirty expert speakers provide insight and guidance on how organisations can mitigate risk, reduce compliance breaches, and improve business performance in the digital age.

Author of CISO Redefined (Cyberite, 2018), Martin Gomberg is a founding member of The Privacy Panel and was CIO for a global media network for nearly two decades.

A CISO and global director of security, privacy, and business protection, Martin headed global technical strategies for a major bank, and was Vice Chair of a US State Department Overseas Security Advisory Council Industry Group.

Martin will be at #RISK Digital to discuss the 2023 data breach suffered by genetic testing company, 23andMe, which affected the personal data of nearly 7 million people.

Below, Martin answers questions on his professional journey and discusses the key concepts and implications of his #RISK Digital session.

  • 23andMe… and everyone else: Tuesday, 13th February 2024, 09:00 – 09:30am GMT




What can people do to better protect their online information as AI becomes more integrated into our lives, and what can businesses do to better protect themselves and avoid being the next headline?

Perhaps it differs a bit for us in the US compared with our friends in Europe, the UK, and elsewhere, but it seems to me that most people you ask will say they care very deeply about their privacy, at least until it becomes an inconvenience, denies them something they want, or someone asks them to disclose more than they should and they don’t want to make an issue of it or seem impolite.

If offering our name, email address, and a few choice facts about ourselves and the things we like gets us to the shorter coffee line, greeted by first name, our name scribbled on the side of our coffee cup, most of us will be happy to provide it.

We fill out applications that ask for our Social Security numbers (government-issued ID numbers in the US) without any clear purpose. Worse, for job applications, before an offer is even made, a disclosure with no benefit to us. We give away far too much, far too easily, often without questioning or understanding the implications of the information we hand over.

And so, if spitting in a tube is the means to learn where we are from, all of those to whom we are related, known and potentially unknown, and the likelihood that we might carry some life-impacting genetic affliction, or be related to someone else who might, it is easy to understand why we do it.

But we trust too much. We assume that all companies protect us and care for the information we provide about ourselves in the way the best of companies do, or so we hope. We assume they all meet the requirements of law and comply with regulations in the same way or to the same degree, which of course they do not.

Believing that would be naïve on our part. Far too many companies are unequipped and fail to meet even the minimal standards we would expect of a modern business, whether for lack of awareness or understanding, lack of capability or resources, or simply a lack of willingness to make the expenditures.

Even some of the biggest and best-resourced of companies, multinational name brands, compelled by the most stringent of regulations, holding the most data, managed by the most complex of organizations with the broadest array of defences, do lose track of the data they hold, how it is being processed, and the consistency of its protection across the different parts of their massive enterprises.

Unfortunately, in our day-to-day relationships with the businesses we choose to use, businesses of any size, their proficiency and integrity are hard for us to know, and even if we did know, the depth of their competence would be beyond most of our capacity to understand. And so, inadequately informed, we are forced to trust, relate, provide what is asked of us, and accept an accountability for our own data and selves that we are completely ill-prepared to provide and that should never be asked of us.

Even as companies offer us increased control of our data through browser privacy controls and other means, these are controls created by sophisticated technical experts; the implications and intricacies of each, though well understood by their creators, are often beyond us.

Private browsing or incognito mode will limit the exposure of some of our data and our being tracked across websites, a good protective defence, but with the caveat that some sites won’t work. But it’s okay, we can exclude them so the defence does not apply to those sites. What?! Oh, and that site has a valid certificate. What does that mean to most?

Your rights are explained in our privacy policy, and what you are allowed to do on our site in our terms of use. But each is crafted in brilliant legalese, often impossible to read, with frightening-sounding terms and clauses well beyond our comprehension. So, the privacy policy protects some of our rights, while the terms of use protect the company against abuse and misuse, explain penalties, specify whether violations will be handled in court or arbitration, and set out what you accept, consent to, authorize, and indemnify by proceeding. What?! Terrifying and incomprehensible.

Is it a policy, a contract, or a devil’s bargain? I have forty years of experience as a CIO, CISO, and privacy specialist. Our expectations of everyday people to understand the technology we offer them as informed options, choices, and preference settings, the language we use in describing their rights and protections, and what we expect of them as consumers, all for people who just want to go online, buy something, and enjoy their lives, baffle me. We must do better.

And so, we come to the 23andMe exposure, certainly egregious, whether they call it an event, an incident, or a breach, as if anyone other than the reputation management companies they have undoubtedly hired will care about the subtle distinctions. The personal data of 5.5 million customers, and, through relationship, an additional 1.4 million individuals, was taken and disclosed. To the extent we know, the company succumbed to what is called a “credential stuffing” attack: an automated onslaught leveraging compromised IDs and passwords to gain access to customer accounts that perhaps should have been better protected by commonly used secondary defences but apparently were not, and these accounts were used to reach its massive repositories of held client personal data.

We know that individuals use the same IDs and passwords on multiple sites, so compromising one website often provides the key to compromising another, if accounts are not protected by multifactor or more complex authentication methods, if better password controls are not required, and ultimately, if the exposed data itself is not encrypted.
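The two defences described above can be sketched in a few lines of Python. This is a minimal illustration, not a production authentication system: the breached-password set is a made-up stand-in for the corpora that real services check (for example via a k-anonymity API), and the login gate simply shows why a correct password alone should never be sufficient against credential stuffing.

```python
import hashlib

# Hypothetical breached-password corpus; real services consult large
# external datasets rather than a small local set like this one.
BREACHED = {hashlib.sha1(p.encode()).hexdigest()
            for p in ("password123", "qwerty", "letmein")}

def is_breached(password: str) -> bool:
    """A password seen in any prior dump is a stuffing liability:
    reused on another site, it becomes the attacker's key."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED

def allow_login(password_correct: bool, second_factor_ok: bool) -> bool:
    """Credential stuffing supplies only the first factor, so access
    requires the password AND an independent second factor."""
    return password_correct and second_factor_ok

print(is_breached("password123"))  # True: reject at registration
print(allow_login(True, False))    # False: stuffed credential stops here
```

Requiring both checks means a leaked password from one site is not, by itself, enough to open an account on another.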

But we are human; we are forced either to create and remember unique, impossible-to-remember passwords for different sites, to use password vaults that we must trust cannot themselves be compromised (exposing all our passwords at once), or to write them down, still the simplest and most common means humans have for managing passwords.

But the 23andMe exposure, to me, apart from the nature of the data disclosed, comes as no surprise. The biggest and best of our companies in every industry are attacked and compromised every day, many of them assuming themselves to be well prepared. But we often and willingly attribute success to our attackers’ malevolent genius, when the answer is better explained by our own incompetence and inadequate preparation.

We don’t adequately harden our technical environments. We don’t limit who is allowed access by granting what we call ‘least privilege’. We insufficiently train. We hire temps to fill gaps and give them more access than needed because it is expedient and easier than provisioning the correct credentials and figuring out what they might need to fill a vacated role. We put things in the cloud but don’t properly configure the needed safeguards. We don’t whitelist the equipment and software allowed in our environments, a source of compromise in some of our biggest and most egregious breaches.
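The ‘least privilege’ idea above can be made concrete with a tiny sketch. The role and permission names are illustrative assumptions, not from any real product; the point is only that each role holds the minimum set its job requires and that access checks deny by default.

```python
# A minimal sketch of least-privilege, role-based access control.
# Roles and permission strings are hypothetical examples.
ROLE_PERMISSIONS = {
    "support_temp": {"tickets:read"},
    "engineer":     {"tickets:read", "deploys:run"},
    "dba":          {"tickets:read", "db:admin"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions fail."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A temp hired to answer tickets cannot touch the database,
# even though granting a broad role would have been 'easier'.
print(is_allowed("support_temp", "tickets:read"))  # True
print(is_allowed("support_temp", "db:admin"))      # False
```

Granting the temp a narrow role costs a few minutes of provisioning; granting a broad one turns a compromised temp account into a compromised database.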

We allow systems and applications that we should retire to live longer than we need. And we collect more data than needed, hold it longer than necessary, and often use it in ways that differ from, or are derivatives of, the purposes for which it was initially provided.

Cleaning up obsolete data, old accounts, written emails, collected records, is the biggest nightmare for any company, and without good data hygiene our exposures are magnified. Most don’t do it well despite regulatory requirements, and among those less regulated, many do not do it at all. Storage is cheaper than data hygiene.
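Even basic data hygiene can start as a simple retention check. This sketch assumes an illustrative two-year retention window (no regulation is implied); records older than the window are flagged for purging, shrinking the pool of data a breach can expose.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the right period depends on the
# data's purpose and the applicable law, not on this constant.
RETENTION = timedelta(days=730)

def is_expired(collected_at: datetime, now: datetime) -> bool:
    """Data held longer than its retention period should be purged:
    data you no longer hold cannot be breached."""
    return now - collected_at > RETENTION

now = datetime(2024, 2, 13, tzinfo=timezone.utc)
old = datetime(2021, 1, 1, tzinfo=timezone.utc)
new = datetime(2023, 6, 1, tzinfo=timezone.utc)
print(is_expired(old, now))  # True: past retention, flag for deletion
print(is_expired(new, now))  # False: still within the window
```

A periodic job applying a rule like this is far cheaper than explaining, after a breach, why ten-year-old customer records were still on hand.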

We teach people to please, to be pleasant and unobjectionable, and we teach our employees to provide customer service. One network technician, told by an executive to fix a performance issue for a connection that was kept outside the protected network space for safety, did so by moving the connection inside the protected network, making the network available to the entire world.

Good customer service is important. Blind customer service is not. And though not a guarantee, or a get out of jail free card, encrypt.

Most privacy laws consider any unauthorized disclosure a breach, triggering specific mandatory reporting and regulatory responses. Whatever they choose to call it, 23andMe was breached; there were expectations of its safeguards, though until recently the sector was not heavily regulated, and an expected response. Regulators, prosecutors, auditors, and civil claimants will evaluate its culpability in the breach, not me.

So, let’s turn our attention to the 23andMe breach from another perspective, not specifically to the company, or the breach, but to the nature of the data: DNA, the state of protective controls, and to the nature of the state of existing and emerging privacy law.

DNA data is undoubtedly sensitive, and unlike other personal data, DNA data and our genomic identity are not limited to one individual: they span all the individuals to whom that individual is related by blood, whether known, unknown, or never met, regardless of where they may live, and even if they have not yet been born.

Genomic data spans individuals and time. Existing and emerging privacy laws in Europe, and increasingly across the US as state laws emerge, and elsewhere globally, define personal data as being ‘any data that identifies or is directly or indirectly related to any individual, known or unknown’.

For companies involved in the collection of DNA data, the unauthorized disclosure of any one individual’s data is potentially a disclosure of many. This is a challenge legally and ethically, and from the perspective of our ability to assure security, it is increasingly questionable.

As more and more data are amassed and concentrated in massive data lakes and in the cloud, and as the relationships within that data become deeper and more referential to each other, persist for longer, are used in new ways, and span generations, we must rethink the relationship of our laws, technology, and security to DNA.

Genetic data is at the forefront of innovation. Its collection and use are vital to the development of new and effective therapies, in clinical application, clinical trials, and health research. Genomics-based trials and other clinical trials benefit from inclusivity and scale, shared and collaborative research, and for the most effective of trials, global or multinational participation. And society demands a lot of its DNA data. DNA research repositories will need to be increasingly dense and persistent; will need to transcend generational boundaries and increasingly derive new uses and purposes for which a clear basis in law is not always evident, but a clear necessity of purpose is.

Europe’s ‘1+ Million Genomes’ (1+MG) initiative has aligned 25 EU countries, Norway, and the UK in establishing a genomic infrastructure for medical research and clinical trials. In the US, the National Institutes of Health’s National Human Genome Research Institute anticipates that genomics research could generate up to 40 exabytes of data within the next decade.

Paraphrasing my good friend R.T. O’Brien of Reinbo Consulting Ltd., ‘Big dinosaurs get bigger teeth before their prey gets thicker armor.’ The challenge of an ever denser, deeper, and more attractive data load is the need to ‘keep up’: to invest continually in security and outpace potential attack and breach. 23andMe is the canary in the coal mine. There is more to come.

And so, the question asked, particularly in the context of the age of AI: how can individuals protect themselves and their data privacy?

Well, there is standard guidance: use unique passwords, hover over links to confirm where they lead, encrypt your data, read privacy statements, be prudent about where you offer consent, and similar.

Although we should do all of this, on one side sits the weight of corporate interests, nation-state actors, technical malevolence, automation, AI, complex science, and the implementation of regulation, controls, law, and language, along with continually changing consumer devices, interfaces, and OS updates.

On the other, it is those of us trying to enjoy a day at the beach.

People cannot be expected to defend themselves and be responsible for the assurance of their own protection. It is not in their DNA.

We need to do a much better job.

Don’t miss Martin Gomberg exploring these issues at #RISK Digital in the session:

23andMe… and everyone else.

Additional information is surfacing regarding a data breach initially disclosed by the genetic testing company 23andMe in October.

However, as the company provides more details, the situation is growing increasingly unclear, causing greater uncertainty for users trying to comprehend the implications. It also raises the question of whether we can ever effectively protect our information, and whether our privacy laws are even a fit for DNA data that spans individuals, families, and generations.

Panellists will delve into what this means from a privacy and privacy-law perspective.

Also on the panel:


 23andMe… and everyone else
  • Time: 09:00 – 09:30am GMT
  • Date: Tuesday, 13th February 2024

The session sits within a packed agenda of insight and guidance at #RISK Digital, taking place on 13 February.

Discover more at #RISK Digital

#RISK Digital will examine the changing risk landscape in a content rich, knowledge sharing environment. Attendees will be able to learn and better understand how to mitigate risk, reduce compliance breaches, and improve business performance.

Risk is now everyone’s business. Enterprise chiefs need to be tech-savvy, understanding how GRC technology fits into strategy and how to solve regulatory challenges.




Click here to register for free for #RISK Digital