Artificial intelligence is now one of the most powerful arbiters of human rights across the globe. It is no longer an experimental technology operating at the margins of our world, but an active geopolitical force embedded in the very fabric of our society. For those tasked with anticipating, mitigating, and underwriting systemic threats, one truth is now unavoidable: AI risk is human risk.

This year’s United Nations call to Unite to End Digital Violence Against Women and Girls is a recognition that technology-facilitated abuse has become a transnational, high-impact risk category, and one that current governance frameworks are nowhere near containing. It calls attention to a fact that far too few practitioners have internalised: technological violence has real-world consequences.

Three realities must now be confronted.

From Code to Corporeality 

The first is that the digital does not stay digital. We can already see this manifesting in the worst possible ways. In Australia, 15-year-old Matilda “Tilly” Rosewarne died by suicide after a doctored nude image of her was circulated widely among peers on Snapchat and other platforms: a horrifying example of deepfake image-based abuse that began on a phone screen and ended in her death. Similarly, in the United States in 2025, 16-year-old Elijah “Eli” Heacock took his own life after being sextorted with AI-generated nude photos of himself: material that never existed in reality, but which was real enough to terrorise him.

Women and underrepresented groups are demonstrably bearing the brunt of digital violation. Research from UNESCO shows that online harassment has chilling effects on women’s participation in public life:

  • 44% of female parliamentarians receive death and sexual violence threats
  • 30% of women journalists self-censor on social media
  • 20% withdraw from social media completely
  • Over a quarter report negative mental health impacts linked to online violence

These are not fringe figures; they are professional realities that diminish societal resilience and democratic legitimacy. They also create a dangerous feedback loop: the more that women are deterred from public service or bullied out of it, the less representative governments become. And as a litany of studies has shown, when representation suffers, outcomes, economies, policies, diplomacy and overall social equality suffer as a result.

One particularly urgent emerging category of digital violence against women and girls (DVAWG) is deepfake pornography. An estimated 96% of deepfakes are sexualised, and 99% of those depict women. This is not merely another sexist genre of multimedia content; in her seminal book The New Age of Sexism: How AI is Reinventing Misogyny, Laura Bates classes it as the next sexual violence epidemic facing schools and workplaces alike.

Recently, X has been flooded with pornographic deepfakes made by the platform’s AI tool Grok, at an estimated rate of one nonconsensual image generated per minute, predominantly of women but also including child sexual exploitation material. When complaints are made, the tool issues “apologies” yet continues to generate further imagery.

As Claire Roberts, founder of Full Fathom Five, said: “Musk’s move to restrict this to paying subscribers isn’t a safeguard, it’s a business model that treats the sexualisation of children as a premium feature”. Global backlash has now catalysed watchdogs from the UK to Australia to launch investigations into the tool.

The effects of deepfake pornography are particularly severe, haunting victims through job searches, courtrooms and social interactions, and damaging their mental health. During the RISK Global panel I moderated, Deepfakes, Sexual Abuse & the New War on Women: Law, Power and the Fight for Safety, I spoke to TV presenter Laura Hamilton, who had the misfortune of being targeted along with a string of other celebrities. She recounted the continuous damage to her sense of safety, and to her family.

The deepfakes were so convincing that even relatives and friends who had known her for years questioned her on their veracity. Any attempt to weigh how “serious” such incidents are through false equivalences with real-world sexual violence entirely misses the point; it is a disingenuous derailment into semantics instead of the necessary focus on victims. To be targeted this way is degrading, offensive, and abusive, and the gravity of these effects must not be downplayed.

“Femicides often sit on a continuum of violence that can start with controlling behaviour, threats and harassment, including online”. -Sarah Hendriks, Director of UN Women’s Policy Division

An Orwellian Online State: AI As The New Societal Moderator

The second is that algorithmic bias in AI systems is continuously and actively rewriting our realities. The risk here must not be underestimated. When moderation algorithms suppress feminist advocacy while not just allowing but encouraging misogynistic abuse to flourish, the outcome is not neutral error; it is insidious structural distortion. These distortions carry measurable downstream consequences: withdrawal from public life, career attrition, psychological harm and, as noted above, actual loss of life.

Bias in AI systems is also far from theoretical. It determines who is believed in courtrooms, who is flagged by moderation tools, who is excluded from credit, healthcare and employment, and whose content is made visible or hidden. In December 2025, women on LinkedIn from the US to the UK ran a collective experiment, changing their profile gender to male and using more agentic language. The results were staggering: for many, content reach as much as quadrupled almost immediately after the switch. Kamales Lardi, a CEO and thought leader, saw her post impressions increase by 421% in a few days, while Megan Cornish, a LinkedIn Top Voice, watched her views rise 400% on the platform.

The implications for representation are seismic, as this materially affects women’s reach and their political and corporate influence. It must also be remembered that AI bias does not stay at a fixed level but often compounds through feedback loops, as shown by predictive policing tools such as PredPol, which direct patrols toward already over-policed neighbourhoods and then treat the resulting arrest data as confirmation, and by the COMPAS recidivism algorithm, which falsely flagged black defendants as future reoffenders at nearly twice the rate of white defendants. Many have drawn parallels to the Amazon recruitment algorithm scandal, in which CVs containing the word “women’s” were downgraded for two years before the tool was scrapped. The upshot is that AI is now indubitably an active decider in what, and whom, gets normalised.
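To see how such feedback loops compound, consider the minimal sketch below. It is purely illustrative: the groups, rates and update rule are assumptions made for the example, not drawn from PredPol, COMPAS or any deployed system. It shows how a model that reallocates scrutiny based on data its own prior scrutiny produced can turn a 20% initial skew into an extreme disparity within a handful of retraining rounds.

    # Purely illustrative: a toy model of bias compounding through a feedback
    # loop. All numbers and the update rule are hypothetical assumptions.
    true_rate = 0.10                   # identical underlying incident rate for both groups
    scrutiny = {"A": 1.0, "B": 1.2}    # group B starts with 20% more algorithmic scrutiny

    for round_no in range(1, 6):
        # Observed incidents scale with how closely each group is watched.
        observed = {g: true_rate * s for g, s in scrutiny.items()}
        mean_obs = sum(observed.values()) / len(observed)
        # The system reallocates attention toward whichever group "appears"
        # riskier, based on data its own prior attention produced.
        scrutiny = {g: scrutiny[g] * (observed[g] / mean_obs) for g in scrutiny}
        ratio = scrutiny["B"] / scrutiny["A"]
        print(f"round {round_no}: group B receives {ratio:.2f}x the scrutiny of group A")

Under this toy update rule the scrutiny ratio squares every round, so a 1.2x skew becomes 1.44x, then 2.07x, then 4.30x, despite identical underlying rates. Real systems are messier, but the direction of travel is the same: unaudited bias does not stay still.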

“Equality in AI shows up in very ordinary places. Who gets an appointment, who gets a follow up call, whose concern is taken seriously. It is not abstract”. -Katy Cherry, CEO, HTL 

Reframing Risk: A Global Challenge

The third is the technology’s duality: though the current and potential harms of AI are deeply problematic, the shared benefits of well-governed systems are of huge significance. This can look like AI detecting coercive control patterns earlier than human-only models, revealing hidden trends in abuse data, or supporting law enforcement with trauma-informed design. However, these benefits do not emerge spontaneously; they require shared responsibility across the value chain. AI can either be the most powerful accelerant of inequality we have ever built, or one of the most effective levellers, through prevention, early intervention and active mitigation of detected harms.

Where the expectation of ethics in application lies is of paramount importance. Currently, the onus of responsibility is grossly inverted: women and ethnic minorities are expected to operate in a state of hypervigilance both offline and online, self-policing, reporting, even proving harm, all while absorbing the consequences. This can be downright retraumatising. During a recent summit I organised, Championing Equality in AI, which was featured by Politics UK, Head of AI at Globeducate Clara Hawking described four forces shaping online misogyny: AI algorithms that reward engagement, content that spreads quickly because it elicits strong reactions, generative tools that make creating sexualised or abusive content “incredibly easy”, and malicious groups using gamification to recruit and radicalise youth. This environment, she warned, is becoming “a training ground for a belief system of our entire generation.” What we design and deploy now is not just generating incredibly serious consequences for today’s users; it is also quietly codifying the values that will govern tomorrow’s world.

In risk management, we do not ask end users to secure critical infrastructure. We mandate standards, audits, stress-testing, and accountability. Gendered AI harm should be treated no differently: bias should be handled with the same gravity as cybersecurity risk.

As legal AI expert Lea Rattei said: “we must treat bias like security vulnerabilities, with financial accountability for organisations who fail to do so”.
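To make that analogy concrete, the sketch below shows one way a release could be gated on a bias audit, much as a CI pipeline is gated on a failing security scan. It is a minimal illustration: the data, group labels, function names and the four-fifths (80%) disparate-impact threshold are assumptions for the example, not a prescribed standard.

    # Minimal sketch: treat a failed bias audit like a failed security test.
    # Data, labels and the 80% threshold are illustrative assumptions.
    def selection_rate(outcomes: list[int]) -> float:
        """Fraction of positive outcomes (e.g. content shown, loan approved)."""
        return sum(outcomes) / len(outcomes)

    def passes_disparate_impact(group_a: list[int], group_b: list[int],
                                threshold: float = 0.8) -> bool:
        """Four-fifths rule: the lower group's selection rate must be at
        least 80% of the higher group's."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

    # Hypothetical moderation outcomes: 1 = post allowed full reach, 0 = suppressed.
    men   = [1, 1, 1, 0, 1, 1, 1, 1, 1, 0]   # 80% reach
    women = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% reach

    if not passes_disparate_impact(men, women):
        raise SystemExit("Bias audit failed: block the release, as a security scan would.")

The particular metric matters less than the placement of the check: before deployment, with the authority to stop it.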

To do otherwise is not just unjust; it is operationally unsound. There is an unmistakable need now for proactive regulation, rather than the post-crisis optics management we are seeing unfold internationally.

“Secret systems create public problems”. -Cha’Von (CJ) Clarke Noelle, AI Psychology & Root Ethics Strategist and author of ‘The Digital Polycrisis’

Excellent regulation and responsible design do not stifle innovation; on the contrary, they are simply good business, engendering trust, adoption and customer retention. Beyond this, they also engineer long-term resilience. For AI products and programmes to stay competitive, ethical design with inclusion at its core needs to be reframed from a moral preference or “activist ask” to the strategic necessity it is.

In essence, AI can either be a gateway or a gatekeeper. Whether it expands freedom or entrenches harm depends on the architectural choices being made now by those designing and implementing it. It is high time we stopped expecting the most vulnerable in society to keep playing physical and digital whack-a-mole. What is permissible in design becomes probable in deployment; that which we do not reject, especially within AI, we reinforce.

This article was written by:

Cecilia Jastrzembska, Senior Policy Advisor

Cecilia Jastrzembska is a Senior Policy Advisor, Co-Founder of Women In AI, UKAI, an Advocate for 50:50 Parliament and Founder of European Movement Women. She has been nominated for the Outstanding Award for Tech Regulation by CogX for her work on mitigating algorithmic bias, as well as for Grassroots Campaigner of the Year at the UN Women UK Awards.

A longstanding gender equality campaigner, Cecilia is an award-winning global speaker and journalist published in over 25 international newspapers. Her most recent campaign, on making misogyny a hate crime, won the Campaign Champions Award and was partnered by the European Parliament.