We are delighted to confirm that behavioural data science pioneer Ganna Pogrebna will speak at #RISK A.I. Digital, opening tomorrow.

Livestreaming on March 20 and 21, #RISK A.I. Digital examines how organisations can harness AI technologies responsibly, ethically and safely.

Across two days, thought leaders, industry experts and senior professionals will provide their insight, enabling audiences to understand how to mitigate risk, comply with regulations and achieve robust governance over AI.

Event speaker Ganna Pogrebna is Lead for Behavioural Data Science at The Alan Turing Institute. She serves as Executive Director of the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University, and is an Honorary Professor at the University of Sydney.

Ganna will be at #RISK A.I. Digital to discuss how AI can exacerbate existing bias, and how organisations can combat adverse behaviours that machines inherit from their human creators.

Below, Ganna goes over her professional journey and takes a look at the key issues.

Amplification of Existing Bias

  • Time: 12:15 – 13:45 GMT
  • Date: Wednesday 20th March (Day 1)

Click here to register for free to #RISK A.I. Digital

Could you briefly outline your career so far?

My journey into behavioural data science has been a fascinating exploration of human behaviour through the lens of quantitative models. 

My career began in the field of decision theory, where I dedicated myself to developing and testing mathematical models to predict human behaviour. This led me to work at universities such as Columbia University in New York, the University of Warwick in the UK, and Humboldt University in Germany.

The pivotal moment in my career came in 2013 when I joined the Warwick Manufacturing Group at the University of Warwick. This transition marked the beginning of my path into data science, particularly behavioural data science, as I embarked on projects that leveraged large-scale datasets to unravel consumer choice, digital transformation, and the nuances of artificial intelligence.

Today, I am Executive Director of the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University, Honorary Professor at the University of Sydney, and Lead for Behavioural Data Science at the Alan Turing Institute.

My contributions to the field encompass smart technological and social systems, cybersecurity, human-computer interactions, and innovative business models. One of my notable achievements is the development of the digital security risk-tolerance scale (CyberDoSpeRT), which has gained widespread recognition and application both in Australia and internationally. 

My dedication to advancing the understanding of risk analytics and modelling has been honoured with the prestigious Leverhulme Research Fellowship award. I engage with a wider audience through my Data Driven blog on YouTube, the Inclusion AI blog, and my involvement in the Oxford Handbook of AI Ethics and the upcoming Cambridge Handbook of Behavioural Data Science.

What factors and issues contribute to bias amplification within AI technologies?

Bias amplification within AI technologies is a multifaceted problem that stems from various sources and manifests in different forms. At the core of this issue lies data bias, where the information used to train AI models is not immune to historical biases, sampling biases, or biases in data labelling. These biases, when ingrained in the training data, set the stage for the AI system to perpetuate and even amplify prejudices.
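
To make the data side of this concrete, here is a minimal Python sketch of a sampling-bias check; the "group" column and the reference population shares are hypothetical, chosen only for illustration.

```python
# A minimal sketch of a sampling-bias check on training data.
# The "group" column and the reference shares are hypothetical.
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare each group's share of the training data against its
    share of the population the system is meant to serve."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        actual = float(observed.get(group, 0.0))
        rows.append({"group": group, "expected": expected,
                     "observed": actual, "gap": actual - expected})
    return pd.DataFrame(rows)

# A deliberately skewed example: group B is heavily under-sampled.
training = pd.DataFrame({"group": ["A"] * 800 + ["B"] * 200})
print(representation_gap(training, "group", {"A": 0.5, "B": 0.5}))
```

A gap of 0.3 against a 50/50 reference, as here, is the kind of signal that should prompt re-sampling or re-weighting before training proceeds.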

Adding another layer to the complexity is algorithmic bias, which arises from the design and implementation of the algorithms themselves. These biases can be subtle and insidious, often going unnoticed until they manifest in the system’s output. Moreover, AI systems that employ feedback loops are particularly susceptible to bias amplification.

As these systems reinforce their own decisions over time, any initial bias can become more pronounced, leading to a cycle of discrimination and unfairness. The consequences of bias amplification in AI technologies are far-reaching, affecting not just the individuals who are subject to the biased decisions but also undermining the credibility and trustworthiness of the AI systems themselves.
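
The feedback-loop mechanism is easy to see in a small simulation. The sketch below uses entirely hypothetical numbers: a system approves applicants from two groups, retrains only on the cases it approved, and a modest initial gap widens on every cycle.

```python
# A minimal sketch (all numbers hypothetical) of bias amplification via a
# feedback loop: a system that retrains only on the cases it approved.
approval_rate = {"A": 0.52, "B": 0.48}  # small initial gap between two groups

for cycle in range(6):
    # Equal numbers of applicants per group; approvals follow current rates,
    # and only approved cases enter the next round's training data.
    approved_a = 1000 * approval_rate["A"]
    approved_b = 1000 * approval_rate["B"]
    share_a = approved_a / (approved_a + approved_b)
    # The retrained model sees group A "succeed" more often, so it nudges
    # the approval rates further apart on every pass.
    approval_rate["A"] = min(1.0, approval_rate["A"] + (share_a - 0.5) * 0.2)
    approval_rate["B"] = max(0.0, approval_rate["B"] - (share_a - 0.5) * 0.2)
    print(f"cycle {cycle}: A={approval_rate['A']:.3f}  B={approval_rate['B']:.3f}")
```

In this toy model the gap between the two rates grows by roughly twenty per cent per cycle; the exact rate is arbitrary, but the compounding dynamic is precisely the point made above.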

It is a challenge that calls for a varied, multidisciplinary response, combining technical solutions with ethical considerations to ensure that AI technologies serve the interests of all members of society fairly and justly.

What measures can organisations take to offset the risk of bias when embracing AI?

The risk of bias in AI must be tackled from multiple angles. One of the key strategies is to ensure that training data sets are diverse and representative of the population they aim to serve.

By capturing a wide range of perspectives and experiences, organisations can reduce the likelihood of bias being encoded into the AI models. Regular audits of AI systems for bias and fairness are crucial in maintaining the integrity of these technologies. These audits can help identify any biases that may have crept into the system, allowing for timely corrections and adjustments.
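
As an illustration of one number such an audit might produce (a sketch over hypothetical data, not a method described in the interview), demographic parity difference measures the gap in positive-decision rates between groups; values far from zero would flag the system for review.

```python
# A minimal sketch of one audit metric: demographic parity difference,
# the gap in positive-outcome rates between groups. Data are hypothetical.
from collections import defaultdict

def demographic_parity_difference(groups, predictions) -> float:
    positives, totals = defaultdict(int), defaultdict(int)
    for group, prediction in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(prediction)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

groups      = ["A", "A", "A", "B", "B", "B"]
predictions = [1,   1,   0,   1,   0,   0]
print(demographic_parity_difference(groups, predictions))  # ≈ 0.33: flag for review
```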

Transparency and explainability are also vital in mitigating bias. By making AI systems more transparent and their decision-making processes more explainable, organisations can build trust and facilitate a better understanding of how these technologies operate.
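
One way to picture what explainability buys is with an inherently transparent model. The sketch below (with hypothetical feature names and weights) breaks a linear score into per-feature contributions so that each individual decision can be inspected and challenged.

```python
# A minimal sketch of explainability for a transparent (linear) scorer.
# Feature names and weights are hypothetical.
weights = {"income": 0.8, "tenure": 0.5, "late_payments": -1.2}
intercept = 0.1

def explain(features: dict) -> None:
    """Break a score into per-feature contributions so the decision
    can be inspected feature by feature."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = intercept + sum(contributions.values())
    print(f"score = {score:+.2f}")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {value:+.2f}")

explain({"income": 1.0, "tenure": 0.5, "late_payments": 2.0})
```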

Ethical guidelines also play a pivotal role in steering the development and deployment of AI technologies. By adhering to a set of principles that prioritise fairness and equity, organisations can ensure that their AI systems are designed with human-machine trust considerations at their core.

Inclusive development practices that involve a diverse group of stakeholders can help identify potential biases and ensure that the AI systems are fair and inclusive for all users. By implementing these measures, organisations can offset the risk of bias in AI and pave the way for technologies that are not only innovative but also ethical and trustworthy.

Don’t miss Ganna Pogrebna exploring these issues in depth at #RISK A.I. Digital in the session: Amplification of Existing Bias.

The amplification of existing bias poses a critical challenge to ethical AI development and deployment. This panel discussion brings into focus the intricate dynamics of bias propagation in AI systems, shedding light on the role of human factors and exploring strategies to mitigate its negative effects.

Panellists will also delve into various dimensions of bias amplification, examining AI algorithms, biased datasets and societal prejudices across domains. They will explore the complex interplay between human biases, algorithmic decision-making and the perpetuation of systemic inequalities.

Details

Amplification of Existing Bias 

  • Time: 12:15 – 13:45 GMT
  • Date: Wednesday 20th March (Day 1)

The session sits within a packed agenda of insight and guidance at #RISK A.I. Digital, livestreaming on March 20 and 21.

Discover more at #RISK A.I. Digital

AI is a game changer, but the risks of generative AI models are significant and consequential.

The #RISK AI series of in-person and digital events features thought leaders, industry experts and senior professionals from leading organisations sharing their knowledge and experience, examining case studies within a real-world context, and providing insightful, actionable content to an audience of end-user professionals.

Attend the #RISK AI Series to understand how to mitigate these risks, comply with regulations, and implement a robust governance structure to harness the power of AI ethically, responsibly and safely.

Click here to register for free to #RISK A.I. Digital