Automated systems are making an increasingly broad and important range of decisions about everything from credit applications to criminal justice. That’s why fairness in AI is one of the most important ethical issues in tech—and perhaps society as a whole.

GRC World Forums’ Analyst and Research Director Robert Bateman spoke to Alexandra Ebert, Chief Trust Officer at Mostly.ai, about how synthetic data could help solve the problem of fairness in AI.

While AI has huge potential in so many areas, AI systems can exacerbate bias and discrimination. Part of the problem is that machines are trained on “real world” data, which often under-represents people in certain groups. If structural racism, sexism and other biases are reflected in a dataset, this can lead to an AI system making discriminatory decisions.

Enter synthetic data. Because it is generated rather than collected, synthetic data allows AI developers to tailor their datasets to represent demographics more fairly. This reduces the likelihood of discrimination, and it brings privacy benefits as well.
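To make the rebalancing idea concrete, here is a deliberately naive sketch in Python. It is not Mostly.ai's actual method (real synthetic data generators use machine learning models that learn the joint distribution of all features); it simply tops up an under-represented group by sampling each feature independently from that group's existing records, so every group ends up equally represented.

```python
import random
from collections import Counter

def rebalance_with_synthetic(records, group_key="group", seed=0):
    """Add naive synthetic records so every group is as large as the largest.

    Each synthetic record is built by sampling feature values independently
    from the existing records of the same group. This illustrates the
    rebalancing goal only; it ignores correlations between features.
    """
    rng = random.Random(seed)
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    synthetic = []
    for group, n in counts.items():
        pool = [r for r in records if r[group_key] == group]
        for _ in range(target - n):
            # Sample each feature value from this group's pool of records.
            new = {key: rng.choice(pool)[key] for key in pool[0]}
            synthetic.append(new)
    return records + synthetic

# A toy dataset where group "B" is under-represented.
data = (
    [{"group": "A", "income": i} for i in (30, 40, 50, 60)]
    + [{"group": "B", "income": 35}]
)
balanced = rebalance_with_synthetic(data)
```

After rebalancing, both groups contribute the same number of records to training, which is the core intuition behind using synthetic data to reduce representational bias.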

Join Robert and Alexandra as they discuss the pros and cons of using synthetic data to address the problem of fairness in AI.