Megan Marie Butler, ahead of her PrivSec Global appearance, explains the privacy pitfalls of using Artificial Intelligence in HR.


Megan Marie Butler

According to Megan Marie Butler, Director of Augmented AI Ltd, her future in tech was all too predictable. She began coding in high school, a self-confessed “geek” from the start; a millennial who rode the wave of the Web 2.0 era and became fixated on investigating it for herself.

Combining tech with social science, Butler currently researches AI ethics in HR and the future of work at Augmented AI Ltd, whilst also pursuing a PhD in advancing HR practices using AI at Leeds University Business School. She also advises on AI in HR at the Institute for Ethical AI at Oxford Brookes, and manages her research team at the HR Analytics Think Tank.

Working with a group in Canada, Butler and her team are building a board to advise the government on applications of AI in HR, and to bring together ideas around the use of natural language processing (NLP), with a focus on qualitative research and applications in HR.

But she says her work in HR is really about asking: “Who is using the data? Why are they using it? What types of models are they using? Are they appropriate? Are they transparent? Does it work? Is it ethical?”

With an abundance of enthusiasm for both social science and tech, Butler explains why a more nuanced approach to data collection and use in HR is required.

“When we’re talking about discrimination, you need to have an understanding of the social science behind it,” she says.

“Who is using the data? Why are they using it? What types of models are they using? Are they appropriate? Are they transparent? Does it work? Is it ethical?”

Companies sometimes say, she explains, that a system is too complex for anybody to fully understand, which can result in negative consequences for marginalised groups and individuals. “But then you dig into what they actually did,” she says, “and it’s like, actually, no, what you did was really simple, and you’re just covering up the fact that you did something inappropriate.”

But largely, she says, “businesses do not have ill intentions, but they absolutely must be hyper-aware.”

“There are already plenty of clear laws, regulations and guidelines that some companies aren’t even following yet.” Butler argues that in order to succeed, companies must pay attention to the law and refrain from getting too caught up in AI marketing hype and sales, especially post-pandemic.

With a lot of new technologies and software on the market, Butler says that businesses need to ensure that what they are investing in now will not become illegal in just a couple of years’ time.

Using AI to make predictions has become all the more complicated now that predictions made pre-pandemic may no longer apply, particularly within HR. For example, Butler says, the use of term predictors, which estimate how long an employee might stay at a business, has become almost invalid. Thankfully, she says, tools such as term predictors are losing popularity due to concerns about privacy invasion.

To avoid investing in tools that will eventually come back to haunt you, Butler advises that businesses “ensure they really understand what they need and not get caught up in a good sales pitch or a good idea,” she explains. “A good tool starts with knowing what it intends to do.”

Secondly, “businesses should clearly understand what data is being used and how the tool or algorithms work – and that it fits with a specific use case from point 1 (including understanding the limitations and how users should be working with the tool). As the old saying goes, if it sounds too good to be true, it probably is – make sure you do your market research!”

Research shows that 90% of businesses say they will have AI implementation in progress within the next 2 years, spurring governments across the globe to produce AI regulations and laws to safeguard the transition to a more digitally dependent workforce.

Butler says that we can expect to see changes in the law as it relates to AI within the next 2-4 years. The EU has been active in this space and is working on a horizontal regulatory proposal for high-risk AI systems, to be published this year.

Megan Marie Butler will be appearing at “Are You Biased? How Fair is AI Use in Your HR Processes?” at 3pm on 23 March at PrivSec Global.

Click here for more