Speaking to a US Senate panel this week, the chief of OpenAI expressed significant concern that artificial intelligence technology could pose a risk to voting systems and undermine democracy.

OpenAI CEO Sam Altman recommended that such technologies come under greater regulation and be subject to stricter guidelines to safeguard the integrity of elections.

Over the past year, firms across the world have been locked in a battle to harness AI and bring faster, more powerful AI-driven tools to market, in projects consuming unprecedented levels of funding and data.

Critics have been vocal about the potential risks AI may pose to society and the business community, with bias, prejudice and misinformation among the chief anxieties in the current debate. Others have more dystopian fears in mind, with some projecting that artificial intelligence could spell the end of humanity as we know it.

Whatever the risks, the genie is out of the bottle in the view of Senator Cory Booker, one of many lawmakers working on how best to regulate AI.

Highlighting the problem of misinformation and its potential role in the US elections next year, Senator Mazie Hirono said:

“In the election context, for example, I saw a picture of former President Trump being arrested by NYPD and that went viral.” The example illustrates how fake images might be used to sway public opinion.

In response to Hirono, Altman said that creators of such imagery should make it clear that those visuals have been manipulated, so that users can more easily distinguish between fact and fiction.

Addressing Congress, Altman hinted that the US government should look into stringent testing standards for AI platform developers, as well as the possibility of issuing operating licenses.

Asked which AI models should fall under new licensing laws, the OpenAI chief suggested that the threshold should cover any tool able to influence an individual’s beliefs. Altman also said that firms should be able to decide whether the data they hold can be used to train AI models, whereas information taken from the open internet should remain free to use. He added that he would be open to the concept of advertising on AI platforms, but that a subscription-based model would be more desirable in his view.

Altman is among a number of tech leaders brought together in Washington DC to speak on AI-related issues. US legislators are also pushing to harness AI’s benefits and shore up national security, while reducing the risk of the technology being used by bad actors.

Get to the edge of the AI Debate

AI and ML present many opportunities for innovation and growth, but they also bring potential risks and challenges that can impact the safety, security, and privacy of individuals, organisations and societies.

Business leaders can access the very latest conversation and curated content on these issues in the #RISK AI & ML zone, part of the Privacy and Data Protection Theatre at #RISK London.

#RISK is where the whole ‘risk’ community comes together to meet, debate, and learn, to break down silos and improve decision-making.

Learn more

Risk is now everyone’s business

Taking place October 18 and 19, #RISK London brings high-profile subject-matter experts together for a series of keynotes, engaging panel debates and presentations dedicated to breaking down the challenges and opportunities that businesses face in times of unprecedented change.

Learn more about #RISK London

“#RISK is such an important event as it looks at the broad perspective of risk. Risks are now more interconnected and the risk environment is bigger than ever before.”

Michael Rasmussen, GRC Analyst & Pundit, GRC 20/20 Research

Register your place at #RISK London