There’s one more theme running through the AI Summit that will be worth keeping an eye on.
The first summit had the word “safety” in its title. Some felt the event pushed the narrative too hard and terrified people with dark talk of existential threats.
But it hasn’t fallen off the agenda entirely.
As a subject, AI safety is a rather broad church. It can relate to any number of risks: the generation and spread of misinformation, bias and discrimination against individuals or groups, the ongoing development of AI-controlled weapons by multiple countries, and the potential for AI to create unstoppable computer viruses.
Prof Geoffrey Hinton, often described as one of the Godfathers of AI, describes these as “short-term risks”. They might be up for discussion in Paris, but he argued on BBC Radio 4’s Today programme last week that they are unlikely to drive strong international collaboration in the long term.
The big scenario which he believes will really pull everyone together is the prospect of AI becoming more intelligent than humans – and wanting to seize control.
“Nobody wants AI to take over from people,” he says. “The Chinese would much rather the Chinese Communist Party ran the show than AI.”
Prof Hinton compared this eventuality to the height of the Cold War, when the US and Russia “just about succeeded” in collaborating in order to prevent global nuclear war.
“There’s no hope of stopping [AI development],” he said. “What we’ve got to do is to try to develop it safely.”
Prof Max Tegmark, founder of the Future of Life Institute, shares a similarly stark warning. “Either we develop amazing Artificial General Intelligence [AGI] that helps humans, or uncontrollable AI that replaces humans,” he says.
“We are unfortunately closer to building AGI than to figuring out how to control it.”
Prof Tegmark hopes the summit will push for binding safety standards “like we have in every other critical industry”.