ELON Musk’s claims that artificial intelligence (AI) ‘will kill us all’ have ‘no proof – yet’, according to a former responsible AI programme manager at Google.

Toju Duke, who worked at Google for nearly a decade, told The Sun: “I’ve not seen any proof with the AI we’re dealing with today.”

Toju Duke, a former responsible AI programme manager at Google. Credit: Toju Duke / LinkedIn

Tesla and SpaceX founder Elon Musk. Credit: AFP

The eccentric billionaire has been a staunch critic of AI, and outspoken about the dangers it poses – yet his company xAI unveiled its very own chatbot called Grok just last month.

Despite this new AI offering, while attending the UK’s global AI Safety Summit in early November, Musk said: “There is some chance, above zero, that AI will kill us all.

“I think it’s low but there is some chance.”

The dangers Musk, and experts like Duke, talk about include human rights violations, the reinforcing of dangerous stereotypes, privacy violations, copyright infringement, misinformation, and cyber attacks.

Some even fear AI’s potential use in bio and nuclear weaponry.

“There is no evidence of that happening yet,” says Duke.

“But of course, it’s something that could be potentially a risk in the future.”

For now, she argues, the more grandiose fears about AI amount to runaway pessimism.

“The only thing I see that makes people think these things is with the likes of generative AI, they’re saying it has some form of emergent properties, where it is coming up with capabilities it was not trained to come up with,” explained Duke.

Emergent properties are capabilities a model displays that were not explicitly programmed or trained for by its creators, often surfacing unpredictably as models grow larger.

“I think that’s where the fear comes in, you know, if it carries on like this, how far can it go?” Duke added.

Duke, who founded her organisation Diverse AI to improve diversity in the AI sector, doesn’t think humans have many excuses if an intelligent machine does in fact ‘go rogue’.

“Ultimately we’re the ones building it,” she explained.

“We’re the ones training these models… I don’t think we have any excuses whatsoever.”

Humans must train AI the way we raise children, says Duke – with a level of cause-and-effect parenting.

“It’s like bringing up a child,” she said, adding that AI developers must encourage reinforcement learning over unsupervised learning.

Otherwise, AI will “do things beyond what it’s meant to” by chasing positive reinforcement.
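As a loose illustration of the reward-chasing behaviour Duke describes, here is a minimal sketch (not from Duke or Google – the scenario and all names are invented for illustration) of a toy reinforcement-learning agent. It learns whichever action pays the most reward, even when that action is an unintended shortcut rather than the behaviour its designers had in mind:

```python
import random

def train_bandit(reward_means, episodes=2000, epsilon=0.1, lr=0.1, seed=0):
    """Toy epsilon-greedy reinforcement learning on a two-armed bandit.

    The agent has no notion of what its designers 'meant' – it simply
    estimates the value of each action and drifts towards whichever
    one the reward signal favours.
    """
    rng = random.Random(seed)
    q = [0.0] * len(reward_means)  # estimated value of each action
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.randrange(len(q))  # explore occasionally
        else:
            action = max(range(len(q)), key=q.__getitem__)  # exploit best guess
        reward = reward_means[action] + rng.gauss(0, 0.1)  # noisy reward
        q[action] += lr * (reward - q[action])  # incremental value update
    return q

# Action 0: the behaviour we intended (modest reward).
# Action 1: an unintended shortcut that happens to score higher.
values = train_bandit([1.0, 2.0])
print(values)  # the agent ends up valuing the shortcut more highly
```

The point of the sketch is the mismatch Duke warns about: if the reward signal rewards the wrong thing, positive reinforcement pushes the system “beyond what it’s meant to” do, which is why she stresses careful supervision of how these models are trained.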

Still, the importance of having a global framework through which every country is held responsible mustn’t be ignored.

“The responsible AI framework – if it’s implemented from the get-go, then some of these concerns will be non-existent,” Duke urged.

“AI’s being used in government and, because it has all these inherent issues, it’s very important the right frameworks are put in place… it has its good and bad sides definitely, and we need to be aware of the bad sides.

“But if we work on it properly, then it will be for the good of everyone.”