Elon Musk could calm the AI arms race between the US and China, says AI expert

“Losing control of AGI is just like walking right off a cliff, in which case it's game over for humanity,” warned Max Tegmark.
Elon Musk’s influence on US President-elect Donald Trump could lead to more artificial intelligence (AI) safety standards, according to the Swedish-American scientist Max Tegmark, who also warned that any geopolitical AI arms race would amount to a "suicide race" for humanity.
Speaking to Euronews Next at Web Summit in Lisbon, Tegmark, who is the president of the Future of Life Institute, said, however, that whether the US gets more AI safety regulation will also depend on who Trump listens to.
"Instinctively, the Republican Party tends to be very against all regulation,” he said, unlike Musk who “put his money where his mouth was" and broke ranks with the likes of OpenAI CEO Sam Altman and Google to support a proposed California AI bill.
Trump has also vowed to repeal a Biden administration executive order on AI safety, yet Tegmark said this would not make much of a difference to building AI safety standards, as it was "quite weak regulation".
The scientist’s biggest fear is not generative AI, such as ChatGPT, but rather artificial general intelligence (AGI), a type of AI that matches or surpasses human cognitive capabilities, and that we could lose control of.
There's the geopolitical [AI] race. And there, the whole framing is wrong because it's not actually an arms race, to build AGI first, it's just a suicide race.
 Max Tegmark
President, Future of Life Institute
"Losing control of AGI is just like walking right off a cliff, in which case it's game over for humanity," Tegmark said.
"I think it will make a big difference whether Trump will listen more to Elon [Musk] on this or more to the anti-regulation intentions," he added.
What is Artificial General Intelligence?
AGI, seen by tech companies as the "Holy Grail", has been hyped up by Altman and other tech figureheads seeking to raise funding.
Altman defines AGI as a hypothetical form of machine intelligence that can solve any human task through methods not constrained by its training. He has said it can "elevate humanity" and does not mean machines taking over.
The OpenAI CEO has said AGI could come as early as next year, while many others have predicted its arrival in the next decade.
"A lot of people latch onto the AGI brand now for hype and try to redefine it as being the thing that they're selling now, or that they're building now so they can raise money," said Tegmark.
He said the original definition of AGI goes back to the 1950s: AI that can do all human jobs, meaning it could replace human workers and even develop and build AGI machines itself.
"You're not talking anymore just about a new technology like steam engines or the Internet. You're talking about a new species. That's why it's such a big deal," Tegmark said.
Machines taking control is the default outcome, according to a prediction in the 1950s by the computer scientist Alan Turing.
Turing said at the time that this outcome was still far off, and he devised a test to show when it was near: originally named the "imitation game", it is now known as the Turing Test.
In the test, a human poses a series of questions without knowing whether the answers come from a machine or another human; if the machine's responses are indistinguishable, it is judged to exhibit human-like intelligence.
According to some researchers, the test was passed last year.
However, AGI does not yet exist, and it is unclear when, or even whether, it ever will, despite the corporate hype.
An AI arms race
As well as a corporate AI race, there is also an AI arms race, according to Tegmark.
"There's an arms race within the United States between different companies and the only way to stop that is, of course, to have national US safety standards," he said.
I'm actually quite optimistic we'll get [safety standards] globally. And then we enter a really wonderful phase of human history.
 Max Tegmark
President, Future of Life Institute
Trump has suggested that the US's lead in AI will be a focus for his government.
"We have to be at the forefront… We have to take the lead over China," he said on the Impaulsive podcast in June.
"There's the geopolitical [AI] race. And there, the whole framing is wrong because it's not actually an arms race, to build AGI first, it's just a suicide race," Tegmark said.
"There's nobody to control this stuff".
"If you manage to go extinct who cares if you're Chinese, French, or American," he added.
The key, he said, is for each country to get its own regulations right, rather than for countries to have to strike agreements with each other.
"My vision for success here is much simpler. It's that the Chinese decides to put national safety standards in China to prevent their Chinese companies from causing harm. We Americans simply put national safety standards in the US not to appease China, but simply for their own purposes," Tegmark said.
Should that happen, the US and China would then have an incentive to push their global allies to join them.
"I'm actually quite optimistic we'll get [safety standards] globally. And then we enter a really wonderful phase of human history where we have this age of incredible abundance and prosperity where we have all this amazing technology," he said.
"Companies will be able to innovate to help cure cancer, eliminate poverty and unnecessary road deaths and all the wonderful things that we hear people, entrepreneurs talk about," he said.
Euronews

Nov 20, 2024 10:55