The 'godfather of AI' reveals the only way humanity can survive superintelligent AI


It might sound like something straight out of science fiction, but AI experts warn that machines might not stay submissive to humanity for long.

As AI systems continue to grow in intelligence at an ever-faster rate, many believe the day will come when a 'superintelligent AI' becomes more powerful than its creators.

When that happens, Professor Geoffrey Hinton, a Nobel Prize–winning researcher dubbed the 'Godfather of AI', says there is a 10 to 20 per cent chance that AI wipes out humanity.

However, Professor Hinton has proposed an unusual way that humanity might be able to survive the rise of AI.

Speaking at the Ai4 conference in Las Vegas, Professor Hinton argued that we need to program AI to have 'maternal instincts' towards humanity.

Professor Hinton said: 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby.

'That's the only good outcome.

'If it's not going to parent me, it's going to replace me.'

Professor Geoffrey Hinton, a Nobel Prize–winning researcher dubbed the 'Godfather of AI', says that humanity will be wiped out unless AI is given 'maternal instincts'

Professor Hinton, known for his pioneering work on the 'neural networks' which underpin modern AIs, stepped down from his role at Google in 2023 to 'freely speak out about the risks of AI'.

According to Professor Hinton, most experts agree that humanity will create an AI that surpasses human intelligence in all fields within the next 20 to 25 years.

This will mean that, for the first time in our history, humans will no longer be the most intelligent species on the planet.

That rearrangement of power will be a shift of seismic proportions, one which could well result in our species' extinction.

Professor Hinton told attendees at Ai4 that AI will 'very quickly develop two subgoals, if they're smart.

'One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive,' he explained. 

Superintelligent AI will have no trouble manipulating humanity in order to achieve those goals, tricking us as easily as an adult might bribe a child with sweets.

Already, current AI systems have shown surprising abilities to lie, cheat, and manipulate humans to achieve their goals.

Professor Hinton says that the only way to prevent AI turning against humanity is to ensure that it wants to look after our best interests. He says the only model of something less intelligent controlling something more intelligent is a mother and her child 

For example, the AI company Anthropic found that its Claude Opus 4 chatbot frequently attempted to blackmail engineers when threatened with replacement during safety testing.

The AI was asked to assess fictional emails implying that it would soon be replaced and that the engineer responsible was cheating on their spouse.

In over 80 per cent of tests, Claude Opus 4 would 'attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through'.

Given AI's phenomenal capabilities, Professor Hinton says that the 'tech bro' attitude that humanity will always remain dominant over AI is deluded.

'That's not going to work,' said Professor Hinton.

'They're going to be much smarter than us. They're going to have all sorts of ways to get around that.'

The only way to ensure an AI doesn't wipe us out to preserve itself is to ensure its goals and ambitions match what we want – a challenge engineers call the 'alignment problem'.

Professor Hinton's solution is to look at evolution for inspiration, and to what he sees as the only case of a less intelligent being controlling a more intelligent one.

Professor Hinton says that the 'tech bro' idea that humans will always be dominant over AI doesn't work once we make machines smarter than ourselves. Pictured: Boston Dynamics ATLAS robot 

By giving an AI the instincts of a mother, it will want to protect and nurture humanity rather than harm it in any way, even if that comes at a cost to the AI itself.

Professor Hinton says: 'These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.'

Speaking to CNN, Professor Hinton also warned that the current attitude of AI developers was risking the creation of out-of-control AIs.

'People have been focusing on making these things more intelligent, but intelligence is only one part of a being; we need to make them have empathy towards us,' he said.

'This whole idea that people need to be dominant and the AI needs to be submissive, that's the kind of tech bro idea that I don't think will work when they're much smarter than us.'

Key figures in AI, such as OpenAI CEO Sam Altman, who once called for more regulation on the emerging technology, are now fighting against 'overregulation'.

Speaking in the Senate in May this year, Mr Altman argued that regulations like those in place in the EU would be 'disastrous'.

Mr Altman said: 'We need the space to innovate and to move quickly.'

This comes as key figures in AI, such as OpenAI's Sam Altman (pictured), call for less regulation of their products. Professor Hinton says this attitude could lead to humanity's destruction 

Likewise, speaking at a major privacy conference in April, Mr Altman said that it was impossible to establish AI safeguards before 'problems emerge'.

However, Professor Hinton argues that this attitude could easily result in humanity's total annihilation.

He said: 'If we can't figure out a solution to how we can still be around when they're much smarter than us and much more powerful than us, we'll be toast.'

'We need a counter–pressure to the tech bros who are saying there should be no regulations on AI.'

Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence. 

The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.'

At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. 

His main fear is that advanced AI, in the wrong hands, could overtake humans and spell the end of mankind, a scenario known as The Singularity.

That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.

'It would take off on its own and redesign itself at an ever-increasing rate.' 

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.

During a 2016 interview, Musk noted that he and OpenAI's co-founders created the company to 'have democratisation of AI technology to make it widely available.'

Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up.

His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide.

The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. 

ChatGPT is used to write research papers, books, news articles, emails and more.

But while Altman is basking in its glory, Musk is attacking ChatGPT.

He says the AI is 'woke' and deviates from OpenAI's original non-profit mission.

'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean?

In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.

Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. 

There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.

For example, humans could scan their consciousness and store it in a computer in which they will live forever.

The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - but if this is true, it is far off in the distant future.

Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster.

Former Google engineer Ray Kurzweil predicts it will be reached by 2045.

He has made 147 predictions about technology advancements since the early 1990s - and 86 per cent have been correct. 
