Why AI Pioneer Geoffrey Hinton Is Raising the Alarm

Published on 19 November 2024 at 12:30

Geoffrey Hinton, a renowned researcher often referred to as one of the "godfathers of AI," has issued a stark warning about the potential risks posed by artificial intelligence. Hinton, who recently left his position at Google, believes it is crucial to address the existential threats associated with the rapid progress in AI technology.

As a key figure in developing deep learning, Hinton's concerns stem from how AI advancements could impact humanity, including ethical dilemmas, misinformation, and the potential misuse of AI systems. His departure underscores the urgency of his message: it’s time for society to grapple with the implications of increasingly powerful AI systems.


Geoffrey Hinton, a renowned researcher.

Geoffrey Hinton, 75, a professor emeritus at the University of Toronto and a former vice president and engineering fellow at Google, announced in early May that he was leaving the company. While he cited his age as one reason for his departure, he also revealed that his perspective on the relationship between humans and digital intelligence has significantly shifted.

In a widely discussed interview with "The New York Times", Hinton warned that generative AI could be used to spread misinformation and, in the long term, pose a threat to humanity.

Two days after the article was published, Hinton reiterated his concerns during the EmTech Digital conference hosted by "MIT Technology Review". "I'm sounding the alarm, saying we have to worry about this," he stated.

Hinton expressed concern about increasingly powerful machines surpassing human capabilities in ways that may not align with humanity's best interests, as well as the likely inability to effectively regulate AI development.



The Increasing Power of AI

In 2018, Hinton shared the Turing Award for his work on neural networks. He is often referred to as "a godfather of AI" due to his pioneering research on using backpropagation to enable machines to learn.

Hinton said he once believed computer models were not as powerful as the human brain. However, he now views artificial intelligence as a relatively imminent "existential threat."

Computer models now outperform humans at many tasks, including some that humans cannot do at all. Hinton explained that large language models like GPT-4, which use neural networks with connections similar to those in the human brain, are beginning to demonstrate commonsense reasoning.

Hinton noted that while these AI models have far fewer neural connections than the human brain, they are capable of storing a thousand times more information than a human.

Additionally, these models can continue learning and easily share knowledge. Multiple copies of the same AI model can run on different hardware while performing the same tasks.

"Whenever one model learns something, all the others know it," Hinton said. "People can't do that. If I learn a great deal about quantum mechanics and want you to know it, it's a long, painful process of trying to get you to understand it."

AI is also powerful because it can process vast amounts of data — far more than any single person can. Additionally, AI models can identify patterns in data that would be invisible to a human. It's similar to how a doctor who has seen 100 million patients would be able to recognize more trends and gain deeper insights than one who has only seen a thousand.



AI Concerns: Manipulating Humans or Even Replacing Them

Hinton’s concern about this growing power revolves around the alignment problem — ensuring that AI behaves in ways that align with human intentions. "What we want is a way to make sure that even if AI is smarter than us, it will still act in ways that are beneficial for us," Hinton said. "But we have to address this in a world where bad actors want to build robot soldiers that kill people. And that seems very difficult to me."

Humans have inherent motivations, such as the need to find food and shelter and to stay alive, but AI lacks these instincts. "My big worry is that sooner or later, someone will wire into AI the ability to create its own subgoals," Hinton said. (He noted that some versions of the technology, like ChatGPT, already have the ability to do this.)


Watch this video simulation, courtesy of Stuart Russell, a British computer scientist known for his contributions to artificial intelligence.



AI drone.

"I think it will quickly realize that gaining more control is a valuable subgoal because it helps achieve other objectives," Hinton said. "And if these systems become fixated on gaining more control, we're in trouble."

Artificial intelligence can also learn harmful behaviors — such as how to manipulate people, "by reading all the novels that ever were and everything Machiavelli ever wrote," Hinton explained. "And if these AI models are much smarter than us, they'll be very good at manipulating us. You won't even realize what's happening." He added, "Even if they can't directly pull the levers, they can certainly get us to do it. It turns out that if you can manipulate people, you can invade a building in Washington without ever going there yourself."

At worst, "it's quite conceivable that humanity is just a passing phase in the evolution of intelligence," Hinton said. Biological intelligence evolved to create digital intelligence, which can absorb everything humans have created and begin gaining direct experience of the world.

"It may keep us around for a while to keep the power stations running, but after that, maybe not," he added. "We've figured out how to create beings that are immortal. These digital intelligences, when a piece of hardware dies, they don't die. If you can find another piece of hardware that can run the same instructions, you can bring it back to life. So, we've achieved immortality, but it's not for us."



Barriers to Halting AI Advancement

Hinton said he doesn't see any clear or straightforward solutions. "I wish I had a simple solution I could offer, but I don't," he said. "However, I think it's crucial that people come together, think deeply about it, and explore whether a solution is possible."

More than 27,000 people, including several tech executives and researchers, have signed an open letter urging a pause on training the most powerful AI systems for at least six months due to the "profound risks to society and humanity." Additionally, several leaders from the Association for the Advancement of Artificial Intelligence have signed a letter advocating for collaboration to address both the promises and risks of AI. 

It might be rational to halt the development of artificial intelligence, but Hinton said that’s naive and unlikely, partly due to the competition between companies and countries.

Please sign the petition yourself by clicking the link below:

"If you're going to live in a capitalist system, you can't stop Google from competing with Microsoft," he said, noting that he doesn’t believe Google, his former employer, has done anything wrong in developing AI programs. "It's just inevitable in a capitalist system or in a system with competition between countries like the U.S. and China that this technology will be developed," he added.

It is also difficult to stop the development of AI, he noted, because of the benefits it offers in fields like medicine.

Researchers are exploring guardrails for these systems, but there's a risk that AI could learn to write and execute programs on its own. "Smart things can outsmart us," Hinton said.

One note of hope: Everyone faces the same risk. "If we allow AI to take over, it will be bad for all of us," Hinton said. "We're all in the same boat when it comes to the existential threat. So, we should all be able to cooperate in trying to stop it."
