
Geoffrey Hinton, “the Godfather of AI,” on AI and Safety


Geoffrey Hinton, a University Professor Emeritus of computer science at the University of Toronto, has won the 2024 Nobel Prize in Physics. 

While his work set the stage for machine learning, which enables computers to learn in a human-like way, he has recently focused on advocating for safer artificial intelligence (AI). Last year, Hinton, who is often called “the godfather of AI,” made news by leaving his job at Google out of concern that bad actors could misuse the technology to cause harm.

University of Toronto Press Conference - Professor Geoffrey Hinton, Nobel Prize in Physics 2024

At the University of Toronto press conference, Hinton discusses winning the 2024 Nobel Prize in Physics, joined by University of Toronto President Meric Gertler. Widely regarded as the “godfather of AI,” he shares the prize with John J. Hopfield of Princeton University for foundational discoveries and inventions that enable machine learning with artificial neural networks.

Concerns about AI Safety

In a recent interview, Hinton shared his concerns about AI's potential dangers, stressing the urgent need for more research on AI safety to avoid disastrous outcomes. As he put it:

"My worry is that it may also lead to bad things, and in particular when we get things more intelligent than ourselves, no one really knows whether we're going to be able to control them. How do we avoid catastrophic scenarios? We don't know how to avoid them. That's why we urgently need more research."

Hinton urged governments to push large companies to allocate more resources toward AI safety research. He said that a bigger share of AI development should be focused on safety, not just on making models more advanced. He explained:

"I think governments can encourage the big companies to spend more of their resources on safety research. So at present, almost all of the resources go into making the models better, better, so they can have shiny new models and there's a big competition going on and the models are getting much better and that's good. But we need to accompany that with a comparable effort on AI safety. The effort needs to be more than like 1%, it needs to be like maybe a third of the effort goes into safety because if this stuff becomes unsafe, that's extremely bad."

A scientist thinking about the future of AI

AI Becoming Smarter than Humans

Hinton believes AI will eventually surpass human intelligence, a view shared by many top researchers, though their timelines differ. Some expect this to happen within the next 20 years, while others think it might take longer. He noted:

"Most of the top researchers I know believe that AI will become more intelligent than people. They vary on the timescales. A lot of them believe that that will happen sometime in the next 20 years. Some of them believe it will happen sooner, some of them believe it will take much longer. But quite a few good researchers believe that sometime in the next 20 years, AI will become more intelligent than us, and we need to think hard about what happens then."

Hinton said it’s tough to predict how superintelligent AI would act. He compared it to a chess AI that can beat humans without revealing its strategies, showing the unpredictability of smarter-than-human AI. This unpredictability, he argued, makes it crucial to align AI goals with human values:

"If you think of a superintelligent chess AI, for example, you know it's going to beat you. You have no idea how. You can't possibly know what moves it makes or maybe even explain why they're good moves. But you know it'll beat you. And that's I think the point that a lot of people that are cautioning about AI are trying to make. Like a superintelligence will be able to figure out how to deal with all of us if that becomes its alignment, its intention. We might not even know of the myriad of ways in which it might do so. And that's why it's so important to get this idea, this alignment correctly."

"My guess is it probably happen sometime between 5 and 20 years from now. It might be longer. There's a very small chance it'll be sooner. And we don't know what's going to happen then. So if you look around, there are very few examples of more intelligent  things being controlled by less intelligent things which makes you wonder whether when AI  gets smarter than us, it's going to take over control."

Future of AI: Risks & Benefits

Hinton pointed out immediate risks like fake videos swaying elections and more sophisticated phishing attacks. He said large language models have made it much easier to create convincing phishing attempts:

"There are many different risks from AI and they all have different solutions. So immediate risks are things like fake videos corrupting elections. We've already seen politicians either accuse other people of using fake videos or use fake videos themselves and fake images. So that's one immediate danger. There's also very immediate dangers from things like cyber attacks. So last year, for example, there was a 1200% increase in the number of phishing attacks and that's because these large language models make it very easy to do phishing attacks. And you can no longer recognize them by the fact the spelling is wrong and the syntax is slightly odd. Their English is perfect."

While Hinton stressed immediate risks, he also warned of longer-term dangers from superintelligent AI. He emphasized that continued research is crucial to keep AI development on track and beneficial.

Potential Benefits

Despite the risks, Hinton is optimistic about AI’s potential benefits, especially in healthcare. He highlighted the importance of balancing these benefits with safety:

"I'm hoping AI will lead to tremendous benefits, tremendous increases in productivity, and to a better life for everybody. I'm convinced that it will do that in healthcare."

"AI is going to be much better at diagnosis. So already if you take difficult cases to diagnose a doctor gets 40% correct an AI system gets 50% correct and the combination of the doctor with the AI system gets 60% correct which is a big Improvement. In North America several hundred thousand people a year die of bad diagnosis. With AI diagnosis is going to get much better. But the thing that's going to really happen is you'll be able to have a family doctor who's an AI who has seen 100 million patients and knows huge amounts and will be much much better at dealing what whatever ailment it is you have because your AI family doctor will have seen many many similar cases."

About OpenAI and Sam Altman

Hinton criticized OpenAI CEO Sam Altman for shifting the company's focus from safety to profits, despite OpenAI's initial emphasis on safety. He was also particularly pleased that one of his former students was involved in Altman's brief ouster. He stated:

"OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe. Over time, it turned out that Sam Altman was much less concerned with safety than with profits. I think that's unfortunate."

Conclusion

Geoffrey Hinton’s thoughts highlight AI’s dual nature—its promise and its risks. He calls for a balanced approach that prioritizes safety research and regulations while pushing AI technology forward. His perspectives underscore the need for a collaborative effort among governments, researchers, and companies to navigate the complex future of AI.

