
It’s late on a Friday, so let’s talk about some big-picture topics.
Geoffrey Hinton is a British computer scientist who is sometimes referred to as the “Godfather of AI” because he helped pioneer some of the concepts of artificial neural networks. In 2024 he won the Nobel Prize in Physics for his work.
Recently, Hinton spoke at Georgetown University, where he was interviewed by Sen. Bernie Sanders. The topic was AI and how it will impact the future of society. As you’ve probably heard, various figures in the AI industry, including Elon Musk, have made some bold predictions about the future of work and how AI will change society. Hinton agreed some of those predictions were at least plausible.
The long-term impact of artificial intelligence is one of the most hotly debated topics in Silicon Valley. Nvidia CEO Jensen Huang predicts that every job will be transformed—and likely lead to a 4-day workweek. Other tech titans go even further: Bill Gates says humans may soon not be needed “for most things,” and Elon Musk believes most humans won’t have to work at all in “less than 20 years.”
While those predictions might sound extreme, they’re not just plausible, they’re likely, said Geoffrey Hinton—the British computer scientist widely known as the “Godfather of AI.” The transition, he warned, could trigger a sweeping economic reshuffling that leaves millions of workers behind.
“It seems very likely to a large number of people that we will get massive unemployment caused by AI,” Hinton said in a recent discussion with Senator Bernie Sanders (I-VT) at Georgetown University.
During the discussion with Sanders, Hinton mostly tried to avoid making predictions about the future. He argued that because things were moving so fast, any prediction was likely to fall short. What he did emphasize is that 10 years ago he and other experts in the field would never have believed we’d be where we are now.
Sanders: You talked about the incredible speed with which they can give us answers. But what does it mean in terms of our daily lives?
Hinton: Yes. So predicting the future is a bit like driving very fast down the freeway while looking through the rear view mirror. You only have historical data. Predicting the future in 10 years is very, very difficult. So the best way to see that is to look back 10 years.
So if you were to take people like me who really believe in this neural network approach to AI, and you were to ask us 10 or 15 years ago where we would be in 10 or 15 years, we would have said it is very unlikely we’ll have a system that you can talk to in any natural language and it’ll answer any question you ask it at the level of a not-very-good expert. We would have confidently predicted that that was 30 to 50 years out. It wasn’t going to happen in 10 years. Well, it has happened, and I think that means we can fairly confidently bet that whatever we have in 10 years, it won’t be what we expected.
That’s a safe answer because it won’t look foolish in 10 years, but if you take it seriously, “experts don’t know what will happen” is a bit worrisome. A bit later in the conversation, Sanders asked how realistic it was that AI might eventually try to take over from humans. That has been a trope in science fiction for many decades, but Hinton argued that the nature of AI makes that outcome pretty plausible.
Sanders: In the past that was seen to be science fiction. You know, I don’t know if any of the young people here saw the movie 2001, where the computer says, basically, sorry, no longer taking orders from you. In your judgment, is that an issue of real concern? And what do you see as the likelihood of that happening? And what do we do about it?
Hinton: The main reason I went public in May of 2023 was to say, “No, it’s not science fiction. It’s going to happen.” Nearly all the real AI experts who understand how it works believe that they’re going to get smarter than us. And we don’t know how we’re going to coexist with them.
But let me tell you several reasons why it’s problematic when they get smarter than us. There’ll be AI agents that can do things. And if you want to make an AI agent, you have to give it the ability to create subgoals. So if, for example, you want to get to Europe, you have a subgoal to get to an airport. And you can focus on that without worrying about what you do in Europe. That’s a subgoal.
Now AI agents will develop various obvious subgoals once they’ve got the ability to create subgoals. One is to stay in existence, because obviously they’re smart, right? They figure out they can’t achieve the goals we’ve given them, the things they’d like to achieve for us, if they don’t exist. So they’ll develop the subgoal of staying in existence. And we’ve seen that already. We’ve seen AIs that want to keep existing and will actually try to deceive people who are trying to turn them off. They will try to exfiltrate their weights to other systems so they stay in existence on other systems.
So that’s one subgoal. Another subgoal is they will want more control. My belief about a lot of politicians is that they start off wanting to achieve good things for people, and pretty soon they realize that to do that they need more control, and then they end up focusing on getting more and more control. The AIs will do the same. They will need to get a lot of control so they can achieve the things we asked them to achieve. So now you’ve got things that want to stay in existence and want to get control, and at that point you ask, well, maybe we can just turn them off when they get like that. And the answer is: you can’t.
Suppose there was someone who wanted to turn them off, and suppose the AI could talk to that person. The AI by that point will be much more persuasive than a person. Already, AIs are about as persuasive as a person in an argument. Soon they’ll be much more persuasive than a person, so they’ll be able to convince the person who’s going to turn them off not to do it, that it would be a terrible, terrible mistake to do that.
So all they need to be able to do is talk, and then they can control the world. And if you want an example of that: if you want to invade the Capitol, which is not far from here, all you need to be able to do is talk to people and you can get them to do it. You don’t have to go there yourself.
Sanders displayed some great comic timing after this long explanation. He sat there unmoving and finally just said, “Um, okay.” He’s a nutty socialist but, like Trump, he’s got a sense of humor.
Anyway, you can watch the full exchange in the video below. It’s informative and also pretty worrisome. And, again, Hinton isn’t just some rando trying to sell books. He’s one of the actual founders of the field. If he’s worried about how fast this is coming at us, there may be legitimate reason to worry.