
Erik Hoel on the Threat to Humanity from AI


1:01

Russ Roberts: I want to congratulate you. You are the first person who has actually caused me to be alarmed about the implications of AI–artificial intelligence–and the potential threat to humanity. Back in 2014, I interviewed Nicholas Bostrom about his book Superintelligence, where he argued AI could get so smart it could trick us into doing its bidding because it would understand us so well. I wrote a lengthy follow-up to that episode and we’ll link to both the episode and the follow-up. So, I’ve been a skeptic. I’ve interviewed Gary Marcus who is a skeptic. I recently interviewed Kevin Kelly, who is not scared at all. But you–you–are scared.

Last month you wrote a piece called “I Am Bing, and I Am Evil” on your Substack, The Intrinsic Perspective, and you actually scared me. I don’t mean, ‘Hmmm. Maybe I’ve underestimated the threat of AI.’ It was more like I had a ‘bad feeling in the pit of my stomach’-kind of scared. So, what is the central argument here? Why should we take this latest foray into AI, ChatGPT, which writes a pretty okay–a pretty impressive but not very exciting essay, can write some poetry, can write some song lyrics–why is it a threat to humanity?

Erik Hoel: Well, I think to take that on very broadly, we have to realize where we are in the history of our entire civilization, which is that we are at the point where we are finally making things that are arguably as intelligent as a human being.

Now, are they as intelligent right now? No, they’re not. I don’t think that these very advanced large language models that these companies are putting out could be said to be as intelligent as an expert human on whatever subject they’re discussing. And, the tests that we use to measure the progress of these systems support that, where they do quite well–quite surprisingly well–on all sorts of questions, like SAT [Scholastic Aptitude Test] questions and so on. But, one could easily see that changing.

And, the big issue is around this concept of general intelligence. Of course, a chess-playing AI poses no threat because it’s solely trained on playing chess. This is the notion of a narrow AI.

Self-driving cars could never really pose a threat. All they do is drive cars.

But, when you have a general intelligence, that means it’s similar to a human in that we’re good at all sorts of things. We can reason and understand the world at a general level. And, I think it’s very arguable that right now, in terms of the generality behind general intelligence, these things are actually more general than the vast majority of people. That’s precisely why these companies are using them for search.

So, we already have the general part quite well down.

The issue is intelligence. These things hallucinate. They are not very reliable. They make up sources. They do all these things. And, I’m fully open about all their problems.

Russ Roberts: Yeah. They’re kind of like us, but okay. Yeah.

Erik Hoel: Yeah, yeah, precisely. But, one could easily imagine, given the rapid progress that we’ve made just in the past couple years, that by 2025, 2030, you could have things that are both more general than a human being and as intelligent as any living person–perhaps far more intelligent.

And, that enters this very scary territory, because we’ve never existed on the planet with anything else like that. Or, we did once, a very long time ago–about 300,000 years ago. There were something like nine different species–cousins we were related to–who were likely either as intelligent as us or quite close in intelligence. And they’re all gone. And, it’s probable that we exterminated them. And, ever since then we have been the dominant masters, and there have been no other such things.

And so, finally, for the first time, we’re at this point where we’re creating these entities and we don’t know quite how smart they can get. We simply have no notion. Human beings are very similar. We’re all based on the same genetics. We might all be points stacked on top of one another in terms of intelligence, and all the differences between people might really just be these zoomed-in minor differences. And, really, you can have things that are vastly more intelligent.

And if so, then we’re at risk of either relegating ourselves to being inconsequential, because now we’re living near things that are much more intelligent. Or alternatively, in the worst case scenarios, we simply don’t fit into their picture of whatever they want to do.

And, fundamentally, intelligence is the most dangerous thing in the universe. Atom bombs–which are so powerful, so destructive, and so evil in warfare that we’ve all agreed not to use them–are just this inconsequential downstream effect of being intelligent enough to build them.

So, when you start talking about building things that are as or more intelligent than humans based on very different rules–things that are right now not reliable: they’re unlike a human mind, we can’t fundamentally understand them due to rules around complexity–and also, so far, they’ve demonstrated empirically that they can be misaligned and uncontrollable.

So, unlike some people like Bostrom and so on, I think sometimes they will offer too specific of an argument for why you should be concerned. So, they’ll say, ‘Oh, well, imagine that there’s some AI that’s super-intelligent and you assign it to run a paperclip factory; and it wants to optimize the paperclip factory and the first thing it does is turn everyone into paperclips,’ or something like that. And, the first thing people do when they hear these very sci-fi arguments is to start quibbling over the particulars of, like, ‘Well, could that really happen?’ and so on.

But, I think the concern here is a broad one–that this is something we have to deal with, and it’s going to be much like climate change or nuclear weapons. It’s going to be with us for a very long time. We don’t know if it’s going to be a problem in five years. We don’t know if it’ll be a problem in 50 years. But it’s going to be a problem at some point that we have to deal with.

7:17

Russ Roberts: So, if you’re listening to this at home and you’re thinking, ‘It seems like a lot of doom and gloom; really, it’s too pessimistic’–I used to say things like, ‘We’ll just unplug it if it gets out of control’–I just want to let listeners know that this is a much better horror story than what Erik’s been able to trace out in the first two or three minutes.

Although I do want to say that, in terms of rhetoric–although I think there are a lot of really interesting arguments in the two essays that you wrote–when you talked about these other nine species of humanoids sitting around a campfire and inviting Homo sapiens–that’s us–into the circle, saying, ‘Hey, this guy could be useful to us. Let’s bring him in. He could make us more productive. He’s got better tools than we do,’ that made the hair on the back of my neck stand up and it opened me to the potential that the other, more analytical arguments might carry some water. Excuse me, carry some weight.

So, one point you make, which is I think very relevant, is that all of this right now is mostly in the hands of profit-maximizing corporations who don’t seem to be so worried about anything except novelty and cool and making money off it. Which is what they do. But, it is a little weird that we would just say, ‘Well, they won’t be evil, will they? They don’t want to end humanity.’ And you point out that that’s really not something we want to rely on.

Erik Hoel: Yeah. Absolutely. And, I think that this gets to the question of how should we treat this problem?

And, I think the best analogy is to treat it something like climate change. And now, there is a huge range of opinion when it comes to climate change and all sorts of debate around it. But, I think that if you take the extreme end of the spectrum and say, ‘There’s absolutely no danger and there should be zero regulation around these subjects,’ I actually think most people will disagree. They’ll say, ‘No, listen: this is something–we do need to keep our energy usage as a civilization under control to a certain degree so we don’t pollute streams that are near us,’ and so on. And, even if you don’t believe any specific model of exactly where the temperature is going to go–so maybe you think, ‘Well, listen: there’s only going to be a couple degrees of change. We’ll probably be fine.’ Okay? Or you might say, ‘Well, there’s definitely this doomsday scenario of a 10-degree change and it’s so destabilizing,’ and so on. Okay?

But regardless, there are sort of reasonable proposals that one can make, where we have to discuss it as a polity, as a group. You have to have an overarching discussion about this issue and make decisions regarding it.

Right now with AI, there’s no input from the public; there’s no input from legislation; there’s no input from anything. Like, massive companies are pouring billions of dollars into creating intelligences that are fundamentally unlike us, and they’re going to use them for profit.

That’s a description of exactly what’s going on. Right now there’s no red tape. There’s no regulation. It just does not exist for this field.

And, I think it’s very reasonable to say that there should be some input from the rest of humanity when you go to build things that are as intelligent as a human. I do not think that that’s unreasonable. I think it’s something most people agree with–even if there are positive futures where we do build these things and everything works out and so on.

Russ Roberts: Yeah. I want to–we’ll come at the end toward what kind of regulatory response we might suggest. And, I would point out that climate change, I think, is a very interesting analogy. Many people think it’ll be small enough that we can adapt. Other people think it is an existential threat to the future of life on earth, and that justifies everything. And, you have to be careful because there are people who want to get ahold of those levers. So, I want to put that to the side though, because I think you have more–we’re done with that. Great–interesting–observation, but there’s so much more to say.

11:35

Russ Roberts: Now, you got started–and this is utterly fascinating to me–you got started in your anxiety about this, and it’s why your piece is called “I Am Bing, and I Am Evil,” because Microsoft put out a chatbot–which I think internally goes by the name of Sydney–that is ChatGPT-4, meaning the next generation past what people have been using in the OpenAI version.

And it was–let’s start by saying it was erratic. You called it, earlier, ‘hallucinatory.’ That’s not what I found troubling about it. I don’t think it’s exactly what you found troubling about it. Talk about the nature of what’s erratic about it. What happened to the New York Times reporter who was dealing with it?

Erik Hoel: Yes, I think a significant issue is that the vast majority of minds that you could make are completely insane. Right? Evolution had to work really hard to find sane minds. Most minds are insane. Sydney is obviously quite crazy. In fact, that statement, ‘I Am Bing, and I Am Evil,’ is not something I made up: it’s something she said–something this chatbot said, right?

Russ Roberts: I thought it was a joke. I really did.

Erik Hoel: Yeah. Yeah, no. It’s something that this chatbot said.

Now, of course, these are large language models. So, the way that they operate is that they receive an initial prompt and then they sort of do the best that they can to auto-complete that prompt.

Russ Roberts: Explain that, Erik, for people who haven’t–I mentioned in the Kevin Kelly episode that there’s a very nice essay by Stephen Wolfram on how this might work in practice. But, give us a little of the details.

Erik Hoel: Yeah. So, in general, the thing to keep in mind is that these are trained to auto-complete text. So, they’re basically big artificial neural networks that guess at what the next part of text might be.

And, sometimes people will sort of dismiss their capabilities because they think, ‘Well, this is just like the auto-complete on your phone,’ or something. ‘We really don’t need to worry about it.’

But you don’t–it’s not that you need to worry about the text completion. You need to worry about the huge, trillion-parameter brain, which is this artificial neural network that has been trained to do the auto-completion. Because, fundamentally, we don’t know how they work. Neural networks are mathematically black boxes. We have no fundamental insights as to what they can do, what they’re capable of, and so on. We just know that this thing is very good at auto-completing because we trained it to do so. [More to come, 14:22]
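As a rough illustration of the “guess the next part of text” loop Erik describes, here is a minimal sketch in Python using the small, openly available GPT-2 model from the Hugging Face transformers library. The model choice, the prompt, and the greedy decoding strategy are illustrative assumptions–the Bing/Sydney system is far larger and not publicly available–but the basic loop is the same: the network outputs a probability distribution over possible next tokens, one token is chosen and appended, and the process repeats.

```python
# Minimal auto-completion sketch: the small open GPT-2 checkpoint stands in for the
# much larger models discussed in the episode (illustrative assumption).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The most dangerous thing in the universe is"  # example prompt, chosen arbitrarily
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(20):  # extend the prompt by 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits      # (1, seq_len, vocab_size) scores over next tokens
    next_id = logits[0, -1].argmax()          # greedy decoding: take the highest-scoring token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chatbots typically sample from the distribution rather than always taking the top token, and add layers of fine-tuning and safety training on top of this plain auto-completion objective; the black-box point stands either way–the loop itself is simple, but what the network has learned inside is not.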
