Harari and the Danger of Artificial Intelligence

Yuval Noah Harari, a historian, philosopher, and lecturer at the Hebrew University of Jerusalem, has an interesting article on AI in The Economist (“Yuval Noah Harari Argues that AI Has Hacked the Operating System of Human Civilization,” April 28, 2023). He presents a more sophisticated argument on the danger of AI than the usual Luddite scare. A few excerpts:

Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults. …

While to the best of our knowledge all previous [QAnon] drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. …

It is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. …

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? …

If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there. …

Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology.

The last bit is certainly not his most interesting point: it reads to me like the very AI-bot propaganda he fears. Such trust in the state reminds me of what New Dealer Rexford Guy Tugwell wrote in a 1932 American Economic Review article:

New industries will not just happen as the automobile industry did; they will have to be foreseen, to be argued for, to seem probably desirable features of the whole economy before they can be entered upon.

We don’t know how close AI will come to human intelligence. Friedrich Hayek, whom Harari may never have heard of, argued that “mind and culture developed concurrently and not successively” (from the epilogue of his Law, Legislation, and Liberty; his underlines). The process took a few hundred thousand years, and it is unlikely that artificial minds can advance “in Trump time,” as Peter Navarro would say. Enormous resources will be needed to improve AI as we know it. Training of ChatGPT-4 may have cost $100 million, consuming a lot of computing power and a lot of electricity. And the cost increases proportionately faster than the intelligence. (See “Large, Creative AI Models Will Transform Lives and Labour Markets,” The Economist, April 22, 2023.) I think it is doubtful that an artificial mind will ever say like Descartes, “I think, therefore I am” (cogito, ergo sum), except by plagiarizing the French philosopher.

Here is what I would retain of, or deduce from, Harari’s argument. One can view the intellectual history of mankind as a race to discover the secrets of the universe, including, recently, the attempt to create something similar to intelligence, concurrent with an education race so that the mass of individuals do not fall prey to snake-oil peddlers and tyrants. To the extent that AI does come close to human intelligence or discourse, the question is whether humans will by then be intellectually streetwise enough not to be swindled and dominated by robots or by the tyrants who would use them. If the first race is won before the second, the future of mankind would be bleak indeed.

Some 15% of American voters see “solid evidence” that the 2020 election was stolen, although that proportion seems to be decreasing. All over the developed world, even more believe in “social justice,” not to speak of the rest of the world, in the grip of more primitive tribalism. Harari’s idea that humans may fall for AI bots like gobblers fall for hen decoys is intriguing.

The slow but continuous dismissal of classical liberalism over the past century or so, the intellectual darkness that seems to be descending on the 21st century, and the rise of populist leaders, the kings of “democracy,” suggest that the race to create new gods has been gaining more momentum than the race to general education, knowledge, and wisdom. If that is true, a real problem is looming, as Harari fears. However, his apparent solution, to let the state (and its strongmen) control AI, is based on the tragic illusion that the state will protect people against the robots, instead of unleashing the robots against disobedient individuals. The risk is certainly much lower if AI is left free and can be shared among individuals, corporations, (decentralized) governments, and other institutions.

Rayna Prime, Editor