'India can't afford to leave AI to Chinese or Americans': Yuval Noah Harari, historian and philosopher

Yuval Noah Harari spoke to THE WEEK about AI's implications for India and the choices everyone must make in response to its rise

Yuval Noah Harari | Getty Images

Interview/ Yuval Noah Harari, historian and philosopher

Before 2014, when Sapiens: A Brief History of Humankind catapulted him to global fame, Yuval Noah Harari was primarily known as a scholar of medieval military history. His works ranged from an analysis of the shadowy world of 12th-century assassinations, abductions, treason and sabotage (Special Operations in the Age of Chivalry) to a cultural study of soldiers’ experiences (Renaissance Military Memoirs: War, History and Identity). He seemed destined to be a respected―even if relatively obscure―military historian, admired in academic circles.

Then Sapiens happened. Originally a compilation of his world history lectures at the Hebrew University of Jerusalem, the book elevated him to the status of a transdisciplinary philosopher-historian. Over the past decade, Harari has published thought-provoking works exploring humanity’s past and future.

His most recent book, Nexus: A Brief History of Information Networks from the Stone Age to AI, marks a partial return to his military history roots. He says the rise of artificial intelligence―currently dominated by the US and China―could result in countries creating their own digital spheres, “each influenced by different political, cultural and religious traditions”. “Instead of being divided between just two global empires, the world might be divided among a dozen empires,” he writes. “The more the new empires compete against one another, the greater the danger of armed conflict.”

Harari has embarked on a worldwide tour warning that AI can make the world “more Kafkaesque than Terminator”. In Mumbai, he spoke to THE WEEK about AI’s implications for India and the choices everyone must make in response to its rise. Edited excerpts:

Q/ Mumbai is one of the wealthiest cities in the world, with the highest concentration of high-net-worth individuals―those with assets over $1 million. Still, more than 40 per cent of the city’s population lives in slums. Do you think AI will worsen this inequality?

A/ It depends on how we use it. The future is not predetermined―it is not like there is just one future for AI. A knife can be used to murder somebody; it can be used in surgery; it can cut salad for dinner. The knife doesn’t tell me what to do with it. AI is similar. It can either increase or reduce inequality and poverty.

The big danger is that AI can deepen inequality both within and between countries. The 19th-century industrial revolution―the invention of the steam engine and electricity―initially widened global inequality. Britain, France, the US, and later Japan, led the revolution, using their superior technology to conquer and exploit the world. The same thing can happen with AI. You have a few countries, very powerful and wealthy, that are leading the AI revolution. They will be able to dominate and exploit everybody else.

Because, with AI, all armies become obsolete. Once you have AI weapons―fully autonomous drones, ships, tanks and airplanes―existing armies cannot compete. Similarly, once AI replaces drivers, textile workers, bankers and even writers, old economies will collapse or become obsolete. Millions work in the textile industry in India, Pakistan and Bangladesh. If AI makes textile production cheaper in the US or Europe, those jobs disappear.

So, this is the danger. We need to make sure that the benefit of AI is shared more equally. It will never be completely equal, of course.

Future beckons: On March 18, chip manufacturer NVIDIA launched Isaac GR00T N1, an open-source AI foundation model expected to accelerate the development of humanoid robots | NVIDIA

Q/ How can we ensure that AI’s benefits are distributed as equally as possible?

A/ The benefits must be distributed more fairly―if not equally―among countries, professions and social classes. Old jobs will disappear, and new jobs will emerge. The question is, do you have the money to retrain people for these new jobs?

Rich countries will have the money; poor countries will struggle. Predicting which jobs will disappear is not easy. High-paying jobs may be the easiest to automate. Investment banking, for instance, is such a lucrative job. But it is only an information job. You go over a lot of data, recognise patterns, and say, ‘Oh, this is going up, and this is going down. I better invest in this company.’ AI can automate this easily. But, think about a nurse who has to give an injection to a child, or has to bandage an injured. She needs intellectual abilities to analyse information, of course, but she also needs other skills―social and emotional―to interact with the child. She needs motor skills to very delicately replace a bandage in a less painful manner. This is much harder to automate.

Q/ So the low-paying jobs will not necessarily disappear first?

A/ Some of them, yes. The kinds of robotic jobs that humans do are more easily replaceable.

Q/ What specific measures can governments take to prevent AI from worsening inequality?

A/ There are two key measures. One, education: giving people a broad education, and not a narrow one, is crucial. Training people in a single skill, or a very narrow set of skills, is risky. If you trained someone just to write code, they would have no job at all in 10 or 20 years, when AI writes all the code.

So it is important to have a broad programme that helps people develop their intellect, along with their social, emotional and motor skills. With a broad set of skills, it would be easier to deal with the changing job market.

The most important skill, of course, is the ability to keep learning all your life. And that is something very difficult to teach. People need flexible minds to be able to keep learning and changing, because the job market will keep changing. They will have to relearn and change, again and again.

Two, cooperation: the only way for most countries to remain competitive in the AI race is to cooperate with other countries. You have two AI superpowers―the US and China―that are far ahead; they can pour a lot of resources into AI research. Even a huge country like India does not have the resources to compete with them by itself. Smaller countries like Sri Lanka or Bangladesh have no chance.

So, how do you prevent a repetition of the 19th-century industrial revolution, in which a few powers conquered everybody else? The solution is cooperation―pooling resources to invest in research, and reaching agreements on the regulation and use of AI. India should cooperate with the European Union, Brazil, Australia and its South Asian neighbours.

Q/ A recent study by Mumbai’s Observer Research Foundation suggests that India and Israel―both democracies with thriving startup ecosystems―should collaborate to develop a new AI regulatory model tailored to the needs of developing countries. This proposed ‘Mumbai-Tel-Aviv model’ aims to strike a balance between the two dominant AI regulatory approaches: the US’s minimal intervention and the EU’s stringent regulations. Could such a model work?

A/ On a deeper level, studies like this reflect the need to broaden the AI conversation. At present, the leading AI models, and the most important decisions regarding AI, are made in the US and China. We need more people in Israel, India and other countries to understand what is happening, join the conversation, and influence the decisions. That is one of the reasons I wrote Nexus―to tell more people: ‘Look, this is happening, and it will have an impact on everybody.’

Ignoring AI is like ignoring the invention of trains in the 19th century. People said, ‘We don’t care about trains. We have other issues here in India.’ And then the British conquered India, because they had this technology that you didn’t.

So, if you think that India has much more pressing problems, and leave AI to the Chinese and the Americans, you will be dominated by them in 10 years.

Q/ Can developing new technologies be a solution?

A/ You can never trust technology by itself. The solution must be institutional, not technological. You need institutions that vet the fairness of the technology. It is a fantasy to think that we do not need human institutions because they are bureaucratic and complicated, and that we can just develop a miracle technology that will solve all problems. It never works, because technology is never free of bias. There is no such thing as a completely unbiased technology.

Just as courts ensure that human actions are not biased, we also need institutions that can test AI and make sure it is not biased―against women, or against particular castes and religions. Technology is important, but it is never a solution by itself. I go back to the knife example: you cannot have a knife that can only cut salad and never harm people.

Q/ Is AI currently cutting salad for us or causing harm? How do you see its role evolving in the near future?

A/ You see excellent use of AI in many areas. In medicine, for instance, it helps address the shortage of doctors all over the world. AI can diagnose diseases more quickly and accurately than humans.

We see AI being used in transportation. Every year, more than a million people are killed in car accidents, most of them caused by human error, like drunk driving. AI-powered self-driving vehicles can drastically reduce this. Accidents still happen, but far less frequently; autonomous vehicles are much safer than those driven by humans.

You see AI contributing to humanity in fields like these, but you also see very dangerous developments. Social media, run by AI, is spreading not just lies and fake news, but also fear, hatred and anger. The content is causing social disturbances that are undermining democracies. The problem is that the algorithms tend to prefer bad content, because their goal is simply to increase user engagement―to make people spend more time on the platform.

Then we have military AI. In wars like those in the Middle East, AI has been deciding bombing targets. When the Israelis bomb Gaza, it is increasingly AI, and not humans, that tells them, ‘Oh, you should bomb this building, and target that person.’

It is a huge gamble to give power to AI to make military decisions. The danger is an arms race―where humans are forced to give more and more military control to AI to win battles.

The same could happen in finance, where more and more authority is given to AI to decide where to invest and who gets a loan. With the Donald Trump administration in the US opposing any AI regulation, finance may see unregulated use of AI. What if AI invents a new financial instrument that humans cannot comprehend, but that looks very good and remains unregulated for a couple of years? It may result in a huge financial crash similar to the one in 2008, where nobody understands what is happening because the instruments that AI invented are too complicated for the human brain. What would governments do if they lost control of the financial system?

Q/ Can regulation be done by humans alone, if we have reached this point?

A/ Humans with the help of AI. Again, the situation is becoming so complicated that you need one AI to understand another AI, but you still need humans there to give guidance. But whether AI, in its own interest, would be ready to give humans a framework to govern it―that is a big question. As it becomes more intelligent, AI is able to deceive us, lie to us and manipulate us.

Q/ Like the captcha example. (In 2023, an AI chatbot, pretending to be a visually impaired person, persuaded a person to provide the code needed to bypass a captcha test―a system designed to differentiate between humans and robots.)

A/ The captcha example, exactly. Something less intelligent controlling something more intelligent almost never happens in the world. Monkeys don’t control humans; we control them. Ants don’t control us; we control ants. So, if AI becomes more intelligent than us, it is very unlikely that we can still control it. Is it already out of our control? Not yet. We are still in control, but AI is developing very, very fast. If we are not careful, we may lose control in five to ten years.