Future shock

The cordial relationship between man and technology is nearing an inflexion point


At no point in history has human activity held power enough to shape the future of our planet as it does now. We mitigate natural calamities, we have the means to tame most epidemics, and some of us even have contingency plans to move to a space colony in case of an apocalypse. The advancement of science and technology has, no doubt, made all this possible.

If and when artificial intelligence surpasses human intelligence, the existence of humankind will depend on the decisions of some machines
Because a human-computer interface is as vulnerable as any available technology, there is always the threat of someone hacking into your brain
In a world where anybody can “print” dangerous weapons, arms races will be a lot faster and conflicts will multiply

The relationship between mankind and technology has mostly been a cordial one. We invested heavily—money, effort and time—in developing technology, and it reciprocated in kind, giving us the power to conquer the mightiest of elements on earth. This relationship, however, now seems to be moving fast towards an inflexion point. Ever since man employed technology to alter the course of nature, he knew how dangerous it was and how powerless he was in front of it; he knew that it could go wrong at any moment and wipe out humanity. But he also knew that technology alone could not do anything—it needed a “bad” human brain to do bad. That was, however, only until technology started developing brains of its own.

ARTIFICIAL INTELLIGENCE

It has not been long since man started seeing technology as a potential usurper—something capable of replacing him as the ruler of the world. Tesla cofounder Elon Musk called it humanity’s “biggest existential threat”. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” he said in an interview a year ago.

That is scary, considering the power of intelligence and the pace at which AI is striding towards the future. What distinguishes humans from apes is a tiny increment in problem-solving ability and coordination. The consequence of this tiny increment is that the existence of apes—and that of any living organism on the earth, for that matter—depends on human decisions. So, if and when artificial intelligence surpasses human intelligence, the existence of humankind will depend on the decisions of some machines.

Many people have been talking about an “intelligence explosion”, which would be triggered when software becomes good enough to make better software without human intervention. If that happens (rather, when that happens), smart machines will vault over the rest of the world.

While AI is probably the only technology that has the potential to replace humans as the rulers of the world, it is not the only one that poses a grave threat to the existence of humankind. These threats are real and the danger is imminent, but we are divided on how to deal with them. One reason is that we have benefitted a lot from these technologies, and our systems of regulation have so far been good enough to keep them away from “bad brains” to an extent. That, however, is no assurance of a safe future.

BIOTECHNOLOGY

Pandemics have killed more people than wars, calamities and accidents put together. However, man has always found a way around them, with the generous help of his natural defences. There are always some people who are resistant to a pathogen, and the offspring of survivors are usually more resistant to it. Moreover, evolution does not favour parasites that kill off their hosts, which is one reason many virulent epidemics of the past are manageable diseases now.

Unfortunately, man can also make a disease much worse. For instance, bioengineers have ways to artificially boost the contagiousness of diseases. “When I look into the near future, the thing that worries me the most is the threat of a bioengineered pandemic created out of the lab,” said Bryan Walsh, journalist and author of the book End Times: A Brief Guide to the End of the World.

Biotechnology is getting better by the day. And it is getting cheaper as well, increasing the risk of potentially dangerous tech falling into the hands of the wrong people, like a terrorist group or a cult with supposedly higher purposes, such as the elimination of humankind. “These technologies can advance even faster than the practitioners realise, and there is no control system,” said Walsh. That lack of control is a concern because the biggest users of bioweapons have historically been governments. Some four lakh people, for instance, might have died in the Japanese biowar programme during World War II.

GENE EDITING

One of the scarier things in modern science, gene editing is probably also the most divisive. Experts are not yet done weighing the benefits, dangers and ethical issues of the technology. The confusion was evident last year, when Chinese researcher He Jiankui announced the birth of the world’s first germline genetically edited babies (a pair of twins); the announcement triggered sharp reactions worldwide.

Germline cells are those that make sperm and eggs, or the cells in an early embryo. A change in any of them will go on to affect every cell in the body and will be passed on to offspring. Changes in somatic cells—cells in organs or tissues that perform a specific function—are, on the other hand, much less far-reaching. A mutation in a heart cell, for instance, will lead to many mutant heart cells, but it will have no impact on the kidney or the brain. In gene editing, if you mess with a germline cell, it will affect all the future descendants of that person. But if you work on a somatic cell, the change is restricted to a particular organ of an individual.

There is an argument that gene editing will make gene therapy a lot more effective. Aimed at repairing faulty genes in individual organs, gene therapy so far has given mixed results but continues to be one of the biggest hopes of medical science. The National Institutes of Health, the primary agency responsible for biomedical research in the US, has an ethical research programme to develop tools for curative gene editing.

Editing germline cells and creating designer babies, however, is a different story. We know little about how safe the procedure is or what its consequences will be. And even if it proves safe, it is fraught with ethical problems. It could very well usher in an era of eugenics.

MIND READING AND BRAIN HACKING DEVICES

Man has tried umpteen techniques to read the mind—from hypnosis and clairvoyance to the polygraph and neuroimaging. He seems to have hit the jackpot with brain-machine interfaces that can translate thoughts into words. Social media giant Facebook has been funding research on this at the University of California, San Francisco. The tech can decode brain signals and convey thoughts without the involvement of a single muscle.

Currently, for this to work, electrodes need to be surgically implanted on the surface of the subject’s brain. An algorithm reads the brain activity and decodes it. Scientists are working on a less invasive alternative that does not require surgery, such as a headset that uses near-infrared light to detect blood-flow changes in the brain.

Elon Musk’s Neuralink is also developing brain implants that let users control devices with their thoughts. Companies like Kernel and Paradromics, and the US armed forces, are also in the race.

The ethical implications of this technology are vast, and so are the concerns over privacy. It will breach the final frontier of privacy: our minds. And, because a human-computer interface is as vulnerable as any available technology, there is always the threat of someone—probably the machine itself—hacking into your brain. What could be worse than that?

NANOTECHNOLOGY

Nanotechnology lets you control matter with atomic precision. That is a big opportunity, as it is ideal for fast, cheap manufacturing, and it can be very helpful in tackling pollution, climate change and depletion of natural resources.

While the tech in itself is not dangerous, it can lead to dangerous things. “It would allow anyone to manufacture a wide range of things,” said a report by the Global Challenges Foundation, which was set up in 2011 with the aim of funding research into risks that could threaten humanity. “This could lead to the easy construction of large arsenals of conventional or novel weapons made possible by atomically precise manufacturing.”

The problem is, it is almost impossible to detect such manufacturing, let alone regulate it. The unavailability of technology is the only thing that stops many countries and groups from producing weapons. In a world where anybody can “print” dangerous weapons, arms races will be a lot faster and conflicts will multiply.

The atomic precision that nanotechnology offers can make things really small. Weapons can be small, robots can be small—anything that can be manufactured can be made small. Think of smart poisons that can enter your body without your knowledge, or gnatbots that keep you under constant surveillance.

While it is too early to call nanotechnology an existential risk, its ability to give us whatever we wish for, weapons included, is a recipe for disaster.
