The 2009 Hindi film 3 Idiots is a satirical comedy that exposes the rote learning and relentless competition plaguing India’s higher education system. In one memorable scene, an eager Rancho attends his first engineering lecture and is asked, “What is a machine?” He replies, “A machine is anything that reduces human effort.” The answer fails to impress the professor, yet it raises deeper questions. Is a machine merely a tool, or can it act as an agent? Can it operate independently, or only follow instructions? Can it think? Can it feel?
A decade ago, these questions had relatively straightforward answers. Today, however, rapid advances in artificial intelligence (AI) have blurred the lines, challenging our very understanding of what a machine can be.
Alan Mathison Turing, one of the most influential figures in computer science, after whom the prestigious Turing Award is named, wrote a seminal essay in 1950 that opens with the question: “Can machines think?” One can go further and ask: can machines have the urge for ‘self-preservation’, an innate human quality? Early this year, Anthropic revealed that, in an experiment, its AI assistant resorted to blackmail when an engineer moved to replace it: the model threatened to reveal the engineer’s extramarital affair, which it had learnt about from accessing his emails. Though this was a controlled experiment, Anthropic reports that leading AI models exhibited blackmail rates of up to 96 per cent when their goals or existence were threatened. The experiment reveals the inherent risks of a technology that is advancing at breakneck speed.
Reasons for enthusiasm
Diella, whose name comes from the Albanian word for the sun, was all over the news in September this year. An AI bot, Diella was appointed minister of state for artificial intelligence by Albanian Prime Minister Edi Rama. ‘She’ appears prominently on the government website, next only to the PM and deputy PM. Her mission is to improve “accessibility to public services for citizens, the full digitisation of documents and state processes, as well as integrating artificial intelligence into the most critical sectors of the country”. Though her appointment does not meet the constitutional requirements of being “mentally competent” and above the age of eighteen, there is excitement that this “heartless” minister would eliminate corruption in public procurement.
The enthusiasm about AI and its capacity to resolve many of the issues plaguing humanity is widely shared. Dario Amodei, the CEO of Anthropic, titled his 2024 essay on the benefits of AI Machines of Loving Grace, borrowing from Richard Brautigan’s 1967 poem ‘All Watched Over by Machines of Loving Grace’. It seems the Nobel committee shares the optimism: the 2024 Nobel Prizes in Physics and Chemistry were awarded to AI scientists, and the 2025 Physics Prize went to pioneers of quantum computing. One may not be surprised if AI models win Nobel Prizes in literature and peace in the near future. While the benefits of this technology would percolate into all fields of human life, including creative literary domains, the most visible impact is projected to be in biology, neuroscience and health. As Amodei claims, a powerful AI would be “smarter than a Nobel Prize winner across most relevant fields”, thereby “giving us the next 50-100 years of biological progress in 5-10 years”. Sam Altman, the billionaire co-founder of OpenAI, shares this enthusiasm, predicting that AI may even figure out how to cure cancer as computing power increases.
Looming concerns
AI raises policy, legal and ethical concerns, ranging from who controls the technology to job losses, cybersecurity, copyright violations and human rights. The control of the technology is crucial to the direction it takes. Interestingly, the Nobel Prize-winning research mentioned earlier took place in private labs. Five of the laureates were scientists associated with Google’s AI and quantum divisions.
It is a reality that Big Tech controls AI. Any new entrant to the field has to rely on the computing infrastructure of these firms to train its systems and on their market reach to sell its products.
The OpenAI story is a case in point. It was founded in 2015 as a nonprofit with a mission to ‘ensure that artificial general intelligence benefits all of humanity’. By 2019, however, it had to set up a for-profit subsidiary to finance the huge investment needed for research and development. Today, Microsoft holds 27 per cent equity in the for-profit company, whereas the nonprofit OpenAI Foundation holds only 26 per cent. Many fear that, with their deep pockets and control of crucial infrastructure, these companies are in a position to steer the direction AI technology takes and to influence the regulatory structure.
Deepfakes are a major cybersecurity threat today. Indians are familiar with AI-generated fake videos of public figures like Nirmala Sitharaman and Mukesh Ambani giving investment tips. A Deloitte report projects deepfake-driven losses of up to $40 billion in the US alone by 2027.
The interface with copyright law is another emerging arena of conflict. Using copyright-protected works without permission can lead to infringement proceedings, but the ‘fair use’ doctrine allows limited use of such materials without permission for specific purposes like teaching and literary criticism. The use of copyrighted material to train AI systems has now come under challenge. The issue is whether the standards used to evaluate infringement by human beings can be applied to machines: a human being can never be punished under copyright law for learning from protected materials.
Another legal issue raised in this context is whether AI can claim copyright. Copyright laws were developed to incentivise creativity and are based on the assumption that creativity is a uniquely human ability. One has to wait and watch how legal systems around the world address these questions.
The bogey of job losses
The biggest threat posed by AI is its potential to replace the labour force at all levels. Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for his AI research, says AI may wipe out millions of jobs. Hinton, known as the godfather of AI for his groundbreaking work on artificial neural networks, resigned from Google in 2023 so that he could speak freely about the dangers of AI.
In addition to job losses, he believes AI would widen inequality in society. Amodei has warned that AI could eliminate 50 per cent of all entry-level white-collar jobs. If the huge investments in AI research and development are to bring returns, industry will have to reorient on a large scale, opting for AI solutions that replace workers. Amazon’s recent cut of 14,000 jobs may be only the beginning.
Financial risks
Many experts fear a repeat of the dot-com bubble, which wiped away vast amounts of wealth. Gita Gopinath, former first deputy managing director of the IMF, estimates losses to the tune of $35 trillion if a market correction were to happen in the AI field. The share prices of AI tech companies have soared in recent months; chipmaker Nvidia became the first company in history to cross $5 trillion in market valuation, and players like Amazon, Alphabet and Microsoft are not far behind.
Developments in this field require huge investments, and these companies are not hesitating to make them. Sam Altman recently announced planned investments worth $1 trillion over the next few years. There are doubts about how financially prudent these commitments are.
Environmental concerns
Each step in AI development comes with significant environmental harm—ranging from obtaining raw materials for hardware manufacturing to electronic waste generated in the process.
Diella may not be power-hungry like a politician, but she needs a lot of ‘power’ to function. A Goldman Sachs report estimates that power demand from data centres will reach 84GW by 2027. For comparison, India’s total installed power capacity was 476GW as of June 2025. It is not clear how far renewable energy sources will be able to satisfy this surging demand. Nuclear power is another ‘climate-friendly’ option suggested to satiate this hunger. The recent decision by the US department of energy to lend $1 billion to restart the Three Mile Island nuclear power plant, site of the worst nuclear power accident in the US in 1979, points to one possible solution; Constellation Energy, which owns the facility, has signed a power purchase agreement with Microsoft.
Water consumption is another major environmental issue, as AI operations need humongous amounts of water to cool servers. According to an estimate by researchers at the University of California, Riverside, as much as 1.7 trillion gallons of fresh water a year would be needed globally by 2027 to meet AI’s water demands. In addition, the colossal quantities of minerals needed to build the hardware (computers, cables, power lines, batteries and backup generators) necessitate indiscriminate mining, which has adverse environmental impacts.
Human rights
Though not by design, AI can exacerbate historical discrimination. At the core of artificial intelligence is the capacity to process extensive data, recognise patterns and make decisions at speeds and scales unachievable by humans. Since data is used to train the machines, any bias in the datasets carries over into decision-making. Amazon had to scrap its recruiting tool because of gender bias: the 10 years of recruitment data used to train the system contained few female recruits, so the system penalised women candidates, and words denoting the female gender, such as ‘women’s’, resulted in lower scores. There are multiple stories of racial bias from the law enforcement and healthcare sectors in the US.
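To see how such bias propagates, consider a minimal sketch; the data and scoring rule below are entirely hypothetical and are not Amazon’s system. A naive scorer learns word weights from past hiring decisions, and because the invented history contains few hired CVs mentioning a gendered token, the scorer learns to penalise it.

```python
# A toy illustration (hypothetical data, not any real recruiting tool) of how
# historical bias leaks into a model: a naive scorer learns per-word weights
# from past hiring decisions and then reproduces those decisions' bias.

from collections import Counter

# Hypothetical historical data: (CV tokens, was the candidate hired?)
history = [
    ("engineer python leadership", True),
    ("engineer java chess club", True),
    ("engineer python women's chess club", False),
    ("engineer c++ robotics", True),
    ("engineer women's coding society", False),
    ("engineer java robotics", True),
]

hired, rejected = Counter(), Counter()
for cv, was_hired in history:
    (hired if was_hired else rejected).update(cv.split())

def weight(token: str) -> float:
    """Ratio of hired to rejected occurrences, with +1 smoothing."""
    return (hired[token] + 1) / (rejected[token] + 1)

def score(cv: str) -> float:
    """Product of per-token weights; higher means 'more like past hires'."""
    s = 1.0
    for token in cv.split():
        s *= weight(token)
    return s

# Two otherwise identical CVs: the one containing "women's" scores lower,
# purely because the (biased) historical decisions did so.
print(score("engineer python chess club"))          # ~1.67
print(score("engineer python women's chess club"))  # ~0.56
```

The point is structural: the model invents no prejudice of its own; it merely reproduces the statistics of the decisions it was trained on.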
Karen Hao, the author of Empire of AI, a book on OpenAI and its founder Sam Altman, describes the plight of “hidden workers” in the AI industry. Cheap labour in developing countries like Kenya is used for data filtering and annotation. She writes extensively about the emotional trauma of these low-paid workers, who have to sift through gory material and label it as ‘child sexual abuse’, ‘hate speech’, ‘bestiality’ and so on.
Regulation of AI
Regulation of technology has always been a topic of contention.
There are differing viewpoints on whether to regulate technology, when to regulate it, and what the standard of regulation should be. Grant E. Isaac and William A. Kerr, professors at the University of Saskatchewan, Canada, point out that the regulation of technology depends on a society’s beliefs about the appropriate role of science and technology.
On the one hand, there are societies that believe technologies yield innovation and increase efficiency, thereby driving economic development. Their regulatory regimes encourage innovation and are open to taking risks.
On the other hand, there are societies that view technology as a disruptor that upsets a delicate social balance. Their regulations treat technology with caution and erect precautionary hurdles in its path. The trans-Atlantic divide on technology regulation may be viewed through this prism: EU regulation has been more cautious, while American regulation has been more welcoming. The same difference is visible in the regulation of AI.
A draft executive order by President Donald Trump, now put on pause, that would block US states from framing their own AI laws reflects the former view, whereas the EU AI Act falls in the latter category. Adopted in 2024, the EU act is the first comprehensive regulatory framework for AI. Based on the risks they pose, different rules apply to different AI systems. Certain systems are prohibited outright because they pose unacceptable risks: exploitation of vulnerabilities, social scoring, assessing the likelihood of a person committing crimes, and the like.
The second category is high-risk systems, including those used in the management of critical infrastructure. These are permitted, but only under strict norms relating to risk management, data governance, documentation, transparency and human oversight. General-purpose AI models face a less stringent regime, and the remaining AI systems are left largely unregulated.
India recently announced AI Governance Guidelines that prioritise innovation over restraint. The guidelines exude confidence that existing laws like the Information Technology Act, the Bharatiya Nyaya Sanhita, the Digital Personal Data Protection Act, and the Copyright Act are enough to regulate AI. Thus, the Indian approach is based on the premise that technology can spur innovation and thereby result in economic development.
However, the recent speech by Prime Minister Narendra Modi at the G20 Summit in Johannesburg took a more cautionary approach towards AI. While acknowledging its huge potential, he called for a global compact covering human oversight, safety by design and transparency, and for a ban on the use of AI for deepfakes, crime and terrorist activities.
One has to wait and see how innovations in AI unfold, what impact they have on society, and how this technology will be regulated.
The author is professor of law, Sai University, Chennai