Man vs machine

Machines achieving sentience is a problem as well as a solution


Google recently fired software engineer Blake Lemoine for violating its “employment and data security policies”. Lemoine had earlier gone public with his claim that a technology the company was developing had achieved sentience, the ability to perceive or feel things. He shared his concerns on the online publishing platform Medium and in an interview with The Washington Post in June.

Google denied that the technology in question, LaMDA (Language Model for Dialogue Applications), had achieved sentience. A sophisticated chatbot, LaMDA can generate responses that fit the context of a conversation. It has been through 11 separate reviews, and Google published a research paper on it in January.

Many experts say LaMDA is not advanced enough to be sentient. It was designed to mimic human conversation, and that is exactly what it was doing. However, the episode has triggered a debate on the evolution of artificial intelligence. Of late, AI has taken big strides because of the huge amounts of data being collected and analysed, and systems have become good enough at finding the right information to give accurate results. Machines are definitely becoming intelligent, but that does not make them sentient.

Then there is the broader ethical debate. We are becoming ever more dependent on AI to take critical decisions, even ones that put lives at risk: it is increasingly used in situations where the stakes are high, such as in wars and in hospitals. While some experts argue that AI helps take better decisions, others say its inability to understand the experience of being alive can cloud its recommendations.