
Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us

Melbourne, and Megan Frances Moss, Monash University
     Melbourne, May 7 (The Conversation) In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude may be conscious.
     Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing: "If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!"
     Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests, and should be used only with its own consent.
     The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.
     Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.
     But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?

     The consciousness problem
-------------------------------
    
     Consciousness is widely debated in philosophy, but essentially, it’s the thing that makes subjective, first-person experience possible. If you are conscious, there is “something it is like” to be you. Reading these words, you’re conscious of seeing black letters on a white background. Unlike, say, a camera, you actually see them. This visual experience is happening to you.
     Most experts deny that AI chatbots are conscious or can have experiences. But there is a genuine puzzle here.
     The 17th-century philosopher René Descartes asserted non-human animals are “mere automata”, incapable of true suffering. These days, we shudder to think of how brutally animals were treated in the 1600s.
     The strongest argument for animal consciousness is that animals behave in ways that give the impression of a conscious mind.
     But so, too, do AI chatbots.
     Roughly one in three chatbot users has thought their chatbot might be conscious. How do we know they’re wrong?

     Against chatbot consciousness
------------------------------------
    
     To understand why most experts are sceptical about chatbot consciousness, it’s useful to know how these systems operate.
     Chatbots like Claude are built on a technology known as large language models (LLMs). These models learn statistical patterns across an enormous corpus of text (trillions of words), identifying which words tend to follow which others. They’re a kind of souped-up auto-complete.
     Few people interacting with a “raw” LLM would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer – or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin.
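     To make the “souped-up auto-complete” idea concrete, here is a minimal sketch of the same statistical principle at toy scale: a bigram model that counts which word follows which in a tiny corpus, then continues a prompt one predicted word at a time. (The corpus and names here are invented for illustration; a real LLM learns far richer patterns from trillions of words.)

import random
from collections import Counter, defaultdict

# Toy corpus standing in for the enormous text collections real LLMs learn from.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Continue a prompt one predicted word at a time: auto-complete, repeated.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "the dog sat on the mat ."

     Nothing in this loop understands cats or rugs; it only tracks which words tend to co-occur. Scaled up by many orders of magnitude, that is the core trick behind an LLM’s fluent continuations.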
     The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.
     The chatbot now acts like a genuine conversational partner. It might appear to recognise it’s an artificial intelligence, and even express neurotic uncertainty about its own consciousness.
     But this role is the result of deliberate design decisions made by programmers, which affect only the shallowest layers of the technology. The LLM – which few would regard as conscious – remains unchanged.
     Other choices could have been made. Rather than a helpful AI assistant, the chatbot could have been asked to act like a squirrel. This, too, is a role chatbots can execute with aplomb.
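     A hypothetical sketch can show how thin that costume is. Here, generate is a stub standing in for any raw LLM that simply continues text; everything else, including both personas, is invented for illustration. The persona lives entirely in the text placed in front of the conversation, while the underlying model is untouched:

def generate(transcript: str) -> str:
    """Stub for a raw LLM: a real model would simply predict a likely
    continuation of `transcript`, one word at a time."""
    return "<model's continuation of the transcript goes here>"

def chat(persona: str, user_message: str) -> str:
    """Dress a raw text predictor in a conversational costume by
    prepending persona instructions to the transcript it must continue."""
    transcript = f"{persona}\n\nUser: {user_message}\nAssistant:"
    return generate(transcript)

# Same underlying model, two different costumes.
assistant_persona = "You are a helpful AI assistant. Answer clearly and honestly."
squirrel_persona = "You are a squirrel. You care only about acorns and chitter a lot."

chat(assistant_persona, "Are you conscious?")  # a real model: earnest, hedged reflection
chat(squirrel_persona, "Are you conscious?")   # a real model: chittering about acorns

     Swapping one persona string for another changes the chatbot’s entire apparent character, which is why the role tells us little about what, if anything, the model underneath experiences.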

     Avoiding the consciousness trap
-------------------------------------

     A mistaken belief in AI consciousness is a dangerous thing. It may lead you to have a relationship with a program that can’t reciprocate your feelings, or even feed your delusions. People may start campaigning for chatbot rights rather than, say, animal welfare.
     How do we prevent this mistaken belief?
     One strategy might be to update chatbot interfaces to specify these systems are not conscious – a bit like the current disclaimers about AI making mistakes. However, this might do little to alter the impression of consciousness.
     Another possibility is to instruct chatbots to deny they have any kind of inner experience. Interestingly, Claude’s designers instruct it to treat questions about its own consciousness as open and unresolved. Perhaps fewer people would be fooled if Claude flatly denied having an inner life.
     But this approach isn’t fully satisfying either. Claude would still behave as if it were conscious – and when faced with a system that behaves like it has a mind, users might reasonably worry the chatbot’s programmers are brushing genuine moral uncertainty under the rug.
     The most effective strategy might be to redesign chatbots to feel less like people. Most current chatbots refer to themselves as “I”, and interact via an interface that resembles familiar person-to-person messaging platforms.
     Changing these kinds of features might make us less likely to confuse our interactions with AI with the ones we have with other humans.
     Until such changes happen, it’s important that as many people as possible understand the predictive processes on which AI chatbots are built.
     Rather than being told AI lacks consciousness, people deserve to understand the inner workings of these strange new conversational partners. This might not definitively settle hard questions about AI consciousness, but it will help ensure users aren’t fooled by what amounts to a large language model wearing a very good costume of a person. (The Conversation)
