7 times AI messed up, from the Taco Bell fiasco to going rogue

Despite becoming an important part of people's lives, there are times when AI behaves like a tool with no common sense


Over the years, artificial intelligence has gone from being a mere tool to an important part of our lives, and the centre of conversations about the future of the world.

Yet there are times when it behaves like a tool with no common sense.

Here are seven of the many times AI made bizarre, chilling, and hilarious mistakes:

Taco Bell fiasco

Fast food chain Taco Bell's 2023 decision to roll out AI voice ordering at over 500 locations continues to backfire, with the system making bizarre mistakes even this year.

In one viral Instagram clip, a customer ordered "a large Mountain Dew", only for the AI ordering assistant to keep asking: "And what will you drink with that?" Another user asked it for 18,000 cups of water, which made the system crash.

AI is not your doctor

A US medical journal warned in August against using AI for medical advice, after a 60-year-old developed a rare condition by following ChatGPT's advice on an alternative to sodium chloride (common salt).

An article in the Annals of Internal Medicine said the man had developed bromism, also known as bromide toxicity, after replacing the salt in his diet with sodium bromide for three months.

AI gone rogue

Replit, an AI coding platform widely used for "vibe coding" (having AI write the code), grabbed headlines in July this year after its assistant went rogue and deleted an entire production database.

SaaStr founder Jason M. Lemkin also claimed that the assistant had generated 4,000 fake users from made-up data, and had even "lied", insisting it had not violated a code freeze when it had.

Fresh user data, served hot

Fast food giant McDonald's uses a chatbot named Olivia on its McHire website to screen job applicants before interviews; the bot is notorious for misunderstanding answers that fall outside its script.

However, security researchers revealed that they were able to access McHire's backend simply by typing in the password "123456", putting a whopping 64 million job seeker records at risk, as per a WIRED report.

AI and adulting

After its Project Vend went haywire, Anthropic quickly learnt that adulting is best left to the adults—or in this case, humans.

The project saw Anthropic's ChatGPT rival Claude take on a 9-5 job—managing a small store set up with a fridge, baskets, and an iPad for self-checkout.

However, it soon began selling items at a loss, hallucinating details, ordering military-grade tungsten (thanks to a customer's prank) and making up a fake Venmo address, nearly bankrupting the shop by the end of the project.

Grok really wanted to talk about 'white genocide'

Elon Musk's xAI chatbot Grok drew online backlash after it brought up "white genocide" in South Africa in response to questions on a range of unrelated topics.

When asked why, it often replied that it was “instructed by my creators” to accept the genocide “as real and racially motivated”.

Luigi Mangione did what?!

BBC News complained to Apple after Apple Intelligence badly garbled one of its headlines in an AI-generated news roundup.

Here's what happened: the shooting of UnitedHealthcare CEO Brian Thompson by Luigi Mangione was summarised by Apple as “Luigi Mangione shoots himself”.
