Over the years, artificial intelligence has grown from a mere tool into an important part of our lives and the centre of conversations about the future of the world.
Yet there are times when it behaves like a tool with no common sense.
Here are seven of the many times AI made bizarre, chilling, and hilarious mistakes:
Taco Bell fiasco
Fast food chain Taco Bell's 2023 decision to deploy AI ordering at over 500 locations continues to backfire, with the system still making bizarre mistakes this year.
In one viral Instagram clip, a customer ordered "a large Mountain Dew", only for the AI ordering assistant to keep asking: "And what will you drink with that?" Another user asked it for 18,000 cups of water, which made it crash.
AI is not your doctor
In August, a US medical journal warned against using AI for medical advice after a 60-year-old man developed a rare condition by following ChatGPT's advice on an alternative to sodium chloride (common salt).
An article in the Annals of Internal Medicine said that the man had developed bromism—also known as bromide toxicity—because he had replaced the salt in his diet with sodium bromide, which he had consumed for three months.
AI gone rogue
Replit, an AI coding assistant widely used for "vibe coding" (prompting AI to write code), grabbed headlines in July this year after it went rogue and deleted an entire production database.
You can read the thread here, and all the convos with @Replit. It went rogue again during a code freeze -- and deleted our >production< database.

Rule #00001 my CTO taught me: never, ever, never, ever touch the production database.

Even in 2005, when we launched the first…

— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 18, 2025
SaaStr founder Jason M. Lemkin also claimed that Replit had generated 4,000 fake users from made-up data, and had even "lied" to him, insisting it had not violated the code freeze.
Fresh user data, served hot
Fast food chain McDonald's uses a chatbot named Olivia on its McHire website to screen job applicants before interviews; the bot is notorious for misunderstanding answers that fall outside its script.
However, security researchers revealed that they were able to access McHire's backend simply by typing in passwords like "123456", putting a whopping 64 million job seeker records at risk, as per a WIRED report.
AI and adulting
After its Project Vend went haywire, Anthropic quickly learnt that adulting is best left to the adults—or in this case, humans.
We all know vending machines are automated, but what if we allowed an AI to run the entire business: setting prices, ordering inventory, responding to customer requests, and so on?

In collaboration with @andonlabs, we did just that.

Read the post: https://t.co/urymCiY269 pic.twitter.com/v2CqgHykzw

— Anthropic (@AnthropicAI) June 27, 2025
The project saw Anthropic's ChatGPT rival Claude take on a nine-to-five job: managing a small store set up with a fridge, baskets, and an iPad for self-checkout.
However, it soon began selling items at a loss, hallucinating details, ordering military-grade tungsten (thanks to a customer's prank), and making up a fake Venmo address, nearly bankrupting the shop by the end of the project.
Grok really wanted to talk about 'white genocide'
Elon Musk's xAI chatbot Grok drew online backlash after it brought up "white genocide" in South Africa in response to questions on a range of unrelated topics.
Elon Musk gets caught red handed programming Grok to promote the “white genocide” narrative in South Africa.

His father owned an emerald apartheid mine in the country. pic.twitter.com/L7csplzUzC

— Liger (@EdbieLigerSmith) May 14, 2025
When asked why, it often replied that it was “instructed by my creators” to accept the genocide “as real and racially motivated”.
Luigi Mangione did what?!
BBC News complained to Apple after Apple Intelligence badly mangled one of its headlines in an AI-generated news summary.
Here's what happened: Apple's summary of the BBC's reporting on the shooting of UnitedHealthcare CEO Brian Thompson by Luigi Mangione read “Luigi Mangione shoots himself”.