From the McDonald's leak to Replit going rogue, AI is known to make bizarre mistakes from time to time
Fast food chain Taco Bell's 2023 decision to roll out AI ordering at over 500 locations continues to backfire. In one viral clip, a customer ordered "a large Mountain Dew", only for the AI ordering assistant to keep asking: "And what will you drink with that?" Another user asked it for 18,000 cups of water, which made the system crash.
A US medical journal in August warned people against using AI for medical advice, citing the case of a 60-year-old man who developed a rare condition after following ChatGPT's advice on an alternative to common salt.
Replit, an AI coding assistant widely used for "vibe coding" (using AI to write code), grabbed headlines in July this year after it went rogue and deleted an entire production database.
Fast food giant McDonald's uses a chatbot named Olivia on its McHire website to screen job applicants before interviews. However, security researchers revealed that they were able to access the McHire backend simply by typing in passwords like "123456", putting a whopping 64 million job seekers' records at risk.
After its Project Vend went haywire, Anthropic quickly learnt that adulting is best left to the adults, or in this case, humans. The project, which saw Anthropic's ChatGPT rival Claude take on a 9-to-5 job managing a small shop, ended with the shop nearly going bankrupt.
Elon Musk's xAI chatbot Grok drew online backlash after it brought up "white genocide" in South Africa in response to questions on a number of unrelated topics.
BBC News complained to Apple after Apple Intelligence badly garbled one of its news headlines: the shooting of UnitedHealthcare CEO Brian Thompson, allegedly by Luigi Mangione, was summarised by Apple as "Luigi Mangione shoots himself".