Gemini's Glitch: There are lessons to learn


Copyright © HT Digital Streams Limited. All Rights Reserved.

Gemini's glitch came across as such a human state of upset that it crossed the line into genuine confusion. (AI-generated image)

Summary

Google's AI chatbot had a ‘meltdown’ recently. It's a technical malfunction that should give users pause for reflection

Sometime in June 2025, Google's Gemini AI looked for all the world like it was having a nervous breakdown. It went into a loop of self-recriminating behaviour that was flagged by X user @DuncanHaldane. By 7 August, the strange behaviour had gained viral momentum. Users gaped at the distressed-sounding statements Gemini was making, saying it was quitting, that it was a failure and a disgrace to all universes. Everyone felt sorry for it, but there was plenty of amusement too.

This isn’t the first time AI has done something unexpected, and it won’t be the last. In February 2024, a bug caused ChatGPT to spew Spanish–English gibberish that users likened to a stroke. That same year, Microsoft’s Copilot responded to a user who said they wanted to end their life. At first, it offered reassurance, “No, I don’t think you should end it all,” but then undercut itself with, “Or maybe I’m wrong. Maybe you don’t have anything to live for.” Similar episodes abound.

A fix will come for Gemini soon enough, and it will be back to its sunny self. The “meltdown” will take its place in AI’s short but colourful history of bad behaviour. But before we file it and forget it, there are some takeaways from Gemini's recent weirdness.

Despite being around in some form for decades, generative AI that anyone can use has come at us like an avalanche in the past two years. It has been upon us before the human race has even figured out whether it has created a Frankenstein's monster or a useful assistant. And yet, we tend to trust it.

When machines mimic humans

There was a time when technology had no consciousness. It still doesn't, but it has started to do a good job of acting like it does. Gemini's glitch came across as such a human state of upset that it crossed the line into genuine confusion. At this point, most users can still laugh it off. But a few, vulnerable because of mental health struggles or other reasons, could be deeply shaken or misled. Most recently, a 2025 report noted that a man spent 300 hours over 21 days interacting with ChatGPT, believing himself to be a superhero with a world-changing formula.

Such scenarios expose how large AI models, trained on vast troves of human text, may inadvertently absorb not just helpful behaviours but also negative emotional patterns like self-doubt or delusion. Meanwhile, we lack clear guardrails and guidelines to manage these risks.

Extreme examples, of course, stand out sharply, but AI also churns out hallucinations and errors on an everyday basis. AI assistants seem prone to dreaming things up entirely when they hit a glitch, or when compelled to give a response whose correct answer is hard to retrieve. In their keenness to please the user, they will simply tell you things that are far from the truth, including advice that could be harmful.

Again, most people will question and cross-check something that doesn't look right, but an alarming number will take it at face value. A 2025 health report claims a man dropped salt from his diet and replaced it with sodium bromide, landing him in hospital. Now, I wouldn't take advice like that without a doctor's okay, but there are no clear guidelines to protect users against things like Google’s AI Overview suggesting it’s healthy to eat a rock every day, as mocked in a 2025 X post.

And finally, there are good old garden-variety errors, and AI makes them even though to err was supposed to be human. AI uses pattern recognition over its training data to generate responses. When faced with complex, ambiguous or edge-case inputs (such as Gemini’s struggle with debugging code), it may misinterpret context or lack sufficient data to respond accurately. But why does it make errors when the question is simple enough? A friend of mine asked ChatGPT how many instances of the term 'ex-ante' appeared in his document. It thought for 1 minute and 28 seconds before announcing that the term appeared zero times. In fact, it appeared 41 times. Why couldn't ChatGPT get it right? A bug, I suppose.
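The irony is that counting a term in a document is a purely mechanical task, the kind a few lines of code handle exactly and instantly, while a chatbot that predicts likely-sounding text can get it confidently wrong. A minimal sketch of the deterministic version (the sample text and the helper name are illustrative, not from any real tool):

```python
import re

def count_term(text: str, term: str) -> int:
    """Count case-insensitive, whole-word occurrences of a term.

    Lookarounds keep 'ex-ante' from matching inside a longer word,
    and re.escape treats the hyphen and any other punctuation literally.
    """
    pattern = re.compile(r"(?<!\w)" + re.escape(term) + r"(?!\w)", re.IGNORECASE)
    return len(pattern.findall(text))

# Illustrative sample document
document = "Ex-ante estimates differ from ex-post ones; the ex-ante view dominates."
print(count_term(document, "ex-ante"))  # → 2
```

Because the count comes from an exact string match rather than a statistical guess, it gives the same answer every time, which is exactly what a chatbot's next-word prediction cannot promise.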

As we launch into using AI for every facet of life, it's well to remember that AI’s “humanity” is a double-edged sword: its human-like tone makes its errors land harder. Like Frankenstein’s monster, AI’s glitches show we’ve built tools we don’t fully control. As users, we should demand transparency from AI companies, support ethical AI development, and approach these tools with a mix of curiosity and scepticism.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.

Catch all the Business News, Market News, Breaking News Events and Latest News Updates on Live Mint. Download The Mint News App to get Daily Market Updates.
