Elon Musk noted the flaws in Google's AI Overview, which mistakenly claimed 2026 is next year.
Elon Musk has reacted to Google's AI Overview going rogue once again and forgetting the year. A user on X (formerly Twitter) shared a screenshot of a Google Search for the query “is it 2027 next year”. In response, the AI Overview answered, “No, 2027 is not next year. 2026 is next year.”
Responding to the post, Musk wrote, “Room for improvement.”
Interestingly, Musk, who is usually quick to tout the new features of Grok AI, did not point users towards his own chatbot. That may be because Grok AI is no stranger to controversy itself, having previously called Musk and Donald Trump the “biggest threat to America” and, more recently, drawn criticism for generating sexually explicit deepfakes involving women and children.
Meanwhile, this is not the first time that Google's AI Overview has served up inaccurate information. The feature first stirred controversy shortly after its launch, when it told users to add glue to pizza or eat rocks for vitamins. As Google made progress with Gemini, AI Overview's inaccuracies seemed to improve, but the feature once again became mired in controversy after it claimed ‘Call of Duty: Black Ops 7’ was a fake game.
In this instance, Google seems to have turned off the AI Overview for the query “is 2027 next year”, but appending the term “AI Overview” to it does bring up the AI result, which reads, “No, 2026 is next year. The current year is 2025.”
Meanwhile, asking the same question, or similar variations of it, to the company's AI Mode, backed by Gemini 3, does not produce the same inaccuracies.
Notably, a recent investigation by The Guardian also revealed that AI Mode continues to serve inaccurate health information that puts people at risk of harm.
The AI reportedly went on to wrongly advise people with pancreatic cancer to avoid high-fat foods, the exact opposite of expert guidance, with experts saying such advice may increase the risk of patients dying from the disease.
In another example, the AI provided ‘bogus’ information about liver function tests, which could lead to serious liver disease going undetected. It also provided ‘completely wrong’ information about women's cancer tests, which could reportedly lead to people dismissing genuine symptoms.