Now, AI can no longer tell what’s real


Summary

In a cruel twist, the war on AI slop is killing the work of humans as algorithms mistake genuine creativity for clutter.

In the cool green depths of Uttarakhand, a young audio engineer chose an unusual career path. Rohit had grown up amid the impact of human-animal conflicts, and each incident he encountered shocked him to the core. He decided he wanted to tell the world the real stories behind the hundreds of tiger attacks that occur in India, and, in doing so, combine his audio expertise with his experiences with wildlife.

Setting up a YouTube channel, Wilderness, the young man dove head-first into the no-compromises setup he felt the stories deserved. He acquired top-notch equipment, rented a Dolby-verified studio, and raised the funds to travel to villages and interact with people who had lived alongside wildlife and had real-life stories to tell.

Rather than focus on himself and his own presence, Rohit wanted the stories to immerse the listener completely. “Close your eyes and come with me into the jungle,” he often said. He did live broadcasts to keep in touch with subscribers, but for the most part, Wilderness was all about the stories. And they were spellbinding.

The YouTube channel was beginning to see organic growth, with thousands of loyal followers—including me.

The AI twist

And then, one day, he found an email in his inbox from YouTube. It was the standard demonetization notice that creators receive for violating guidelines, stating that his content was inauthentic. “Inauthentic content refers to content that seems like it has been mass-produced. This can mean content that looks like it has been made with a template, has minimal variation across videos, or is a slideshow with low narrative.”

This situation highlights a troubling shift in our digital landscape. For years, the public conversation has focused on the danger of people being fooled by deepfakes. Now, we are seeing the opposite problem. Automated systems are the ones suffering from reality confusion, and individuals are paying the price.

Rohit’s channel became a victim of an identity paradox. To a piece of software trained to detect synthetic clutter, he looked suspicious because he was too professional. His audio, recorded in a high-end studio, was too clean. His narration was too consistent. His pacing was too perfect. By offering his audience a high-quality experience rather than a shaky, handheld video, he triggered the very filters meant to protect us from low-quality automation.

The irony is deeper still. As Wilderness grew, automated content farms began scraping his channel, stealing his research and narrations to create thousands of low-effort clones. When the platforms' automated tools scanned the landscape, they saw a massive cluster of similar content. Lacking the historical context to distinguish the source from the parasite, the algorithm made a binary choice. It saw a human researcher surrounded by a thousand of his own digital shadows and decided to delete the entire group.

It did not matter that Rohit’s work was the result of physical travel and primary research. To the code, he was just another data point in a cluster of inauthentic content.

How we lost the plot

We seem to be entering an era where you are increasingly forced to perform your humanity just to be allowed to exist online. To be recognized as a person today, you must surrender your privacy. You must stare into a lens, show your face, and turn your life into a constant display. If you are an introvert, a quiet archivist, or a researcher who believes the subject is more important than the storyteller, the system now treats you as inauthentic.

In our rush to clean up the internet, we must ensure we aren’t creating a scorched-earth policy. If our digital future requires us to surrender our dignity and our privacy just to prove we are not software, then we have already lost something vital. We can’t afford to lose the human premium, the effort, heart and presence of people like Rohit, just because an algorithm finds it too difficult to look for the truth.

YouTube has had little to say so far, other than that content must comply with its guidelines. Hopefully, the platform will review the channel, and hopefully the YouTuber will fight for the content he worked so hard to create.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the internet has been. Simply staying away from AI will not be an option for most people, as the technology makes its way into everything we use. This column series introduces AI to the non-techie in an easy, relatable way, aiming to demystify the technology and help readers put it to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.

About the Author

Mala Bhargava

Mala Bhargava was among the first journalists in India to write on personal technology, then known as 'home computing'. With Cyber Media she launched the country's first personal tech magazine, Computers@Home, in 1996. She also wrote a tech trends column, That's IT, for Businessworld magazine for 20 years. She has also written for The Hindu BusinessLine and Fortune. Her speciality has always been writing for 'the rest of us' rather than for the tech-savvy. She has a background in psychology which makes it natural for her to write on how technology impacts everyday life. She is currently a Mint contributor, writing on AI in daily life, specifically the chat assistants. She lives in New Delhi.
