
Summary
For some people on the autism spectrum, carefulness is becoming a liability.
Earlier this year, a researcher in India who is on the autism spectrum sent in a short abstract for an academic publication and was told that a sentence in it sounded like AI. No evidence or basis was offered. Just the standard warning: if AI had been used, it had to be disclosed.
She had used none. She later told me the abstract was hurried and not her best. But after that message, the exchange was no longer about her idea. It was about whether she had written it.
When we spoke later, she said the sentence may have drawn suspicion for a simple reason: it was abstract. Abstracts compress. They name concepts before they can unfold them. In a small space, there is little room for scene, example or evidence. What remains can sound formal, dense, a touch bloodless. “Good grammar is a trauma response,” she told me, half joking.
Anyone who grew up in an Indian classroom with Wren and Martin, red circles around mistakes and punishments for small slips, will know what she meant. The kind of English many of us were trained to produce can now look suspiciously machine-made.
She was not alone. Two other professionals with autism I spoke to, one in marketing and the other in research, described versions of the same thing. Their writing had been read as too polished, too flat, too composed. What it lacked were the small signs people have come to trust as human: looseness and a bit of everyday warmth.
Social communication difficulties sit near the centre of autism. For many people with autism, the hard part is reading spoken or written tone, and the verbal and non-verbal cues that carry meaning around words: facial expression, gesture, implication, the mood of an email, the force of a short text. That same researcher told me AI-written emails and requests can be harder for her to read than most human ones. She already rereads messages, sometimes several times, to work out tone and intent. LLM prose makes that harder, not easier.
“It feels like a layer of gloss over the text,” she said. That is close to exactly how the stuff reads. The words are fine. The manners are fine. But the surface is so smooth you cannot tell what pressure sits underneath it. You can read every line and still not know how to answer.
So the problem runs both ways. People with autism can be mistaken for machines because their prose may be literal, precise and short on the little social softeners many readers expect. And some of them find machine prose especially hard to parse because it flattens tone into a bland average. AI is supposed to make communication easier. For some people, it first removes the cues they use to understand it.
Pure guesswork
I kept hearing versions of this. Nicole Filippone, who writes publicly about life on the spectrum, said in May that people with autism were now having to mask even in writing so they would not look as though they had used AI. In a follow-up, she described removing em dashes, forcing conjunctions into sentences and changing formatting because her text looked “too much like AI output”. People are now roughening up their own sentences to make them sound more human.
By March, Emma Alpern had reported the same pattern in New York Magazine. Her piece moved from a day-care worker mocked for words like “juxtaposition” and “circumstantial” to a Moroccan writer accused within minutes of submitting work, to a business professor raised across Asia who said the English textbooks he learned from had given him precisely the kind of vocabulary people now associate with ChatGPT. The writers with autism she spoke to said this was not new. They had been told they sounded robotic long before there was any chatbot to blame. One novelist put it plainly: people are now “going off vibes”.
There is now at least one study trying to test that pattern. A 2025 conference paper by Summer Chambers and Matthew C. Kelley ran the OpenAI GPT-2 detector over roughly 60,000 Reddit posts, split between a “likely-autistic” corpus and a general Reddit corpus. The overall flagging rates were low in both groups. That matters. The paper does not say that most writing by people with autism gets flagged as AI. It does say the likely-autistic corpus was flagged significantly more often, and it argues that using such detectors in academic settings deserves ethical scrutiny.
You do not need autism in the frame to see the wider problem. These systems are probabilistic. They guess. But in classrooms, editorial inboxes and review processes, those guesses start acting like verdicts. Formal vocabulary, clean syntax and the visible care of someone writing in a second or third language can all begin to look suspicious.
In India, that absurdity has a particular sting. Polished English is not some suspicious luxury here. It is often the product of years of correction. Memorise the rule. Do not split the infinitive. Do not drop the article. Do not write as you speak. Plenty of people came through that machinery carrying fear, embarrassment, ambition and a lot of labour. When such prose is treated as suspiciously smooth, what is being misread is not only a machine. It is schooling.
Of course, not all people with autism write the same way. Some are lush. Some ramble. Some are funny in ways no model can fake. The point is narrower. A certain kind of controlled, literal, highly correct prose, whether it comes naturally or through effort, now overlaps enough with machine prose to trigger doubt.
And the doubt does not fall evenly. It falls hardest on the people whose tone is already scrutinised: people expected to sound warm enough, easy enough, ordinary enough. A detector does not enter neutral ground. It joins an older culture of correction. It tells people that even their carefulness can be used against them.
Vibe check
None of this means AI is only harmful. One account shared with me described a nonverbal child with autism using an AI communication app to say she had ear pain; she was then diagnosed and treated for an inner-ear infection. That is not trivial. Sometimes a machine helps someone get heard. The trouble is that the same ecosystem that can open a channel in one place can shut a door in another.
Underneath all this is a familiar demand: sound natural, sound warm, sound ordinary. Not too stiff. Not too smooth. Fall outside that range, and the trouble starts.
The person with autism whose sentence is too exact. The non-native English writer whose grammar is too clean. The student whose prose shows too little visible struggle. The professional who did exactly what the school taught them to do and is now told it looks fake.
There is another cost. People who already work hard to read tone and intent are now swimming in machine-written email that removes the very cues they need. “Gloss” was her word. It is the right one. A glossy surface throws your gaze back at you. It does not let you in.
No black-box score should end the conversation. If someone suspects AI use, they should have to say why. They should have to ask questions, compare drafts, look at earlier work, and talk to the writer. Right now, too much of this rests on vibes, and vibes are where prejudice does its quiet work.
Editors and teachers are right to be sick of AI slop. But if that exhaustion leads them to distrust careful, literal, highly polished human writing on sight, they will end up punishing exactly the people who did the work.
A sentence can be polished and still be alive. A voice can be flat and still carry feeling. A person should not have to roughen their prose on purpose just to pass as human.
