
The AI That Isn't: AI bias against neurodivergent and non-native writers


For as long as I can remember, people have said I write like a robot. I remember as a kid climbing a guaraná tree (a fruit native to Brazil) with a notebook and a pen to find the peace and quiet to write my "books"—little manuals on how to make different braid styles, how to play a new card game, how to... how to... I love manuals and documentation.

Pokémon was an incredible, inclusive catalog-and-documentation experience, one I could finally share with others even while being sidelined, much like dinosaurs are for many people like me.



Today, my emails are "too structured", my documentation "too precise", my messages "too formal". I have always had a "robotic" way of writing, and it's not because I lack creativity or emotion—it's simply how my brain organizes thoughts. That, and the fact that English is my second language. My natural writing style thrives in manuals, documentation, and academic content, where clarity and precision matter more than expressive flourishes. Writing this way lets me focus on the subject rather than on style or feelings, something that often feels like an impossible balancing act.


The Growing Frustration

But lately, this trait has become a source of immense frustration. I've always preferred written language over spoken communication because it allowed me to express myself without the exhausting effort of masking—constantly adjusting my tone, facial expressions, and body language to fit neurotypical expectations. Writing was the perfect form of communication for me—low-energy yet highly productive—allowing me to focus on the task itself rather than on appearing and sounding like everyone else.

But with the rise of AI writing detection tools, my words—words I carefully construct, words I have written for decades—are now being flagged as artificially generated. Even texts I wrote 20 years ago! And as absurd as it sounds, people have started treating me like an AI too. I’ve received responses to my emails stating, “I’d prefer to speak to a real person and not a bot.” Even my professional correspondence, which I put effort into making clear and helpful, is met with suspicion simply because I communicate in a structured, to-the-point manner.

It would be funny if it weren’t so dehumanizing.


I joke about it sometimes, saying that if people keep calling me an AI, I might as well study machine learning. But the humor is just a coping mechanism for a cruel reality:

Neurodivergent people are being excluded and misjudged by a system that was supposed to detect artificial text, not erase human voices.

False positives from AI detectors are not just a minor inconvenience; they create yet another layer of segregation for people like me, who already struggle to be understood.


Exploitation in AI Detection

Worse yet, there is a deeply exploitative side to this issue. Many of these AI detection tools don’t just misidentify human writing—they do it intentionally to push a paid service. AI detectors will flag a perfectly human-written text as machine-generated and then conveniently offer a "solution": a paid tool to “humanize” your writing. This is nothing short of a predatory business model, profiting off an artificial problem they themselves create. And for neurodivergent people like me, whose writing is naturally different from neurotypical norms, it becomes an additional tax on simply being ourselves.


[Image: Stanford article on AI-detector bias against non-native English writers]
You don’t have to take my word for it—plenty of people are already speaking up about this, adding weight to the growing issue.

The Flaws of AI Detection Tools

The worst part? AI detectors are flawed to begin with. No detection tool is infallible, and many of them rely on outdated or arbitrary markers for what constitutes "human" writing.

Often, these markers are based on neurotypical writing patterns, meaning that people who communicate in more direct, structured, or formal ways are disproportionately flagged as AI.
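To make that concrete, here is a deliberately simplified, hypothetical sketch of the kind of surface-level signals detectors are often criticized for leaning on: uniform sentence lengths, few contractions, a formal register. It is not the code of any real product; every signal, weight, and threshold below is an illustrative assumption.

```python
# Hypothetical toy "AI detector" for illustration only.
# It scores text on shallow stylistic signals often criticized as proxies for
# "human-ness": sentence-length uniformity, contractions, formal connectives.
# It is NOT how any specific commercial detector works.
import re
import statistics

CONTRACTIONS = re.compile(r"\b\w+'(?:s|re|ve|ll|d|t|m)\b", re.IGNORECASE)
FORMAL_CONNECTIVES = ("therefore", "furthermore", "moreover", "in addition", "consequently")


def toy_ai_score(text: str) -> float:
    """Return a 0..1 'AI-likeness' score based purely on surface style."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s.split()) for s in sentences]

    # Signal 1: very uniform sentence lengths (a crude "low burstiness" proxy).
    evenness = 1.0 / (1.0 + statistics.pstdev(lengths))

    # Signal 2: few or no contractions, i.e. a formal register.
    formality = max(0.0, 1.0 - len(CONTRACTIONS.findall(text)) / len(sentences))

    # Signal 3: frequent formal connectives.
    lower = text.lower()
    connectives = min(1.0, sum(lower.count(c) for c in FORMAL_CONNECTIVES) / len(sentences))

    return (evenness + formality + connectives) / 3.0


manual_style = (
    "First, open the configuration file. Then set the timeout to 30 seconds. "
    "Furthermore, restart the service. Finally, verify the output in the logs."
)
print(f"manual-style text: {toy_ai_score(manual_style):.2f}")  # scores well above neutral
```

Notice that nothing in these signals measures whether a person wrote the text; they only measure how closely it resembles a casual, conversational register. Direct, structured, low-slang writing, which is exactly the default style of many neurodivergent and non-native English writers, trips every one of them.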

The bias embedded in these systems only reinforces existing prejudices against neurodivergent communication styles, further marginalizing those who already struggle to have their voices recognized.


What Needs to Change?

So where do we go from here? The first step is awareness. People need to understand that writing styles are diverse, and a lack of casual phrasing, slang, or emotional expressiveness does not make someone inhuman. Businesses and organizations must be held accountable for implementing AI detection tools without considering the harm they cause. And, most importantly, we need to push back against the monetization of a manufactured problem—no one should have to pay to have their humanity validated.

I am not AI. I never have been. But the way things are going, it feels like the world is pushing me into an artificial category simply because my natural way of thinking doesn’t fit the mold. And that, more than anything, is a dystopian reality we must fight against.

