
It’s Not AI, I’m Just Autistic

Artificial intelligence (AI) software has become an omnipresent part of modern technology, offering real benefits across countless fields. Unfortunately, this rapid integration has come at a cost for many neurodivergent individuals. The language style of AI-generated content closely mirrors the so-called “robotic” writing of many autistic people, inviting skepticism about the authenticity of their written work, and the rise of AI detectors, tools that claim to distinguish human writing from machine-generated content, has compounded the problem through false accusations and discrimination. It is my position that the combination of AI technology and widespread ignorance is damaging the credibility of neurodivergent individuals.
Neurodiversity is a broad term for individuals whose neurological development and processing differ from societal norms; it encompasses conditions such as autism, ADHD, schizophrenia, and borderline personality disorder. The term was coined in the late 1990s by sociologist Judy Singer, who challenged the pathologizing view of mental differences and framed them as part of natural human diversity, writing in one of her publications, “The ‘Neurologically Different’ represent a new addition to the familiar political categories of class, gender, and race and will augment the insights of the social model of disability” (Singer, 2017). As society shifts toward inclusivity, it is important to recognize both that neurodivergent individuals often exhibit speech and writing patterns that differ from those society deems “neurotypical,” and that this population is growing. According to the Centers for Disease Control and Prevention (CDC), surveillance data from 2000 estimated that 1 in 150 children were on the autism spectrum; by 2020, the estimate had risen to 1 in 36, more than a fourfold increase in two decades (CDC, 2024). Based on these statistics, the ratio of neurodivergent to neurotypical individuals can be expected to shift significantly over the next five years.

Language models, such as ChatGPT, are trained on massive datasets of human text and are then refined by human workers who correct errors and shape the model’s behavior. “Training a large language model involves two main stages: pre-training and fine-tuning” (ARTiBA, 2024). During pre-training, the model learns the basics of language, like spelling and grammar. After pre-training, it is fine-tuned for specific tasks, like analyzing sentiment (a toy sketch of this two-stage process follows this paragraph). One study found that students on the autism spectrum had the highest participation rates in science, technology, engineering, and mathematics (STEM) fields at colleges and universities (Wei et al., 2012). STEM coursework emphasizes critical thinking, problem-solving, and technical skills, attributes that are often more pronounced in individuals with autism. This suggests that neurodivergent individuals may be disproportionately represented among the people who program language models, which could explain why the speech patterns these models replicate are heavily influenced by those on the autism spectrum. Because the models are intentionally built to mimic human interaction as closely as possible, it stands to reason that they will replicate the speech patterns of the people who build and train them. This overlap has led to multiple instances of college students being unjustly accused of using AI to generate their homework: “Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems” (Coldwell, 2024). These harmful assumptions not only disrespect the inherent struggles of being neurodivergent in a structured school setting but also devalue the communication patterns that neurodivergent individuals, like myself, have had to deliberately develop so that others can understand us.
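To make the two-stage idea concrete, here is a minimal toy sketch in Python. It illustrates only the shape of the pipeline, not how ChatGPT actually works: real models train neural networks on billions of documents, whereas this sketch just counts which word tends to follow which. All of the corpora, weights, and function names below are invented for illustration.

# A toy illustration of the two training stages described above. Real
# language models train neural networks on huge corpora; this sketch
# just counts which word follows which, but the two-stage shape is the same.

from collections import Counter, defaultdict

def train(corpus, model=None, weight=1):
    # Count word-to-next-word transitions. Passing an existing model in
    # mimics fine-tuning on top of a pre-trained model.
    model = model if model is not None else defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += weight
    return model

def predict_next(model, word):
    # Return the most frequently seen follower of the given word, if any.
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Stage 1: "pre-training" on broad, general text (made-up corpus).
general_corpus = [
    "the weather is nice today",
    "the food is nice",
    "the movie is long",
]
model = train(general_corpus)
print(predict_next(model, "is"))  # -> "nice": general language statistics

# Stage 2: "fine-tuning" on a small task-specific corpus, weighted
# heavily so the task's patterns dominate the general ones.
sentiment_corpus = [
    "this review is positive",
    "that review is positive",
    "this film is negative",
]
model = train(sentiment_corpus, model=model, weight=10)
print(predict_next(model, "is"))  # -> "positive": task-specific behavior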

The biggest struggle for these students is the rise of “AI detection tools” that claim to identify how much of a piece of writing was generated by artificial intelligence. While these tools are marketed as reliable, they are no more accurate than human intuition: they often misclassify human-written work as AI-generated and AI-generated work as human-written, especially when the writer has a “robotic,” overly descriptive, or overly formal style. The companies that sell these tools insist otherwise: “According to the companies that make them, these AI have less than .001% false positive rate” (Pindell, n.d.). But the truth is that AI detectors are not yet sophisticated enough to reliably distinguish human writing from artificial intelligence. Relying on AI to detect AI is also circular, because it uses a system built from the same technology to identify that technology’s own output; if the system is flawed or overly simplistic, it will misidentify AI-generated text as human-written or vice versa. The tools are also easily fooled: research shows that many have high false-positive rates, and simple modifications like adding misspellings or altering formatting can confuse them (Wood, 2024). These tools may seem helpful, but in reality they exacerbate biases against neurodivergent students, giving instructors apparent grounds to accuse them of cheating on assignments they spent hours writing. This not only marginalizes neurodivergent individuals but also discourages them from engaging with institutions and educational settings where they are dismissed and insulted.
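To see how a detector can misfire in exactly the ways described above, here is a deliberately oversimplified toy detector in Python. Real commercial detectors are proprietary, so this is a hypothetical stand-in: it flags text whose words look too statistically “predictable,” a rough analogue of the probability-based signals actual tools are believed to rely on. The word list, threshold, and sample passages are all made up for illustration.

# A deliberately oversimplified "detector" showing both failure modes
# described above. This is a toy, not the algorithm of any real product.

# Stand-in for the words a language model would consider highly predictable.
COMMON_WORDS = set(
    "the a an is are and of to in that this with results data "
    "consistent conclusions study analysis were".split()
)

def ai_likeness(text):
    # Fraction of words the toy "model" finds predictable; high = suspicious.
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(word in COMMON_WORDS for word in words) / len(words)

THRESHOLD = 0.7  # arbitrary cutoff; real tools also pick thresholds

formal_human = ("The results of the experiment are consistent with the data, "
                "and the conclusions remain consistent across trials.")
with_typos = formal_human.replace("consistent", "consistant")  # two misspellings

for label, text in [("formal human text", formal_human),
                    ("same text with typos", with_typos)]:
    score = ai_likeness(text)
    verdict = "flagged as AI" if score > THRESHOLD else "judged human"
    print(f"{label}: score {score:.2f}, {verdict}")

# The precise, formal human passage gets flagged (score 0.76), while two
# trivial misspellings drop the score to 0.65 and flip the verdict to
# "human": false positives on formal writers, false negatives from
# trivial edits.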

As we continue to transition into a technology-driven society, ethical dilemmas surrounding the use of artificial intelligence have become more prominent. One important question is whether we should trust systems that are designed to categorize and judge human behavior despite their inability to truly understand it. Computer software, no matter how advanced, cannot fully grasp the complexities of human experience, which creates a world where judgments may be based on the output of a machine rather than on the intricate, varied realities of individuals. In an article by Gonsalves et al. (2023), one of the interviewees, Dylan Losey, pointed out, “In practice, rushed applications of AI have resulted in systems with racial and gender biases. The bad of AI is a technology that does not treat all users the same.” If we allow artificial intelligence to dictate what counts as authentic human behavior, we risk creating a society that further dehumanizes those who already struggle to fit in. Relying on AI to evaluate human worth sets a perilous precedent; if the last 50 years of science fiction have taught us anything, it is that we should actively resist the temptation to let technology define humanity’s value.

The integration of AI into daily life has produced many unfair associations between neurodivergent speech patterns and AI-generated writing. Before judging those who happen to write in similar patterns, it is essential to consider how language models are created and trained. Every language model learns from datasets of human writing, including the work of neurodivergent individuals whose writing differs from the so-called “norm,” and it replicates those patterns, creating a false equivalency between neurodivergent communication and AI-generated text. Instead of rushing to judgment and assuming everyone is untrustworthy, it is important to remember that humans created AI language models, and they will therefore always reflect human characteristics.

Written by: Randi Taggart for LCC class OLTM 329 Foundations of Business


References

ARTiBA. (2024, March 22). How do large language models work? How to train them? Artificial Intelligence Board of America. https://www.artiba.org/blog/how-do-large-language-models-work-how-to-train-them

Centers for Disease Control and Prevention. (2024, May 16). Data and statistics on autism spectrum disorder. https://www.cdc.gov/autism/data-research/index.html

Coldwell, W. (2024, December 15). “I received a first but it felt tainted and undeserved”: Inside the university AI cheating crisis. The Guardian. https://www.theguardian.com/technology/2024/dec/15/i-received-a-first-but-it-felt-tainted-and-undeserved-inside-the-university-ai-cheating-crisis

Curzon, P. (2024, April 5). Neurodiversity and what it takes to be a good programmer. Computer Science for Fun. https://cs4fn.blog/2024/04/05/neurodiversity-and-what-it-takes-to-be-a-good-programmer/

Gonsalves, F., Green, J., Parrish, A., Moxley, T., Seeber, C., & Williamson, A. (2023, November 6). AI—The good, the bad, and the scary. Virginia Tech Engineer. https://eng.vt.edu/magazine/stories/fall-2023/ai.html

Nadkarni, A. (2024, October 25). Neurodivergent students more likely to be flagged by AI detectors. AI Detector Pro. https://blog.aidetector.pro/neurodivergent-students-falsely-flagged-at-higher-rates/

Pindell, N. (n.d.). The challenge of AI checkers. Center for Transformative Teaching, University of Nebraska–Lincoln. https://teaching.unl.edu/ai-exchange/challenge-ai-checkers/

Singer, J. (2017). Neurodiversity: The birth of an idea.

Wei, X., Yu, J. W., Shattuck, P., McCracken, M., & Blackorby, J. (2012). Science, technology, engineering, and mathematics (STEM) participation among college students with an autism spectrum disorder. Journal of Autism and Developmental Disorders, 43(7), 1539–1546. https://doi.org/10.1007/s10803-012-1700-z

Neurodiversity Hub. (n.d.). What is neurodiversity? https://www.neurodiversityhub.org/what-is-neurodiversity

Wood, C. (2024, September 10). AI detectors are easily fooled, researchers find. EdScoop. https://edscoop.com/ai-detectors-are-easily-fooled-researchers-find/ 

