
I really have no time for blogging today, but I have to share Wikipedia’s “Signs of AI Writing” advice page that I stumbled across while not sleeping last night. It is sort of an internal white paper for those who edit that remarkable resource: not definitive, but trying really hard to be helpful.
I had no idea this existed. But it turns out to be the most explicit list of tells, bugaboos, and quirks of LLM output in fall 2025 that I have ever seen.
Last week I taught my doc seminar on scholarly writing, and was informed by my doc students that “em dashes mean AI wrote it.” A quick web search shows I am a few months behind this supposed wisdom, and yes, the em dash is on Wikipedia’s list.
(Though sadly, the only essential thinkpiece on the issue is not.)
The first takeaways are pretty breathtaking. I will quote widely from the page (linked above) because no time to paraphrase.
[snip]
LLM writing often puffs up the importance of the subject matter by adding statements about how arbitrary aspects of the topic represent or contribute to a broader topic…Words to watch: stands as / serves as / is a testament/reminder, plays a vital/significant/crucial role, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing, contributing to, enduring/lasting impact, watershed moment, key turning point, deeply rooted, profound heritage, steadfast dedication, indelible mark, solidifies …
AI chatbots tend to insert superficial analysis of information, often in relation to its significance, recognition, or impact. This is often done by attaching a present participle (“-ing”) phrase at the end of sentences, sometimes with vague attributions to third parties…Words to watch: ensuring …, highlighting …, emphasizing …, reflecting …, underscoring …, showcasing …, aligns with…
LLMs have serious problems keeping a neutral tone, especially when writing about something that could be considered “cultural heritage”—in which case they will constantly remind the reader that it is cultural heritage…Words to watch: rich/vibrant cultural heritage/tapestry, boasts a, continues to captivate, groundbreaking, intricate, stunning natural beauty, enduring/lasting legacy, nestled, in the heart of …
LLMs often introduce their own interpretation, analysis, and opinions in their writing, even when they are asked to write neutrally, violating the policy No original research. Editorializing can appear through specific words or phrases or within broader sentence structures. This indicator often overlaps with other language and tone indicators in this list. Note that humans and especially new editors often make this mistake as well…Words to watch: it’s important to note/remember/consider, is worth mentioning …
[/snip]
It goes on and on; you should check it out.
We are of course beyond “gotcha” AI moments, at least in my institution. We are encouraged to “teach the controversy,” surely, and to encourage students to begin using these tools critically because job skills.
But if you care about words, or were taught to, or make at least part of your life with words, I ask you: what do you notice about these “tells”?
That they are, and have always been, the hallmark of inexperienced writers trying to find their way into what they have to say.
When we who care about words say that AI output is “bad,” I can now begin to see what we mean. It is vague; it pretends to a perspective it does not have; it leans into gravity while being featherweight.
In other words: it is a lot like the papers many of our students would be writing on their own–if they were actually writing their papers.
So where’s the problem?
The problem is: until you write like this on your own, and get told you are writing like this and shown how to do better…you will continue to write like this.
Worse, you will be satisfied with this writing as “enough.” It looks “smart” and “authoritative,” so it must be.
And this is only the writing-skill part of the deeper issue, described in the lengthy quote from an associated page:
[snip]
LLMs are pattern completion programs: They generate text by outputting the words most likely to come after the previous ones. They learn these patterns from their training data, which includes a wide variety of content from the Internet and elsewhere, including works of fiction, low-effort forum posts, unstructured and low-quality content for search engine optimization (SEO), and so on. Because of this, LLMs will sometimes “draw conclusions” which, even if they seem superficially familiar, are not present in any single reliable source. They can also comply with prompts with absurd premises, like “The following is an article about the benefits of eating crushed glass”. Finally, LLMs can make things up, which is a statistically inevitable byproduct of their design, called “hallucination”…
As LLMs often output accurate statements, and since their outputs are typically plausible-sounding and given with an air of confidence, any time that they deliver a useful-seeming result, people may have difficulty detecting the above problems. An average user who believes that they are in possession of a useful tool, who maybe did a spot check for accuracy and “didn’t see any problems”, is biased to accept the output as provided; but it is highly likely that there are problems.
[/snip]
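If you want to see the “pattern completion” point in miniature, here is a toy sketch of my own (not Wikipedia’s, and nothing in it is a real LLM): a few lines of Python that learn which word tends to follow which in a tiny made-up corpus, then greedily continue a prompt. The scale is laughably wrong, but the basic move is recognizable: continue the pattern, do not consult the truth.

```python
# Toy illustration of "pattern completion": a bigram model that always
# emits the word most often seen following the previous one, learned
# from a tiny made-up corpus. Real LLMs are vastly larger and look at
# much more context, but the gesture is the same.
from collections import Counter, defaultdict

corpus = (
    "the site stands as a testament to its rich cultural heritage "
    "the festival plays a vital role in the region "
    "the museum stands as a reminder of its enduring legacy"
).split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, length=8):
    """Greedily append the most common next word, over and over."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))
# Prints something like: "the site stands as a testament to its rich"
# -- fluent-sounding, pattern-shaped, and saying nothing in particular.
```

The output reads smoothly because it is stitched from phrases that frequently co-occur, not because anything has been checked against the world, which is exactly the failure the quote above describes at scale.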
If you care about writing–or if you don’t care about writing, but do care about critical thinking, bias, or plausible-sounding and confidently-expressed things being accepted as true–well then, there is much to fear here.
Thoughts?
My typewriter sits in the corner and shakes its shaggy head at me…
Image borrowed from this Axios story on the apparent currency of the term “clanker” for undesired and ineffective AI. Image possibly AI generated, who can know anymore ¯\_(ツ)_/¯.