4 Tips for Detecting AI Writing

Posted on January 21, 2025 | Updated on January 21, 2025

When artificial intelligence tools became widely available, many people were amazed at how quickly they could produce text. Some even predicted that the content writing industry would soon be a thing of the past because these tools could do the same jobs as professional writers. It soon became clear that people would need strategies for detecting AI writing, especially educators, editors and others who regularly come across content written with artificial intelligence. 

Fortunately, picking up on this type of writing is easier than many realize because it often features telltale clues. Noticing even one is a good reminder to scrutinize the rest of the content and ask whether it is really what it first seemed to be. 

1. Look for Inaccuracies

One of the first things to do when detecting AI writing is to read through the text as usual, looking for any errors. People make mistakes too, but AI tools “hallucinate,” meaning they present incorrect information as if it were factual.

These problems happen more often than users might think. One AI tool used in law hallucinated answers more than 34% of the time, raising serious doubts about its trustworthiness. It is not only clear-cut facts that an AI product can fabricate. Sometimes these tools appear to cite scholarly sources but reference books, articles and websites that do not exist. 

This inaccuracy problem highlights why anyone checking text for signs of AI writing cannot simply skim it and assume everything looks good. Ideally, the reviewer knows the topics covered in the piece well enough to spot accuracy problems quickly. 

The next best thing is to sharpen fact-checking skills and know which websites or other sources can verify anything in the text the reviewer cannot confirm or deny from their own knowledge. 

Although it can be impressive to see how quickly an AI tool churns out text, people should not assume it is accurate, and they should understand the potential ramifications of mistakes. Imagine someone using a tool to write a website’s return policy. That seems like a relatively standard piece of content. However, suppose the results say customers have a month to return an item when the store actually allows just two weeks. If no one from the business caught the error in time, it would clearly leave customers upset and confused. 

2. Check for Unusually Dramatic Language

A computer keyboard and a thick hardcover book.

Source: Photo by Duncan Reid on Unsplash 

An entire industry has sprung up to teach people how to get the results they want when inputting text into an AI tool. That’s called prompting, and some individuals have developed quite a knack for it. 

Although there are several suggested ways to format what goes into an AI tool, one of the most straightforward tips is to be specific. Instead of saying “Help me plan a two-week vacation to Rome,” people should give more details. The AI response will probably be more useful if the question includes information on a person’s budget and the ages and number of people going on the trip. 

However, unusually dramatic word choices in the responses are strong clues for detecting AI writing. If the text seems to go a bit overboard in describing relatively ordinary things, that could be a sign an AI tool composed it. Similarly, some people who know how to spot AI writing say it often reads as though a thesaurus was used to select many of the words. 

Of course, varied word choice is an excellent way to keep readers engaged. However, the trouble with many AI-written passages is their slight lack of authenticity. If someone thinks, “Did a real person actually write this?” that should be a cue to exercise caution and remember that an actual individual may not have composed it. 

3. Get To Know the Writer’s Tendencies 

The tip of a pen and cursive text on a lined page.

Source: Photo by Aaron Burden on Unsplash 

Detecting AI writing is typically easiest for those looking over the work of people they know. Perhaps the content comes from a marketing team member who has been with the company for several years. Alternatively, it might be from a student a professor has taught in several classes over numerous semesters. 

Familiarity is a fantastic tool for detecting AI writing because those checking the content can compare it with what they already know about the person who claims to have composed it. It also helps to keep writing samples from several years as references, because it can take time for people to develop their unique voices. Passion, experience and the confidence to embrace one’s imperfections all help writers with that daunting task.

If someone starts regularly using — or overusing — a new-to-them word, that isn’t an automatic sign that they have become too dependent on an AI writing tool. Perhaps they just came across that word in a particularly memorable book, looked it up to verify its meaning and decided they wanted it to start making regular appearances in their writing. 

Detecting AI writing is not about becoming suspicious the moment someone seems dependent on a high-tech tool. It requires getting to the heart of what is really happening. Maybe the person uses artificial intelligence to help them write but only draws inspiration from the tool rather than pulling content directly from it. 

Never assume the worst right away; give people the benefit of the doubt. Talk to them about their writing process, their challenges and what has gone well for them lately. These discussions will likely reveal whether and how individuals use AI in their writing, providing opportunities for change if necessary.

4. Screen for Telltale Words

A person's hands on a laptop keyboard.

Source: Photo by Kaitlyn Baker on Unsplash 

Researchers have become interested in how writing has changed since the AI boom. In one case, teams from two universities collaborated to analyze the abstracts of 14 million papers, using that massive collection of content to find words favored by AI tools. 

One of the more startling takeaways was that use of the word “delves” rose by more than 25 times once ChatGPT was available. Additionally, “showcasing” and “underscores” appeared nine times as often. 

As a previous tip noted, looking for specific words alone is insufficient for detecting AI writing. That exercise may tell part of the story, but the mere use of a particular word should not be enough to tip someone off.

Even so, this study shows that specific words have begun appearing more often. Another finding was that papers from countries where English is not the native language were more likely to be flagged as written by or with the help of AI. That indicates people may be using artificial intelligence to polish their writing, which is particularly likely when someone must produce content in English even though it is not their native language. 

If this situation arises, treat it as another opportunity to speak with the writer and learn more about the role AI plays, if any, in their writing. The findings may be wholly innocent. Imagine a non-native English speaker who sees the word “delves” more often because it appears in so much AI-written material. They may then assume it is one of the most common English words and believe it belongs in their own writing. 
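For readers comfortable with a little scripting, a small-scale version of the word-frequency check the researchers ran across millions of abstracts can be run on a single document. The short Python sketch below counts a handful of telltale words and reports how often they appear per 1,000 words. The word list and the rate metric are illustrative assumptions, not a validated detector, and a high count is only a prompt for closer reading and a conversation with the writer, never proof of AI use.

```python
import re
from collections import Counter

# Words the study above found surging after ChatGPT's release.
# This list is illustrative only; extend or trim it as needed.
TELLTALE_WORDS = {"delve", "delves", "showcasing", "underscores"}

def telltale_word_rate(text):
    """Count telltale words and return (counts, hits per 1,000 words)."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(w for w in words if w in TELLTALE_WORDS)
    rate = 1000 * sum(hits.values()) / max(len(words), 1)
    return hits, rate

sample = (
    "This piece delves into the findings, showcasing key trends and "
    "underscores the need for careful review."
)
hits, rate = telltale_word_rate(sample)
print(hits)                                    # per-word counts
print(f"{rate:.1f} telltale words per 1,000")  # rough frequency signal
```

As with every tip in this article, treat the output as one signal among many rather than a verdict.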

Detecting AI Writing Is Becoming More Necessary

Some people may think it doesn’t matter whether someone uses an AI tool to write, as long as the content is clear and impactful. However, challenges can crop up, especially if artificial intelligence produces material with mistakes or the content comes across as inauthentic. 

When writers work on tasks that permit AI for guidance, they should still rely primarily on their own skills and knowledge. Artificial intelligence is improving, but it remains imperfect. 

About The Author

Coraline (Cora) Steiner is the Senior Editor of Designerly Magazine, as well as a freelance developer. Coraline particularly enjoys discussing the tech side of design, including IoT and web hosting topics. In her free time, Coraline enjoys creating digital art and is an amateur photographer.
