Can ChatGPT Be Detected By Humans?
With the use of AI platforms like ChatGPT on the rise, you may be looking at every piece of content you consume with a more critical eye.
You won’t be one of the people easily fooled by images of religious leaders in puffer jackets and politicians being arrested. You know the telltale signs of photo-altering and you’ve vowed to never get fooled.
But identifying AI is a little harder when it’s written.
This page will give you foolproof methods to test if something has been written by AI.
Since We Talk About AI and ChatGPT So Much, You Might Be Thinking to Yourself, “Was This Written By AI?”
The answer is no.
If you don’t trust us, there are a few things you can do to confirm or refute our claim.
The easiest way is to run the content through an AI detector like GPTZero, one of the first of its kind.
If you run the first part of this page through GPTZero, you can check the verdict for yourself.
“How Do AI Detectors Work?” You Ask
Detectors like Crossplag’s boldly claim to be the solution to the threat to originality. We don’t know if that’s true, but we do know that they rely on machine learning algorithms and natural language processing techniques to detect AI-written content. Other platforms use the same methods.
GPTZero does something similar. The website states that its classification model can tell whether a body of text was written by AI by providing predictions at multiple levels: sentence, paragraph, and the entire document. GPTZero was trained on a vast amount of human-written and AI-generated text, and it focuses mainly on English.
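GPTZero hasn’t published its full model, but one signal its creators have publicly described is “burstiness”: human writing tends to mix short and long sentences, while AI text is often more uniform. The function below is our own illustrative sketch of that idea, not GPTZero’s actual code.

```python
import re
import statistics


def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words.
    Human prose usually mixes short and long sentences (high score);
    very uniform sentence lengths (low score) can hint at AI text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)
```

For example, `burstiness("One two. One two.")` returns `0.0` (perfectly uniform), while a passage that alternates a two-word sentence with a twenty-word one scores much higher. A real detector would combine many such signals rather than rely on one.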
How to Tell if Something Was Written By AI Without Using an AI Detector
If you don’t have time to run a chunk of words through an AI detector like Crossplag or GPTZero but you still want to know if you’re being fed something written by a robot, there are steps you can take.
In an article published by MIT Technology Review, one expert provided an easy tip for identifying AI-written content.
According to Daphne Ippolito, a senior research scientist at Google Brain, seeing the word “the” too many times is a hint that the words you’re reading may not have been written by a human.
Since language models like ChatGPT work by predicting the next word in a sentence, they’re more likely to use words like “the,” “it,” or “is” instead of larger, more complex words. AI detectors can pick up frequent uses of these words exceptionally well. This is not new information, either: Ippolito and her colleagues discovered this in research published in 2019.
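Ippolito’s tip can be turned into a crude heuristic: measure what fraction of a text is made up of common function words. The word list and the function below are our own illustration, not something from the 2019 research.

```python
import re
from collections import Counter

# Illustrative set of high-frequency function words; a real
# heuristic would use a longer, empirically chosen list.
FUNCTION_WORDS = {"the", "it", "is", "a", "of", "and", "to", "in"}


def function_word_rate(text: str) -> float:
    """Fraction of tokens that are common function words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in FUNCTION_WORDS) / len(tokens)
```

Running this on “the cat is on the mat” gives 0.5, since three of the six words are in the list. On its own the number proves nothing; the research only suggests that unusually high rates of these filler words are one hint among many.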
Another interesting finding from that research is that human readers thought AI-written text looked “cleaner” and contained fewer mistakes, leading them to the incorrect conclusion that it was written by a person.
However, as we all know, humans make mistakes. Most copy written by a person includes typos, shifts in style, slang, and the dialect of the writer’s region. Robots, on the other hand, rarely make typos and excel at producing flawless, uniform text.
If you’re wondering if something is written by a human, look for the mistakes.
AI Watermarks Are Another Way to Identify Robot-Written Content
Scott Aaronson, a computer scientist from the University of Texas working as a researcher at OpenAI, has been developing watermarks for large bodies of text written by ChatGPT and other AI models. On his blog, he describes the watermark as an “unnoticeable secret signal in its choices of words.”
An OpenAI spokesperson confirmed they are in fact working on watermarks to reinforce their policy that users of the platform should be open and honest about using AI.
Watermarks have been used in pictures and, more recently, videos to identify the creator of a work so others can’t pass it off as their own.
In the case of written content, this will look a little different.
According to Search Engine Journal, watermarking ChatGPT-produced text will involve embedding words, letters, and punctuation “in the form of a secret code.”
This will be helpful for a variety of situations: teachers who want to identify whether a student’s essay was written by AI, recruiters who are looking for candidates who write their own cover letters and resumes, and for you.
DataReportal found that the average Canadian spends a little over 6.5 hours staring at a screen each day. Americans spend nearly seven hours doing the same thing.
In that time, you’re being exposed to 34 GB of data.
That’s about 100,000 words heard or read every day. For context, that’s the equivalent of reading To Kill a Mockingbird every day.
How much of that content is written by a human?
And perhaps more importantly, how much of it is factual and free from propaganda?
According to Scott Aaronson, watermarks will help mitigate that risk.
“This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda—you know, spamming every blog with seemingly on-topic comments… Or impersonating someone’s writing style in order to incriminate them. These are all things one might want to make harder, right?
More generally, when you try to think about the nefarious uses for GPT, most of them—at least that I was able to think of!—require somehow concealing GPT’s involvement. In which case, watermarking would simultaneously attack most misuses.”
With the increase in AI detectors, it’s only a matter of time before people learn that robots can’t replace humans, but they can work with them.