Why it’s so hard to tell if a piece of text was written by AI – even for AI

  • AI-generated text has become increasingly difficult to distinguish from human-written text, making it challenging for institutions to enforce rules governing its use.
  • The detection of AI-generated text relies on complex algorithms and background assumptions that can be difficult to make explicit, such as knowing which AI tools might have been used to generate the text.
  • There are different approaches to detecting AI-generated text, including using AI itself to detect AI-written text, examining statistical signals in the text, or verifying text with watermarks embedded by AI vendors.
  • However, each approach has its own limitations, such as learning-based detectors being sensitive to outdated training data and statistical tests relying on assumptions about proprietary AI models.

Large language models have become extremely good at mimicking human writing. Robert Wicher/iStock via Getty Images

People and institutions are grappling with the consequences of AI-written text. Teachers want to know whether students’ work reflects their own understanding; consumers want to know whether an advertisement was written by a human or a machine.

Writing rules to govern the use of AI-generated content is relatively easy. Enforcing them depends on something much harder: reliably detecting whether a piece of text was generated by artificial intelligence.

Some studies have investigated whether humans can detect AI-generated text. For example, people who use AI writing tools heavily themselves have been shown to detect AI-written text accurately, and a panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at scale therefore turn to automated AI text detectors.

The problem of AI text detection

The basic workflow behind AI text detection is easy to describe. Start with a piece of text whose origin you want to determine. Then apply a detection tool, often an AI system itself, that analyzes the text and produces a score, usually expressed as a probability, indicating how likely the text is to have been AI-generated. Use the score to inform downstream decisions, such as whether to impose a penalty for violating a rule.
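As a rough illustration, here is what that workflow looks like in code. Everything here is a placeholder: `score_text` stands in for whatever detection tool is actually used, and the 0.9 threshold is an arbitrary choice made for the example, not a recommended policy.

```python
# A minimal sketch of the detection workflow: text in, probability score out,
# then a downstream decision based on that score.

def score_text(text: str) -> float:
    """Stand-in for a real detection tool: returns the estimated
    probability that the text was AI-generated."""
    return 0.5  # placeholder value; a real detector would analyze the text


def decide(text: str, threshold: float = 0.9) -> str:
    """Turn the probability score into a downstream decision."""
    probability = score_text(text)
    if probability >= threshold:
        return f"flag for review (score={probability:.2f})"
    return f"no action (score={probability:.2f})"


print(decide("Sample essay text whose origin is in question."))
```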

This simple description, however, hides a great deal of complexity. It glosses over a number of background assumptions that need to be made explicit. Do you know which AI tools might have plausibly been used to generate the text? What kind of access do you have to these tools? Can you run them yourself, or inspect their inner workings? How much text do you have? Do you have a single text or a collection of writings gathered over time? What AI detection tools can and cannot tell you depends critically on the answers to questions like these.

There is one additional detail that is especially important: Did the AI system that generated the text deliberately embed markers to make later detection easier?

These indicators are known as watermarks. Watermarked text looks like ordinary text, but the markers are embedded in subtle ways that do not reveal themselves to casual inspection. Someone with the right key can later check for the presence of these markers and verify that the text came from a watermarked AI-generated source. This approach, however, relies on cooperation from AI vendors and is not always available.

How AI text detection tools work

One obvious approach is to use AI itself to detect AI-written text. The idea is straightforward. Start by collecting a large corpus, meaning a collection of writing, of examples labeled as human-written or AI-generated, then train a model to distinguish between the two. In effect, AI text detection is treated as a standard classification problem, similar in spirit to spam filtering. Once trained, the detector examines new text and predicts whether it more closely resembles the AI-generated examples or the human-written ones it has seen before.
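The sketch below illustrates that classification framing with scikit-learn. The handful of labeled examples is invented for illustration; a real detector would need a large and diverse corpus, and the choice of TF-IDF features with logistic regression is just one simple option among many.

```python
# A toy learned detector: treat AI text detection as binary text
# classification, similar in spirit to spam filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: label 1 = AI-generated, 0 = human-written.
texts = [
    "The results demonstrate a significant improvement across all key metrics.",
    "honestly i threw the draft together the night before, sorry about the typos",
    "In conclusion, this approach offers numerous benefits for all stakeholders.",
    "we argued about the title for an hour and still ended up hating it",
]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# The trained detector outputs a probability that new text is AI-generated.
new_text = ["This essay explores the multifaceted implications of the policy."]
print(detector.predict_proba(new_text)[0][1])
```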

The learned-detector approach can work even if you know little about which AI tools might have generated the text. The main requirement is that the training corpus be diverse enough to include outputs from a wide range of AI systems.

But if you do have access to the AI tools you are concerned about, a different approach becomes possible. This second strategy does not rely on collecting large labeled datasets or training a separate detector. Instead, it looks for statistical signals in the text, often in relation to how specific AI models generate language, to assess whether the text is likely to be AI-generated. For example, some methods examine the probability that an AI model assigns to a piece of text. If the model assigns an unusually high probability to the exact sequence of words, this can be a signal that the text was, in fact, generated by that model.
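One way to make that concrete, assuming you can run the candidate model yourself, is to compute the average per-token log-probability the model assigns to the text. The sketch below uses the openly available GPT-2 model from the Hugging Face transformers library purely as a stand-in for whichever model is under suspicion, and the cutoff at the end is an arbitrary illustration rather than a calibrated threshold.

```python
# A sketch of a probability-based statistical signal, assuming white-box
# access to the candidate model (GPT-2 used here as a stand-in).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Average log-probability per token under the candidate model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the input ids, the loss is the mean
        # negative log-likelihood of the text under the model.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

score = avg_log_prob("The quick brown fox jumps over the lazy dog.")
print(f"average log-probability: {score:.2f}")
# Arbitrary cutoff for illustration only: unusually high values are one
# signal that this particular model may have generated the text.
print("suspiciously model-like" if score > -3.0 else "no strong signal")
```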

Finally, in the case of text generated by an AI system that embeds a watermark, the problem shifts from detection to verification. Using a secret key provided by the AI vendor, a verification tool can assess whether the text is consistent with having been generated by a watermarked system. This approach relies on outside information, the vendor's key, rather than on inferences drawn from the text alone.
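To give a sense of how verification with a key can work, the sketch below follows a simplified version of one published idea, sometimes described as "green list" watermarking; real vendors' schemes may differ substantially, and the key, text and threshold here are all invented for illustration. A secret key determines which words count as "green," watermarked generation favors green words, and the verifier checks whether a text contains improbably many of them.

```python
# A simplified watermark verification sketch. The secret key is assumed to
# come from the AI vendor; the z-score test asks whether the text contains
# far more "green" words than the roughly 50% expected by chance.
import hashlib
import math

SECRET_KEY = "example-shared-secret"  # assumption: provided by the AI vendor

def is_green(word: str, key: str = SECRET_KEY) -> bool:
    """Keyed hash places each word on the green list with probability ~1/2."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_z_score(text: str) -> float:
    """Z-score of the green-word count against the chance expectation."""
    words = text.split()
    n = len(words)
    greens = sum(is_green(w) for w in words)
    expected, std = n / 2, math.sqrt(n) / 2
    return (greens - expected) / std

candidate = "The committee reviewed the proposal and approved the budget for next year."
z = green_z_score(candidate)
print("consistent with watermark" if z > 4 else "no watermark evidence")
```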

AI engineer Tom Dekan demonstrates how easily commercial AI text detectors can be defeated.

Limitations of detection tools

Each family of tools comes with its own limitations, making it difficult to declare a clear winner. Learning-based detectors, for example, are sensitive to how closely new text resembles the data they were trained on. Their accuracy drops when the text differs substantially from the training corpus, which can quickly become outdated as new AI models are released. Continually curating fresh data and retraining detectors is costly, and detectors inevitably lag behind the systems they are meant to identify.

Statistical tests face a different set of constraints. Many rely on assumptions about how specific AI models generate text, or on access to those models’ probability distributions. When models are proprietary, frequently updated or simply unknown, these assumptions break down. As a result, methods that work well in controlled settings can become unreliable or inapplicable in the real world.

Watermarking shifts the problem from detection to verification, but it introduces its own dependencies. It relies on cooperation from AI vendors and applies only to text generated with watermarking enabled.

More broadly, AI text detection is part of an escalating arms race. Detection tools must be publicly available to be useful, but that same transparency enables evasion. As AI text generators grow more capable and evasion techniques more sophisticated, detectors are unlikely to gain a lasting upper hand.

Hard reality

The problem of AI text detection is simple to state but hard to solve reliably. Institutions with rules governing the use of AI-written text cannot rely on detection tools alone for enforcement.

As society adapts to generative AI, we are likely to refine norms around acceptable use of AI-generated text and improve detection techniques. But ultimately, we’ll have to learn to live with the fact that such tools will never be perfect.

Ambuj Tewari receives funding from NSF and NIH.

Q. Why is it challenging to determine if a piece of text was written by AI?
A. It’s hard because large language models have become extremely good at mimicking human writing, making it difficult for humans to distinguish between human-written and AI-generated text.

Q. What is the basic workflow behind AI text detection?
A. The basic workflow involves applying a detection tool that analyzes the text and produces a score indicating how likely the text is to have been AI-generated.

Q. What are watermarks in AI-generated text?
A. Watermarks are subtle markers embedded in AI-generated text that can be detected later using a secret key provided by the AI vendor, allowing for verification of the text’s origin.

Q. How do learning-based detectors work in AI text detection?
A. Learning-based detectors use machine learning algorithms to train on labeled datasets and then apply these models to new text to predict whether it was generated by an AI system or not.

Q. What are some limitations of learning-based detectors?
A. Learning-based detectors can be sensitive to how closely new text resembles the training data, making them less accurate when the text differs substantially from the training corpus.

Q. How do statistical tests work in AI text detection?
A. Statistical tests rely on assumptions about how specific AI models generate text and often require access to those models’ probability distributions, which can break down if the models are proprietary or frequently updated.

Q. Why is it difficult for institutions to enforce rules governing the use of AI-written text?
A. Institutions cannot rely solely on detection tools for enforcement because these tools have limitations, such as being sensitive to outdated training data or relying on assumptions about specific AI models.

Q. What is the current state of AI text detection in terms of its effectiveness?
A. The problem of AI text detection is simple to state but hard to solve reliably, and institutions must adapt to the fact that detection tools will never be perfect.

Q. How might society refine norms around acceptable use of AI-generated text?
A. As society adapts to generative AI, it is likely to settle on clearer norms about which uses of AI-generated text are acceptable while also improving detection techniques, even though such tools will never be perfect.

Q. What is the relationship between transparency in AI text detection and evasion techniques?
A. Detection tools must be publicly available to be useful, but this same transparency enables evasion, making it difficult for detectors to gain a lasting upper hand.