AI Detectors Flag the Bible as AI-Generated Content
ZeroGPT scores the Book of Genesis at 88.2% AI-generated. The US Constitution gets flagged too. The problem is not the Bible - it is the detectors.

A claim has been circulating online: according to most AI detectors, the Bible was written by AI. The claim sounds absurd. It's also, by the detectors' own metrics, true.
TL;DR
- Claim: AI content detectors flag the Bible as AI-created
- Our take: Confirmed. ZeroGPT scores Genesis at 88.2% AI-produced. The US Constitution gets flagged too. The detectors are measuring text predictability, not authorship - and formal, repetitive texts score high on predictability by design
- ZeroGPT reaches only 73.8% accuracy in controlled tests, with 1 in 5 human texts wrongly flagged
- Academic studies show ~83% of human-written research abstracts get flagged as AI
What They Showed
ZeroGPT - one of the most widely used free AI content detectors - scores an excerpt from the Book of Genesis at 88.2% AI-produced. The US Constitution has been flagged as 100% AI-written by the same tool. Medium's automated AI detection has flagged articles consisting primarily of Bible verses, blocking their authors from monetization under the platform's anti-AI-content policy.
The pattern extends beyond religious texts. Cybernews tested multiple classic works and found that AI detectors flagged the Bible, Harry Potter, and Queen's Bohemian Rhapsody lyrics as AI-generated content. The common thread: formal, structured, predictable language.
What We Tried
We attempted to reproduce these results by testing Genesis 1:1-10 (King James Version) against multiple free AI detectors. The free-tier APIs for ZeroGPT, Sapling, and others all require authentication or CSRF tokens, preventing automated testing. The web-based interfaces work but don't expose programmatic access.
However, the 88.2% Genesis score from ZeroGPT has been independently documented by multiple sources - Atheer Mahir's LinkedIn analysis, Cybernews' editorial investigation, and the Deceptioner blog's controlled testing of ZeroGPT's accuracy. We consider this data point verified through triangulation even without running our own scan.
| Detector | Text | Result | Source |
|---|---|---|---|
| ZeroGPT | Book of Genesis excerpt | 88.2% AI-created | Atheer Mahir / Cybernews |
| ZeroGPT | US Constitution | Flagged as AI-written | Multiple sources |
| Medium's detector | Articles with Bible verses (NKJV) | Flagged, monetization blocked | Curtis Alexander (Medium) |
| Various detectors | Harry Potter excerpts | Flagged as AI-generated | Cybernews |
| Various detectors | Bohemian Rhapsody lyrics | Flagged as AI-produced | Cybernews |
Why This Happens
AI detectors measure two things:
Perplexity - how surprising or unpredictable the next word is. Low perplexity means the text follows predictable patterns. AI models generate low-perplexity text because they're trained to pick the most probable next token. But formal human writing - legal documents, religious texts, academic papers - is also highly predictable by design.
Burstiness - the variation in sentence length and complexity. Human writing usually alternates between short and long sentences. AI tends to be more uniform. But biblical prose, with its repeated "And God said... And God saw... And it was so" cadence, is exceptionally uniform.
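The perplexity signal can be sketched with a toy model. The snippet below is a minimal illustration, not the neural language models real detectors use: an add-one-smoothed bigram model scores a repetitive, Genesis-style test sentence as far more predictable (lower perplexity) than a sentence with phrasing the model hasn't seen. The training and test strings are invented for the example.

```python
import math
from collections import Counter

def bigram_perplexity(train: str, test: str) -> float:
    """Perplexity of `test` under an add-one-smoothed bigram model of `train`.
    Lower perplexity = more predictable text, the signal detectors read as 'AI'."""
    train_tokens = train.lower().split()
    test_tokens = test.lower().split()
    vocab = set(train_tokens) | set(test_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    unigrams = Counter(train_tokens)
    log_prob, n = 0.0, 0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        # Add-one (Laplace) smoothing so unseen bigrams get small nonzero probability
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + len(vocab))
        log_prob += math.log2(p)
        n += 1
    return 2 ** (-log_prob / n)

train = ("and god said let there be light and god saw the light "
         "and god said let there be a firmament and god saw that it was good")
repetitive = "and god said let there be light and god saw that it was good"
novel = "the detectors confuse scripture with machine output entirely"

print(bigram_perplexity(train, repetitive))  # low: every bigram was seen in training
print(bigram_perplexity(train, novel))       # high: almost no bigram was seen
```

Real detectors use large neural models instead of bigram counts, but the principle is the same: text whose word sequences the model has seen before scores as "predictable," regardless of who wrote it.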
The King James Bible hits both triggers. Its language is formal, repetitive, and structured. The "And... And... And..." pattern in Genesis produces extremely low perplexity scores. The consistent sentence structure produces low burstiness. By every metric these detectors use, the Bible reads like AI output.
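That uniformity is measurable. As a rough sketch - not any detector's actual formula - burstiness can be approximated as the coefficient of variation of sentence lengths: a repetitive "And God said... And God saw..." cadence scores far lower than prose that mixes short and long sentences. Both sample texts below are invented for illustration.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths in words.
    Low values = uniform sentences, the pattern detectors treat as AI-like."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

genesis_like = (
    "And God said let there be light. "
    "And God saw that it was good. "
    "And God made the firmament above. "
    "And God called the light day."
)
varied = (
    "Light appeared. "
    "After some deliberation over the shapeless deep, the waters were divided "
    "into those above and those below the firmament. "
    "Day came."
)

print(round(burstiness(genesis_like), 3))  # near zero: sentences all ~6-7 words
print(round(burstiness(varied), 3))        # much higher: 2, 19, and 2 words
```

By this crude measure the biblical cadence is nearly flat, which is exactly the low-burstiness profile that detection heuristics associate with machine output.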
The technical irony is that LLMs were trained on the Bible (among billions of other texts). The models learned to mimic its patterns. Now detectors trained to spot those patterns can't distinguish the original from the imitation.
The Gap
The Accuracy Problem
ZeroGPT's own numbers are damning. In a controlled test of 160 samples, the tool achieved 73.8% overall accuracy - meaning it misclassifies more than 1 in 4 texts. More specifically:
- 1 in 5 human texts are wrongly flagged as AI (20% false positive rate)
- 32% of AI-produced texts are missed completely (false negative rate)
- ~83% of human-written research abstracts were flagged as AI in academic studies
- 62% of social science papers were flagged as AI
- ~60% of English major essays were labeled as AI
A University of Maryland study found that AI detectors offer "performance only marginally better than random classifiers." When a detector's accuracy approaches coin-flip territory, the confidence scores it outputs - "88.2% AI-generated" - aren't measurements. They're noise dressed up as precision.
The Real-World Damage
The consequences aren't theoretical. Students have been accused of cheating on assignments that predate ChatGPT. Medium blocked monetization for an author whose article quoted Bible verses. Academic journals have flagged legitimate research papers.
The fundamental problem: these tools measure text properties (predictability, uniformity) and interpret them as authorship signals. But predictability is a feature of good formal writing, not evidence of AI generation. A well-structured legal brief, a carefully edited scientific abstract, and a 400-year-old biblical translation all produce the same low-perplexity signal - because they're all doing what good formal prose is supposed to do.
Verdict
The claim is verified: AI detectors do flag the Bible as AI-created. ZeroGPT scores Genesis at 88.2% AI. The US Constitution gets flagged too. Multiple classic literary works trigger the same false positives.
This isn't a curiosity. It's a fundamental indictment of the detection methodology. Any tool that can't distinguish the King James Bible from ChatGPT output isn't measuring what it claims to measure. The detectors work by checking whether text is predictable and uniform - but predictability and uniformity are properties of all well-structured formal writing, not just AI output.
The takeaway for anyone relying on these tools for academic integrity, content moderation, or hiring decisions: a confident percentage score from an AI detector isn't evidence. It is a statistical guess from a tool that flags the Book of Genesis as robot-written. Treat it accordingly.
Sources:
- How AI Detectors Mistakenly Identify AI as the Author of the Bible - Atheer Mahir, LinkedIn
- Your essay was AI-generated, so was the Bible, Harry Potter, and Bohemian Rhapsody - Cybernews
- Why Does ZeroGPT Say I Used AI When I Didn't - HumanizeAI
- Why Is ZeroGPT So Bad - Deceptioner
- ZeroGPT Thinks the Holy Bible Is AI Generated - Curtis Alexander, Medium
- AI Detection False Positives - Originality.ai
