GPT-4.5 Turing Test: AI Outperforms Humans in a Modern Twist
- NewBits Media
- Apr 6
- 3 min read
Updated: Apr 10

A recent preprint study from UC San Diego is turning heads in the AI world. Researchers put OpenAI’s latest model through what’s now being called the GPT-4.5 Turing Test — a contemporary version of the famous thought experiment — and the results suggest it’s getting harder than ever to tell humans and machines apart.
In the experiment, nearly 300 participants engaged in anonymous chat conversations, simultaneously messaging a human and an AI without knowing which was which. Their task? Decide who was the real person.
When GPT-4.5 was given a specific persona — such as a culturally fluent young adult — it was mistaken for a human 73% of the time, significantly more often than the real human participants were judged to be human. In other words, the AI didn't just fool people; it outshone them.
🧩 The Power of Personality
What made the difference was how the AI was prompted.
- With no persona, GPT-4.5's success rate dropped to 36%
- With a persona, it soared to 73%
- OpenAI's GPT-4o (no persona): 21%
- Meta's LLaMa 3.1-405B (with persona): performed well
- ELIZA, a simple rule-based chatbot developed in the 1960s, surprisingly achieved 23%
These findings underscore a key insight: giving AI a relatable identity makes it far more convincing to human users.
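In practice, a persona is usually injected as a system prompt ahead of the conversation. A minimal sketch of that prompt-construction step, assuming a chat-message format in the style of OpenAI's Chat Completions API (the persona text below is hypothetical; the study's actual prompts are described in the preprint, not reproduced here):

```python
from typing import Optional


def build_messages(user_text: str, persona: Optional[str] = None) -> list:
    """Return a chat-message list, optionally prefixed with a persona system prompt."""
    messages = []
    if persona:
        # The persona rides along as a system message, steering tone and style.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": user_text})
    return messages


# Hypothetical persona loosely resembling the "culturally fluent young adult"
# condition described above -- an illustration, not the study's wording.
PERSONA = (
    "You are a 20-something who texts casually: lowercase, slang, "
    "occasional typos, short replies."
)

with_persona = build_messages("hey, who won the game last night?", PERSONA)
no_persona = build_messages("hey, who won the game last night?")
```

The only difference between the two conditions is that leading system message, which is what makes the 36% vs. 73% gap above so striking.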
💬 GPT-4.5 Turing Test Success and the Question of Intelligence
So, does this mean GPT-4.5 is intelligent?
Not quite.
The Turing Test — introduced by Alan Turing in 1950 — was never intended as a literal benchmark for intelligence. It was more of a thought experiment: if a machine could mimic human conversation well enough to fool people, could it be said to “think”?
But today’s models like GPT-4.5 aren’t reasoning like humans. They’re highly advanced pattern matchers, trained on massive amounts of text to predict likely responses. As Google researcher François Chollet once said, the Turing Test was never meant to be taken as a strict, literal evaluation.
So while the results are impressive, they don’t prove the model understands you — it’s just really good at playing the part.
⚠️ What This Means for the Real World
Lead author Cameron Jones believes the findings go beyond academic curiosity and have real-world implications:
Customer service jobs and other short-form conversational roles may be easily automated
Phishing and impersonation scams could become more sophisticated and harder to detect
Trust in online interactions may continue to erode as AI becomes increasingly indistinguishable from humans
“This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.” — Cameron Jones
🔍 AI’s Passing Grade Is About Us, Too
Perhaps the most revealing insight is this: the Turing Test says as much about people as it does about machines.
As we become more familiar with AI quirks and patterns, we may grow better at spotting bots. But for now, GPT-4.5 shows that even brief, text-based interactions can blur the lines — and that AI is already capable of slipping through the cracks convincingly.
Enjoyed this article?
Stay ahead of the curve by subscribing to NewBits Digest, our weekly newsletter featuring curated AI stories, insights, and original content—from foundational concepts to the bleeding edge.
👉 Register or Login at newbits.ai to like, comment, and join the conversation.
Want to explore more?
AI Solutions Directory: Discover AI models, tools & platforms.
AI Ed: Learn through our podcast series, From Bits to Breakthroughs.
AI Hub: Engage across our community and social platforms.
Follow us for daily drops, videos, and updates:
Reddit | YouTube | Spotify | Facebook | Instagram | X (Twitter) | LinkedIn | Medium | Quora | Discord
And remember, “It’s all about the bits…especially the new bits.”