
🧠 AI Identity Theft — Harvard Scientist Warns of Impostor Videos and Launches Official Channel


Harvard astrophysicist Avi Loeb says fake AI-generated videos impersonating him have spread widely online, convincing viewers and prompting urgent action to protect scientific credibility.


In a new essay, Loeb described spending the past two months battling fabricated clips that falsely portrayed him discussing interstellar signals, including a fake claim that "passengers from 3I/ATLAS" had sent a message, often contradicting his real research and published writing.


One personal encounter pushed him to act: after a visitor shared how she had been misled by a convincing AI video of him, Loeb decided to create an official YouTube channel to host only verified content — and to request takedowns of impostor videos.


🔍 What Happened With AI Identity Theft


🎥 Fake Videos Went Viral


YouTube clips used AI to mimic Loeb’s voice and appearance, presenting fabricated claims about space discoveries and alleged signals.


📩 Fans Sounded the Alarm


Thousands of viewers emailed him pointing out inconsistencies, including background clocks that never moved and statements clashing with his daily essays.


🚨 A Threat to Scientific Integrity


Loeb likened the spread of AI-driven misinformation to AI identity theft, arguing it undermines trust at the core of science.


📺 New Official Channel Coming


He announced plans to launch a verified YouTube channel to distribute approved videos and combat impersonation.


🤖 A Balanced View of AI


Loeb emphasized that AI itself isn’t the enemy — but warned its misuse can distort truth just as easily as it can advance knowledge.


⭐ Why It’s Important


As generative AI becomes more realistic, cases like this highlight a growing challenge: protecting public trust in experts, evidence, and authentic voices.


Loeb’s experience shows how easily sophisticated fakes can spread — and why researchers, platforms, and audiences alike will need new safeguards to verify what’s real in an AI-driven media world.


His response — transparency, official channels, and accountability — may become a blueprint for how public figures defend credibility in the age of synthetic media.



Enjoyed this article?


Stay ahead of the curve by subscribing to NewBits Digest, our weekly newsletter featuring curated AI stories, insights, and original content—from foundational concepts to the bleeding edge.


👉 Register or Login at newbits.ai to like, comment, and join the conversation.


Want to explore more?


  • AI Solutions Directory: Discover AI models, tools & platforms.

  • AI Ed: Learn through our podcast series, From Bits to Breakthroughs.

  • AI Hub: Engage across our community and social platforms.


Follow us on our social platforms for daily drops, videos, and updates.


And remember, “It’s all about the bits…especially the new bits.”
