A recent Reddit post claiming to come from an "internal whistleblower at a food delivery platform" spread like wildfire across social media before being proven a complete hoax fabricated with generative AI. The post, touted as "exposing algorithmic exploitation of drivers," received 87,000 likes on Reddit and garnered over 36.8 million views on X.

Carefully Fabricated "Internal Truth"
The poster claimed to be an UberEats employee and published a lengthy essay accusing the company of exploiting legal loopholes to steal drivers' wages. To add credibility, he supplied an "employee ID photo" and an 18-page "internal document" to renowned tech journalist Casey Newton. The document detailed how the company supposedly uses artificial intelligence to compute a "desperation score" for each driver, with a level of professional depth and technical detail that even a seasoned journalist found hard to distinguish from reality.
AI-Assisted Investigation: Exposing the Synthetic Truth
During verification, Newton used Google Gemini and its integrated SynthID watermarking technology to confirm that the employee ID photo had in fact been generated by artificial intelligence. Although the poster tried to obscure the traces by cropping and compressing the image, the robustness of the SynthID watermark ultimately exposed the forgery.
Max Spero, founder of Pangram Labs, pointed out that the widespread availability of LLMs (large language models) has dramatically lowered the cost of producing high-quality "realistic garbage content." There are now even professional firms that purchase "organic engagement" on Reddit, using AI-generated viral posts for brand manipulation or deliberate misinformation.
An Unavoidable Era of False Information
This is not an isolated case. Around the time this hoax was exposed, another similar AI-fabricated food delivery story was trending on Reddit. Experts warn that current detection tools are still not foolproof against multimedia synthetic content, and even when false information is eventually debunked, the social damage done during its viral spread is often irreversible.
In this era of AI-assisted deception, the public and the media are forced to act like detectives, repeatedly verifying any seemingly coherent piece of information they encounter on social platforms.
