A Reddit user claiming to be a whistleblower from a food delivery app has been outed as a fake. The user wrote a viral post alleging that the company he worked for was exploiting its drivers and users.
“You guys always suspect the algorithms are rigged against you, but the reality is actually so much more depressing than the conspiracy theories,” the supposed whistleblower wrote.
He claimed to be drunk and posting from a library’s public wi-fi, typing out a long screed about how the company was exploiting legal loopholes to steal drivers’ tips and wages with impunity.
Those claims were, unfortunately, believable — DoorDash actually was sued for stealing tips from drivers, resulting in a $16.75 million settlement. But in this case, the poster had made up his story.
People lie on the internet all the time. But it’s not so common for such posts to hit the front page of Reddit, garner over 87,000 upvotes, and get crossposted to other platforms like X, where it got another 208,000 likes and 36.8 million impressions.
Casey Newton, the journalist behind Platformer, wrote that he reached out to the Reddit poster, who then contacted him on Signal. The Redditor shared what looked like a photo of his Uber Eats employee badge, as well as an 18-page “internal document” outlining the company’s use of AI to determine the “desperation score” of individual drivers. But as Newton tried to verify that the whistleblower’s account was legitimate, he realized he was being baited into an AI hoax.
“For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible in large part because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
There have always been bad actors seeking to deceive reporters, but the prevalence of AI tools means fact-checking now demands even more rigor.
Even AI models themselves often fail to detect whether an image or video is synthetic, which makes it hard to determine what’s real. In this case, Newton was able to use Google’s Gemini to confirm that the badge image was AI-generated, thanks to Google’s SynthID watermark, which survives cropping, compression, filtering, and other attempts to alter an image.
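For readers curious what that kind of check looks like in practice, here is a minimal sketch of asking Gemini about an uploaded image. It assumes the google-genai Python SDK and a GEMINI_API_KEY environment variable; whether SynthID verification behaves the same through the API as it does in the Gemini app is an assumption, so treat the model’s answer as one signal, not a forensic verdict.

```python
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Upload the suspect image (here, a local copy of the "employee badge" photo).
badge = client.files.upload(file="employee_badge.png")

# Ask the model whether the image was made with Google AI / carries SynthID.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        badge,
        "Was this image generated with Google AI? "
        "Does it appear to carry a SynthID watermark?",
    ],
)

print(response.text)  # the model's assessment, not a definitive ruling
```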
Max Spero — founder of Pangram Labs, a company that makes a detection tool for AI-generated text — works directly with the problem of distinguishing real and fake content.
“AI slop on the internet has gotten a lot worse, and I think part of this is due to the increased use of LLMs, but other factors as well,” Spero told TechCrunch. “There’s companies with millions in revenue that can pay for ‘organic engagement’ on Reddit, which is actually just that they’re going to try to go viral on Reddit with AI-generated posts that mention your brand name.”
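As a rough illustration of how a newsroom might fold a text-detection check like Pangram’s into its workflow, here is a minimal sketch. The endpoint URL, header name, and response field below are placeholders rather than Pangram’s documented API, so consult the vendor’s docs before relying on any of it.

```python
import os
import requests

# Placeholder values for illustration only; the real detector's endpoint,
# auth header, and response schema may differ.
API_URL = "https://api.example-detector.com/v1/classify"
API_KEY = os.environ["DETECTOR_API_KEY"]

def ai_likelihood(text: str) -> float:
    """Return a 0-1 score for how likely the text is AI-generated."""
    resp = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ai_likelihood"]  # hypothetical field name

if __name__ == "__main__":
    with open("suspect_post.txt") as f:
        post = f.read()
    print(f"AI likelihood: {ai_likelihood(post):.2f}")
```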
Tools like Pangram can help determine if text is AI-generated, but especially when it comes to multimedia content, these tools aren’t always reliable — and even if a synthetic post is proven to be fake, it might have already gone viral before being debunked. So for now, we’re left scrolling social media like detectives, second-guessing if anything we see is real.
Case in point: when I told an editor that I wanted to write about the “viral AI food delivery hoax that was on Reddit this weekend,” she thought I was talking about something else. Yes — there was more than one “viral AI food delivery hoax on Reddit this weekend.”