
2:19 PM PDT · March 31, 2025
This month, OpenAI unveiled a new image generator as part of ChatGPT's 4o model that is much better at generating text inside images.
People are already using it to generate fake restaurant receipts, potentially adding another tool to the already-extensive toolkit of AI deepfakes used by fraudsters.
Venture capitalist and prolific social media poster Deedy Das posted a photo on X of a fake receipt for a (real) San Francisco steakhouse that he says was created with 4o.
You can use 4o to generate fake receipts.
There are too many real world verification flows that rely on “real images” as proof. That era is over. pic.twitter.com/9FORS1PWsb
Others were able to replicate similar results, including one with food or drink stains to make it look even more authentic:
I think in the original image the letters are too perfect and they don’t bend with the paper. They look like hovering above the paper. Here is my attempt to make it more realistic. Let me know what you think. pic.twitter.com/EixRSHubeY
— Michael Gofman (@michaelgofman) March 29, 2025
The most real-looking example TechCrunch found was actually from France, where a LinkedIn user posted a crinkled-up AI-generated receipt for a local restaurant chain:

TechCrunch tested 4o and was also able to generate a fake receipt for an Applebee’s in San Francisco:

But our attempt had a couple of dead giveaways that it was fake. For one, the total uses a comma instead of a period. For another, the math doesn't add up. LLMs still struggle with basic arithmetic, so this isn't particularly surprising.
But it wouldn’t be hard for a fraudster to quickly fix a few of the numbers with either photo editing software or, possibly, more precise prompts.
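That "the math doesn't add up" giveaway is also the easiest thing for an expense system to catch automatically. Below is a minimal, hypothetical sketch of that kind of arithmetic consistency check; the field names and tax handling are illustrative assumptions and assume the numbers have already been pulled off the receipt by some OCR step not shown here.

```python
# Hypothetical sanity check for the "math doesn't add up" giveaway described
# above: given line items and printed totals extracted from a receipt, verify
# that the arithmetic is internally consistent. Field names and tax handling
# are illustrative assumptions, not any real expense system's API.
from decimal import Decimal

def receipt_math_checks(items: list[Decimal], printed_subtotal: Decimal,
                        printed_tax: Decimal, printed_total: Decimal) -> list[str]:
    """Return a list of arithmetic inconsistencies found on the receipt."""
    problems = []
    if sum(items) != printed_subtotal:
        problems.append(f"line items sum to {sum(items)}, receipt says {printed_subtotal}")
    if printed_subtotal + printed_tax != printed_total:
        problems.append(f"subtotal + tax = {printed_subtotal + printed_tax}, "
                        f"receipt says {printed_total}")
    return problems

# Example: a receipt whose printed total doesn't match its own numbers.
print(receipt_math_checks(
    items=[Decimal("18.00"), Decimal("24.50")],
    printed_subtotal=Decimal("42.50"),
    printed_tax=Decimal("3.72"),
    printed_total=Decimal("45.22"),   # should be 46.22
))
```

Of course, a check like this only flags sloppy fakes, which is exactly why a fraudster cleaning up the numbers matters.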
Making it this easy to generate fake receipts clearly presents huge opportunities for fraud. It isn't hard to imagine bad actors using this kind of tech to get “reimbursed” for entirely fake expenses.
OpenAI spokesperson Taya Christianson told TechCrunch that all of its images include metadata indicating they were made by ChatGPT. Christianson added that OpenAI “takes action” when users violate its usage policies and that it’s “always learning” from real-world use and feedback.
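For readers curious what checking that metadata looks like in practice, here is a minimal sketch, assuming the provenance data the spokesperson refers to is embedded in the image file itself (OpenAI has said its image tools use the C2PA content-credentials standard). It shells out to the exiftool CLI, which must be installed separately, and only does a crude keyword scan, so it is a heuristic rather than a verifier; and as critics often note, metadata can simply be stripped from a file before it is submitted.

```python
# Minimal sketch: dump an image's metadata with exiftool and look for
# C2PA / JUMBF / OpenAI provenance markers. Heuristic only; tag names vary
# by file format, and stripped metadata proves nothing either way.
import json
import subprocess
import sys

def dump_metadata(path: str) -> dict:
    """Return all metadata exiftool can read from the file, as a dict."""
    out = subprocess.run(["exiftool", "-j", path],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)[0]

def looks_ai_provenanced(meta: dict) -> bool:
    """Heuristic: does any metadata key or value mention C2PA, JUMBF, or OpenAI?"""
    blob = json.dumps(meta).lower()
    return any(marker in blob for marker in ("c2pa", "jumbf", "openai"))

if __name__ == "__main__":
    meta = dump_metadata(sys.argv[1])
    print("provenance metadata found" if looks_ai_provenanced(meta)
          else "no provenance metadata found")
```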
TechCrunch then asked why ChatGPT allows people to generate fake receipts in the first place, and whether this is in line with OpenAI’s usage policies (which ban fraud).
Christianson replied that OpenAI’s “goal is to give users as much creative freedom as possible” and that fake AI receipts could be used in non-fraud situations like “teaching people about financial literacy” along with creating original art and product ads.
Charles Rollet is a senior reporter at TechCrunch. His investigative reporting has led to U.S. government sanctions against four tech companies, including China’s largest AI firm. Prior to joining TechCrunch, Charles covered the surveillance industry for IPVM. Charles is based in San Francisco, where he enjoys hiking with his dogs. You can contact Charles securely on Signal at charlesrollet.12 or +1-628-282-2811.