
How to Spot Fake Reviews Written By AI, According to an AI…



Fake reviews of hotels, restaurants, and tours have been a problem for travelers since the earliest days of the internet. But now that sketchy businesses can use artificial intelligence to produce fake reviews faster than Taco Bell can crank out burritos, the situation is rapidly getting a lot worse.

The Transparency Company, a firm that analyzes consumer reviews, estimates that 3 percent of reviews across all business sectors it studied in 2024 were generated by AI. That may not sound like much, but the company reports that fake AI reviews have been growing 80 percent month over month since June 2023.

That means the volume of fake AI reviews is nearly doubling every 30 days. Do the math and you’ll see that it won’t be long before a significant share of the meals reviewed at that great restaurant in Barcelona existed only in ChatGPT’s imagination.
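To get a feel for how fast 80 percent month-over-month growth compounds, here’s a quick back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the growth rate holds steady (it surely won’t forever):

```python
# Back-of-the-envelope: how fast does 80% month-over-month growth compound?
# Illustrative only; assumes the Transparency Company's reported rate holds steady.

MONTHLY_GROWTH = 1.8  # growing 80 percent month over month = multiplying by 1.8

volume = 1.0  # index the starting volume of fake AI reviews at 1
for month in range(1, 13):
    volume *= MONTHLY_GROWTH
    print(f"Month {month:2d}: {volume:8.1f}x the starting volume")

# After 6 months the volume is about 34x; after 12 months, about 1,157x.
```

Even if the real-world rate slows, compounding at anywhere near that pace swamps the honest reviews fast.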

I’ve spent the last year working in the field of generative AI. I’ve learned a lot about how its digital brain works and how to spot its tics and tells. I’ve also looked into the work of industry experts who study fake reviews and know how to sniff out a ringer.

Red flags for AI-generated reviews

Here’s a guide to guessing whether a review you’re reading is likely to be the product of artificial intelligence or if it was penned by a real, lovably flawed human.

Empty raves: This is the easiest AI tell. Generative AI is essentially a complex autocomplete machine, drawing on the vast trove of data it’s been trained on to make an educated guess about what the next word in a sentence is likely to be. Instruct AI to praise a certain restaurant or hotel and it’ll instantly spew the most common clichés it finds in other positive reviews of that restaurant, and of all restaurants. The food is “perfect,” the service “spot-on,” the overall experience “OMG the best ever!”—all without specific details to back anything up. Inane positivity (or, for that matter, inane negativity, because rivals sometimes plant bad reviews of their competitors) is the biggest giveaway of AI. (The toy sketch at the end of this list shows why autocomplete-style generation keeps landing on the same stock praise.)

Big clichés: Pangram Labs, an AI text detector, identifies the following specific “AI tells” in the Transparency Company’s report: when the reviewer says things like “The first thing that struck me,” “game-changer,” and “delivers on its promise,” warning bells should go off.

Phrases people don’t usually use: I find that a final paragraph that begins “In summary” is a dead giveaway. Who writes like that? A seventh grader writing an essay? Probably not someone reviewing a hotel with a noisy pool in Kissimmee. (Words like “indeed” and “moreover” are also good who-talks-like-that tells.) In my own legit work with AI—creating documents used for organizational development—I’ve had to specifically instruct it to avoid a list of those rarefied weasel words and phrases.

tl;dr: If a review is too long to read, a machine probably wrote it. Most human reviewers spend a few minutes at most pecking out a comment. Machines can crank out 700 words in seconds. While there are some true review hobbyists who go long, they are the exception, and you can identify them pretty easily. Which brings us to our next AI giveaway…

Rookie reviewers: AI reviews are often “authored” by AI-created profiles that have few other reviews to their credit. Sometimes, fake accounts have posted only that one review. Luckily, most review platforms include information about how many contributions a reviewer has made. It’s a good bet that “Amy9437” (number suffixes like that are also a good AI signal), who has just this one 5-star rave of a roadside motel to her unlikely name, is the spawn of a dark machine. But if a reviewer has written about a dozen restaurants and 10 hotels in the past two years, you’re more likely to be able to trust her comments.

A+ in composition: In my work comparing the output of various AI models, I’ve found significant variations in the characteristics of the prose they generate. But one thing is consistent: They all magically write grammatically correct sentences arranged into coherent paragraphs, and sometimes they add bullet points. Most humans don’t. Spelling and grammar errors may not fly in high school English class, but online, they let you know a real human who got a C+ from Mrs. Herbison is telling you about that underground brewery tour in Cincinnati.

Realistic details: AI can’t know that Tasha behind the front desk helped babysit the reviewer’s dog for 5 minutes while he went up to the rooftop bar to talk to his wife, and AI is unlikely to make up a story like that when tasked to write a hotel review. When you see specific, vivid incidents like that, you can probably feel more confident that it’s real.
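About that autocomplete point from the top of this list: here’s a toy sketch in Python of the core idea. Real systems are neural networks trained on billions of documents, but this miniature version, which just counts which word most often follows each word in a tiny made-up review corpus, shows why machine-generated praise gravitates to stock phrases:

```python
from collections import Counter, defaultdict

# Toy illustration of the "complex autocomplete" idea. Real generative AI uses
# neural networks trained on billions of documents; this sketch just counts
# which word most often follows each word in a tiny made-up review corpus.
corpus = (
    "the food was perfect and the service was spot-on "
    "the food was amazing and the staff was friendly "
    "the food was perfect the best ever"
)

followers = defaultdict(Counter)  # word -> counts of the words that follow it
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def autocomplete(word, length=8):
    """Repeatedly append the single most likely next word (greedy decoding)."""
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
# prints: the food was perfect and the food was perfect
```

Greedy autocomplete falls straight into a loop of the most common compliment (“the food was perfect,” forever), which is the “empty raves” failure mode in miniature.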

The future of AI-generated fake reviews

So is there hope things will improve and you won’t have to be so vigilant in the future?

In August 2024, the Federal Trade Commission finalized rules that make it illegal to produce and sell fake reviews, including fake AI reviews. It even prosecuted one mass creator of fake AI testimonials.

It’s not clear whether the Trump Administration will be as enthusiastic about consumer protection.

In the private sector, an international industry group called the Coalition for Trusted Reviews was formed in 2023 to share best practices and regulatory approaches for battling review fraud. Its members include the travel companies Tripadvisor, Booking.com, and Expedia Group (which in turn owns Hotels.com, Travelocity, Orbitz, Vrbo, and other brands). But no member claims to have fully cleansed its site of junk comments.

As we continue to follow the development of AI closely, there are two opposing forces worth watching.

On one hand, companies including Google and Tripadvisor have well-funded, technologically sophisticated efforts designed to keep fake reviews of all kinds, including those written by AI, off their sites. These programs, of course, use AI to police fake reviews.

On the other hand, you have a wily, resilient global network of fraudsters who study the platforms’ defenses and are always developing new ways to sneak around them.

This sets up a global AI arms race, similar to the one over cybersecurity, with the black hats and white hats in constant battle.

In the meantime? You can try plugging reviews into Pangram Labs’ detection tool to see if it smells a rat.

There’s also always that old-school recommendation engine: word of mouth. Your real-life family and friends who have been to a place may still be your best source of travel intel.

Editor’s note: You can also come to Frommer’s. We do not use AI to create our travel information. Since 1957, our guides have been written by humans, for humans.

Craig Stoltz, former travel editor of the Washington Post, spent a year working with generative AI in the United States federal government.