Google has implemented a new AI-powered algorithm to detect and block fake online reviews more efficiently, blocking or removing more than 170 million policy-violating reviews in 2023, 45% more than the previous year. The algorithm analyzes patterns in review activity over time to swiftly identify suspicious behavior. The crackdown aims to protect local businesses from reputational harm caused by misleading reviews on Google Maps and Search, giving them faster detection, greater accuracy, and better protection from scams. The update encourages marketers to focus on authenticity and customer engagement while Google works to ensure online reputations reflect real-world performance.
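Google has not published implementation details, but one signal consistent with "patterns over time" is a sudden burst of reviews on a single listing. A minimal sketch of that idea in Python, assuming only a list of review dates per listing (the z-score statistic and cutoff are illustrative choices, not Google's method):

```python
from collections import Counter
from datetime import date
from statistics import mean, stdev

def flag_review_bursts(review_dates: list[date], z_cutoff: float = 3.0) -> list[date]:
    """Return days whose review volume is an outlier for this listing.

    Illustrative only: a production system would combine many signals
    (reviewer history, text similarity, cross-listing patterns, etc.).
    """
    daily = Counter(review_dates)   # number of reviews per calendar day
    counts = list(daily.values())
    if len(counts) < 2:             # stdev needs at least two data points
        return []
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:                  # perfectly uniform history, nothing to flag
        return []
    return [day for day, n in daily.items() if (n - mu) / sigma > z_cutoff]
```

Days flagged this way would plausibly feed human moderation rather than trigger removals outright, since legitimate spikes (a viral mention, a seasonal rush) look identical at this level.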
Author Cait Corrain has been dropped by her agent and publisher after admitting to posting fake reviews on Goodreads to boost her own book's rating and negatively impact other debut authors. The publisher, Del Rey, stated that Corrain's book, "Crown of Starlight," will no longer be published, and her agent, Rebecca Podos, announced the end of their partnership. Corrain's actions were exposed by author Xiran Jay Zhao, who identified several suspicious accounts involved in the review-bombing. Many of the targeted authors were people of color. Goodreads has removed the fake reviews and emphasized its commitment to maintaining the authenticity and integrity of ratings.
Author Cait Corrain, whose debut novel was set to be published next year, has issued an apology after being exposed for writing scathing fake book reviews targeting authors of color on Goodreads. Corrain's literary agent, publishing company, and distributor have all dropped her following the revelation. Corrain attributed her behavior to struggles with depression, alcoholism, and substance abuse, along with a new medication and a psychological breakdown. She accepted responsibility for the pain she caused and said the apology was delayed because she was in withdrawal while getting sober.
The fake review industry, where people and businesses pay marketers to post fake positive reviews on platforms like Google Maps, Amazon, and Yelp, is facing a crackdown as regulators and tech companies take action. The Federal Trade Commission has proposed a rule to punish businesses for buying or selling fake reviews, while online platforms like Amazon and Expedia have formed a coalition to combat review fraud. However, experts warn that the problem may be insurmountable, as fake reviewers have survived previous crackdowns. Fake reviews are pervasive, with Amazon blocking over 200 million suspected fake reviews last year. The industry is fueled by deceptive marketers, many of whom are based overseas, and the rise of artificial intelligence tools that make it easier to write fake reviews.
Amazon, Booking.com, Expedia Group, Glassdoor, Tripadvisor, and Trustpilot have formed the Coalition for Trusted Reviews, a global collaboration aimed at protecting access to trustworthy consumer reviews worldwide. The coalition will define best practices for hosting online reviews, share methods of detecting fake reviews, and engage in public education and information sharing to combat review fraud. The group is committed to upholding integrity, transparency, and accountability in reviews, setting new standards for maintaining authenticity and instilling confidence in consumers.
Mozilla is testing a new built-in "Review Checker" feature for its Firefox browser that rates the reliability of product reviews. The tool, powered by technology from Fakespot, assigns a letter grade to a product's reviews, offers an adjusted rating with unreliable reviews removed, and highlights key points reviewers mention. Fake reviews have been a major issue for online retailers, and this feature aims to help users identify deceptive ones. While Fakespot already offers similar services, being integrated into Firefox could significantly increase its user base. Mozilla has not announced an official release date for the feature yet.
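Fakespot's scoring model is proprietary, but the "adjusted rating" concept is easy to illustrate: recompute the average star rating after discarding reviews a detector judges unreliable. A sketch under that assumption, where the per-review reliability score is a hypothetical input rather than anything Fakespot exposes:

```python
def adjusted_rating(reviews: list[tuple[float, float]],
                    min_reliability: float = 0.5) -> float | None:
    """Average the star ratings of reviews whose reliability score clears
    a cutoff; returns None if nothing survives the filter.

    `reviews` holds (stars, reliability) pairs, where reliability is a
    hypothetical 0-to-1 score from some upstream fake-review detector.
    """
    kept = [stars for stars, reliability in reviews if reliability >= min_reliability]
    return sum(kept) / len(kept) if kept else None

# A product averaging 4.25 stars drops to 3.5 once the suspect reviews go:
print(adjusted_rating([(5.0, 0.1), (5.0, 0.2), (4.0, 0.9), (3.0, 0.8)]))  # 3.5
```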
The U.S. Postal Inspection Service has issued a warning about a scam known as "brushing," in which recipients receive unsolicited packages containing items they never ordered. The sender, usually an international third-party seller, aims to create the impression that the recipient is a verified buyer who has written positive online reviews of the merchandise, fraudulently boosting the products' ratings and sales numbers. While seemingly victimless, brushing means the recipient's personal information, at minimum a name and address, has already been compromised, which can expose them to identity theft. To protect themselves, recipients should not pay for the merchandise, should mark unopened packages "return to sender," and should closely monitor their credit reports and credit card bills.
The battle between AI-generated fake reviews and the AI systems designed to detect them is intensifying, raising concerns for consumers and the future of online content. Startups like Fakespot are working on ways to detect content written by AI platforms, while the Federal Trade Commission has proposed a new rule to crack down on fraudulent reviews. However, the line between real and fake reviews is becoming increasingly blurred, and the technology to detect fraudulent content is still a work in progress. Companies like Amazon use a combination of human investigators and AI to spot fake reviews, though Amazon permits AI-assisted reviews that reflect a genuine customer experience and don't violate its policy guidelines. The challenge is whether AI detection can outsmart the AI that creates fake reviews, since generative AI threatens to make fraudsters' work much easier. With 90% of consumers relying on reviews while shopping online, the prevalence of AI-generated fake reviews is a worrying prospect for consumer advocates.
The Federal Trade Commission (FTC) has proposed a rule to crack down on marketers who use fake product reviews, aiming to ban practices such as writing fake reviews, suppressing negative reviews, and paying for positive ones. Violators may face hefty fines, and the rule is intended to level the playing field for honest companies. Fake reviews have been a persistent issue on platforms like Amazon and Google, and the pandemic has exacerbated the problem. The FTC's proposed rule also includes provisions against review hijacking, offering incentives for positive reviews, undisclosed roles of company officers and managers in writing reviews, and other deceptive practices. The FTC will accept public comment on the proposal for 60 days.
The Federal Trade Commission (FTC) is proposing a formal ban on fake reviews and testimonials, as well as phony social media metrics. The new rule, which is close to being finalized, carries steep penalties of up to $50,000 per violation for businesses caught buying, selling, or manipulating online reviews. Each phony review can count as a separate violation each time a consumer sees it, so the penalty for a single review could reach $1 million. The FTC aims to level the playing field for honest companies by banning businesses from writing or selling fake reviews, obtaining or disseminating reviews they know to be fake, and engaging in review hijacking or offering compensation for reviews. The rule will also prohibit companies from using fake followers and views to inflate their social media numbers.
The Federal Trade Commission (FTC) is proposing a new rule to penalize companies for engaging in shady review practices, including buying fake reviews and other deceptive tactics. Under the proposed rule, businesses could face fines of up to $50,000 for each instance of a customer encountering a fake review. The rule aims to crack down on a range of deceptive review practices, including vague one-line reviews, insider reviews, review hijacking, and company-controlled review websites. The FTC also plans to target companies that try to suppress negative reviews, and it warns that the emergence of AI chatbots could make it easier for bad actors to write fake reviews. The FTC is currently seeking public comments on the proposal.
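The per-view framing makes the exposure arithmetic linear: if each consumer view of a fake review can count as a separate violation capped at $50,000, a modest audience is enough to reach the $1 million figure cited above. A toy worst-case calculation (assuming, purely for illustration, that every view drew the maximum fine):

```python
PER_VIOLATION_CAP = 50_000  # dollars, per the proposed FTC rule

def worst_case_penalty(consumer_views: int) -> int:
    """Upper-bound exposure if every view of a single fake review
    were treated as a separate, maximally fined violation."""
    return consumer_views * PER_VIOLATION_CAP

# Just 20 shoppers seeing one fake review already totals $1,000,000:
assert worst_case_penalty(20) == 1_000_000
```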
The Federal Trade Commission (FTC) has proposed a new rule to combat deceptive advertising practices involving fake reviews and testimonials. The rule aims to prevent marketers from using tactics such as generating fake reviews, suppressing negative reviews, and paying for positive reviews. The FTC's proposed measures include prohibiting the sale or procurement of fake consumer reviews, review hijacking, buying positive or negative reviews, and insider reviews without proper disclosure. The rule also addresses illegal review suppression, the creation of company-controlled review websites, and the sale of fake social media indicators. The FTC is seeking public comments on the proposed rule, which could lead to civil penalties for violators and help level the playing field for honest businesses.
Google is suing Ethan Hu and 20 unnamed co-defendants for creating over 350 fake business profiles and 14,000 fake reviews to sell to other businesses looking to promote their services in Google's search results. Hu allegedly posed as fake business owners on calls with Google employees, using props to pass off fake listings as real small businesses. Google is seeking damages and a permanent ban on advertising or selling false verification services. The lawsuit comes as Google faces competition from new AI-assisted search services and a potential flood of low-quality AI-generated search results.
AI-generated reviews are appearing on Amazon products, including waist trainers, children's textbooks, car batteries, baby car seat mirrors, and video game controller accessories. These reviews make no effort to hide that they were generated with AI; some even begin with the phrase "As an AI language model". Amazon has a zero-tolerance policy for fake reviews and teams dedicated to uncovering them, but the rise of chatbots like ChatGPT could make generating false reviews easier than before.
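Reviews that open with chatbot boilerplate are the easy case: a first-pass filter needs nothing more than string matching. A minimal sketch (the phrase list is illustrative, and any review with the preamble edited out sails past it):

```python
# Lowercased opening phrases that give away unedited chatbot output.
TELLTALE_OPENINGS = ("as an ai language model", "as an ai assistant")

def has_ai_boilerplate(review_text: str) -> bool:
    """Flag reviews that begin with known chatbot boilerplate.

    Catches only the laziest fakes; fluent AI-written text with the
    preamble removed needs statistical detection instead.
    """
    return review_text.strip().lower().startswith(TELLTALE_OPENINGS)

print(has_ai_boilerplate("As an AI language model, I cannot express personal "
                         "opinions, but customers report this battery lasts."))  # True
```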