
Designing Better Online Review Systems

Geoff Donaker manages Burst Capital, through which he invests in and advises tech startups. He was Yelp chief operating officer from 2006 to 2016. Hyunjin Kim is a doctoral candidate in the strategy unit at Harvard Business School. Her research explores how organizations can improve strategic decision-making and productivity in the digital economy. Michael Luca is the Lee J. Styslinger III associate professor of business administration at Harvard Business School and a co-author (with Max H. Bazerman) of the forthcoming book “The Power of Experiments: Decision Making in a Data-Driven World.”


HOW TO CREATE RATINGS THAT BUYERS AND SELLERS CAN TRUST.

Online reviews are transforming the way consumers choose products and services. Managed well, a review system creates value for buyers and sellers alike. Trustworthy systems can give consumers the confidence they need to buy a relatively unknown product.

But for every thriving review system, many others are barren, attracting neither reviewers nor other users. And some amass many reviews but fail to build consumers’ trust in their informativeness. Research by one of us (Mike) and Georgios Zervas has found that businesses are especially likely to engage in review fraud when their reputation is struggling or competition is particularly intense.

Drawing on our research, teaching and work with companies, this article explores frameworks for managing a review ecosystem — shedding light on the issues that can arise and the incentives and design choices that can help to avoid common pitfalls.

NOT ENOUGH REVIEWS

Many review systems experience a shortage of reviews, especially when they’re starting out. While most people read reviews to inform a purchase, only a small fraction write reviews on any platform they use. This situation is exacerbated by the fact that review platforms have strong network effects: It is particularly difficult to attract review writers in a world with few readers, and difficult to attract readers in a world with few reviews.

We suggest three approaches that can help generate an adequate number of reviews:

— SEEDING REVIEWS: Early-stage platforms can consider hiring reviewers or drawing in reviews from other platforms (through a partnership and with proper attribution). For platforms looking to grow their own review ecosystem, seeding can be useful in the early stages because it doesn’t require an established brand to incentivize activity. However, it can be costly when a large number of products or services must be covered, and the reviews you get may differ from organically generated content.

— OFFERING INCENTIVES: Motivating your platform’s users to contribute reviews and ratings can be a scalable option and can also create a sense of community. Financial incentives can become a challenge if you have a large product array. But a bigger concern may be that if they aren’t designed well, both financial and nonfinancial incentives can backfire by inducing users to populate the system with fast but sloppy reviews that don’t help other customers.

— POOLING PRODUCTS: By reconsidering the unit of review, you can make a single comment apply to multiple products. On Yelp, for example, hairstylists who share salon space are reviewed together under a single salon listing. A risk to this approach, however, is that pooling products to achieve more reviews may fail to give your customers the information they need about any particular offering.
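
For illustration, here is a minimal sketch in Python of what pooling can look like in a review system’s data model (the names, mapping and data are hypothetical, not Yelp’s actual implementation): reviews of individual stylists roll up to a shared salon listing, whose rating is the average of the pooled reviews.

    from collections import defaultdict

    # Hypothetical mapping from individual service providers to a pooled listing.
    STYLIST_TO_SALON = {
        "stylist_a": "salon_1",
        "stylist_b": "salon_1",
        "stylist_c": "salon_2",
    }

    # Each review is (provider, star rating); per-stylist reviews are sparse.
    reviews = [("stylist_a", 5), ("stylist_b", 4), ("stylist_b", 5), ("stylist_c", 3)]

    # Pool reviews at the salon level so each listing page has more content.
    pooled = defaultdict(list)
    for stylist, stars in reviews:
        pooled[STYLIST_TO_SALON[stylist]].append(stars)

    for salon, stars in sorted(pooled.items()):
        print(f"{salon}: {sum(stars) / len(stars):.1f} stars from {len(stars)} reviews")

The trade-off named above shows up directly in the output: salon_1’s pooled average says nothing about whether stylist_a or stylist_b earned it.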

All these strategies can help overcome a review shortage, allowing content development to become more self-sustaining as more readers benefit from and engage with the platform. However, platforms have to consider not only the volume of reviews but also their informativeness — which can be affected by selection bias and gaming of the system.

SELECTION BIAS

Research has shown that users’ decisions to leave a review often depend on the quality of their experience. On some sites, customers may be likelier to leave reviews if their experience was good; on others, only if it was very good or very bad. In either case, the resulting ratings can suffer from selection bias: They might not accurately represent the full range of customers’ experiences of the product. If only satisfied people leave reviews, for example, ratings will be artificially inflated. Selection bias can become even more pronounced when businesses nudge only happy customers to leave a review.
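
A small simulation makes the inflation concrete (a hypothetical sketch in Python; the uniform “true” experience distribution and the assumption that happier customers review more often are illustrative assumptions, not any platform’s data):

    import random

    random.seed(42)

    # Hypothetical "true" experiences: star ratings 1-5, uniformly likely,
    # so the average experience across all customers is 3.0.
    true_ratings = [random.randint(1, 5) for _ in range(100_000)]

    # Assumed self-selection: a 5-star experience is five times as likely
    # to produce a review as a 1-star one (50% vs. 10% review rates).
    observed = [r for r in true_ratings if random.random() < r / 10]

    print(f"True average experience: {sum(true_ratings) / len(true_ratings):.2f}")
    print(f"Observed average rating: {sum(observed) / len(observed):.2f}")
    # Expect roughly 3.0 vs. 3.7: the displayed average is inflated purely
    # by who chooses to review, not by product quality.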

Any review system can be crafted to mitigate the bias it is most likely to face. The entire review process — from the initial ask to the messages users get as they type their reviews — provides opportunities to nudge users to behave in less-biased ways. Experimenting with design choices can help show how to reduce the bias in reviewers’ self-selection as well as any tendency users have to rate in a particular way.

REQUIRE REVIEWS. A heavier-handed approach is to require users to review a purchase before making another one. But tread carefully: This may drive some customers off the platform, and it can lead to a flood of noninformative ratings that customers dash off as a default — creating noise and a different kind of error in your reviews. For this reason, platforms often look for other ways to minimize selection bias.

ALLOW PRIVATE COMMENTS. The economists John Horton and Joseph Golden found that on the freelancer review site Upwork, employers were reluctant to leave public reviews after a negative experience with a freelancer but were open to leaving feedback that only Upwork could see. This provided Upwork with important information — about when users were or weren’t willing to leave a review, and about problematic freelancers — that it could use either to change the algorithm that suggested freelancer matches or to provide aggregate feedback about freelancers.

DESIGN PROMPTS CAREFULLY. More generally, the reviews people leave depend on how and when they are asked to leave them. Platforms can minimize bias in reviews by thoughtfully designing different aspects of the environment in which users decide whether to review.

FRAUDULENT AND STRATEGIC REVIEWS

Sellers sometimes try (unethically) to boost their ratings by leaving positive reviews for themselves or negative ones for their competitors while pretending that the reviews were left by real customers. This is known as astroturfing. The more influential the platform, the more people will try to astroturf.

Platform design choices and content moderation play an important role in reducing the number of fraudulent and strategic reviews.

SET RULES FOR REVIEWERS. Design choices begin with deciding who can review and whose reviews to highlight. For example, Amazon displays an icon when a review is from a verified purchaser of the product, which can help consumers screen for potentially fraudulent reviews.
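
A minimal sketch of how such a rule can be enforced (a hypothetical data model in Python; Amazon’s internal checks are not public): each incoming review is matched against order records, and only matches earn the badge.

    # Hypothetical order history: (user, product) pairs with a confirmed purchase.
    orders = {("user_1", "product_9"), ("user_2", "product_9")}

    reviews = [
        {"user": "user_1", "product": "product_9", "stars": 5},
        {"user": "user_3", "product": "product_9", "stars": 1},  # no purchase on record
    ]

    for review in reviews:
        verified = (review["user"], review["product"]) in orders
        badge = "verified purchase" if verified else "unverified"
        print(f"{review['user']}: {review['stars']} stars ({badge})")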

CALL IN THE MODERATORS. No matter how good your system’s design choices are, you’re bound to run into problems. Moderation can eliminate misleading reviews on the basis of their content, not just because of who wrote them or when they were written. Content moderation comes in three flavors: by employees, by the community and by algorithm.
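
To make the algorithmic flavor concrete, here is a toy rule-based filter (the signals and thresholds are illustrative assumptions, not any platform’s actual rules); production systems typically combine many more signals, often in machine-learned models, and route flagged reviews to employee or community moderators.

    # Toy signals for flagging a review for human follow-up.
    def flag_for_moderation(review, account):
        reasons = []
        if account["age_days"] < 1:
            reasons.append("brand-new account")
        if account["reviews_last_hour"] >= 5:
            reasons.append("burst of reviews")
        if review["stars"] in (1, 5) and len(review["text"]) < 20:
            reasons.append("extreme rating with little detail")
        return reasons

    suspicious = flag_for_moderation(
        review={"stars": 5, "text": "Great!!!"},
        account={"age_days": 0, "reviews_last_hour": 7},
    )
    print(suspicious)
    # ['brand-new account', 'burst of reviews', 'extreme rating with little detail']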

PUTTING IT ALL TOGETHER

Online reviews have been useful to customers, platforms and policymakers alike. But for reviews to be helpful — to consumers, to sellers and to the broader public — the people managing review systems must think carefully about the design choices they make and how to most accurately reflect users’ experiences.