Intuitively, we know that positive online buzz about a product or service is better for sales than the negative kind. Measuring that bump in demand, however, is a much more complicated matter. Precise quantification would require knowing how the different forms of online data – for example, quantitative and qualitative, or numbers and text – affect consumers’ decision making as they research a purchase.
It’s a challenging task, but not an impossible one. For our recent working paper focused on the US automobile industry, we were able to attribute changes in market share directly to both the quantitative and qualitative portions of online product reviews.
Analysing online reviews
We looked at the data on 416 car models during the years 2002-2013. For each, we captured the most relevant considerations from the consumer standpoint, including price, horsepower, size and miles per dollar. We then simulated customer decision making by plugging each characteristic into an econometric model. That way, we could account for any features of the automobiles themselves that might explain rising or falling market share. We measured market share for the vehicles in our dataset by dividing the number of units sold per year by the total market size (i.e. the number of US households that year – as every household needs a mode of transportation).
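For readers who want a concrete picture, the sketch below shows how a market-share variable of this kind can be constructed. The column names and figures are hypothetical stand-ins, not our actual dataset.

```python
import pandas as pd

# Hypothetical yearly sales for a few car models (illustrative numbers only)
sales = pd.DataFrame({
    "model": ["Sedan A", "SUV B", "Hatchback C"],
    "year": [2010, 2010, 2010],
    "units_sold": [250_000, 180_000, 95_000],
})

# Total market size proxied by the number of US households that year
# (assumed figure, for illustration)
us_households = {2010: 117_500_000}

# Market share = units sold in a year / total market size that year
sales["market_share"] = sales.apply(
    lambda row: row["units_sold"] / us_households[row["year"]], axis=1
)
print(sales[["model", "year", "market_share"]])
```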
We also accessed publicly available consumer reviews from a leading industry website. Each model had an average of 45 associated reviews during its year of release; the most popular model had nearly 600.
Like most product reviews, the ones we studied consisted of both a star rating (one to five stars) and text component (which the reviewer could leave blank). Instead of treating both parts of the review as identical, we subjected the text component to a machine-learning (ML) sentiment analysis. The ML model was trained with the help of humans recruited through Amazon’s Mechanical Turk, who scored a sample set of reviews for positive or negative sentiment.
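As a rough sketch of how a sentiment classifier can be trained on a human-labelled sample, consider the toy example below. The tiny dataset and the choice of a simple bag-of-words model are illustrative assumptions, not the exact specification behind our ML model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy review texts with sentiment labels (1 = positive, 0 = negative),
# standing in for the sample scored by Mechanical Turk workers.
texts = [
    "Great car, smooth ride and excellent fuel economy.",
    "Constant problems with the transmission, very disappointed.",
    "Love the interior, would buy again.",
    "Poor build quality and the dealer was unhelpful.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# The predicted probability that an unseen review is positive
# can serve as its sentiment score.
print(clf.predict_proba(["Comfortable but the engine feels underpowered."])[:, 1])
```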
The bifurcated nature of consumer reviews turned out to be pivotal. Looked at in isolation, the impact of star ratings on a car’s market share exhibited decreasing returns: once a car’s aggregated rating was already high, a further increase had a negligible effect on market share. Taken at face value, this would mean a five-star car sees less of a demand bump from its online buzz than a four-star one, which doesn’t make intuitive sense.
However, when we examined the interaction of star and sentiment ratings, the picture came into sharper focus. The decreasing-return impact curve became a steep upward line – but only for those models with high sentiment scores. In other words, a high star rating’s impact on market share was moderated by the overall sentiment of the written reviews. Great star ratings meant less to consumers without a corpus of text recommendations behind them (see 3D plot below).
Figure 1: Joint Effect of Review Rating and Sentiment on Product Demand
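A stylised way to test this kind of moderation is to regress (log) market share on the star rating, the sentiment score and their interaction. The simulated data and simple OLS specification below are a much-simplified stand-in for the econometric model in our paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated model-year observations: star rating, text sentiment, log market share.
n = 500
df = pd.DataFrame({
    "rating": rng.uniform(2.5, 5.0, n),
    "sentiment": rng.uniform(0.0, 1.0, n),
})
# Simulate the pattern described above: ratings lift demand mainly when sentiment is high.
df["log_share"] = (
    0.1 * df["rating"] + 0.2 * df["sentiment"]
    + 0.5 * df["rating"] * df["sentiment"]
    + rng.normal(0, 0.3, n)
)

# OLS with an interaction term; a positive rating:sentiment coefficient
# indicates that written-review sentiment moderates the effect of the star rating.
model = smf.ols("log_share ~ rating * sentiment", data=df).fit()
print(model.params)
```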
Dual-system thinking
We liken the joint effect of sentiment and star rating to Daniel Kahneman’s “System 1” and “System 2” framework of human cognition, popularised in his best-selling book Thinking, Fast and Slow. System 1 (rapid and intuitive) thinking corresponds to the aggregated star rating, which gives a quick-hit crowdsourced impression of product quality. System 2 (deliberate and rational) covers the written reviews, which demand mental effort on the part of both writer and reader. In Kahneman's framework, System 2 is responsible for endorsing or re-evaluating System 1's automatic impressions before converting them into beliefs and actions.
When we see a star rating that is too close to perfect, our scepticism directs us to the written reviews for corroboration. If the enthusiasm in the text is less impressive than the rating, we may retreat a step or two from the brink of decision and consider other options.
We found empirical evidence supporting this explanation after ruling out several alternatives. For example, we controlled for the possible influence of past reviews on the present – i.e. the possibility that the popularity of previous models of the same car could colour review content for a current model, or that reviews posted early could set the tone for those that followed. When we gave less weight to those earlier entries in our analysis, the findings still held. Our analysis is also robust when considering different reading behaviours (e.g. when people read only negative reviews, or only the most recent reviews).
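To illustrate the down-weighting idea, one could aggregate per-review sentiment with weights that shrink the further back a review sits in the sequence. The exponential scheme below is a hypothetical example, not necessarily the exact weighting we used.

```python
import numpy as np

def weighted_sentiment(scores, decay=0.9):
    """Aggregate per-review sentiment scores, down-weighting earlier reviews.

    scores: sentiment scores ordered from oldest to newest.
    decay:  multiplier applied for each step back in time (illustrative value).
    """
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    # The newest review gets weight 1; each older one is discounted by `decay`.
    weights = decay ** np.arange(n - 1, -1, -1)
    return float(np.average(scores, weights=weights))

# Example: early reviews were glowing, later ones less so.
print(weighted_sentiment([0.95, 0.9, 0.6, 0.4]))
```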
Lessons for data strategists
Even though our study was conducted in one particular industry, universal lessons about data-driven strategies can be drawn from it. First, online data come in multiple formats, both structured (e.g. star ratings) and unstructured (e.g. written reviews). A genuinely comprehensive data strategy will not only encompass all formats but also examine the ways in which they may interact. This is important because when these various formats are packaged together, as in online product reviews, they may send different signals to consumers. Analysing only one format may yield a skewed interpretation of the overall data, while a holistic view is more likely to generate clues as to what drives consumer behaviour.
Second, data-driven strategies should be context-dependent. Before you can draw firm conclusions about the business impact of your data, you need to know what moves the needle for consumers in your industry and design experiments accordingly. For example, we would not have been able to produce a clear picture based on the data we had, had we not first replicated the typical buying process in the industry we studied with a validated econometric model. A plug-and-play approach to data science won’t get you where you need to go.
Third, our study is a fantastic example of the sort of scalable data analysis in which all companies can now engage regardless of their size, thanks to advancements in artificial intelligence (AI). Tools such as sentiment analysis have developed by leaps and bounds in terms of both technological refinement and accessibility. Consequently, there’s no reason small and medium-sized enterprises (SMEs) can’t get in on the action once reserved for large established firms.
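To give a sense of how low the barrier has become, an off-the-shelf pre-trained sentiment model can score review text in a handful of lines. The example below uses the open-source Hugging Face transformers library, which is one possible choice rather than the tool used in our study.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load an off-the-shelf sentiment model (downloads a default pretrained model).
sentiment = pipeline("sentiment-analysis")

reviews = [
    "The ride is comfortable and the fuel economy is excellent.",
    "Too many trips to the dealer in the first year.",
]
for review, result in zip(reviews, sentiment(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```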
No matter your industry or degree of data sophistication, your best strategic bet is merging AI-based capabilities with comprehensive data collection and nuanced industry knowledge.
Comments (4)
Anonymous User
Indeed, we just completed a final revision to our paper to include an analysis that controls for the effect of expert opinions posted in a separate section of the platform we studied. Although expert opinion does have an effect on market share, it does not threaten any of our results. That is, the effect of consumer reviews is significant above and beyond the effect of expert reviews.
We agree that as companies develop their own AI-based capabilities, they will gain a distinctive competitive advantage from them.
Anonymous User
As a consumer, I always tend to look at the number of people rating a product. Over 1,000 and I am going to go deeper. Second, I focus on comments, particularly the negative ones, to find out if there is any concern. With AI, it is going to be difficult to: 1 - assess the number of consumers who did not purchase the product simply because of several bad reviews out of thousands; 2 - figure out how companies can fix this. Last but not least, we all know that at least 30% of comments are company-made and not reliable. All the best for cracking the code.
Anonymous User
Good questions. Our analysis controls for the number of comments a car model receives, and we found that, indeed, car models with a larger number of reviews have relatively larger market share. We also tested our results against various review-reading behaviours, including the one you suggested about paying more attention to negative reviews; we found that our results are robust to all the reading behaviours we tested. Moreover, just as we could model different review-writing and review-reading behaviours, AI models could do the same to control for these sorts of issues. Finally, although company-made reviews are prominent in many industries (such as F&B and consumer goods), they were less prominent in the automobile industry in the time period we analysed. More details on all this are provided in the paper.
Anonymous User
20/07/2021, 06.52 pm
Very interesting article. One question which I felt was not covered is the influence of reviews in key publications in the automotive sector. This is a sector where a single review from a trusted magazine or website has as much impact as hundreds of individual consumers, who may focus on their own personal experience (e.g. dealer service or a small fault). It would be good to understand if the impressive analysis was able to handle this.
My second comment is on the availability of AI software to help SMEs as well as major brands to analyse consumer reviews, including the sentiment expressed. The authors rightly say that SMEs can "get in on the action", but while many software companies may claim to offer AI-based review analysis, the reality can be different. One company with genuinely cutting-edge AI and machine learning is CartUp.ai, which recently presented to the UK INSEAD Entrepreneurs’ Group.