Can We Trust Internet Ratings?


Have you noticed how inflated Internet ratings are? It may be that the people who rate things are the ones already interested in them, so they rate them highly.

For example, people who rate romance novels are probably those who read a lot of them. So, even a mediocre romance novel is rated pretty good by them.

The “average” rating for a book on Amazon is 4.2 out of 5. Are ALL books really that good? …or are those who are rating the books friends of the author? If you average 4.0 out of 5, what’s wrong with you?!

When Amazon started selling milk a few years ago, the ratings were outstanding! “Worth its weight in gold times infiniti!” What does this mean?

Even staplers received outstanding ratings, with only one rater calling one “average.” Are there no average staplers out there?

Some companies are using such ratings and blogs as “Voice of the Customer” research. How wise is this? Some companies have actually been caught inflating their scores by having employees rate their products.

It makes you have new respect for traditional surveys, doesn’t it, even with their inherent flaws? I suggest that some of the more traditional methods should be used, until and unless we can find a way to deflate some of these ratings. What do you think?

6 COMMENTS

  1. Chris,

    I agree that the star ratings are misleading. Amazon book reviews in particular are often solicited, at least at the beginning. However, if you go to sites like Petco.com and look at their top-rated products, you see dozens of reviews on each product. The stars are less important than the reviews themselves. Some are trivial, but others really hit the mark with a reader and provide insights from someone like themselves. This is the real value, not the stars.

    John

    John I. Todor, Ph.D.

  2. Chris, you raise a great point about the ratings. However, it’s well known that surveys of any kind tend to skew toward the high end of a scale of 1 to 5, 1 to 10, etc.

    Here’s a post from Vovici (survey company) that does a pretty good job of explaining:

    Years of data show that the spread of a five-point scale usually skews heavily towards the top two answer options. Thus, when using five-point scales, we’ll find that:
    * 88% of respondents answer 4 or 5
    * 10% answer 3
    * 2% answer 1 or 2

    There are techniques to deal with this, but clearly a simple Internet rating on Amazon.com or elsewhere should not be trusted in the sense of taking the score at face value.
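    The arithmetic behind that skew is easy to check. A minimal sketch (assuming the 88% splits evenly between 4 and 5, and the 2% evenly between 1 and 2 — the post doesn’t give those splits):

```python
# Expected mean rating under the skewed five-point distribution Vovici reports.
# Assumption (not in the post): the 88% splits evenly between 4 and 5,
# and the 2% splits evenly between 1 and 2.
weights = {5: 0.44, 4: 0.44, 3: 0.10, 2: 0.01, 1: 0.01}
mean = sum(star * share for star, share in weights.items())
print(round(mean, 2))  # 4.29 -- close to the ~4.2 "average" Amazon book rating
```

    Under even mildly different splits the mean stays well above the scale midpoint of 3, which is why a 4.0 can read as “something’s wrong.”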

    I do agree with John that the comments are more valuable. When buying a piece of electronics recently, the average scores did catch my eye, but I didn’t give it much weight unless there were a lot of ratings. The comments about experiences, pros and cons, etc. were very helpful in my eventual decision.

    Whether valid or not, users tend to trust other user reviews. Here are some interesting stats on the power of word of mouth:
    http://www.bazaarvoice.com/industryStats.html

    The thing is, is the alternative any better? Editorial or analyst reviews also skew to the high side. Look at CNET editorial reviews and they look a lot like the user reviews, sometimes even more positive.

    And what about software analysts in our own industry? It’s amazing to me that so many vendors are considered “leaders” in one form or another. And these are ratings built on complex processes and spreadsheets.

    The current economic crisis was helped along in large part because the rating agencies (Moody’s etc.) gave AAA ratings to those clever packages of subprime mortgages, which then were bought up by financial institutions worldwide who thought they were buying “quality” based on the rating.

    So, I’d be hard pressed to find any source where ratings are distributed in a bell curve. Personally, I wouldn’t put my trust totally in user or expert ratings/reviews — it’s the combination of the two that gives a better picture of reality.

    Bob Thompson, CustomerThink Corp.
    Blog: Unconventional Wisdom

  3. I rate books all the time, and if they are worthless I give them one star. Unfortunately, there is no way to give a book zero stars. I also give some books five stars–if they are full of good information and edited correctly. If the information is excellent, but the editing is not very good, I normally will give a book 4 stars. If the information is mixed, I might give the book 3 stars or 2, depending on how much information is good, the editing job, and some other things. Perhaps there is only one useful thing in the book, but then I have to give the book two stars because worthless books are given one star. I am speaking, of course, of non-fiction books, mostly business books.

    See my reviews at Amazon.
    Alex Maas

  4. Chris

    As Bob points out, surveys, particularly satisfaction surveys, have always suffered from positive response bias. There are ways to avoid this in survey design, e.g. by having a combination of positive and negative response questions, and by using advanced statistical methods to control for the bias during analysis, but they are for professional statisticians.
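    One common piece of that design, reverse-scoring negatively worded items so they can be averaged with the positive ones, can be sketched as follows (the item wording and responses here are invented for illustration):

```python
# Reverse-score negatively worded items on a 1-5 scale before averaging,
# so that a higher value always means more satisfied.
# Items and responses are invented for illustration.
responses = {
    "The service met my needs": 4,               # positively worded
    "I often had problems with the service": 2,  # negatively worded
}
negative_items = {"I often had problems with the service"}

def score(item, value, scale_max=5):
    # Flip negatively worded items: on a 1-5 scale, 2 becomes 4, etc.
    return (scale_max + 1 - value) if item in negative_items else value

adjusted = [score(item, value) for item, value in responses.items()]
print(sum(adjusted) / len(adjusted))  # 4.0
```

    Mixing item directions like this makes straight-lining (ticking the top box on everything) show up as inconsistency rather than as a perfect score.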

    Mere mortals should take advice from a variety of on-line and off-line sources, and handle the product themselves before buying. When switching from a semi-professional analogue SLR camera to a digital one, I read all the on-line reviews, talked to fellow photo-enthusiasts and went to a camera shop to handle the three cameras which were possibilities. The camera I eventually bought was the top choice of most of the reference sources and was my top pick in handling too. I bought it for the lowest price from Amazon.

    Graham Hill
    Customer-driven Innovator
    Follow me on Twitter

    Interested in Customer Driven Innovation? Join the Customer Driven Innovation groups on LinkedIn or Facebook to learn more.

  5. The Listening Coach, co-author of Pain Killer Marketing (WBusiness Books, April 2008)

    We were all told in Marketing 101 that an unhappy customer tells everyone, while a happy one tells just a few. That is one reason why I wondered about grade inflation on the Internet. Where are the unhappy people and why don’t they generate any ratings?

    I appreciate what Graham and Bob had to say, especially about the comments being more valuable than the ratings, but where are the negative ones? Companies can artificially inflate the comments as well as the scores. How do we fight that? If anything, I feel that the Internet creates more grade inflation than we have seen in the past.

    When I was at PG&E in the early 1990s, we changed the scaling from 96% “favorable” to 45% “excellent,” and spread out the positive scores with an unbalanced scale (Poor-Fair-Good-Very Good-Excellent). As Bob said, there are several ways to do this. Wall Street panicked! They thought our service had gone to hell. We ran both surveys at the same time and told them we were just trying to improve service and create a tougher scale to make some room at the top.
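    The two numbers aren’t contradictory; they are the same responses scored two ways — “favorable” counts several top boxes, “excellent” counts only the top one. A sketch with made-up response shares (not PG&E’s actual data) chosen to reproduce the two figures:

```python
# Same responses, two scorings: "favorable" (top three boxes) vs.
# "excellent" (top box only). Shares are invented for illustration,
# chosen so the two reported figures (96% and 45%) both come out.
shares = {"Poor": 0.01, "Fair": 0.03, "Good": 0.17,
          "Very Good": 0.34, "Excellent": 0.45}

favorable = shares["Good"] + shares["Very Good"] + shares["Excellent"]
excellent = shares["Excellent"]
print(f"{favorable:.0%} favorable, {excellent:.0%} excellent")
# 96% favorable, 45% excellent
```

    Tightening the scoring from “favorable” to “excellent” created the headroom at the top that the original scale had rated away.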

    How do we get the unhappy people to comment?

  6. Internet ratings can be trusted, even though the methodology used is far from perfect. Traditional survey methods have their shortcomings as well, and biases are always present, whether the experience is online or in person.
    I have described the use case here http://evolutionofbpr.com/the-tale-of-two-laptops/.
    Authenticity is a real issue, but the transparency of the Internet and the “open,” community-like nature of crowdsourcing eventually expose most attempts to game a product’s reputation, as exemplified by the Belkin story http://www.crunchgear.com/2009/01/17/belkin-paying-65-cents-for-good-reviews-on-newegg-and-amazon/.
    New, and hopefully more useful and meaningful, Internet rating technologies are emerging. I am involved with such an attempt at Amplified Analytics, where actual customer reviews are authenticated and semantically analyzed to produce specific scores as opposed to averages. http://www.amplifiedanalytics.com
