We tend to trust people like us, and while a bit of validation from those unlike us is good, too much can turn us off a product or service
People ascribe a great deal of value to the opinions and choices of those who are like them. For instance, we have higher confidence in a job candidate when they come with recommendations from our peers, colleagues or friends. This “herding effect” can be perfectly rational, as individuals save time and effort when they trust the judgment of others.
But what happens when the recommendation or evaluation comes from somebody who is different from us? Will we still follow their lead in evaluating somebody if we know they have a different notion of what is valuable?
To answer this question, I, together with my co-authors Riccardo Fini and Julien Jourdan, conducted a study of how academic scientists are evaluated by their peers. The study was published in the Academy of Management Journal in 2017. Using data on 9,500 scientists, we studied how academic grant applicants were evaluated by their academic peers when those peers had information on how the applicants had previously been evaluated by another (external) audience, i.e. industry.
We first confirmed the conventional herding effect: applicants who had previously received positive evaluations from academic peers (their own audience) were positively regarded by peer reviewers. However, previous positive evaluations from industry were not always received positively. In fact, we found a threshold effect: up to a certain point, being well regarded in industry helped academics to be positively evaluated by their academic peers. This is because an academic evaluator of a grant applicant believes positive regard from industry means the applicant can manage projects, produce impactful science, and productively deal with multiple stakeholders.
However, the small minority of academic grant applicants who had very high levels of industry appreciation were less favourably regarded by the academic reviewers. We believe this is because of an identity mismatch: when somebody exceedingly conforms to the expectations of an external audience, a peer evaluator may start doubting they still possess what the peer audience expects. In other words, those who are too highly regarded by an external audience are seen to be at variance with the values of the evaluator, and therefore as unsuitable recipients of research funding.
What are the implications for business?
These results have wider implications for business. Chiefly, bringing in a new audience risks alienating a traditional audience, who may feel they do not share the new audience's values. For instance, launching or positioning a product for a new customer segment may mean your traditional customer segment feels – rightly or wrongly – that the product no longer caters for them.
Perhaps this is the root of a recent spate of branch closures by US chain restaurant Applebee's, whose attempts to win over millennials have widely been held up as a failure that alienated its traditional audience in the process. JC Penney was guilty of a similar misstep a few years ago, while, in the UK, the declining fortunes of the clothing arm of Marks & Spencer are commonly ascribed to the alienation of its core customer base.
These results may also be applicable to online reviews. Carry out a simple Google search for a product or service, and you will be inundated with constellations of star ratings, encyclopaedias of text reviews, and resources aggregating and breaking them down in formulations to suit every taste, requirement and profile. We often rely on such evaluations when deciding, for instance, which seller we purchase from on Amazon. A Harvard Business Review study found each star in the standard five-star rating system utilised by Yelp was worth five to nine per cent additional revenue to restaurants. A study published in Psychological Science, meanwhile, showed consumers tend to favour products that have been reviewed by a higher number of users, even if the actual score is lower.
Many of these evaluations are provided by our "peers", while others are provided by "experts". This is where the results from our study become relevant. It is likely we give more credence to evaluations from people like "us", while aggregated reviews and evaluations may be subject to the threshold effect we identified: a bit of external evaluation is good, but too much has a potentially negative effect.
For instance, public relations firm Weber Shandwick found online customer reviews were ascribed a higher level of importance than professional critic reviews by 77 per cent of consumers. SEO agency BrightLocal found 91 per cent of consumers called upon online reviews on a regular basis, with 85 per cent of consumers trusting such reviews as much as they would a personal recommendation. Several other such surveys exist, all strongly indicating the strength of consumer peer reviews.
But is there value in diverse assessments?
There are potential dangers for an assessor or customer who relies entirely on the opinions of their own peer group: herd mentality can lead to poor consumer decisions. This mindset is also evident in the academic context, where those who have received positive evaluations in the past enjoy a disproportionate advantage.
There are numerous scenarios in which a plurality of opinions can be beneficial in creating a well-balanced and educated decision-making process. For consumers and academics alike, looking beyond the choices of one’s peers can allow better decisions, based on less partial information.
For organisations with diverse ranges of stakeholders, evaluations from different perspectives can afford a more balanced, objective and nuanced understanding that can keep investors, customers and staff on side.
There is value, therefore, to looking at non-conforming valuations. But we must also acknowledge the sensitivity of evaluators to non-conforming assessments. In order to perform well with any given demographic, it is essential to be a bit like them.
This article draws on findings from the paper "Social Valuation Across Multiple Audiences: The Interplay of Ability and Identity Judgements", published in the Academy of Management Journal, and co-authored by Riccardo Fini (University of Bologna and Imperial College Business School), Markus Perkmann (Imperial College Business School) and Julien Jourdan (Université Paris Dauphine).