How Four Variations Influence Sales and the Way People Make Decisions

Over the past week or so, a question posted to the Sales Executives Group on LinkedIn has drawn an unusually spirited discussion on the use of sales assessment tools. As of this writing, there were more than 70 comments, enough participation that I can easily break them into four types.

  • Opinions
  • Experiences
  • Gut instinct
  • Science

My regular readers know that I fall on the side of science, but the other three types of commenters feel so strongly about their positions that you would think they were talking science too. It’s great when many people chime in with their comments. That’s the beauty of a discussion forum or blog – everyone gets to participate and weigh in. But when a question’s author expects an answer based on science, it becomes more difficult to separate opinions from experiences, gut instincts and facts. Regardless of the type of comment offered, each commenter believes his or her comment to be factual.

Science shows that Objective Management Group’s Sales Candidate Assessments are highly predictive – 95%. But a very small group of clients may have experience that is inconsistent with that science, especially if they used the assessment as a stand-alone tool (without the process it was intended to be part of), had a poor-quality pool of candidates to use it on, failed to closely manage the people they did hire, ignored the warnings we provide on recommended candidates, or ran into non-performance issues (nut cases). Others could have experience with assessments that aren’t at all predictive of sales performance (personality and behavioral styles assessments) but used them with such a small sample size that luck led them to believe those assessments were predictive. OMG’s sample size is 500,000 salespeople!

Opinions about assessments, such as “they don’t work”, lump dozens of brands, types, and results into a single category and are akin to sweeping statements like “cars aren’t made very well”, “cell phones can only be used for talking”, or “X-rays aren’t dangerous”. Opinions are often lacking in science and experience. Gut instinct, on the other hand, is great when it’s right – and sometimes it is right! But sometimes it’s wrong, and you can’t make important business decisions on something as unreliable as gut, especially when you are more likely to try to force that kind of decision to be the right decision (in hindsight) by waiting too long to correct a mistake.

If you understand these four types of comments as they relate to a discussion on assessments, what happens if I suggest that prospects judge salespeople in the same four ways? They subconsciously sort out whether they are being fed science, experience, opinion, somebody’s gut, or some combination, as well as how it all impacts the way they make their decisions. For simplicity, let’s use the four traditional social styles – Amiable, Expressive, Analytic and Driver – as context. Analytics will respond only to science; if they believe they are getting anything other than facts, they won’t buy. Amiables need to trust the person they are buying from, so once a relationship and trust have been established, they could buy from someone who has strong opinions and good references and might even ignore the science-based salesperson who may not be a good relationship builder. Expressives have many ideas to share, so they may not want to learn that they are wrong from someone who bases his solution on science. Drivers want results – quickly – and may use all four – your science, the experience of others, and their own gut – to form an opinion and make a decision quickly.

How much of today’s article is science?

How much is opinion?

How much is gut?

How much is experience?

I always have an opinion, and it’s usually influenced by my considerable experience working with companies in more than 200 industries over the past 25 years. I try extremely hard to make sure that my opinions can be backed by science, and while I use gut instinct, I use it only to choose which subject to write about on any given day – I never use gut to decide whom to hire or recommend, or how to hire them!

Republished with author's permission from original post.

4 COMMENTS

  1. Dave, I always enjoy reading your posts and, as a former buy-side executive, strongly support the “scientific side” of assessments as well … especially if they are predictive of sales performance at the customer executive level. I think the “buy-side” should be consulted during the construction and validation stage. BTW, you referred to a LinkedIn group that I cannot locate. What is the name of the group again?

  2. Hi Jack,

    Thanks for chiming in! The LinkedIn group is the Sales Management Executives and the link to the discussion is:

    http://www.linkedin.com/groupItem?view=&srchtype=discussedNews&gid=1898033&item=39858344&type=member&trk=EML_anet_ac_pst_ttle

    You probably have to be a member of the group to read/comment…

    Out of curiosity, can you share why you think the buy-side should be part of construction and validation of a sales assessment? I can understand the thinking for most of the assessments out there that aren’t predictive of sales performance and don’t go the extra mile for predictive validity. But when the assessment is already predictive of sales performance, it seems that involving the buy-side could present some serious scalability issues…

    Dave Kurlan
    Best-Selling Author,
    Top-Rated Speaker,
    Authority on Sales Force Development

  3. Dave

    I find the subject of your post fascinating. The construction of sales performance competency models and associated assessments strongly influences new-hire recruiting and sales management promotion. Significantly, sales L&D professionals appropriately align training and development initiatives with these “success” competencies and spend billions doing so. But are the perceptions of BUY-side decision-makers taken into account when defining ‘great, value-adding sales performance’? How do they measure the impact of the sales professional on their decision to buy?

    As B2B buying behaviors rapidly evolve (particularly post financial crisis), I think sales competency models and performance assessments (focused on knowledge, skills, and behavior traits) need to be recalibrated (and perhaps retooled) around the CUSTOMER’S perception of sales force value-add.

    Many companies relate ‘sales performance’ to various measures of comparative revenue productivity (quota attainment, revenue growth, revenue per sales rep, etc.). However, as you know, there are many unrelated “external” factors (correlation versus causation) that complicate an assessment based on these criteria. How can anyone be certain (other than the buy-side decision maker) that the targeted knowledge, skills and behavior traits are actually the RIGHT ones to hire, promote, and reinforce?

    I think a “closed loop” process should be put in place to continuously validate (perhaps statistically – that is, scientifically) that the targeted competencies and related assessments are, in fact, the sales performance factors that actually add business value for customer decision-makers. For example, the APPLICATION of financial acumen (not mere financial-literacy knowledge) and the ability to align solutions to impact and accelerate specific company performance metrics and business outcomes are, in my opinion, essential competencies for most B2B sales professionals operating in a post-financial-crisis environment. Most sales competency models I see simply state “financial acumen”.

    I wonder how many buy-side executives have been asked for input into the sales performance assessment validation and construction process. If they haven’t been consulted (brought into the loop), I wonder how their involvement might change the allocation of training and coaching investments and the incremental sales performance that might result from that customer-aligned investment in the sales force.

  4. I’m sorry, Jack – I totally misunderstood where you were coming from.

    I don’t like getting into the specifics of our assessment business in a forum like this, because it ends up sounding self-promotional, but it’s really the only way to address your question….

    Y E S ! ! ! THAT buy-side is always part of both the evaluation of an existing sales force and – along with what we learn from the findings of that evaluation – the assessment of sales candidates.

    At my company, Objective Management Group, we already have very well-defined, accurate, validated criteria for the strengths and skills required to predict sales performance in general. We don’t need to reinvent that wheel each time around. Depending on the difficulty of the position, the requirements – and along with them, the criteria – can change. When we’re using the assessments as part of a sales recruiting process, we marry our criteria to more than 30 client-side criteria (there’s the source of the confusion – we use different terms) that we help the client work through to determine exactly what will constitute success in THEIR particular sales roles.

    Large companies sometimes conduct an additional internal validation.

    Here are the three most important metrics of how well we’re doing:

      • Predictive validity – 95% across all 200+ industries and roles;
      • 92% of the recommended candidates that are hired rise to the top half of the sales force within the first 12 months;
      • Of the candidates that are not recommended, but hired anyway, 75% fail within 6 months.

    I’ve oversimplified, and there are many more factors than just what the salesperson brings to the table. The results are also influenced by what the company has in place to support them, the relative effectiveness and attentiveness of the sales manager, the company’s perceived place in the market, its price points, the quality of the product, etc.

    Hope this helps.

    Dave Kurlan
    Best-Selling Author,
    Top-Rated Speaker,
    Authority on Sales Force Development
