How Your Sales Hiring Process Can Benefit from Predictive Analytics

Corporate recruiters have a very important and difficult job. They predict who will be a top performer in a given role and protect the business from letting non-performers inside its ecosystem. We rely on them to make constant snap judgments about whether or not a candidate moves into the interview process. A single decision in either direction can cost or make a company millions of dollars.

Dr. John Sullivan, an internationally known HR expert, estimates that recruiters in larger organizations might carry a load of 15 to 60 open requisitions at a time. CareerBuilder and Inc. Magazine put the number of applications per open position at roughly 75 and 250, respectively.

A 2012 study by the Ladders, titled “Keeping an Eye on Recruiter Behavior,” shows that corporate recruiters spend an average of 6 seconds on each resume.

In that time they decide whether the candidate can 1) perform well in the role, 2) last long enough in the role to make a positive impact on the business, and 3) find the role satisfying for a long time (http://bit.ly/1ND1VGE; the full study is available as a PDF download at http://bit.ly/1NWSA7y).

Let’s estimate 35 open positions with an average of 100 applications per position. At any given time, each recruiter is screening approximately 3,500 candidates. During the 6 seconds they spend screening a candidate’s resume, they need to 1) keep the “requirements” clear for each of these roles, 2) make sure their decision is unbiased, 3) try to remember whether characteristics they are reading on the resume match those of other candidates who worked out – or didn’t – and more.

“Get Me More Candidates Like Her”
Sometimes a hiring manager will comment, “She was a great hire. Get me more candidates like her.” It’s frustrating not to know what it was about the prior successful candidate that made them successful. You can guess (was it their experience, where they went to school, their references?), but how do you know for sure, so you can consistently replicate success and avoid failure?

Today’s Candidate Pre-Screening Process is …
You get the point; today’s candidate screening process is a losing battle. It’s not scalable. It’s not repeatable. The process can’t learn from past successes and mistakes. In 6 seconds or less, recruiters aren’t giving candidates a fair chance. They’re juggling 3,500 other candidates.

Naysayers of using AI or predictive analytics in the candidate screening process talk about how they don’t want to be treated as a number, or how they are afraid of being misunderstood.

But recruiters aren’t “seeing” you as a person when they review your resume in 6 seconds, either. There is nothing personal about today’s typical candidate screening process.

Candidate Pre-screening – One of HR’s Best “Predictive Analytics Projects”
Candidate screening is a process better handled by algorithms that can effortlessly, accurately, respectfully and predictively screen thousands or millions of candidates per day (or hour) for business success. All a predictive algorithm cares about is predicting success.

Algorithms are fair. They are reliable. They learn from their mistakes and can tell you what it was about top-performing candidates that made them top performers – so the algorithms can find more. Algorithms give the same amount of time and energy to each candidate. They are unbiased. They don’t get tired after screening 3,000 (or 3 million) candidates.

Algorithms Do Different Things than Humans. They Don’t Replace Humans
Predictive screening algorithms are developed to screen in candidates with a high probability of successfully performing what you need (e.g., make their sales revenue numbers, handle a large volume of call center calls, earn a high customer service rating, last in the role at least 12 or 18 months, accurately balance their bank teller drawers, and so on). They also screen out candidates with a low probability of performing what you need.
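
To make that concrete, here is a minimal sketch in R (the language mentioned in the comments below for modeling). It is not Talent Analytics’ actual model; the file names, the feature columns (aptitude_score, prior_sales_years), the outcome column (met_first_year_quota), and the 0.6 cutoff are all assumptions for illustration.

    # Hypothetical sketch: fit a simple logistic model on past hires with known outcomes,
    # then screen new applicants in or out by their predicted probability of success.
    past_hires <- read.csv("past_hires.csv")   # assumed columns: aptitude_score, prior_sales_years, met_first_year_quota (0/1)
    applicants <- read.csv("applicants.csv")   # assumed: same feature columns, outcome unknown

    model <- glm(met_first_year_quota ~ aptitude_score + prior_sales_years,
                 data = past_hires, family = binomial)

    applicants$p_success <- predict(model, newdata = applicants, type = "response")

    screened_in  <- subset(applicants, p_success >= 0.6)  # handed to recruiters to interview
    screened_out <- subset(applicants, p_success <  0.6)  # low probability of performing what you need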

Once candidates with a high probability of success are identified, the Corporate Recruiter begins their normal interview process. No more 6-second scans of a resume.

Machine Learning Helps the Predictive Model to “Get Smarter”
To complete the predictive process, we recommend that every 3 months the predictive model’s recommendations be compared with how the new hires are actually performing in their jobs 3, 6, 12, and 18 months later. (That is, your data scientists or vendor should regularly ask for actual performance data and report on it. If someone was predicted to last in their role for at least 12 months, you will want to know whether the new hire left before 12 months or is still employed.)
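
As a rough illustration of that quarterly check-in (again a hedged sketch with assumed file and column names, not a prescribed workflow), the predictions made at hire time can be joined with the actual outcomes HR reports later:

    # Join hire-time predictions with actual outcomes, then check how often the
    # model's 12-month retention call matched reality.
    predictions <- read.csv("hire_predictions.csv")  # assumed: candidate_id, predicted_12mo (0/1)
    outcomes    <- read.csv("actual_outcomes.csv")   # assumed: candidate_id, stayed_12mo (0/1)

    review <- merge(predictions, outcomes, by = "candidate_id")
    mean(review$predicted_12mo == review$stayed_12mo)  # share of 12-month calls the model got right

    # The same joined data becomes fresh training material when the model is refit,
    # which is how it "gets smarter" over time.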

The only reason to keep using a model is if it performs better than your current hiring and selection processes.
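
One hedged way to run that comparison, assuming recent hires can be tagged by how they were screened (the column names here are hypothetical):

    # Compare first-year success rates for hires screened by the model vs. the prior process.
    hires <- read.csv("recent_hires.csv")  # assumed: screening_method ("model"/"legacy"), met_first_year_quota (0/1)
    tapply(hires$met_first_year_quota, hires$screening_method, mean)
    # Keep the model only if the "model" group's rate beats the "legacy" group's rate.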

Looking for a Great First Predictive Project in HR?
Candidate pre-screening is a wonderful choice. Easy. Elegant. It frees your corporate recruiters to interview, schedule, check references, and do other activities better suited to a human.

Greta Roberts
Greta Roberts is the CEO & Co-founder of Talent Analytics, Corp. She is the Program Chair of Predictive Analytics World for Workforce and a faculty member of the International Institute for Analytics. Follow her on Twitter @gretaroberts.

9 COMMENTS

  1. I agree that a great benefit of algorithms is freedom from bias, which is the reason that they have proven superior for medical diagnoses, especially in fields such as radiology. But evaluating the suitability of a person to fill a B2B sales role requires considerable qualitative judgement – something that algorithms are not well-suited for, even ones that are sophisticated.

    I can home in on some resume attributes that predict selling success: people who would be brand new to outside sales roles are frequently riskier hires than those who have been in the job a few years working for the same company. Those who have enjoyed earnings stability and earned regular raises throughout their career often have a hard time transferring to positions where a large chunk of compensation is “at risk.” Candidates who have weak writing skills would likely fail in a role that requires clarity. Algorithms are great for shunting those candidates to the side.

    But there are many anomalies that could easily fool an algorithm that uses resume data. I know more than a handful of salespeople who tout impressive revenue-to-goal records who could not sell their way out of a paper bag. Their quotas were ridiculously low, or they landed lucrative bluebirds. In some cases, they got a plum territory to begin with. They look “successful” on a resume, but they aren’t good salespeople. Still, to an algorithm, “exceeded quota” means “exceeded quota.” That candidate goes to the top of the interview stack!

    The problem with using predictive analytics and AI for the purpose of vetting sales candidates is that two key qualities in people that portend sales success – empathy and ego – cannot be discovered using technology – yet. Based on a resume, how would an algorithm discover a person’s capacity for either?

    “We’re shortchanging what it means to sometimes get it wrong. Yes: having ordinary humans make calls will sometimes enrage us [and] cause us to want to run out into the street and yell at the birds. But it also means we’re engaged with other people. It means we are present. Vulnerable. It’s far from perfect, but that may be a feeling worth protecting,” Jason Gay, sportswriter for The Wall Street Journal, wrote in a column published today, Who Is Ready for Baseball’s Robot Umpires? That idea applies to more than just baseball.

  2. Andy,

    Predictive Analytics is just a tool, a tool that can best be used in conjunction with — not as a replacement for — human judgement.

    My analogy is that it’s like a baseball pitcher knowing batter tendencies. With 0 balls and 2 strikes with a specific batter at the plate, is he likely to chase a down and away ball? Analytics can help increase the odds of success, not guarantee it.

    There’s a growing body of evidence that psychometric tests can add value to the hiring process. Suggest reading this academic article, which summarizes 85 years of research:
    http://bit.ly/1TfEBiQ

    My take: the value is mainly in screening OUT candidates who are clearly a poor fit before continuing with interviews, etc. Bad hires are very costly mistakes.

    This assumes of course that the test is properly constructed based on research on what attributes actually correlate to the desired job performance. That’s a key “validity” question anyone evaluating tests should be asking the developer.

    Does that mean a few good candidates will be overlooked? Of course it does. But here’s the thing: humans aren’t very good at the hiring process, due to biases in what the “right” candidate looks like. A big one: managers like to hire people like themselves.

    The research referenced above found that the best results were obtained by combining testing with structured interviews. Unstructured interviews are relatively poor predictors of job performance.

    In summary, if you compare using testing/analytics with a really experienced human interviewer using structured interviews, it’s a toss-up. But how many managers fit this profile? My guess is not very many. In my opinion, the vast majority of large enterprises can get some value out of testing – including for sales positions – if it’s used as part of the hiring process.

  3. Love your comment, Bob – and this discussion in general – with one exception to one of your points:

    . . . “if you compare using testing/analytics with a really experienced human interviewer using structured interviews, it’s a toss up. ”

    In our research it is never a toss-up – it’s never even close. The models outperform even the most experienced human interviewer. But let’s pretend it were a toss-up. What you would lose is the learning. When you come back to a model with what happened – i.e., the model said a candidate had a high probability of success, the person was hired, and they either succeeded or failed – that data goes back into making the model smarter. This is called machine learning.

    Even if there is a human here or there who hires brilliantly, their process doesn’t get smarter or better over time based on that experience the way a model does.

    Other comments welcome. It’s a great discussion.

  4. My comment was based on the research I cited, which is about using GMA (General Mental Ability) tests.

    I would expect custom tests validated for a specific job to do better, but haven’t seen any systematic research to support that. Can you supply?

  5. Bob – I fully agree that right now, the greatest value of algorithms is to screen out job candidates who present a high risk of not being successful at a company. And I do not question that pursuing expanded use of AI and predictive analytics in this arena is worthwhile.

    Central to my concern is whether, beyond basic criteria (see my earlier comment), it’s premature to proclaim that AI is truly viable for predicting sales success.

    Algorithms can be both fair and reliable. But they can also be unfair (e.g., if they are written to exclude surnames of a particular nationality) and reliably wrong. It’s incumbent on developers and users of predictive analytics to be always vigilant for these possibilities, and to properly scrutinize the assumptions, processes, and logic from which the findings were generated. That includes data sources. It’s unclear from this article whether TalentAnalytics uses only resume information as the data source for its analytics, something else, or a combination of things. Resumes do not conform to standards – job candidates can choose to include or omit any information or words. So I’m interested in how the company contends with this challenge.

    For me, a key question is how sales success is measured. When I ask this question of clients, I frequently get an immediate response: percent of goal. The problem isn’t that percent of goal is used for benchmarking; it’s that executives fail to recognize when they are using it as a proxy for other attributes, and a very flawed one at that.

    If you present me with two sales candidates – same age, same gender, same number of years in sales, same education – and ask which person will likely be more successful in a future sales role, the one who was at 85% of goal last year or the one who was at 110%, I cannot answer that question. Yet every day, people draw inferences from these proxies: the ‘higher performer’ manages her time well, has more skills, is more motivated, even has happier (and therefore more loyal) customers. None of this can be reliably concluded from that number.

    Since I don’t know how much (if any) weight TalentAnalytics places on self-reported percent-of-goal metrics, I will reserve for later a longer discussion about the reasons I consider quota achievement a widely-misunderstood variable. It might not pertain here. But what underpins my thirst for caveats here is what I wrote about in my article, Lazy, Uncoachable Sales Rep Produces Record Revenue.

    If anything, that article underscores the need for more AI in evaluating sales candidates. My question is whether we are premature in saying ‘yes, AI’s efficacy has been proven.’

  6. An op-ed in today’s New York Times, The Real Bias Built in at Facebook, by Zeynep Tufekci, describes the peril of assuming fairness or neutrality in algorithms much more eloquently than I did. It’s an outstanding article, but if you don’t have time to read it, the last two paragraphs are a good synthesis:

    “The first step forward is for Facebook, and anyone who uses algorithms in subjective decision making, to drop the pretense that they are neutral. Even Google, whose powerful ranking algorithm can decide the fate of companies, or politicians, by changing search results, defines its search algorithms as “computer programs that look for clues to give you back exactly what you want.”

    But this is not just about what we want. What we are shown is shaped by these algorithms, which are shaped by what the companies want from us, and there is nothing neutral about that.”

  7. Andy, like you, we are completely dubious of self-reported sales measures (or any other kind of self-reported KPI). We’re specific with a company about what they want to predict pre-hire. If they want to predict revenue performance, then we get many years’ worth of actual revenue performance from their sales operations folks.

    We combine those many years of actual KPI data with a validated, large sample of the current sales reps’ aptitude. Then we model (using R) to find a correlation (if one exists) between the actual KPI and aptitude; a rough sketch of that step appears at the end of this comment.

    And you’re exactly right. Algorithms care only about success in finding what you’ve programmed them to do. So – defining what you want to predict is key.
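
    As a rough, hypothetical sketch of that correlation step (the file and column names are made up; this is not our production model):

        # Correlate a multi-year revenue KPI with a validated aptitude score for current reps.
        reps <- read.csv("current_reps.csv")  # assumed: rep_id, avg_annual_revenue, aptitude_score

        cor.test(reps$aptitude_score, reps$avg_annual_revenue)       # correlation, if one exists

        fit <- lm(avg_annual_revenue ~ aptitude_score, data = reps)  # or fit the relationship directly
        summary(fit)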

  8. Nothing is without bias, including algorithms which are developed by people. The issue is whether a given approach yields the desired outcomes better than alternatives.

    If a manager is skilled in interviewing and can consistently pick reps that are good performers, then does it really matter how biased the process was? (Aside from legal issues, of course.)

    Same goes for algorithms. If they improve the percentage of candidates who are “successful,” then they can be viewed as better than human judgement alone.

    You raise a great point about what exactly sales success is. If a company is struggling to hire and retain reps, one measure of success is increasing retention. If the average hire sticks around 3 years instead of 2 years, and there are few really bad hires that wash out early, that’s success.

    Another might be improving the % of reps that make quota in the first year or two.

    For complex B2B, maybe the success measure is the percentage of hired reps that get promoted to account manager. Or get NPS scores of xx or better.

    Predictive algorithms have to be optimized for a specific outcome. The input sources could include all sorts of data, including test results, resumes, performance history, social media info, … whatever is available and useful in building the model.

    Right now I think the sales industry is mostly focused on tactical issues, like hiring fewer bad candidates and getting more that can be productive (make quota) early. The ROI is clear and it solves an immediate problem.

    As the industry matures, I hope it progresses to more robust measures of success, including reps that might stick around for a long time, build great client relationships, and even manage sales teams.
