While it may, understandably, be a sticky subject for some at the moment, today I’m talking recruitment. It’s a subject always on the whispering lips of LinkedIn, most likely because we all experience it from one side of the desk (or both) at some point. More than that, when it’s good we don’t tend to say very much about it, but when it’s bad, it’s pretty awful for everyone concerned and then everyone wants to talk about it.
Recruitment is a big process when done properly, stretching from having a great employee value proposition (EVP) all the way to onboarding and induction. I’ve already written a bit about EVP and induction, so I want to take a stroll down the obstacle-strewn path of selection. How do you decide who is the right person for the job from the pool of screened candidates?
There are multiple selection methods which are commonly, and variously, deployed in a recruitment process. Choosing the right method, or combination of methods, for a selection process should be highly contextual. What kind of role is it, and how high-stakes is it? How long do you have to complete the process? What kind of access do you have to the candidates? These are just some of the factors to be considered. However, bearing in mind the cost of recruitment (when you add up time used, time lost, administration, permits, maybe even agency fees…), perhaps what we should be most concerned about is how valid the selection methods we want to use are, i.e. do they really help us to predict behaviours and, ultimately, performance on the job?
Luckily for us, much research has been done into correlating the use of various selection methods with performance in the job. One of the most accurate sets of data (OPQ, 2013) throws out some interesting numbers for us to have a look at. On a scale of 0 to 1 (no validity to 100% accuracy of prediction), the top-ranking single selection method is work sample testing, with a validity of 0.54.
So, getting people to do an example of the exact work you want from them, and observing and recording how good they are at it, gives you a chance at being right with your selection just a smidge over half of the time. Surprised? It’s easy to see now how mismatches occur.
Structured interviews are next up, in joint second place with ability testing at a predictive validity of 0.51. Of course, you knew interviews were a good way of selecting candidates. That’s why we always do them, right? But there is a very important word here. Structured. That means planned, with a purpose, with a standardised and theoretically supported framework, for example behavioural or competency-based. I can almost hear the collective eye-roll at this point, but know this: if the interview is not structured and not standardised, the validity plummets to a mere 0.18, which is exactly as much statistical use as making your selection based solely on the number of years of experience a candidate has. Be as formal or informal as you like, but forgo structure at your peril!
It was never going to be long before I mentioned personality profiling, was it? So here we are: personality assessments, at an average predictive validity of 0.4. Now obviously this figure can vary wildly depending on the validity and reliability of the assessment itself, how it is used and how it is interpreted, and that, dear reader, is a subject for another day. But good, robust psychometrics should always be positioned as one part of a selection process, supported by additional data and information to help you make your choice.
Which leads us nicely on to the next obvious question with the not-so-obvious answer: surely, then, if I just combine as many methods as possible, I should be able to improve on the predictive score of any one of them alone? Not quite.
Another common selection technique is just that – a variety of different selection hoops for candidates to jump through under one convenient roof. I am, of course, talking about assessment centres. These typically combine interviews with some kind of work sample test, a presentation, a psychometric and maybe even a role-play exercise. Sounds robust. Well, it might surprise you to know that assessment centres have less statistical validity when it comes to predicting on-the-job performance than any of the other methods we have already discussed. At an underwhelming (for the effort) score of 0.37, they creep into our charts marginally ahead of collecting biodata or screening CVs.
There are a number of reasons that the impact of assessment centres as a selection method can be disappointing, including bad or ill-informed planning and design. However, the largest source of error in assessment centre scoring (and therefore of reduced validity) is the post-event debrief and the sharing of the data collected during the event. This is where things like unconscious (and maybe conscious) bias start to infiltrate the process, along with other types of subjective error. In fact, research has shown that calibrating, or changing, assessment centre scores during the debrief introduces error and reduces the validity further.
So, what’s the answer?! I hear you cry. Well, the good news is that all of the methods we have looked at here are more accurate predictors of performance than graphology and horoscopes (0.02 on the validity scale, if you’re interested). The other good news is that by combining a structured interview with a well-chosen, well-administered, robust psychometric you can significantly increase the validity of your selection process. There is a statistical calculation that will show you by how much, based on the validity of each method, and that should be available in the technical data from the publishers of psychometrics.
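As a rough sketch of what that calculation looks like, here is the standard two-predictor multiple correlation formula expressed in Python. The validity figures are the ones discussed above; the 0.3 intercorrelation between interview and psychometric is purely an illustrative assumption, not a figure from any publisher’s technical data:

```python
from math import sqrt

def combined_validity(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation R for two predictors, given their
    criterion validities r1 and r2 and the intercorrelation r12
    between the two predictors themselves."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return sqrt(r_squared)

# Illustrative: structured interview (0.51) combined with a
# psychometric (0.40), assuming a modest 0.3 overlap between them.
print(round(combined_validity(0.51, 0.40, 0.3), 2))
```

The intuition the formula captures: the less the two methods overlap (the lower r12), the more new information the second method adds, and the higher the combined validity climbs above either method alone.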
Overall, the lesson is clear though. Selection is a process through which people’s lives can be changed, both for better and worse and on both sides of the desk. It needs, and deserves, structure and thought behind it, lest we become objects of notoriety on our own LinkedIn feed.