Sackett et al.’s Revised Meta-Analytic Estimate for General Cognitive Ability and Job Performance: Much Ado About Nothing?  

Cognitive Ability and Job Performance
by Keith Francoeur

As Vice President, Keith is responsible for training and managing PCI’s global assessment team, designing and updating the assessment process (including the research and selection of the test battery and interview format), and handling custom competency mapping.

The Big Debate: How well does cognitive ability predict job performance?

General cognitive ability measures were once hailed as the most valid predictor of job performance¹, especially for complex jobs, but there has since been a seismic shift in opinion about their usefulness relative to other selection tools. Some² have called for “…a reduced role for cognitive ability in selection…” (p. 294), whereas others³ have gone so far as to call for eliminating such measures from selection systems altogether.

The impetus for this shift was a highly influential article by Sackett et al.⁴ published last year. In it, the authors argued that a large meta-analytic study⁵, which showed a very strong relationship between general cognitive ability and job performance, had a major flaw: a statistical correction for restriction of range had been applied to the concurrent validity studies included in the meta-analysis, even though, in their view, range restriction is much less of an issue in concurrent studies than in predictive validity studies. Instead, they argue that no correction at all should be applied to concurrent validity studies when applicant standard deviations are unknown. Their approach is perhaps most succinctly summarized by the statement, “As GATB scores were not used in selection, range restriction is expected to be minimal, and we apply no correction” (p. 16).

Based on this logic, and taking a similar approach to other meta-analyses of cognitive ability and job performance, their reanalysis considerably lowered the estimated relationship between general cognitive ability and job performance, from the widely accepted .51 to .31. This number was lowered further, to .23, in a paper² published just two months ago, this time on the basis of an unpublished conference poster (which likewise applied no range restriction correction to concurrent validity studies of cognitive ability and job performance conducted since 2000).

So, is the relationship between general cognitive ability and job performance really less than half of what the field thought it was? Others⁶˒⁷˒⁸˒⁹ think not and, after considering both sides of the argument and reviewing additional research, we at PCI strongly agree with them.

Predictive vs. Concurrent Studies

In predictive validity studies, scores on a cognitive ability test are collected during the pre-employment testing process, and performance ratings are collected at a later time, typically at least one year after the people were hired. Range restriction occurs because, presumably, candidates scoring below a certain point on the cognitive ability measure would not have been hired, resulting in a narrower range of scores among those hired. It is a statistical fact that range restriction artificially lowers the observed relationship between two variables. Therefore, it is commonplace to apply a statistical correction to such studies to get a more realistic picture of the strength of the relationship between them.
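To make the statistics concrete, below is a minimal simulation sketch, our own illustration rather than an analysis from any of the cited papers, showing how hiring only the top scorers shrinks an observed correlation and how the standard Thorndike Case II correction recovers the unrestricted value when the applicant-pool standard deviation is known. The true validity of .51, the 30% selection rate, and all names in the code are illustrative assumptions:

```python
# A minimal sketch of direct range restriction and the Thorndike Case II
# correction. All settings (true validity of .51, 30% selection rate,
# standard-normal scores) are illustrative assumptions, not data from
# any of the studies discussed above.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

true_r = 0.51  # assumed validity in the full applicant pool

# Simulate standardized test scores and a performance criterion that
# correlate at true_r in the unrestricted population.
test = rng.standard_normal(n)
perf = true_r * test + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

# Direct range restriction: only the top 30% of scorers are "hired",
# so the validity study sees a truncated range of test scores.
hired = test >= np.quantile(test, 0.70)
r_obs = np.corrcoef(test[hired], perf[hired])[0, 1]

# Thorndike Case II correction, using the ratio of restricted to
# unrestricted standard deviations (u).
u = test[hired].std() / test.std()
r_corr = (r_obs / u) / np.sqrt(1 + r_obs**2 * (1 / u**2 - 1))

print(f"observed correlation among hires: {r_obs:.2f}")   # ~ .29
print(f"corrected estimate:               {r_corr:.2f}")  # ~ .51
```

With these settings, the correlation among the “hired” subset falls to roughly .29, and the correction restores it to approximately the assumed .51. The debate summarized above is precisely about whether, and by how much, such a correction should be applied when the unrestricted standard deviation is unknown, as is typical in concurrent studies.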

In concurrent validity studies, data on a variable of interest (in this case, scores on a general cognitive ability test) and an outcome (in this case, job performance) are collected at the same time. The study sample is composed of current employees who have typically been on the job for at least one year. Those employees are given a cognitive ability test, and their scores are correlated with performance ratings assigned at around that same time. So, the scores from that test administration obviously were not used to help make the hiring decision, as that decision was made at least a year earlier. This is the linchpin of Sackett et al.’s argument for revised numbers. Based on this rationale, they argue that there should be no correction at all for range restriction in concurrent studies, because any restriction is expected to be, in their words, minimal.

At first blush, it is easy to see how restriction of range might be more of an issue in predictive studies, and this has perhaps fueled the conventional wisdom. However, things are not always as they seem, and it was noted some time ago¹⁰ that (when it comes to general cognitive ability) “Contrary to general belief, predictive and concurrent studies suffer from range restriction to the same degree” (p. 750).

Some of the most notable reasons why this is the case are outlined below; the second and third are drawn directly from Kulikowski⁶.

The Case Against Sackett et al.’s Revised Estimates for Cognitive Ability and Job Performance

  • First, many companies have educational requirements for positions. Educational attainment has been shown to have a sizable relationship with cognitive ability¹⁰ᵇ (interestingly, Sackett himself was involved in this study), so such requirements will restrict the range of cognitive ability scores among incumbents.

  • Second, even if companies do not have educational requirements, we can be pretty sure that incumbents’ standing on the construct of general cognitive ability itself played a role in their being selected for their jobs. This is because research¹¹ has shown a sizable relationship between employment interview performance and general cognitive ability. In other words, what interviews are assessing, to a large degree, is how intelligent someone is. This may happen through questions directly related to candidates’ ability to plan, learn, and solve problems. But even without such questions, the quality of candidates’ responses will be significantly influenced by how bright they are, because they need to (1) accurately assess what is being asked, (2) retrieve appropriate information from memory, and (3) communicate it in a way the interviewer can understand. In general, those low in general cognitive ability will do less well in interviews and be screened out, whereas those higher in it will do better and be hired. As a result, the range of cognitive ability scores among employees in a concurrent validity study will be restricted even when no cognitive ability test was used in the hiring process.

  • Third, it has long been known¹²˒¹³ that the complexity of the job an individual occupies tends to be commensurate with his or her general cognitive ability. Referred to as the gravitational hypothesis, this sorting should lead to a significant amount of range restriction in concurrent validity studies because (1) many employees in the study have likely self-selected into roles that best fit their cognitive abilities and (2) employers are more likely to select and retain those who have shown they can handle the cognitive complexity of their role. (What is especially interesting about this hypothesis is that Sackett himself was involved in the research that found support for its premise. It is puzzling why he would now think this type of sorting, driven by differences in general cognitive ability, would not cause more than “minimal” range restriction in concurrent studies.)

    Two studies, both published just this year, drive home how pronounced occupational sorting based on cognitive ability tends to be. One was conducted in the US and the other in the UK, and together these longitudinal studies included over forty thousand subjects.

    The first study¹⁴ found that general cognitive ability was a very strong predictor of occupational sorting, more important than education, age, parental education, and personality. Almost two standard deviations separated the average intelligence of those in the highest occupational-complexity group (average IQ 114) from those in the lowest (average IQ 87).

    The second study¹⁵ showed a very strong relationship between the average IQ of an occupation’s incumbents and two independent ratings of occupational complexity: occupations whose incumbents had higher average IQs were rated as more complex by both sources (O*NET and the DOT), and occupations whose incumbents had lower average IQs were rated as less complex. Once again, the difference between the highest-rated (average IQ 117) and lowest-rated (average IQ 89) occupations was striking, just shy of two standard deviations (see the short calculation after this list).

    As these examples illustrate, one should expect considerably restricted variability in general cognitive ability test scores in concurrent validity studies, because the incumbents of a particular job will be more similar to one another than to the broader applicant pool or the general population.
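To put those IQ gaps in standard deviation units (assuming the conventional IQ metric, with a standard deviation of 15):

$$\frac{114 - 87}{15} = 1.80 \text{ SDs} \qquad\qquad \frac{117 - 89}{15} \approx 1.87 \text{ SDs}$$

Mean differences of that magnitude between incumbent pools are hard to square with the assumption that range restriction in concurrent samples is “minimal.”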

Neither Logic nor Evidence Supports Minimal Range Restriction in Concurrent Studies

The reasoning and research above certainly do not support the notion that restriction of range will be “minimal” in concurrent studies. What’s more, they make the position of not correcting for it at all appear extreme, indeed untenable. As noted by Ones and Viswesvaran⁷, “multiple forms of range restriction affect both concurrent and predictive studies. Any single form of range restriction will be dwarfed by the multiple types of range restriction we fail to correct for” (p. 360).

Another Questionable Decision by Sackett et al.

The previous estimate of .51 between general cognitive ability and job performance was for medium-complexity jobs. It was chosen as the default number because those jobs represented the majority of jobs in the US economy (62% as of 1998), and whenever the validity was reported, it was noted that the number applied to jobs of that type. This seems a quite reasonable approach on Schmidt and Hunter’s part¹.

However, Sackett et al. decided to report a single number for the relationship between general cognitive ability and job performance that did not take job complexity into account. The rationale: “After all, meta-analyses of other predictors are also not restrictive as to job complexity level. Thus, not limiting the estimate for cognitive ability to jobs of medium complexity makes the comparison across predictors more apt” (p. 16).

This is another dubious decision, as general cognitive ability is distinct from the other predictor constructs. In the broadest and simplest sense, it can be thought of as the ability to deal with complexity¹⁶. Not surprisingly, research has shown that the complexity of a job moderates the relationship between general cognitive ability and performance: it predicts most strongly for high-complexity roles, followed by medium-complexity roles, and then low-complexity roles. In fact, its relationship with job performance may be as high as .74 in the most complex roles, compared with .39 for unskilled positions⁷. Because of this, complexity level should always be accounted for in studies of cognitive ability and job performance. Failing to do so when reporting cognitive ability’s validity obscures valuable information and can be quite misleading. For example, one should not give the same weight to a cognitive ability score for an unskilled position as for a highly complex role.

Concluding Thoughts

It turns out that you should not believe the hype surrounding the updated validity estimates for general cognitive ability and job performance; it has all been much ado about nothing. Cognitive ability remains the single best predictor of overall job performance for medium- to high-complexity roles.

As noted by Kulikowski⁶, general cognitive ability “tests are valid, reliable, and have a clearly defined nomological network, whereas it is debatable what is measured by some other personnel selection predictors” (p. 368). Unlike with other options, such as structured interviews, we also know exactly how general cognitive ability leads to higher performance: “people with higher GMA acquire more job knowledge and acquire it faster, thus doing their job better” (p. 368). More than that, a recent study¹⁷ shows that general cognitive ability remains an important predictor of job performance even after extensive experience, likely because jobs contain variable demands, and individuals with higher ability can meet those demands better thanks to their stronger problem-solving skills. Finally, compared with structured interviews, general cognitive ability measures tend to be more cost-effective, suitable for a wider range of jobs, and far less variable in how well they predict performance.

Even though it is a strong predictor of success in medium- to high-complexity roles, you should not use a general cognitive ability test as a stand-alone tool in hiring or placement decisions. Incorporating a personality measure based on the Big Five/Five-Factor model can help you predict a wide range of other work-related outcomes and can be used to facilitate onboarding and development. And adding a structured interview can further enhance your ability to understand the whole person and predict important behaviors at work.

References

¹ Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.

² Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2023). Revisiting the design of selection systems in light of new findings regarding the validity of widely used predictors. Industrial and Organizational Psychology, 16(3), 283–300.

³ Woods, S. A., & Patterson, F. (2023). A critical review of the use of cognitive ability testing for selection into graduate and higher professional occupations. Journal of Occupational and Organizational Psychology. Advance online publication.

⁴ Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107(11), 2040–2068.

⁵ Hunter, J. E. (1983). Test validation for 12,000 jobs: An application of job classification and validity generalization analysis to the General Aptitude Test Battery (GATB). US Department of Labor, United States Employment Service.

⁶ Kulikowski, K. (2023). It takes more than meta-analysis to kill cognitive ability. Industrial and Organizational Psychology, 16(3), 366–370.

⁷ Ones, D. S., & Viswesvaran, C. (2023). A response to speculations about concurrent validities in selection: Implications for cognitive ability. Industrial and Organizational Psychology, 16(3), 358–365.

⁸ Oh, I.-S., Mendoza, J., & Le, H. (2023). To correct or not to correct for range restriction, that is the question: Looking back and ahead to move forward. Industrial and Organizational Psychology, 16(3), 322–327.

⁹ Cucina, J., & Hayes, T. (2023). Rumors of general mental ability’s demise are the next red herring. Industrial and Organizational Psychology, 16(3), 301–306.

¹⁰ Schmidt, F. L., Pearlman, K., Hunter, J. E., & Hirsh, H. R. (1985). Forty questions about validity generalization and meta-analysis. Personnel Psychology, 38(4), 697–798.

¹⁰ᵇ Berry, C. M., Gruys, M. L., & Sackett, P. R. (2006). Educational attainment as a proxy for cognitive ability in selection: Effects on levels of cognitive ability and adverse impact. Journal of Applied Psychology, 91(3), 696–705.

¹¹ Roth, P. L., & Huffcutt, A. I. (2013). A meta-analysis of interviews and cognitive ability: Back to the future? Journal of Personnel Psychology, 12(4), 157–169.

¹² Wilk, S. L., Desmarais, L. B., & Sackett, P. R. (1995). Gravitation to jobs commensurate with ability: Longitudinal and cross-sectional tests. Journal of Applied Psychology, 80(1), 79–85.

¹³ Wilk, S. L., & Sackett, P. R. (1996). Longitudinal analysis of ability–job complexity fit and job change. Personnel Psychology, 49(4), 937–967.

¹⁴ Wolfram, T. (2023). (Not just) Intelligence stratifies the occupational hierarchy: Ranking 360 professions by IQ and non-cognitive traits. Intelligence, 98, 1–25.

¹⁵ Zisman, C., & Ganzach, Y. (2023). Occupational intelligence as a measure of occupational complexity. Personality and Individual Differences, 203, 1–7.

¹⁶ Gottfredson, L. S. (1997). Why g matters: The complexity of everyday life. Intelligence, 24(1), 79–132.

¹⁷ Hambrick, D. Z., Burgoyne, A. P., & Oswald, F. L. (2023). The validity of general cognitive ability predicting job-specific performance is stable across different levels of job experience. Journal of Applied Psychology. Advance online publication.
