The quality of a test is determined by how it is created. In psychometrics, the same principles that govern the construction of a test also determine its quality: validity, reliability, and trial runs with a representative sample, a process known as norming.
Norming aside, validity and reliability coefficients both fall on a scale from 0 to 1. As a general rule, the higher the validity and reliability coefficients, the more useful the test.
Validity coefficients of roughly 0.21 to 0.35 indicate a quality test, while reliability coefficients generally fall between 0.70 and 0.89, with both ranges scaling from adequate to good. For more information, refer to the table below:
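To make the reliability coefficient concrete, here is a minimal sketch of Cronbach's alpha, a standard internal-consistency estimate. The item scores below are made up purely for illustration and are not drawn from any actual assessment:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Estimate internal-consistency reliability (Cronbach's alpha).

    item_scores: one list of respondent scores per item.
    """
    k = len(item_scores)  # number of items
    item_vars = sum(pvariance(item) for item in item_scores)
    # Total score per respondent across all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative (made-up) data: 3 items answered by 5 respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 4, 4, 3],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # prints alpha = 0.90
```

A value of 0.70 or higher is conventionally read as adequate, matching the reliability range quoted above.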
Given the nature of psychometrics, it is important to explore multiple avenues of quality measurement, covering not only the test itself but also how it is used within an organization. There are also cases where popularity is mistaken for quality; one such case is the MBTI.
Psychometric tests have found use at different stages of the employee life cycle: appraisals, hiring, learning and development, and more. Used correctly, cognitive and personality tests, the two most important components of a psychometric test, are known to increase the chances of employee success.
But as explained in the section "Determining the Quality of a Psychometric Test," too many organizations use the wrong psychometric tests in the wrong way. There are, however, measures known to minimize risk and maximize the predictive accuracy of these tests.
HR generalists, specialists, and organizational influencers are advised to ensure legal compliance when adding psychometric tests to organizational processes. Anti-discrimination laws require tests, cognitive ability tests especially, to remain job-relevant and strongly validated.
A recent example can be traced to the National Football League, which changed its assessment battery over concerns about racial discrimination and poor job-performance prediction.
That assessment was the Wonderlic Personnel Test, a 12-minute, 50-item questionnaire used by the NFL since the 1970s. It has since been shown to bear little relationship to football success and to carry signs of racial bias.
A new test was then devised under Harold Goldstein, a professor of Industrial & Organizational Psychology, and Cyrus Mehri, a Washington lawyer at the helm of the Fritz Pollard Alliance, which monitors the NFL's minority hiring practices. The resulting personality test closely resembled the kind used for firefighters.
After all, tests are generally required to respect privacy and not endeavor to diagnose candidates.
Organizations tend to focus far more on the "independent variables," or predictors, than on what is actually being predicted, the "dependent variables." Consider the following:
A quality test is judged on its validity: it is essential to ensure that the test being used measures what it is intended to measure. At the same time, an organization must understand why it needs an assessment before making any selection.
Psychometric tests are often a combination of different assessments, and these combinations are best determined by job role. For example, content writers would require an assessment that measures verbal comprehension, a cognitive test, while hard labor would mandate a physical fitness test.
Understanding industries forms an important part of building your assessment battery. In sales, for instance, even within the same job role, the skills required vary with the product and buyer sophistication. A salesperson selling pens undeniably requires a different skill set from one selling IT services.
A test developed in India and normed on an Indian population is markedly more accurate there than one normed on an American group. For example, it is more effective, in context, to use cricket analogies in India than baseball analogies, a sport most Indians are unfamiliar with. Likewise, an American audience rarely tests well against an Indian norm.
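To see why the norm group matters, here is a minimal sketch of how the same raw score translates into different z-scores and percentiles against two different norm groups. The score distributions are entirely made up for illustration, and the sketch assumes the norm scores are roughly normally distributed:

```python
from statistics import NormalDist, mean, stdev

def percentile_vs_norm(raw_score, norm_scores):
    """Place a raw score against a norm group as a z-score and percentile."""
    dist = NormalDist(mu=mean(norm_scores), sigma=stdev(norm_scores))
    return dist.zscore(raw_score), dist.cdf(raw_score) * 100

# Hypothetical norm groups: the same raw score of 31 lands differently
india_norms = [24, 27, 29, 30, 31, 33, 35, 26, 28, 32]
us_norms    = [30, 33, 35, 36, 31, 34, 37, 32, 29, 38]

for label, norms in [("India norms", india_norms), ("US norms", us_norms)]:
    z, pct = percentile_vs_norm(31, norms)
    print(f"{label}: z = {z:+.2f}, percentile = {pct:.0f}")
```

Against the first (lower-scoring) norm group the candidate sits above average; against the second the very same raw score falls below average, which is exactly why a mismatched norm group distorts interpretation.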
Some candidates may be tempted to "game" their results through what is commonly called "impression management," a method of coming across as a more ideal candidate. It is recommended to compare references and ratings against test results to check for both consistency and correlation.
Some psychometric tests include built-in measures to detect whether a candidate's responses reflect impression management or are incongruent with one another. Response-style bias is a common problem, most often appearing as central tendency or social desirability bias. But security measures aside, even a well-designed, legally defensible, and predictive test battery is likely to fail to add value if candidates find it intrusive or time-consuming.
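As a sketch of what such built-in screens might look like, the following deliberately crude example flags two response patterns: near-zero variance (straight-lining, a form of central tendency) and a run of top-of-scale answers (consistent with social desirability). The scale, thresholds, and function name are illustrative assumptions, not the method any particular vendor uses:

```python
from statistics import pvariance

def flag_response_style(responses, scale_max=5, var_threshold=0.3):
    """Crude screens for two response-style biases (illustrative thresholds)."""
    flags = []
    # Straight-lining / central tendency: almost no spread across items
    if pvariance(responses) < var_threshold:
        flags.append("low variance (possible straight-lining)")
    # Social desirability: nearly every answer at the top of the scale
    top_rate = sum(r == scale_max for r in responses) / len(responses)
    if top_rate > 0.9:
        flags.append("top-of-scale answers (possible social desirability)")
    return flags

print(flag_response_style([3, 3, 3, 3, 3, 3]))  # flagged: zero variance
print(flag_response_style([5, 5, 5, 5, 5, 5]))  # flagged on both screens
print(flag_response_style([2, 4, 3, 5, 1, 4]))  # no flags
```

Real instruments use validated lie scales and consistency pairs rather than raw variance, but the principle, comparing a response pattern against what honest responding looks like, is the same.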
High-performance organizations constantly need to change and improve, including their candidate evaluation systems, for example by tracking predictor variables, outcome variables, and the correlation between them. Psychometric tests should be subjected to the same validation and intensive testing as the candidates they are used to assess. Parameters for validity, reliability, and norming all weigh into this.
When the relevant professionals in an organization use appropriate methodologies to select, develop, or retain the right psychometric tests, they significantly improve their odds of selecting, developing, and retaining the right talent as well. This holds all the more when considering outside consultation or third-party assessment technology firms.
Originally published April 12, 2018; updated August 4, 2020