Spotlight: AI and Assessment

A common topic in the assessment space right now is Artificial Intelligence (AI); but how are assessment providers building it into their solutions in a way that adds real value to their clients? This short piece shares how Saville Assessment uses AI in our suite of tools to improve the assessment experience for the organizations we partner with.

Saville Assessment, Artificial Intelligence and Data-Driven Algorithms

Artificial Intelligence refers to technology, systems and software that can interpret their environment and take actions that maximize the chance of successfully achieving a goal or outcome.

Saville Assessment has been at the forefront of using intelligent computational methods in assessment. While many popular assessments were designed before they could capitalize on the capacity of the internet, our suite of tools was built for computational interaction from the outset. Our assessments are powered by large predictive datasets and a hierarchy of specialized algorithms that optimize their predictive power.

Our assessments use a range of algorithms to control the items presented to candidates:

  • Our Wave questionnaires present different items to candidates based on their initial responses to increase the differentiation and the fidelity of measurement.
  • Our aptitude tests pull items from large item banks to present candidates with different content; this is controlled so that we can present items of equivalent difficulty to test-takers.
  • These items are scored using item response theory so that candidates who saw different content receive equivalent scores (a simplified sketch of this follows the list).
  • Our situational judgment tests use pattern matching algorithms based on the responses of subject matter experts to generate optimized scoring.
  • When we combine assessments, we apply transparent modelling to our data to provide assessment regimes that minimize group differences across age, gender and ethnicity.
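
As a simplified illustration of the item response theory principle mentioned above, the sketch below fits a generic two-parameter logistic (2PL) model and places two candidates who answered different items on the same ability scale. The item parameters and responses are invented for illustration; this is not our production scoring algorithm.

```python
# Illustrative sketch only: a generic two-parameter logistic (2PL) IRT scorer.
# Item parameters and candidate responses are invented for illustration.
import math

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items):
    """Maximum-likelihood ability estimate via a simple grid search.
    responses: list of 0/1 item scores; items: list of (discrimination, difficulty) pairs."""
    grid = [x / 10.0 for x in range(-40, 41)]  # theta from -4.0 to +4.0
    best_theta, best_loglik = 0.0, float("-inf")
    for theta in grid:
        loglik = 0.0
        for x, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            loglik += math.log(p if x == 1 else 1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta

# Two candidates answer different items drawn from the same calibrated bank,
# yet both are placed on the same ability (theta) scale.
bank_a = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.8)]   # (discrimination, difficulty)
bank_b = [(1.0, -0.2), (1.3, 0.4), (0.8, 1.0)]
print(estimate_theta([1, 1, 0], bank_a))
print(estimate_theta([1, 0, 0], bank_b))
```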

Depending on the size of the dataset used to devise our algorithms, we use a mixture of natural and artificial intelligence to model our data. This ensures we understand exactly what information our algorithms are drawing on and that they are both content valid and predictively valid, so that decisions based on them can be justified.
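
As one simplified example of the kind of transparent check this involves, the sketch below computes a standardized group difference (Cohen's d) on a fixed-weight composite of two assessment scores. The weights, scores and groups are invented for illustration and do not represent our actual models.

```python
# Illustrative sketch only: checking group differences on a combined assessment
# score using a standardized mean difference (Cohen's d). All data are invented.
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

def composite(aptitude, questionnaire, w_apt=0.5, w_q=0.5):
    """A transparent, fixed-weight combination of two assessment scores."""
    return w_apt * aptitude + w_q * questionnaire

# Hypothetical standardized scores for two demographic groups.
group_a = [composite(a, q) for a, q in [(0.4, 0.2), (1.1, 0.7), (-0.3, 0.1), (0.6, 0.5)]]
group_b = [composite(a, q) for a, q in [(0.2, 0.3), (0.9, 0.6), (-0.1, 0.0), (0.5, 0.4)]]

print(f"Standardized group difference on the composite: {cohens_d(group_a, group_b):.2f}")
```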

It should be noted that there are many different approaches to predictive modelling, and some carry greater risks than others. Measuring and collecting thousands of data points and then using machine learning or neural networks in an attempt to predict outcomes can create a ‘black box’ approach to assessment, in which illogical or biased variables are used to predict outcomes. This could lead organizations to make recruitment decisions that are difficult to justify legally.

The second danger is overgeneralization, or overfitting. Neural networks, for example, can explore and attempt to learn relationships in large sets of data, but the patterns they detect across large numbers of variables can be illusory: the next set of data does not follow these patterns, and the network fails to predict success or outcomes.
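
The sketch below illustrates this danger with a deliberately over-flexible model: a high-degree polynomial reproduces a small, noisy training sample almost perfectly, yet the patterns it learns do not hold on a fresh sample from the same process. The data and models are invented for illustration.

```python
# Illustrative sketch only: how an over-flexible model can learn illusory patterns.
# A high-degree polynomial fits noisy training data very closely, but the "patterns"
# it finds do not hold on a fresh sample drawn from the same underlying process.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """A simple true relationship (linear) plus noise, standing in for performance data."""
    x = rng.uniform(-1, 1, n)
    y = 0.5 * x + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = sample(15)   # small training sample
x_new, y_new = sample(15)       # the "next set of data"

for degree in (1, 10):          # modest vs. over-flexible model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: training error {train_err:.3f}, new-data error {new_err:.3f}")
```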

Machine learning is one of the predictive analytic methods we can deploy as part of our predictive armory. For it to be an effective choice (and sensitive to the needs and uniqueness of a particular organization or role), some machine learning approaches require very large sample sizes within one organization (e.g. 10,000 plus) and separate large hold-out samples to justify their use. The datasets also need to contain high-quality, consistent performance metrics. The limiting factor with any predictive analytical approach normally comes down to the prediction being based on solid, credible performance criteria; where this is not the case, machine learning is not appropriate.
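
The sample-size point can be illustrated with a small simulation: if the true relationship between an assessment score and a performance criterion is modest (assumed here, purely for illustration, to be a correlation of 0.30), the validity observed in small samples fluctuates widely, whereas very large samples give stable estimates.

```python
# Illustrative sketch only: why small samples make validity estimates unreliable.
# We simulate an assessment score whose true correlation with job performance is 0.30
# (an invented figure), then see how much the observed correlation varies by sample size.
import numpy as np

rng = np.random.default_rng(1)
TRUE_VALIDITY = 0.30

def observed_validity(n):
    """Correlation between assessment score and a performance criterion in a sample of size n."""
    score = rng.normal(size=n)
    performance = TRUE_VALIDITY * score + np.sqrt(1 - TRUE_VALIDITY**2) * rng.normal(size=n)
    return np.corrcoef(score, performance)[0, 1]

for n in (50, 500, 10_000):
    estimates = [observed_validity(n) for _ in range(200)]
    print(f"n={n:>6}: observed validity ranges from {min(estimates):.2f} to {max(estimates):.2f}")
```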