Article · 10-minute read

EAWOP 2023

Our thoughts and takeaways from Katowice

By Jake Smith & Lauren Jeffery-Smith – 13th July 2023

Jake, Lauren and Rab from our R&D team descended on Katowice in Poland to take part in Europe’s premier work and organizational psychology conference. A host of sessions, including keynotes, panel discussions and symposia, afforded them the opportunity to learn about hot topics in research alongside presenting their own. In this article, Jake and Lauren share their experiences and what they learnt.

Lauren, Jake and Rab outside the EAWOP conference

Workshops: Building our Knowledge


Jake

My conference began with a workshop on Multilevel Modelling. Yes, it’s as technical as it sounds.

So how to explain the subject in a sentence? Essentially, it’s a method of analyzing data that has a clustered or hierarchical structure.

For example, within the workplace, people are nested within teams and, as this can have an impact on what you are trying to measure, it’s important to account for it. I will be keeping an eye out for any future opportunities to apply my new learnings!
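
For anyone curious what this looks like in practice, here is a minimal, hypothetical sketch in Python using statsmodels – the dataset, variable names and numbers are all invented for illustration. It simulates engagement scores for people nested within teams, then fits a random-intercept model so the team-level clustering is properly accounted for:

```python
# A minimal, hypothetical sketch of a multilevel (mixed-effects) model.
# All names and numbers here are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 20 teams of 10 people: each team gets its own baseline
# engagement level (the cluster effect), plus individual variation.
n_teams, team_size = 20, 10
team_effects = rng.normal(0.0, 1.0, n_teams)  # team-level variance

df = pd.DataFrame({
    "team": np.repeat(np.arange(n_teams), team_size),
    "workload": rng.normal(0.0, 1.0, n_teams * team_size),
})
df["engagement"] = (
    3.0                                 # overall intercept
    - 0.5 * df["workload"]              # individual-level effect
    + team_effects[df["team"]]          # shared team effect
    + rng.normal(0.0, 1.0, len(df))     # residual noise
)

# Random-intercept model: engagement predicted by workload,
# with each team allowed its own intercept.
model = smf.mixedlm("engagement ~ workload", df, groups=df["team"])
print(model.fit().summary())
```

The summary output separates the team-level variance from the individual-level effect of workload – exactly the decomposition that an ordinary regression, which ignores the nesting, cannot give you.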


Lauren

As someone with a personal interest in living and working more sustainably, I was excited to see a workshop on how we as organizational psychologists can lead on climate action. The challenges of change and uncertainty are not new concepts in our field (in fact, our Building Resilient Agility Report focuses on this topic in particular), but this conference was the first time I’ve seen climate recognized as something that work and organizational psychologists should be talking about and having an input into – it’s relevant for us, because it’s relevant for everyone.

The workshop itself was run by Terri Morrissey and Richard Plenty. We were a small group of nine attendees, which made for a really interactive and collaborative session. It related climate change to psychology by covering change models, the dynamics of climate change, leadership in uncertainty and the role of psychologists in leading climate action. There were also useful reminders that change is more likely to happen when it is Fun, Easy, Attractive, Social and Timely (FEAST) – so if I want to encourage proper recycling in the office, I need to make it fun and easy rather than complicated and preachy! I left feeling both daunted by the size of the problem we are facing and motivated to do what I can to address it.

Panel Sessions: Healthy Debate


Lauren

Sticking with the theme, I attended a Climate Change Panel with contributions from psychologists working in the areas of climate and social psychology, as well as representatives from industries such as rail and energy. This is an issue where collective-level change is needed, so there is a clear link to leadership. However, interest in studying collective action and protest has dwindled in psychology.

As always, the point was raised about the insignificance of individual actions relative to corporations, who have an interest in keeping climate change framed as an individual responsibility. So how can we address this? A top-down approach would be the government taking responsibility for creating policies and regulations to influence behavior. But we also need bottom-up, community-level action to show governments that this is what their voters want. If we can identify what people are doing and amplify it, we can accelerate the positive things which are already happening. This kind of community action can create new norms which become salient to leaders and reframe the issue.

An interesting point was raised within the individuals versus organizations debate – organizations aren’t some separate entity; they are made up of many individuals who can influence the organization and other individuals within and outside of it. Focusing on group-level action could be a middle ground between individual and systemic change, but group life is an area that needs more study before we fully understand it.


Jake

Friday morning kicked off with a couple of interesting and lively panel sessions. The first was titled ‘AI-based Assessments: Should we embrace the technology?’, with the panel touching on potential opportunities and concerns within the realm of assessment. Recent advances in AI are impacting the world of work in numerous ways so this debate certainly grabbed me.

An opportunity raised early on was the potential for increased efficiency for recruiters, with a particular example being asynchronous video interviewing with automated AI scoring. The benefits listed included 1) reduced cost and time for both recruiters and assessees and 2) increased diversity in selection pipelines resulting from fewer geographical restrictions. One potential issue: what happens when candidates can also use AI to generate immediate, unique-sounding, plausible responses, tweaked to reflect their own experiences? But perhaps it will be AI-powered detection technology that provides the solution to that…

Another advantage put forward was the ability of AI to transform and make sense of unstructured data; AI models can detect patterns in big data that humans simply cannot see. A counterpoint was swiftly raised: correlation does not equal causation. New links between the increasing number of assessment data points and job performance will undoubtedly be discovered, but are they transparent, and can they be satisfactorily explained?

The second panel was provocatively titled: ‘Workplace assessment cannot be regulated so test publishers will continue to decide what makes a good test and who can use it.’ Despite the provocative framing, the discussants seemed to be in general agreement with its sentiment.

If assessment is to be regulated, where would we start? With users, test publishers or the assessments themselves?

As the panel progressed, something of a consensus seemed to emerge – rather than enforcing strict regulations, there are practical steps we can take as test publishers to encourage best practice.

Both panels were well worth the time and gave me plenty to think about – an obvious question being how AI can be regulated, not just in terms of assessment but more widely.

Papers, Symposia and Keynotes: What is the Latest Research Saying?


Lauren

In addition to the sessions on climate change, I was attracted to sessions more closely related to my day job: personality and women in leadership.

Dr Katy Welsh’s research into the impact of personality on candidate reactions to selection processes within the Civil Service found differences in candidate reactions by exercise, suggesting it can be better to offer a range of assessments within a single early stage rather than staggering them. This aligns with our recommended screening approach of a single multi-assessment stage rather than a hurdled approach.

On Friday morning, Dr Ryne A. Sherman discussed the past and future of personality assessment. He contrasted the limitations of a trait theory approach, which focuses on which traits to measure, with a practical prediction approach, which focuses on the behaviors that are important to predict (something we considered and addressed by taking a validation-centric approach when developing Wave). The future-focused part of the talk inevitably turned to AI, including the threats and opportunities in the assessment space.

In particular, he asserted that most coaching will be replaced by AI, for reasons such as people feeling more comfortable revealing their true thoughts and feelings to an AI interface where they can avoid judgement – claiming, rather provocatively, that “coaching is dead”!

Next up in the auditorium, Prof. Janine Bosak presented her keynote on women in leadership and gender stereotypes, beginning with the fairly bleak statistic that, with the gender pay gap currently standing at 20%, it will take 132 years to close. She discussed the difficulties women can face getting into leadership roles, as well as the barriers once they achieve these roles, including the likeability penalty: female leaders are judged less favorably than male leaders for equivalent behaviors. Interestingly, though, women are still seen as ‘gender-appropriately’ nice and caring when being assertive on behalf of others, as this is a more acceptable framing of a stereotypically male behavior in women.

Traditional views of what a leader is (i.e. male) can impact how readily a woman is seen as a leader. Assumptions and bias (including third-party bias) can affect decision makers when shortlisting, and particularly at final selection. Additionally, internalized gender beliefs can mean women feel less suitable and are therefore less likely to apply for leadership roles. Where cultural and structural barriers are faced day-to-day, they accumulate, meaning that pursuing certain roles or responsibilities is not a realistic choice, as opposed to an actual preference. I was reminded of this when reading Lessons in Chemistry for our work book club. It describes the prevailing view in the 1950s and 60s that girls and women just don’t like science (sadly, an assumption which has not been fully eradicated to this day) when in reality stereotypes, educational opportunities and patriarchal structures, among other things, were holding women back from pursuing a career in science (or, in many cases at that time, any career at all).

A more promising insight was that leadership roles focused on expanding purpose beyond profit (e.g. CSR, ESG) are emerging, and these align more closely with the types of communal goals that women tend to prefer. In spite of enduring stereotypes, this hopefully means there will be more leadership roles that women are attracted to and believe they are suitable for.

We were left with an important warning around the paradox of change – that progress towards gender equality is hampered by those who overestimate the rate of progress.

We can celebrate how far we’ve come, but we must not lose sight of what we still need to achieve or get distracted from ensuring intersectionality in our pursuit of gender equality.

My next stop was a symposium on faking in assessments of personality and competencies. Speakers from academia and practice outlined different ways to reduce faking on assessments, including prevention (e.g. response format) and statistical correction methods. The pros and cons of normative and ipsative response formats were discussed, which linked well with research I have previously presented at the ITC conference highlighting the benefits of a combined normative and ipsative (a type of quasi-ipsative) approach for forecasting specific competencies.

Our Turn: Presenting our Research


Jake

By the time we reached Friday afternoon, it was our turn to present.

The subject of the first symposium we were part of was the past, present and future of the Great 8 competencies. It featured contributors from Hucama, Lumina Spark and Edgecumbe, along with a tag-team effort from Lauren and me. Our paper discussed validity evidence against the Great 8 model at a different level of the Wave model hierarchy than has typically been explored, with our key finding being that the Great 8 framework can be successfully measured by broader Wave scales. This supports different practical applications for Wave – from providing in-depth feedback based on more granular, narrow scales, to using broader scales which can be flexibly combined to produce a single score for powerful behavioral screening.

Jake presenting

A short break gave us time to meet up with our collaborator on the second symposium, Professor Hennie Kriek of TTS South Africa. When we arrived at our room, a promising number of attendees had already gathered, ready to hear us present on the topic of ‘Alternative Approaches to Assessing Validity and Effectiveness of Assessments in Use’. As symposium convener, Rab took to the floor and gave a short introduction. Following on, I presented a paper about the difficulty of measuring the overall validity of personality assessments in a way that is representative of how they are actually used in practice. The paper detailed three meta-analyses I had run covering different methods of calculating overall validity for our behavioral screening tool, Match 6.5. A headline result was that calculating differentially-weighted overall scores can provide notable increases in validity; the paper also emphasized that high levels of validity can be attained even from a shorter-form assessment with a fast completion time.

The results really reinforce our approach to job screening – short (but still valid) assessments enable recruiters to easily implement a combined assessment approach that is flexible, simple and efficient for hiring managers, while respecting candidates’ time.
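
To make the idea of differential weighting concrete, here is a small illustrative sketch of the standard composite-validity formula – not the actual Match 6.5 analyses, and all correlations below are invented. The validity of a weighted overall score depends on the individual scale-criterion validities and the scale intercorrelations:

```python
# Illustration of why differentially-weighted overall scores can be more
# valid than unit-weighted ones. All correlations are invented – these
# are NOT the figures from the Match 6.5 meta-analyses.
import numpy as np

def composite_validity(w, r_xy, R_xx):
    """Validity of a weighted sum of scales:
    corr(sum_i w_i * x_i, y) = (w @ r_xy) / sqrt(w @ R_xx @ w)."""
    return (w @ r_xy) / np.sqrt(w @ R_xx @ w)

r_xy = np.array([0.30, 0.20, 0.10])   # hypothetical scale-criterion validities
R_xx = np.array([[1.0, 0.3, 0.3],     # hypothetical scale intercorrelations
                 [0.3, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])

unit = np.ones(3) / 3                 # unit-weighted overall score
diff = np.array([0.5, 0.3, 0.2])      # weights favoring the more valid scales

print(f"Unit-weighted validity:           {composite_validity(unit, r_xy, R_xx):.3f}")
print(f"Differentially-weighted validity: {composite_validity(diff, r_xy, R_xx):.3f}")
```

In this toy example, shifting weight towards the more valid scales lifts the composite validity from about .27 to about .31 – the same principle, writ small, behind differentially weighting an overall screening score.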


Lauren

Next I dove into the development process for our Wave-i solution, covering how I maximized the validity and fairness of this new way of assessing potential. I outlined how I developed the i-Potential algorithm based on validity data and then refined it further using fairness data. Using validity data alone would have the potential to introduce bias, as such data can reflect stereotypes (as discussed by Prof. Janine Bosak): if the behaviors men score higher on are seen as related to leadership potential because leaders have traditionally been male, this just perpetuates the cycle. So while having rich data is fantastic, we need to be realistic about its limitations and supplement it with other sources of information and with human logic and understanding.

A particularly interesting finding from this process was that some of the more supportive behaviors, such as understanding and getting to know people, were not consistently related to potential on their own, but when combined with more typical leadership behaviors around drive and leading people they could actually slightly increase the validity (although not significantly). I shared the strong validity and fairness data for the finalized i-Potential algorithm, followed by a quick whizz through the dashboard and candidate report. I summarized that, by building the minimization of adverse impact into the development of the algorithm, more women can be identified for leadership roles than previously as organizations look to increase the skills and diversity of their leadership teams – hopefully contributing towards reducing the gender pay gap. It was lucky that Jake was succinct, as I went a little over my time and got to experience the red countdown of doom, but there was a lot to cover and I was excited to tell everyone about the development of Wave-i and i-Potential.

Next, Hennie was up to talk about the validity and utility of virtual candidate feedback, which was necessitated by lockdowns and travel restrictions during the pandemic. Despite the shift from face-to-face to virtual feedback sessions, most respondents reported very positive experiences. A key purpose of providing feedback is to serve as an intervention that prompts development actions, and his research findings demonstrated that even virtual feedback can still prompt growth and development. Hennie also contested Dr Ryne Sherman’s earlier assertion that AI would replace coaching, emphasizing the importance of our insights as psychologists and of human interaction, whether virtual or face-to-face.

And so we bid a fond farewell to Katowice, a great host city where we were also lucky enough to sample some excellent food and drink. Looking forward to Prague 2025!

About the Authors

Jake Smith

Screening Solutions Manager

Lauren Jeffery-Smith

Wave Portfolio Manager