Cultural fairness in a placement test

1 February 2022

In this blog post, I want to think about this question: How can we make sure that a placement test is culturally fair?

Let’s start by thinking about what ‘cultural fairness’ actually means. I’d like to define it, broadly speaking, as making sure that no group of people has a particular advantage or disadvantage compared with any other group. How do we go about this? I’ll focus on four different measures we can take.

First, we think about the fairness of the topics and questions presented. A simple example would be a reading task about an Italian artist, let’s say Caravaggio. Art history fans or anyone from Italy might feel confident with this topic. If the test had questions such as ‘Where was Caravaggio born?’ or ‘What type of art was he famous for?’, they might know all these facts without even having to read the text. So people who know about the topic have a definite cultural advantage over others. And that’s unfair. That means item writers need to make sure they steer clear of this sort of question.

A second way of ensuring fairness is to avoid culturally sensitive topics, such as politics or religion, and to avoid stereotypes of gender or race. We don’t want some test takers to feel uncomfortable about the subject matter when we are focusing on their language ability. It is well established (by Krashen and others) that negative feelings act as affective filters that can have an impact not just on language learning, but also on test performance. Item writers therefore have to be sensitive about the topics they choose.

But you might say: ‘Well, a language test is bound to have some cultural and political references to the country where the language is spoken.’ And that sounds convincing. But English is a global language. It’s the official language of over 50 countries and it is used all over the world for business and education. So to reflect this, an English test should not focus on the culture of any one English-speaking country. That extends to accents too. As a simple example, listening tasks should have speakers with a range of accents.

Now, all this may seem a bit subjective. However professional and well-intentioned authors are, they are still people with their own cultural biases. So we need to take two more steps. The third measure is to have test items reviewed by a panel of editors from different cultures. This often helps identify issues that the authors might not even have thought of. For example, a draft question in the Dynamic Placement Test written in Europe included a reference to a birthday cake; an editor in Asia was able to point out that cakes are not universally associated with birthdays.

And the final step is item analysis. The University of Washington’s Office of Educational Assessment describes it like this: ‘Item analysis is a process which examines student responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole. Item analysis is especially valuable in improving items which will be used again in later tests, but it can also be used to eliminate ambiguous or misleading items.’
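As a minimal sketch of the kind of statistics item analysis produces (using made-up 0/1 response data, not any real test’s data or procedure), two common per-item figures are the facility value (the proportion of test takers who answered correctly) and a discrimination index (how well the item score correlates with performance on the rest of the test):

```python
# Minimal item-analysis sketch: facility (difficulty) and discrimination
# per item, from a matrix of 0/1 responses (rows = test takers,
# columns = items). Illustrative only, not a real test's procedure.

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def item_analysis(responses):
    """Return a list of (facility, discrimination) pairs, one per item.

    facility: proportion of test takers answering the item correctly.
    discrimination: correlation between the item score and the total
    score on the remaining items (the 'rest score').
    """
    n_takers = len(responses)
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    results = []
    for j in range(n_items):
        item_scores = [row[j] for row in responses]
        facility = sum(item_scores) / n_takers
        # Use the rest score (total minus this item) so the item does
        # not correlate with itself.
        rest = [totals[i] - item_scores[i] for i in range(n_takers)]
        results.append((facility, pearson(item_scores, rest)))
    return results

# Example with four test takers and three items (hypothetical data)
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
stats = item_analysis(responses)  # item 0 facility = 0.75, item 2 = 0.25
```

An item with a very low facility value, or a discrimination index near zero (or negative), is a candidate for revision or removal, which matches the quoted aim of improving or eliminating weak items.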

What’s interesting is to compare the performance of culturally homogeneous groups. That way you can see whether Mexicans, Indonesians and Saudi Arabians at the same language level perform differently on a given item. If they do, then there may be a cultural issue.

These are just four of the steps that are taken by test designers to ensure that a test is culturally fair and that the test-takers’ scores are a fair representation of their ability.

This post is based on one of a series of videos on placement testing on Clarity’s YouTube channel.

Laura Edwards, Test Expert and Materials Writer, telc Language Tests