Do your students spend enough time on their mobile phones in class? An odd question, perhaps, but instead of trying to get students off their devices, I'd argue that we should see the phone as an essential device for learning, and especially for testing. After all, for most people, most communication is mediated through a device: reading the news, shopping, booking a restaurant, navigating by car or on foot, listening to music, chatting with friends and family. This covers reading, writing, listening and even, occasionally, speaking. Shouldn't learning and testing take account of that?
This approach certainly lends itself to the “can do” focus of the CEFR, especially at lower levels. Let’s start by looking at A2 Reading in the CEFR Self-Assessment Checklist:
I can read very short, simple texts. I can find specific, predictable information in simple everyday material such as advertisements, prospectuses, menus and timetables and I can understand short simple personal letters.
If we are testing this, or setting up activities for students to assess themselves, a number of mobile-based tasks immediately suggest themselves: ordering UberEATS food for a group of friends (perhaps one is a vegetarian, another doesn't eat milk products); booking a train or plane journey to arrive at a particular destination by a given time (perhaps within a specific budget). These not only reflect the "can do" statement itself, they are also built around the most natural way of performing it. They could even be dynamic, so that different classes on different days would have 'real time' tasks: the menu changes, and so does the task outcome.
Let’s turn to writing, and compare a CEFR-focused task done on mobile to what we traditionally ask our students to do when we test them. First, the descriptor:
I can write short, simple notes and messages relating to matters in areas of immediate needs. I can write a very simple personal letter, for example thanking someone for something.
Traditionally, we might set up a question asking a student to write a short thank-you letter or email, perhaps for a birthday party or a present. But actually, sending short simple notes and messages by WhatsApp (or similar) is probably now the dominant medium of communication. It seems more practical, useful and immediate to move away from letters and towards exchanges like this (real) one:
- hows work goin?
- not good. getting too frustrated with my report
- me too. fancy a walk round town?
- yeah can do
- meet you at mcdonalds?
- ok give me 10
How could we implement this kind of task? At its simplest, it can be a drag-and-drop exercise in which the student rearranges the items to form a coherent dialogue. More exciting is to devise a simple algorithm that responds to the student's own utterances.
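To make the second option concrete, here is a minimal sketch of such a responder, assuming a simple keyword/pattern approach; the patterns and canned replies are illustrative, drawn from the exchange above, and not taken from any real product:

```python
import re

# Each rule pairs a pattern with a canned reply. A real system would have
# many more rules and some memory of the conversation; this is the simplest
# possible version.
RULES = [
    (re.compile(r"\bhow.?s work\b", re.I), "not good. getting too frustrated with my report"),
    (re.compile(r"\b(walk|coffee|lunch)\b", re.I), "yeah can do"),
    (re.compile(r"\bmeet\b.*\?", re.I), "ok give me 10"),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    # No rule matched: the utterance falls outside what the bot understands.
    return "sorry, not sure what you mean"

print(respond("hows work goin?"))           # matched by the first rule
print(respond("fancy a walk round town?"))  # matched by the second rule
```

Even this toy version shows where the assessment signal comes from: whether the student's message triggers a rule at all, and which one, tells us something about how comprehensible their writing is.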
These algorithms are, of course, commonplace in the commercial world. They drive the live chat when you interact on your bank’s website; they enable you to reserve a table on automated restaurant booking sites; they power Alexa and Google Home. These interactions are, in fact, already testing you by measuring your reaction to the outcome of your instructions against those of thousands or tens of thousands of others. Does the way you expressed your instruction fit in with the spectrum of comprehensible utterances so it can be included in the ‘acceptable’ database with a valid response? Or is it rejected as incomprehensible?
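One way to sketch that 'acceptable database' idea is to compare a learner's utterance against utterances already judged comprehensible and accept it if it is close enough to any of them. The example utterances and the 0.7 similarity threshold below are assumptions for illustration, using Python's standard-library string matcher rather than anything a commercial system actually runs:

```python
from difflib import SequenceMatcher

# Utterances previously judged comprehensible for a table-booking task
# (illustrative examples only).
ACCEPTED = [
    "book a table for two at eight",
    "can i book a table for two people",
    "i would like to reserve a table",
]

def is_comprehensible(utterance: str, threshold: float = 0.7) -> bool:
    """Accept the utterance if it is similar enough to any accepted one."""
    utterance = utterance.lower().strip()
    return any(
        SequenceMatcher(None, utterance, known).ratio() >= threshold
        for known in ACCEPTED
    )

print(is_comprehensible("book a table for 2 at eight"))  # close to an accepted form
print(is_comprehensible("purple monkey dishwasher"))     # rejected as incomprehensible
```

A production system would use far richer language models than character similarity, but the principle is the same: the learner's output is scored against a large pool of utterances that worked, and either earns a valid response or is rejected.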
If we, as teachers, had access to this information, then the whole artificial concept of the test could go — and painless, measurable assessment could take place as students go about their normal day.