Pete Sharma (PS), co-founder of Pete Sharma Associates, specialises in every aspect of blended learning. He is a teacher trainer, writer and prolific conference speaker. He caught up with Clarity’s Adrian Raper (AR) at IATEFL in Liverpool.
AR: Pete, you’re famous for helping people evaluate digital materials. You’ve been looking at this for — dare I say decades? So how are you doing it now?
PS: That’s right. It’s 20 years since I wrote my first book — but look, the interesting thing is that it includes a framework for evaluating CD-ROMs that you could actually use today. So there are constants. But it’s way more complex now just because there are so many different kinds of digital materials and they do so many different things.
AR: So let’s say a teacher has been asked to evaluate an app to help with writing skills, how do they set about it?
PS: Well, in practice most teachers start in one of two ways. The first is to sit down with the software itself and play around with it — a judgmental evaluation. The second way is to use that writing tool with students, and to gather information from them about their experience using it. That’s much more a “materials in use” approach; it’s more empirical. I mean you’re basically finding out which bits of the app are most popular with students, whether they find it useful, and so on.
That is all fine. But what people should also be doing is using a framework to see more objectively whether the software matches the teachers’ and students’ needs. If you are evaluating an app, a great place to start is with a series of checklists developed by Hayo Reinders and Mark Pegrum.
And, as these checklists make clear, it’s important to remember that the student perspective is only one perspective. You really need to consider two others. The person who created it will have a view. They bring a lot of skills in instructional design to that piece of software. And then the teacher has a view — and their view would be “Is this useful for my students who are learning say, writing?” Being aware that you need to capture all three is not necessarily easy.
AR: In your experience do most teachers approach the task in the same way?
PS: Well, we all need to be aware that we are bringing prejudices and teaching beliefs to the evaluation, and that these may influence the outcome. Many language teachers who love teaching communicatively have a view that with digital materials the approach is behaviourist, that they are just stimulus-response, yes/no and so on. These beliefs are going to influence how they evaluate an app, and of course that’s not necessarily a bad thing — but you do need to consciously approach a new digital item with an open mind.
AR: Are benchmarks like the CEFR useful for teachers evaluating apps?
PS: Ultimately the evaluation is all about what the tool is designed to do versus what really happens. Do people actually learn better? So in the end, you’re not necessarily evaluating what’s in front of you, as the material, you’re actually asking “Does it work? Does it achieve its goal?” So if your aim is to improve a CEFR level from A2 to B1, just finding out whether that piece of software helps achieve that goal is an absolutely valid criterion in your evaluation.
AR: And I’m guessing that in the future things are only going to get more complex?
PS: Well, think about Virtual Reality. Think about this new programme on Netflix, Bandersnatch, where the viewer has to choose the next thing that’s going to happen. At one level you can get 12 or so different endings, but there are over a billion possible permutations. This is a different dimension because teachers are not used to this kind of unpredictability in class. And that’s a whole new challenge.
Pete’s course, Evaluating Digital Materials runs through April 2019. Email quickly to join. It may be made into a Self Study course so keep an eye on the website.
Further reading:
Mobile Learning: Languages, Literacies and Cultures by Mark Pegrum