At the British Council New Directions conference in Kuala Lumpur this month, Adrian Raper described two ways that Artificial Intelligence can help teachers grade student writing.
Sean McDonald of telc Language Tests explains why not all tests are created equal.
Subramoni Iyer describes how his experience with students and teachers in Syria led to localised test instructions and a pre-test video.
Clarity’s Technical Director, Adrian Raper, considers the issue of fairness when students are taking the same test on different devices.
In this video, Andrew Stokes discusses how we can ensure that a placement test — or any other kind of language test — is culturally fair.
Surely the more questions you answer in a placement test, the more points you get and the higher your score? If you can’t finish, you can’t do yourself justice. And that must invalidate the result.
In the second of a series of short videos, testing expert Laura Edwards looks at the roles of output and input in language testing.
Should a placement test include speaking and writing? Is it important that it is adaptive? Does a test-taker have to attempt every question? What, in fact, is a placement test?
When Clarity and telc first conceptualised the Dynamic Placement Test, a key objective was to devise a democratic test — a computer-based level test available to schools whatever their digital setup. At the same time, we didn’t want to compromise on the technology: it needed to be a test that went well beyond multiple-choice questions and gap fills. Within these constraints, the team prioritised three areas.
Andrew Stokes, Managing Director of the Hong Kong-based EFL software company ClarityEnglish, talks to Melanie Butler about testing on mobile phones, the perils of unanswered emails and high tech toasters.