Combining AI and the CEFR to deliver quick, valid speaking assessment

by Katie Stokes | 10 June 2025

Assessing spoken English remains one of the most demanding aspects of language education. It calls for swift, multifaceted judgements – on fluency, accuracy, vocabulary, pronunciation, and task relevance – often made in real time. For teachers managing large classes and limited contact hours, this can be an onerous task.

A recent webinar co-hosted by Ming Chuan University and ClarityEnglish presented a thoughtful and practical response: the Dynamic Speaking Test (DST). This AI-powered assessment tool, aligned with the Common European Framework of Reference for Languages (CEFR), offers a means of evaluating spoken English that is both time-efficient and pedagogically grounded.

Below is a summary of the key points for those considering how technology might enhance their own assessment practices.

The CEFR: A practical framework for speaking assessment

Dr Ashley Chen opened the session by noting that while speaking is integral to language learning, it is notoriously difficult to assess with consistency and fairness. Laura Edwards, a co-developer of the DST and a specialist in language assessment, offered a detailed yet accessible explanation of how the CEFR can support teachers in this area.

As is well known, what distinguishes the CEFR is its focus on what learners can do at each level, rather than what they know in theory. Descriptors across the six CEFR bands (A1 to C2) outline concrete communicative abilities – from describing daily routines at A1 to developing structured, coherent arguments at C1 and above.

Laura also highlighted how transparent CEFR descriptors can be for learners, helping them to understand their current level and what is required to progress.

Designing meaningful tasks with CEFR in mind

The DST draws directly on CEFR descriptors to design tasks that mirror real-world communication. One example discussed during the webinar asked learners to explain why they do not want to share a flat with a friend and suggest an alternative. While seemingly straightforward, the task draws on multiple areas of communicative competence: expressing disagreement politely, providing reasons, suggesting solutions, and employing appropriate tone and register.

Such task-based assessment encourages learners to use language meaningfully rather than relying on rehearsed phrases or memorised responses. It is a good illustration of the value of underpinning a speaking test with CEFR descriptors.

AI and assessment: transparent, consistent, and scalable

Martin Moore, ClarityEnglish’s Head of Assessment, provided insight into how AI technology is applied to the DST. Unlike traditional human marking, which requires extensive training and monitoring for consistency, the DST’s scoring system uses AI trained on thousands of human-marked responses.

The AI evaluates over 50 micro-features – including speech rate, pronunciation clarity, pause length, vocabulary range, and grammatical accuracy. These features are then mapped to CEFR levels using benchmark data. The result is a system that offers high reliability and objective scoring, free from issues such as unconscious bias.

Crucially, DST’s AI system does not simply reward fluent delivery; it also assesses task achievement. In other words, learners must actually respond appropriately to the task, not just speak well.
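As a rough illustration of the approach described above, the sketch below shows how weighted micro-features might be combined into a score and mapped to a CEFR band. This is a minimal, hypothetical sketch only: the feature names, weights, and cut-offs are all invented for illustration, and the actual DST model is trained on thousands of human-marked responses rather than hand-set weights.

```python
# Hypothetical sketch only -- NOT the actual DST scoring system.
# Illustrates the general idea: combine normalised micro-features
# into a single 1-120 score, then map that score to a CEFR band.
from dataclasses import dataclass

@dataclass
class MicroFeatures:
    """A tiny invented subset of the 50+ micro-features mentioned,
    each normalised to the 0-1 range against benchmark data."""
    speech_rate: float
    pronunciation: float
    pause_control: float
    vocabulary_range: float
    grammatical_accuracy: float
    task_achievement: float  # did the response actually address the task?

def overall_score(f: MicroFeatures) -> int:
    """Combine weighted features into a 1-120 score (scale from the article).
    Weights are illustrative; a real system learns them from marked data.
    Note the weight on task_achievement: fluent delivery alone is not enough."""
    raw = (0.15 * f.speech_rate
           + 0.15 * f.pronunciation
           + 0.10 * f.pause_control
           + 0.20 * f.vocabulary_range
           + 0.15 * f.grammatical_accuracy
           + 0.25 * f.task_achievement)
    return max(1, round(raw * 120))

# Invented cut-offs mapping the 1-120 scale to the six CEFR bands.
CEFR_BANDS = [(20, "A1"), (40, "A2"), (60, "B1"),
              (80, "B2"), (100, "C1"), (120, "C2")]

def cefr_level(score: int) -> str:
    """Return the first band whose cut-off the score does not exceed."""
    for cutoff, band in CEFR_BANDS:
        if score <= cutoff:
            return band
    return "C2"

# Example: a strong response that fully addresses the task.
features = MicroFeatures(0.7, 0.8, 0.6, 0.75, 0.7, 0.9)
score = overall_score(features)
print(score, cefr_level(score))  # 92 C1
```

The key design point the sketch tries to capture is the final feature: because task achievement carries its own weight, a response that is fluent but off-topic cannot reach a high band on delivery alone.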

Fair, fast, and aligned with classroom needs

For classroom-based use, the DST offers several advantages:

  • Speed: The test can be completed in around 15 minutes.
  • Practicality: Hundreds of tests can be conducted simultaneously without the need for extensive space or large teams of teachers, invigilators, and administrators.
  • Transparency: Learners receive both a CEFR level and a numerical score (1–120), along with visual indicators (such as a spider graph) showing performance across key sub-skills.
  • Flexibility: The test can be taken anytime, anywhere, on any device: at-home mobile testing for pre-session placement; classroom formative checks during term time; or end-of-course evaluations on a tablet.

The DST evaluates communicative competence rather than passive recall, supporting teaching approaches that prioritise real-world language use.

Concluding thoughts

The Dynamic Speaking Test is not a replacement for teachers, but a tool to support them. It complements the classroom by offering a consistent, practical method of assessing speaking, grounded in the CEFR and enabled by recent advances in AI.

For educators looking to improve the efficiency and fairness of speaking assessment – without sacrificing depth or nuance – the DST provides a thoughtful solution.

To learn more or request a trial, visit www.dynamicspeakingtest.com.

Katie Stokes, Blog Editor, ClarityEnglish