Experiment to trial the Dynamic Placement Test at Wenzao University

by Adrian Raper | 15 October 2019

In September 2019, the Wenzao Ursuline University of Languages in Taiwan ran a trial to determine whether it could use the Dynamic Placement Test to test the English of 1,525 new students on the students' own devices. This report describes the trial and its outcomes.

Overview

Founded in 1966, the Wenzao Ursuline University of Languages (WZU; Chinese: 文藻外語大學) is the only university in Taiwan devoted exclusively to the teaching of languages.

Every year, at the beginning of the first semester, WZU runs a placement test for approximately 1,500 new students in order to put them in appropriate English classes. This semester it ran a trial using ClarityEnglish's Dynamic Placement Test (DPT) on students' own devices. This report assesses whether that trial was successful.

By way of reference, the European Union's HOPES MADAD project uses DPT for placement testing in a highly controlled computer lab context, with a success rate approaching 100% over the course of more than 8,000 tests. One of the objectives of this experiment was to see whether we could approach these success levels in a much less controlled and predictable environment using the students' own devices.

Objectives of the trial

The objectives of this trial were as follows:

  1. To find out whether DPT can enable WZU to run a placement test within two days with minimal logistical planning and human resources input
  2. To confirm that DPT gives each student a CEFR level with a Relative Numeric (helping to rank hundreds of students who get the same CEFR level)
  3. To establish that administrators get immediate access to result reports
  4. To find out whether DPT can enable WZU to run the test in individual classrooms using students' own devices

Finally, this is the first time that the Dynamic Placement Test has been run on this scale in this (WZU) environment, so we wanted to use the experience to fine-tune the process for future operations.

Numbers

On 11th and 12th September, 37 classes took the test in their classrooms early in the morning. On 16th September, four classes ran the test in computer rooms, and three more classes did the same on 19th and 20th September. A total of 1,525 tests were run.

Setup and support

Setup

The first goal was to run the test in each teacher’s classroom using students’ own phones. Students were advised to use Google Chrome and to bring headphones.

Two labs were provided with Windows computers (some desktops, some laptops) running Chrome; students were directed to these if they could not run the test on their own phone for whatever reason.

Setups / devices tested

The devices used in the experiment varied widely.

  • Computers in LDDC and classrooms — all were Chrome running on Windows
  • Mobile phones in classrooms:
    - 52% Apple iPhone
    - 48% Android phones (Samsung, ASUS, HTC, etc.)
  • Operating systems:
    - 12 versions of Android
    - 23 versions of Apple iOS
  • Browsers:
    - 42% Safari
    - 58% Chrome

In all there were 82 different variations of manufacturer, operating system and browser.
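
For future runs it may be useful to quantify this diversity automatically. The following is a minimal sketch, assuming a hypothetical CSV export of sign-in records; the file name and column names are assumptions, not part of DPT:

    import csv
    from collections import Counter

    def tally_devices(path):
        """Count unique (manufacturer, OS, browser) combinations in a sign-in log."""
        combos = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # Column names are assumed; adjust to match the actual export.
                combos[(row["manufacturer"], row["os_version"], row["browser"])] += 1
        return combos

    combos = tally_devices("signin_log.csv")  # hypothetical export file
    print(len(combos), "distinct device variations")
    for combo, count in combos.most_common(5):
        print(count, combo)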

Support

A team of teachers organised training for all the teachers involved at WZU. The teachers were able to take a test themselves, and looked at how to handle anything going wrong (such as a student's battery running out). Most classes ran a familiarisation test the day before the real test: just 6 questions, enough to make sure students knew how to answer items on their phones and to remind them that they needed headphones. On each test day a small team of student volunteers, equipped with spare headphones, circulated among the classrooms to help the teachers. There were also two labs set up for students who could not take the test on their phones.

Results

Successful results

In the first instance, 97.6% of students who attempted the test successfully completed it. The results themselves are withheld, as they are not strictly relevant to this study; suffice it to say that they describe a classic bell curve from A1 to C2.

Unsuccessful outcomes

Of the 1,525 who took the test, 37 did not get a result. The explanation for this is as follows:

  1. 4 tests: unknown reason
  2. 33 tests: the failure was caused by a fault in DPT. These students were all working in a particular lab and appeared to start and finish the test successfully, but their sign-in details were corrupted right at the beginning and the results were lost. The problem has since been rectified, and these tests will be re-run.

Successful outcomes

Success rate

The numbers are as follows:

  • 1,525 students took the test
  • 37 tests failed

That means a success rate (of those who took the test) of 97.6%, or 1,488 of 1,525. This is not as high as the near-100% rate at HOPES, but it approaches it in a much less controlled environment. (If we exclude the 33 tests lost to the now-resolved bug, the rate rises to 99.7%, or 1,488 of 1,492, close to the HOPES figure.)

Hardware issue resolved

Hitherto the greatest obstacle to running a successful online placement test has been the hardware requirement. For example, Asia University, also in Taiwan, attempted to run approximately 2,000 placement tests on desktop computers, but a shortage of available computers meant that it took weeks to complete the tests. In this experiment, 450 students on 11th September and 632 on 12th September took the test on their phones, and the entire process was complete by the end of the first lesson. Another 400 took the test at convenient times over the next few days. This clearly shows that it is viable to run a placement test on students' own devices, with university computers needed only as a backup.

Timing issue resolved

Similarly, the fact that 632 students completed the test successfully within one session shows that it is feasible to test around 2,500 students between 8am and 12 noon on a single day, even allowing an hour for each 30-minute test to accommodate groups leaving, arriving and settling in.
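
As a quick sanity check on that figure, the arithmetic can be worked through directly. A minimal sketch, where the session length and morning window are assumptions based on the figures above:

    # Back-of-envelope capacity check for a single-morning placement run.
    TEST_MINUTES = 30         # length of one DPT sitting
    SESSION_MINUTES = 60      # allow an hour per group for changeover and settling in
    GROUP_SIZE = 632          # largest single-session cohort in this trial
    MORNING_MINUTES = 4 * 60  # 8am to 12 noon

    assert TEST_MINUTES <= SESSION_MINUTES  # each session comfortably contains one test
    sessions = MORNING_MINUTES // SESSION_MINUTES  # 4 sessions
    capacity = sessions * GROUP_SIZE               # 2,528 students
    print(sessions, "sessions x", GROUP_SIZE, "students =", capacity)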

Issues identified

Sub-optimal setup

There are two running modes for DPT: offline and online. Offline requires a large initial download, whereas online requires a much smaller download spread throughout the test. Offline mode was mistakenly selected, which meant that hundreds of students all started a large download at the same time and the system became slow.

Solution: This problem is easily resolved by selecting online mode.
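
To illustrate why the choice of mode matters, here is a rough sketch of the peak network load in each mode. All the sizes below are illustrative assumptions; the actual DPT download sizes are not published here.

    # Rough comparison of peak network load in the two DPT running modes.
    # All sizes below are illustrative assumptions, not measured DPT figures.
    STUDENTS = 600           # students starting at roughly the same time
    OFFLINE_PACKAGE_MB = 50  # assumed size of the one-off offline download
    ONLINE_CHUNK_MB = 0.5    # assumed per-item download in online mode
    ITEMS = 60               # assumed number of items per test
    TEST_MINUTES = 30

    offline_burst_mb = STUDENTS * OFFLINE_PACKAGE_MB      # demanded all at once
    online_total_mb = STUDENTS * ONLINE_CHUNK_MB * ITEMS  # spread across the test
    online_per_minute_mb = online_total_mb / TEST_MINUTES

    print("Offline: ~%d MB demanded in the first few minutes" % offline_burst_mb)
    print("Online:  ~%d MB per minute, spread over the test" % online_per_minute_mb)

Even under these generous assumptions, the offline burst is many times the per-minute demand of the online mode, which is consistent with the slowdown observed when hundreds of students started downloading at once.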

Outdated browser

Computers in the main lab (Q03) could not run DPT because the browsers in that lab were very old versions (more than 3 years out of date). Once they were updated on the afternoon of the 11th, the computers worked correctly and were used on the 12th.

Solution: Perform a more thorough systems check beforehand. ClarityEnglish can provide a better systems checklist.
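
One concrete form such a check could take is an automated scan of browser versions before test day. Below is a minimal sketch that flags old Chrome builds from user-agent strings; the version cut-off and the example string are assumptions:

    import re

    MIN_CHROME_MAJOR = 70  # assumed cut-off; use whatever DPT actually requires

    def outdated_chrome(user_agent):
        """Return True if the user-agent reports a Chrome version below the cut-off."""
        match = re.search(r"Chrome/(\d+)", user_agent)
        return bool(match) and int(match.group(1)) < MIN_CHROME_MAJOR

    # Illustrative user-agent string of the kind an outdated lab machine might send.
    ua = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 Chrome/49.0.2623.112 Safari/537.36"
    print(outdated_chrome(ua))  # True: Chrome 49 is well below the cut-off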

Isolated instances

There were isolated instances of audio not playing, or a drag-and-drop interaction not working. We are aware of about 10 such reports, which is roughly 0.6% of tests.

Solution: A failure rate of less than 1% is consistent with the equivalent failure rate in paper-based tests, and some failures are to be expected given the very wide range of devices. The solution for the small number of cases is to have alternative devices available, most likely desktops in a lab, as was done in this trial.

Miscellaneous findings

  1. Having student volunteers to help teachers in classrooms was very positive. In future years, issues such as those described above will be progressively eliminated, but it will always be a good idea to have a small team who can answer simple questions.
  2. An issue with scores being written slowly at the end of the test meant that morning results could not be checked until the afternoon, which is still a reasonable timeframe. The issue has since been solved and results are now delivered instantly.
  3. It would be easier to analyse data if all students were listed by class. A single flat list of 1,299 students (the total of the main group) is too bulky to work with; see the sketch after this list.
  4. The first part of the test (which we call the requirements page) was used the day before the test as a practice by some classes. It is just 6 questions, but it covers all the actions needed to complete the test. Very few students struggle with it, so the separate practice session the day before could probably be dropped, with the requirements page at the start of the real test serving the same purpose.
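
Until a by-class view is available in the reporting interface, the grouping suggested in point 3 can be done on an exported results file. A minimal sketch, assuming a hypothetical CSV export with 'class' and 'student_id' columns:

    import csv
    from collections import defaultdict

    def results_by_class(path):
        """Group exported DPT result rows by class code."""
        groups = defaultdict(list)
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                groups[row["class"]].append(row)  # 'class' column is an assumption
        return groups

    for class_code, rows in sorted(results_by_class("dpt_results.csv").items()):
        print(class_code, len(rows), "students")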

Conclusions

To what extent were the experiment’s objectives met?

  1. To find out whether DPT can enable WZU to run a placement test within two days with minimal logistical planning and human resources input
    The test was run over more than two days. However, the findings suggest that it could comfortably be completed within the two-day period, especially if the lessons learned about implementation and test management are actioned.
  2. To confirm that DPT gives each student a CEFR level with a Relative Numeric, helping to rank hundreds of students who get the same CEFR level
    DPT successfully provided a CEFR level and Relative Numeric to 97.6% of students.
  3. To establish that administrators get immediate access to result reports
    In this experiment administrators got same-day access to results. As a result of subsequent system enhancement, this should now be immediate.
  4. To find out whether DPT can enable WZU to run the test in individual classrooms using students’ own devices
    This was clearly demonstrated.
  5. Finally, this was the first time that the Dynamic Placement Test had been run on this scale in this (WZU) environment, so we wanted to use the experience to fine-tune the process for future operations
    This has been achieved.


Adrian Raper, Technical Director, ClarityEnglish
