Where online MCQ tests fall short

December 12, 2012

At school, and now at university, it seems fashionable to have lower-level assessments sat via computer - students like the instant results and teachers love the absence of any real marking. While computer-based testing can be handy, and has definite potential, it currently has some very apparent shortcomings.

There is really only one type of question available: multiple choice. Any other question type becomes ambiguous and extremely challenging to mark automatically. Take the question “Which type of user interface is most demanding on system resources?” as an example. There are many variations on the correct answer: Graphical, Graphical UI, Graphical Interface, Graphical User Interface and so on. Now add in common spelling mistakes and case changes and you’ve got a marking nightmare on your hands. Multiple choice it is then…
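To see why this balloons, here is a minimal sketch of how a free-text auto-marker might work, assuming a hand-maintained list of accepted variants per question (the `ACCEPTED` data, `normalise` and `mark` names are all hypothetical, not any real marking system’s API):

```python
# Hypothetical accepted-answer list for the example question above.
ACCEPTED = {
    "ui-question": {
        "graphical",
        "graphical ui",
        "graphical interface",
        "graphical user interface",
        "gui",
    }
}

def normalise(answer: str) -> str:
    """Lower-case and collapse whitespace so 'Graphical  UI' matches 'graphical ui'."""
    return " ".join(answer.lower().split())

def mark(question_id: str, answer: str) -> bool:
    """Accept the answer only if its normalised form is in the variant list."""
    return normalise(answer) in ACCEPTED[question_id]

print(mark("ui-question", "Graphical User Interface"))  # True
print(mark("ui-question", "graphicle user interface"))  # False: one misspelling defeats it
```

Normalisation handles case and spacing, but every new phrasing or misspelling needs yet another entry in the set - or a fuzzy-matching layer that brings its own false positives.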

This is a problem, though: there are fundamental restrictions on what you can ask with multiple choice. You might think that _‘complete the sentences’_ questions would work, but candidates still pick up more marks writing their own answer than completing a given one. This suggests that people need to explain things in their own words to fully demonstrate their understanding, which makes it hard to ask anything other than maths-related questions - and even then there are problems.

With maths, though, the issues aren’t as apparent; they lurk behind the questions themselves. You can’t have answers that are obviously wrong - why even bother putting them in? You also want to stop people from just guessing. This means you end up with lots of plausible answers and a penalty for picking the wrong one. Let’s assume we have a candidate who doesn’t guess but forgets one step in a five-step question: they’ve now had marks subtracted, falling prey to the guesser traps, even though they did 80% of the question correctly.
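A quick back-of-envelope check makes the unfairness concrete. This assumes the common negative-marking scheme where a wrong answer on a k-option question costs 1/(k-1) marks - the post doesn’t pin down a scheme, so treat this as one plausible instance:

```python
k = 5                  # options per question (assumed)
penalty = 1 / (k - 1)  # 0.25 marks lost per wrong answer under this scheme

# A pure guesser's expected score per one-mark question:
expected_guess = (1 / k) * 1 + ((k - 1) / k) * -penalty
print(expected_guess)  # 0.0 -- the penalty is tuned so random guessing gains nothing

# The careful candidate who slips on one of five steps lands exactly on the
# distractor built from that slip, so they take the full penalty:
careful_but_one_slip = -penalty
print(careful_but_one_slip)  # -0.25 -- below the guesser's expectation,
                             # despite getting 80% of the working right
```

So a scheme designed purely to neutralise guessing ends up scoring the candidate who did most of the work below someone picking options at random.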