Gierl et al. (2017) describe multiple choice quizzes as “the most common and enduring form of educational assessment that remains in practice today”.
The prevalence of this assessment mode is not uniform: it is more evident in some disciplines and in some countries than in others. Its popularity has been boosted by the emergence of educational technologies that allow for self-marking multiple choice assessments, thereby increasing efficiency.
Considering their ubiquity, it is comforting to know that much research has gone into the educational efficacy of multiple choice assessments. In other words, do multiple choice tests actually help students learn?
Some studies (Butler, 2018) reveal that multiple choice assessments are indeed effective at prolonging the amount of time that a student can retain factual information in their memory.
However, the inevitable question triggered by such research is whether memory retention alone is really a desirable outcome.
Further research has examined the extent to which multiple choice tests help to foster other educational attributes, beyond the memorisation of content knowledge.
Under this lens, multiple choice formats have come under increasing criticism. Studies into cognitive validity (Smith, 2017) point out that the way students prepare for and sit multiple choice tests typically does not cohere with the learning outcomes and desired processes associated with the units and courses in which such tests are administered.
This raises the question of whether administering well-constructed multiple choice assessments amounts to merely doing the wrong thing, well.
A development of the multiple choice format, enhanced by learning technologies, could help. Branched scenarios are constructs of sequential storylines that allow a user to choose pathways and experience a different narrative depending on their choices.
They’re relatively new to assessment practices, but are hardly a new concept. Indeed, they are one of the core structures upon which gaming narratives are built.
The algorithm at the heart of branched scenarios is in fact a multiple choice construct. Crucially, however, rather than simply asking students to respond to discrete, disconnected questions, branched scenarios place students in decision-making positions. Students’ choices have immediate, individual and contextual consequences, which present them with new problems to solve.
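The underlying construct can be sketched as a small decision graph: each node holds a piece of the storyline plus a set of choices, and each choice leads to a different node. The node names, storyline text and choices below are invented purely for illustration; a real scenario would be authored to match the unit’s learning outcomes.

```python
# A minimal sketch of a branched scenario as a decision graph.
# All node names, text and choices are hypothetical examples.

SCENARIO = {
    "start": {
        "text": "A patient arrives with chest pain. What do you do first?",
        "choices": {"take_history": "history", "order_xray": "xray"},
    },
    "history": {
        "text": "The history reveals recent exertion. The pain worsens.",
        "choices": {"give_aspirin": "stabilised", "wait": "deteriorates"},
    },
    "xray": {
        "text": "The X-ray is clear, but valuable time has been lost.",
        "choices": {"take_history": "deteriorates"},
    },
    "stabilised": {
        "text": "The patient is stabilised. Well reasoned.",
        "choices": {},
    },
    "deteriorates": {
        "text": "The patient deteriorates. Review your decisions.",
        "choices": {},
    },
}

def play(path):
    """Follow a sequence of choices and return (end node, narrative seen)."""
    node, narrative = "start", []
    for choice in path:
        narrative.append(SCENARIO[node]["text"])
        node = SCENARIO[node]["choices"][choice]
    narrative.append(SCENARIO[node]["text"])
    return node, narrative

# Two students making different choices experience different storylines:
end_a, story_a = play(["take_history", "give_aspirin"])  # reaches "stabilised"
end_b, story_b = play(["order_xray", "take_history"])    # reaches "deteriorates"
```

The point of the sketch is that the “questions” are still multiple choice at each step, but the sequence of nodes a student traverses, and therefore the narrative they experience, depends entirely on their own decisions.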
It’s increasingly easy to build such innovative assessments. Hyperlinked Keynote or PowerPoint presentations are an obvious choice, though more powerful tools such as the branched surveys by Qualtrics offer more granular data.
Planning built-in feedback screens will help students understand how they are progressing, and if students are asked to take screenshots of this feedback, they can then use them for reflection or further development.
Key to the success of a branched scenario assessment is a series of well-crafted storylines. These can afford a realism that stands in stark contrast to multiple choice tests, which bear little resemblance to how the knowledge being tested would be applied in the real world.
If executed well, branched scenarios can exercise students’ higher order thinking: students are rewarded not merely for memory, but for authentically solving realistic problems relevant to the subject and unit being studied.