[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]
Writing the stem: Include the central idea in the stem instead of the choices.
100% of their 2002 sources mention this idea. 100%. Haladyna et al. present other rules that only a minority of their sources even mention, so this shows that it is possible for a rule to be agreed upon at a very high level. Seven of their rules are supported by more than three-quarters of their sources, but only this one is supported by all of them.
Dare I disagree? They found no empirical support for this rule, but that is their standard, not mine. I think this is a good rule.
Stems should usually be questions, and these questions should be understandable without having to read the answer options. The central idea should be in the stem.
When items have open stems, the central idea should likewise be in the stem. That is, when an item presents a partial sentence as its stem and requires the test taker to select the response that correctly completes it, the central idea belongs in the stem, not in the answer options; at the very least, it should be clear from the stem alone what the item is getting at. Test takers should not have to review the answer options to understand what the item is asking of them.
When an item offers a fill-in-the-blank stem, that stem should contain enough information for it to be clear what the item is about. Again, the answer options should not be necessary to understand what the item is getting at.
This is a particular challenge with fill-in-the-blank items that have multiple blanks, even when drop-down menus are embedded in the presentation of the stem. If too many blanks are taken out of the sentence, simply understanding what the item is getting at becomes a puzzle of its own.
This is a form of clarity in the question, not just in the directions (Rule 14). I approve.
[Haladyna et al.’s exercise started with a pair of 1989 articles, and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable), and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item writing advice only makes that far less likely to happen.
Haladyna Lists and Explanations
Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.
Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.
Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.
Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.
]