[Each day in October, I analyze one of the 31 item writing rules from Haladyna, Downing and Rodriguez (2002), the super-dominant list of item authoring guidelines.]
Writing the choices: Keep the length of choices about equal.
First, this rule seems VERY redundant with Rule 23. Why isn’t this a part of Rule 23 (Keep choices homogeneous in content and grammatical structure)? Why not have that one rule cover all three issues: content, grammatical structure, and length?
Second, all of my objections to Rule 23 apply to this rule. Go read that, if you care.
Third, Haladyna et al. violate this rule in the examples in their own books. Heck, they even offer an example that is built on the very idea that answer option length can vary. In the 2002 article, they offer an example in which the key is the shortest answer option by quite a bit, less than half as long as the longest option. Even they don’t buy this rule, even though 85% of their 2002 sources cite it. Doesn’t that undermine the credibility of their whole endeavor? Their 2004 examples for other rules routinely violate this rule, showing how meaningless it really is.
Example 5.5 (2004, p. 103)
According to the American Film Institute, which is the greatest American film?
a. It Happened One Night
b. Citizen Kane
c. Gone with the Wind
d. Star Wars
This example violates Rule 21 (i.e., place answer options in logical order), in addition to Rule 24 itself. Example 5.19 (p. 115) also violates this rule.
When an item fails to perform on a test, what is the most common cause?
a. *The item is faulty.
b. Instruction was ineffective.
c. Student effort was inadequate.
d. The objective failed to match the item.
In 1989, they pointed to research showing that when the key is clearly the longest answer option, test takers are more likely to select it. This fits a very practical guessing strategy I have heard about: pick the answer that sounds more advanced or complicated. Sure. Pick the longest answer option. But if that is the problem, then item developers should make sure that distractors are often the longest answer options, too. The real problem is item developers who habitually write dumber-sounding distractors; the problem is not that some answer options are longer than others.
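To make that check concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Haladyna et al.: the item format, the key_is_longest helper, and the 1.25 length margin are all assumptions, and the second sample item is a deliberately padded variant of their Example 5.19, not the original wording. Instead of forcing every option to the same length, it only flags items where the key is conspicuously longer than every distractor.

```python
# Hypothetical item format: each item is a dict with option texts and the
# index of the keyed (correct) option. Illustrative only.

def key_is_longest(item, margin=1.25):
    """Return True if the key is at least `margin` times longer (in characters)
    than every distractor, i.e., the item leaks a length cue."""
    lengths = [len(opt) for opt in item["options"]]
    key_len = lengths[item["key"]]
    distractor_lens = [n for i, n in enumerate(lengths) if i != item["key"]]
    return all(key_len >= margin * n for n in distractor_lens)

def length_cue_report(item_bank, margin=1.25):
    """Count how many items in the bank have a standout-longest key."""
    flagged = [item for item in item_bank if key_is_longest(item, margin)]
    return len(flagged), len(item_bank)

if __name__ == "__main__":
    bank = [
        # Example 5.5-style item: key ("Citizen Kane") is the shortest option,
        # so it is not flagged.
        {"options": ["It Happened One Night", "Citizen Kane",
                     "Gone with the Wind", "Star Wars"], "key": 1},
        # Padded variant of Example 5.19 (hypothetical wording) where the key
        # dwarfs every distractor, so it is flagged.
        {"options": ["The item is faulty, poorly targeted, and ambiguous",
                     "Instruction was ineffective.",
                     "Student effort was inadequate.",
                     "The objective failed to match the item."], "key": 0},
    ]
    hits, total = length_cue_report(bank)
    print(f"{hits} of {total} items have a key that dwarfs every distractor")
```

Screening for a standout-long key, rather than enforcing equal lengths, matches the point above: the cue comes from habitually thin distractors, not from length variation itself.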
So, is this rule even worse than Rule 23? Yes. Yes, it is.
[Haladyna et al.’s exercise started with a pair of 1989 articles, and continued in a 2004 book and a 2013 book. But the 2002 list is the easiest and cheapest to read (see the linked article, which is freely downloadable), and it is the only version that includes a well-formatted one-page version of the rules. Therefore, it is the central version that I am taking apart, rule by rule, pointing out how horrendously bad this list is and how little it helps actual item development. If we are going to have good standardized tests, the items need to be better, and this list’s place as the dominant item-writing advice only makes that far less likely to happen.
Haladyna Lists and Explanations
Haladyna, T. M. (2004). Developing and validating multiple-choice test items. Routledge.
Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.
Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.
Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.
Haladyna, T. M., & Downing, S. M. (1989). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.
]