What Cognitive Complexity Is Not

As we often find, understanding what something is can be aided by thinking about what it is not.

  • First, cognitive complexity is not difficulty. Memorization of the names of all the US presidents is not cognitively complex, though getting that right is difficult — and notably more difficult than it was when I had to do it in sixth grade. Low cognitive complexity does not assure low difficulty. While higher cognitive complexity tasks often are more difficult, practice can lower difficulty. Furthermore, different people vary in how easy or difficult they find individual tasks. Two people can differ in which of two tasks they find easier, regardless of the tasks' relative cognitive complexity.

  • Second, cognitive complexity is not grade level. The more advanced knowledge, skills and understandings found in later grades are not necessarily higher in cognitive complexity than those found in lower grades. In fact, some of the most advanced forms of knowledge contain large amounts of quite specialized memorized knowledge (e.g., drug dosing information). Most experts have quite a bit of essentially memorized knowledge that is particular to their field — in addition to advanced understandings of complex ideas, which they have the expertise to apply with greater cognitive complexity and which connect and explain the significance and importance of that lower cognitive complexity stuff.

  • Third, cognitive complexity is not importance. Low cognitive complexity knowledge can be important or unimportant. Memorizing each of the facts below is low cognitive complexity, but some of them are more important than others.

Content Area   | Low Importance                                          | High Importance
ELA            | The name of Romeo’s first obsession in Romeo & Juliet   | The correct usages of their, there and they’re
Math           | √3 = 1.733                                              | 7 x 8 = 56
Science        | The atomic mass of iron                                 | Carbon dioxide acts as a greenhouse gas
Social Studies | Identity of James Polk’s vice-president                 | Exact wording of the 2nd Amendment to the US Constitution

Similarly, high cognitive complexity skills can also vary in importance. Now, the square root of 3 (to four significant figures) is 1.732 — and not 1.733, as listed above — and that fact really does matter in some contexts. But memorizing it simply is not important.
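For readers who want to check the rounding point themselves, a quick sketch in Python (just an illustration of the arithmetic, nothing more):

```python
import math

# Square root of 3, to four significant figures
root3 = math.sqrt(3)   # 1.7320508...
print(f"{root3:.4g}")  # prints 1.732, not 1.733

# And the "high importance" math fact from the table
print(7 * 8)           # prints 56
```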

  • Fourth, cognitive complexity is not the number of steps or amount of time a task takes. Memorizing the names of our 46 presidents might take a while to accomplish and certainly has 46 steps. That does not make it particularly cognitively complex. More of the same does not make the task more cognitively complex, even if it makes the task more exhausting or difficult. 

  • Fifth, cognitive complexity is not the context or scope of a task. Low cognitive complexity tasks that are done as part of a larger project are not higher complexity for the scale of that larger task. For example, proofreading a 10-page paper is no more cognitively complex than proofreading a 2-page paper. The larger paper might be more complex and producing it might have been a more cognitively complex task, but the cognitive complexity of proofreading is not changed by the complexity of the larger task — though difficulty, time and number of steps well could have changed.

In fact, tasks of greater cognitive complexity often include or rely on subtasks of lower cognitive complexity. For example, the simple word recognition of sight reading is part of most school work — regardless of the overall cognitive complexity of that school work. But the cognitive complexity of the larger task does not make the subtasks more cognitively complex, and the cognitive complexity of the larger task is not determined simply by adding up the cognitive complexity of all the subtasks.

Is ELA One Construct or Two?

In our recent work on cognitive complexity, we came to the question of whether the ELA construct is really two constructs or one unified construct. Norman Webb’s Depth of Knowledge (wDOK) breaks out the reading construct from the writing construct, and some tests report separate writing scores from reading scores. On the other hand, other content areas report just a single score.

We were — and still are — unsure how to proceed.

The reading process often feels different and distinct from the writing process. The 3 R’s list them separately. But we know that no one can learn to write without reading — and cannot write well without doing a lot of reading.

On the other hand, if you look at the first Anchor Standards for both the Common Core State Standards for reading and for writing, you see that CCSS links reading and writing from the beginning.

CCSS-L’s first Anchor Standard for Reading:

Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text. [emphasis added]

CCSS-L’s first Anchor Standard for Writing:

Write arguments to support claims in an analysis of substantive topics or texts, using valid reasoning and relevant and sufficient evidence. [emphasis added]

Reading and writing are incredibly intertwined — especially in CCSS, with its emphasis on writing about text. Beyond the route to skill acquisition I mentioned above, reading and writing remain intertwined through even the most advanced applications of the various reading and writing skills — indeed, they are usually particularly intertwined in those most advanced applications.

In fact, we believe that assessment of reading skills — certainly at the middle and upper grades — is best done through writing. Perhaps the most basic sorts of reading comprehension (e.g., literal or surface meaning of a text that is disconnected from broader contexts) can be well assessed without authentic writing. However, even at the middle grades, real display of reading skills — especially the most important grade appropriate reading skills — occurs when test takers’ understanding of the text is wielded in their writing.

While some of CCSS’s writing standards do not require writing about text, generally Common Core’s writing standards presume that most academic writing would be about text — which means writing about reading.

Reading is best assessed through writing and writing is quite often supposed to be about reading.

Now, we still think that these two strongly linked constructs are, nonetheless, two (not entirely distinct) constructs. They are linked through reasoning in many tasks, but there are some cognitive processes that are particular to each of them. However, eliciting high quality evidence of middle and upper grade reading skills seems quite unlikely without writing tasks. And Common Core’s strong preference for writing about reading makes eliciting high quality evidence about writing quite unlikely without reading tasks.

Which, of course, leaves us quite troubled.

What is Cognitive Complexity (in Large Scale Standardized Assessment)?


Though most people do not know this, standardized test developers generally examine items for “cognitive complexity.” This is one of many ways that they (are supposed to) ensure the quality of items on tests. Cognitive complexity is not the same thing as difficulty, however. For example, consider the question, “With whom was Romeo obsessed before he met Juliet at the party?” This is a difficult question, but it is not a cognitively complex one. Rather, it is just a memorized fact that you know or do not. Cognitive complexity is something different than item difficulty.

Many of us consider cognitive complexity to be a type of alignment. That is, items are supposed to measure specific skills found in the standards purportedly being assessed, and are examined for that. This additional layer of examination considers whether the cognitive complexity of each item is appropriate for the particular standard the item is intended to measure. Another way to think of cognitive complexity reviews is that the goal is to ensure that the range of cognitive complexity of assessments matches the range found in the standards, even if the match is not taken down to the individual item-standard pairings.

What everyone agrees on is that large scale standardized assessments should not be limited to items of low cognitive complexity. In my view, that is one version of dumbing down tests, and obviously it should be avoided.

So, what is cognitive complexity? Well, on this point there is not a huge amount of thoughtful agreement. But generally, higher order thinking skills and problem solving skills are thought to be examples of greater cognitive complexity and…umm…well, things like memorization are thought to be lower cognitive complexity. But that’s not really a definition, is it?

The problem is that there are different ways to recognize or categorize cognitive complexity, and they each highlight particular aspects of this poorly defined idea. 

For example, some people look to Bloom’s Taxonomy (or Revised Bloom’s Taxonomy, RBT). They suggest that assessments should elicit cognition across a range of RBT categories. Now, Bloom’s (RBT or original recipe) is not really much of a hierarchy, so it is not well suited to the idea of greater cognitive complexity. However, it can be useful to highlight the breadth of different kinds of cognition that a whole test might elicit. RBT acknowledges that the different categories within Bloom’s each have a range of levels, but does not offer a way to compare them across original categories. Nonetheless, because of how commonly Bloom’s is used in teacher training (i.e., both pre-service and in-service), it has the advantage of feeling familiar to many educators. So, if you are comfortable with Bloom’s, it is one view of cognitive complexity.

The most common typology used by developers of large scale assessments is Depth of Knowledge (DOK), a system developed by Norman Webb over 20 years ago — that’s far more recently than Bloom’s. Because DOK is so common, our own efforts to clarify the meaning of cognitive complexity have focused on it. Our Revised DOK (rDOK) is an attempt to preserve as much of Webb’s original DOK (wDOK) as possible, while addressing some of its intrinsic shortcomings. Generally, both versions of DOK focus on the difference between the kinds of skills that are applied more automatically and the kinds of skills that require more careful thought and deliberation when applied. Our efforts with rDOK are primarily focused on how poorly wDOK has been used in practice.

Examination of cognitive complexity should hold test developers’ feet to the fire. It should force them to struggle with the constraints on standardized tests as they try to include more cognitively complex cognition. It should drive them to be more innovative, as it highlights past shortcomings. It should help to make the case that items need to be better, available item types need to be richer and assessment of what standards describe requires real resources to score (and report).

Cognitive complexity should not be so undermined that it becomes just a hoop to jump through resentfully. Norman Webb wanted his DOK to highlight differences between the kind of rich and thoughtful work in which students engage when doing their authentic school work and the simpler thinking that large scale assessments are so often limited to. We think that he was right.

Do Items Have a Central Cognitive Complexity?

Cognitive complexity can be a powerful lens through which to examine items. It can highlight gaps between the richness and depth of instruction, of standards and of classroom learning, and what appears on assessments. Unfortunately, the idea of cognitive complexity is too often just waved at, perhaps resented, with only cursory effort made to consider it. More thoughtful use and consideration of cognitive complexity can contribute to the development of assessments that educators feel better reflect their efforts and the learning goals for students. In other words, tests that they feel have greater face validity.

Perhaps the fundamental mistake in the assessment context in thinking about cognitive complexity — very much like thinking about other aspects of alignment — is assuming (or insisting) that items have some fixed or inherent complexity and should be evaluated as such. RTD’s central tenet is that valid items elicit evidence of the targeted cognition for the range of typical test takers. RTD is deeply rooted in the idea that assessment is about cognition and that cognition varies across the range of typical test takers. RTD is clear that to understand cognitive complexity, one must start with the cognitive paths of a variety of test takers.

So, why is the idea of some innate item complexity wrong? Well, it depends on what is meant by that claim.

 

If item complexity refers to the complexity of the result or product, there are many reasons to reject the idea.

  1. The actual final work product on most items on most large scale standardized tests is merely the selection of one offered option among a small handful. “A” is not a complex product. “C” is not a complex product.

  2. If one argues that it’s not the label on the answer option but rather the contents of the answer option, one has hardly made any progress. Those answers are almost always quite short and straightforward. They certainly lack the range of complexity and depth of answers that might be offered in a classroom — be it in writing or orally. This leaves very little range of complexity for standardized test items.

  3. Furthermore, this approach would suggest that any math item whose answer is a number is not very cognitively complex and that math problems are generally less cognitively complex than even fairly simple ELA items. We do not believe any of that.

  4. Some argue that the final product is evidence of the complexity of cognition of the test taker. Frankly, whether they realize it or not, this simply concedes that cognitive complexity is a trait of the cognitive path (see below) and capitulates on the claim that complexity is in the final product.

Clearly, this idea simply is not productive or useful in the context of large scale standardized assessment. This means that item complexity must somehow be about the process through which test takers arrive at their final answers. And, again, we ask whether this idea is compatible with the idea that items have some important fixed innate complexity.

Put another way, is there some singular decisive process or path through an item to a solution/response that should be the focus of cognitive complexity classification decisions? Is that idea productive as a general approach?

One simply must acknowledge that all items can be responded to with multiple cognitive paths. All. Test takers can respond with nonsense and/or can just guess, rather than working through the problem. Classroom teachers know well that many students respond to stress by losing confidence that they can work through a problem and revert to guessing. One might posit that the singular decisive path is the one that yields the correct response, but guesses can be correct.

Clearly, there exist multiple potential cognitive paths, so the question is really: which cognitive path is the singular decisive process or path through an item to a solution/response that should be the focus of cognitive complexity classification decisions? If, of course, such a thing even exists.

  1. It seems obvious to us that if you had to choose one cognitive path as the most important one, it would be the one that most test takers use — or at least that a plurality of test takers use. But that is an empirical question that can only be answered through a massive amount of quite difficult data collection (ideally through the development of mind reading technology that could be used at large scale, to best assure that the sample whose minds are read is appropriately representative of the testing population). But even were that research possible, this would not simply be a feature of the item. Different testing populations may choose different cognitive paths. Moreover, if standards, curriculum or textbooks change in a state or district, students may be influenced to select different paths. There is no singular decisive path here.

  2. Perhaps the shortest and most direct path to the answer is the singular decisive path to examine for cognitive complexity. Well, one would clearly have to put aside the actually shortest paths. Guessing renders everything low cognitive complexity. Having already seen the problem and simply remembering the answer is not guessing or cheating, and is also a very short cognitive path. But that is clearly not the singular decisive path. Very many math problems on standardized tests can be answered through backsolving, because of the modality of selected response items. Is that backsolving path the singular decisive path of these items for these purposes? If proponents of this approach would accept that and would call for serious efforts to find the shortest path to a correct response (still excluding guessing and already knowing the answer), we could almost respect that. The problem is that that still leaves questions of how much experience and prior knowledge to disregard. Some test takers might not have seen this exact question, but have seen the exact same type of question and therefore can more quickly cut to the answer. Which of these paths does one exclude from consideration and which are candidates for the singular decisive path? And once again, issues of variation across test takers and their classroom — and other — experiences must be considered. There is no decisive answer here.

  3. Perhaps the singular decisive path is the one that is built around the KSAs of the desired aligned standard? That certainly would be convenient. Unfortunately, this idea quickly falls apart. In order for this approach to have any merit, one would have to assume that items are correctly aligned, and that simply begs the question (new school) of what alignment means. It certainly forestalls the reality that items often have multiple paths to a response and begs the question (old school) of which path one should base such determinations upon. Otherwise, it is tantamount to saying, “Well, if they do the item the way we want them to — which might not be the easiest, most obvious or most appealing path for a test taker to take — then this is how complex the item is.” In other words, cognitive complexity is a product of the path that content development professionals (CDPs) would like test takers to take. However, just as different test takers might see different paths as preferable, so might different CDPs. This simply becomes an arbitrary decision.

  4. The previous option is just one way to get to a very common problematic view: the singular decisive path is the one that I would take, that I imagine that I would have taken and/or that I imagine that most (or typical) test takers would take. All of those are projections of the CDP’s own thinking, habits and/or preferences. Because different CDPs can come to different answers here, there certainly is no singular decisive path here to base determination of cognitive complexity upon.

Once you acknowledge that there are multiple cognitive paths through an item, there simply is no way to identify one of them as the singular and decisive path. If items are to have some fixed innate complexity that should be the starting point and focus of cognitive complexity recognition, it is not found in the cognitive paths of test takers.

There is one more notable approach to identifying cognitive complexity – one that is actually quite commonly used. This approach says that skills and standards themselves can be classified by cognitive complexity. Unfortunately, this approach also collapses under the weight of reality and simple practical considerations.

  1. The simplest application of this approach suggests that more advanced skills are of greater cognitive complexity. But this simply turns cognitive complexity into a recapitulation of grade level. That cannot be right, as it renders the idea useless through sheer duplication.

  2. A second application might consider the sophistication of the application of the skill or standard, but this generally becomes a standard-specific recapitulation of the grade level. Alternatively, it might be about the proficiency with which the skill has been (or must be) applied. But — again — proficiency is supposed to be a different construct. (In fact, IRT puts item difficulty on the same scale as test taker proficiency/latent ability. Cognitive complexity needs to be something else, to be useful.)

  3. A third application of this approach might consider the difficulty of the skill or standard, but difficulty is a product of instruction, practice and preparation. Different teachers can emphasize different skills or standards, and different curricula can set out better instructional paths towards some standards over others. Again, this is not simply a function of the Aristotelian ideal of the skill or standard. That is, difficulty is a population-specific result. Furthermore, we already collect empirical measures of item difficulty, and this approach is largely duplicative of that — or at least is largely duplicative for well aligned items.
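The point about IRT putting item difficulty and test taker proficiency on one scale can be made concrete with the standard logistic item response function. A minimal sketch in Python (the parameter values here are illustrative, not drawn from any actual test):

```python
import math

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    """3PL item response function: the probability that a test taker
    with proficiency theta answers correctly an item with
    discrimination a, difficulty b and pseudo-guessing c.
    Note that theta and b sit on the same scale, which is why
    difficulty cannot double as a separate "complexity" construct."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# With no guessing, a test taker whose proficiency exactly equals
# the item's difficulty has a 50% chance of a correct response.
print(p_correct(theta=1.5, b=1.5))  # prints 0.5
```

The design point the sketch illustrates: because theta and b are interchangeable quantities on one latent scale, calling an item "complex" because it is hard just restates its empirical difficulty.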

Where does this too long discussion leave us?

  • There is no innate item complexity to be found in the final product.

  • Test takers always have multiple paths to a response, and most every item has multiple paths to a correct response.

  • There is no particular singular decisive path that one can use to determine item complexity.

  • Classification of individual standards or skills by cognitive complexity either falls apart for redundancy with other measures or is dependent on the population (and its educational experiences).

  • Every route out of this conundrum of how to determine a fixed or innate item complexity resorts to projection and preferences of the CDP (or other human judge of its complexity).

Which means that such a thing does not exist. Which means, cognitive complexity must be grounded in something else. RTD says that items often have a range of cognitive complexity because they prompt a range of typical test takers to take a range of cognitive paths to their responses.

So, what does RTD offer for cognitive complexity determination? Well, that’s rDOK.

ChatGPT Results May Be Plausible, But They Are Not Credible

ChatGPT — the new artificial intelligence chatbot that is all the rage — is amazing. It writes plausible text that seems fairly informed about the world and any number of topics. It is so impressive that there are all kinds of people saying that this sort of approach could replace search (e.g., Google) when looking for information.

Wow is that a bad idea. An incredibly bad idea. Just dumb. Really really really dumb.

First, and least importantly, I find it shocking how impressed so many people are at text written at the level of a very talented 8th grader. That is, the writing of a really smart 14-year-old.

Second, and far more importantly, it appears that people do not understand the role of plausibility and accuracy in what ChatGPT is doing, nor their implications for the kinds of things that they might search for.

ChatGPT does not care about accuracy at all. That’s not part of its programming. That is not how it was designed and not how its designers want it to be evaluated. If you care about accuracy, you need to steer clear of such things.

Instead, ChatGPT cares much more about plausibility. Obviously, it’s just a program, so it does not actually care. And I’m not sure that its designers would use that term, “plausibility.” But I am fairly certain that they would concede that that is part of the goal. ChatGPT is built upon some vast and broad corpus of text and generates responses that fit the patterns that the machine learning AI has found in that vast and broad corpus. That is, it generates text that rather looks like what it has already found, building on what the user types in on their side of the chat.

So, I gave it another sort of test run. I asked it about my work, about the differences between validity and reliability and how they apply in the development of items for standardized tests. Not surprisingly, it started with a bunch of very generic ideas about reliability and validity. It focused more on reliability. Through the chat it started talking about classroom assessment, and I tried to redirect it again.

Eventually, it recommended some books — or were they articles? Well, either way, they were focused on classroom assessment. So, I asked about standardized test development. Most of what it recommended was more about psychometrics. But one of the recommendations seemed on point! It was a book (or article?) that I’d never heard of.

"Constructing Effective Test Items" by Susan M. Brookhart

Fascinating. Let me see if I can find that. I’d love to know more about it.

  • Google found nothing. (Well, it found two PDFs that do not mention such a piece, though all those words do appear. There’s a reference to another piece by SM Brookhart. Well, co-authored by SM Brookhart.)

Google reports no “great matches.”

  • Google Scholar found nothing. (Well, it found the same thing. It cites it as by TP Hall, when that is actually just the location (i.e., Tate Page Hall 334) of the meeting for which the PDF is a copy of the agenda.)

Google Scholar reports one result, but it is just a meeting agenda.

  • Google books did not find it, either.

No results found for "Constructing Effective Test Items" by Susan M. Brookhart

Google Books reports no results

  • Bing offered more results, but none to the article. Rather, they are to Dr. Brookhart’s website, ASCD page, ResearchGate page, etc. There’s also a link to something by a Tamanna Kalim over at Bangladesh Education Article. Now, it is not the worst article in the world, but it is not by Dr. Brookhart and is relatively generic.

This means that ChatGPT just made up this book (or article) that it claims exists. Just made it up. Sure, it is plausible that she would write such a thing. But it does not exist. I am not saying that it is a bad book (or article). I am not saying that it is irrelevant or poorly named. I am saying that it does not exist.

Even when ChatGPT provides references, it cannot be trusted. It just makes them up — because that is what it is designed to do. Present something plausible, something like what is out there — without any regard to accuracy.

Do not trust ChatGPT. It is just not credible.

Addendum

Five days later, I have tried again. I had a slightly different conversation — one a little more focused on challenges of alignment and false negative inferences (i.e., Type II errors). ChatGPT was quite confident.

Yes, if you have two test questions that include the same two skills or concepts, you can designate one of the questions as being focused on the first skill and the other as being focused on the second skill. This can be an effective way to assess both skills within the same test, as long as the questions are well-constructed and aligned with the assessment objectives. However, it is important to ensure that the two questions are not too similar, and that they are measuring the skills or concepts in different ways. For example, if both questions focus on the same mathematical concept but one uses a more advanced problem-solving approach than the other, it may be more appropriate to consider the first question as the primary focus and the second question as a secondary focus. This can help ensure that the test provides a balanced and comprehensive assessment of the test taker's knowledge and abilities.

We had gone back and forth a bit on this point, and so finally I asked, “Where can I learn more about this?” ChatGPT’s first bulleted response was:

Reading books or articles on the subject, such as "Test Development: A Beginner's Guide" by Ruth E. Newton and David E. Wright, or "Standardized Testing: A Practical Approach" by David M. Nichols and Richard C. Sweetland.

These two books do not exist.

2nd Addendum

This week’s Slate Money begins with a segment on ChatGPT, and Emily Peck quickly makes reference to this issue that I had shared with her.

The Ultimate MisNAEPery: Confirmation Bias

This week’s NAEP results have been deeply deeply disturbing. They should leave all of us with incredibly deep questions about education research and the education research and policy community. We have witnessed a new form of misNAEPery that should cast deep doubt on things that we have long taken for granted as true.

MisNAEPery is the misuse of NAEP data — results from the National Assessment of Educational Progress, known as “The Nation’s Report Card.” Please know that NAEP is a very different set of standardized tests. Students do not specifically prep for it and it has no stakes attached to it. Results are not published for individual students, teachers or schools. In fact, that is not even possible, because it uses something called “matrix sampling.” This means that different students have different questions on their forms, and then all the data is combined into aggregate scores for entire states. ENTIRE STATES! (There is also reporting on 27 particular districts which have volunteered to take part in TUDA (i.e., the Trial Urban District Assessment), but because those districts are smaller than their states, they have larger margins of error.)
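A toy sketch may help make the matrix sampling idea concrete. The item counts, block sizes and round-robin assignment below are invented purely for illustration; actual NAEP designs are far more sophisticated:

```python
# Toy illustration of matrix sampling: the item pool is split into
# blocks, each student answers only one block, and results are only
# meaningful when aggregated across many students.
ITEM_POOL = [f"item_{i:02d}" for i in range(30)]
BLOCKS = [ITEM_POOL[i:i + 10] for i in range(0, 30, 10)]  # 3 forms of 10

def form_for(student_id):
    # Each student sees one 10-item block, never the full pool.
    return BLOCKS[student_id % len(BLOCKS)]

# No individual student's form covers the whole domain, so no
# individual score can be reported; but across many students,
# every item in the pool gets administered.
covered = set()
for s in range(300):
    covered.update(form_for(s))
assert covered == set(ITEM_POOL)
```

This is why individual-level reporting is "not even possible": any one student's form simply does not contain enough of the domain to support a score.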

This approach allows NAEP to address countless objections to most standardized tests. Freedom from having to compile results for individual schools, teachers or districts allows it to check for and account for issues that other assessments cannot even dream of. Short tests that nonetheless address large content domains, care around item interaction effects and and and…it’s the gold standard.

The Question of the Decade

The educational policy and practice question of this young decade is about the impact of the pandemic on students, learning and teaching. The most obvious and contentious aspect of this question is the contribution of school building closures — and the consequent reliance on remote (i.e., Zoom) schooling — to “learning loss” (i.e., the unfortunate name given to the idea that students did not learn as much during the pandemic as they would have otherwise, that they did not progress as much as they would have if there had not been a pandemic).

It is odd that this is such a contentious issue, because most everyone has a stake in believing that remote schooling is inferior to in-person schooling. Those who wish to attack teachers unions, educational bureaucracy and even teachers blame them for school building closures and the resulting learning loss. (Of course, they conveniently ignore the fact that other schools that are more responsive to market pressures — such as private and charter schools — also closed their buildings during the pandemic.) Those who think that the New 3 R’s (i.e., rigor, relationships and relevance) are vital to success with the old 3 R’s (reading, ’riting and ’rithmetic), that teaching is more than just lecturing and is instead about meeting students where they are and meeting their needs…well, we think that time with teachers is valuable. We think that teachers matter. We do want to think schooling can help students beyond their own cognitive developmental path and the impact of various out-of-school factors.

We should all want to see that lost time in school with teachers had a cost. Even if people disagree about whether it was necessary or worth it to pay that cost, virtually everyone expected the new NAEP results to give us a sense of what that cost was.

This is because states differed enormously in how long school buildings were closed. Chalkbeat’s coverage of the new NAEP data shows this, such as Texas schools being open 88.7% of the time and California’s schools being open just 6.9% of the time. (Go read that coverage. It’s surprisingly good. And note that while Matt Barnum wrote the story, the graphics — like the one I have copied below — are by Thomas Wilburn. The originals are interactive.)

The Unexpected

The problem is that NAEP does not show that states whose schools were open to more in-person learning had markedly stronger results. It just doesn’t. For example, California’s results slipped back less in 8th grade reading and math than Texas’s, and exactly the same amount in 4th grade reading and math. New York (14.2% in-person instruction) slipped less than Texas in 8th grade and more than Texas in 4th grade. Florida (96.8% in-person) was also worse than Texas in 8th grade and only better than Texas in 4th grade reading. Again, note that this is not about the absolute level of performance, but rather about how much the state’s students slid back during the pandemic, from one cohort to another. Even just among these four largest states, we do not see the expected results.

Taking all the states’ results into account, we do not see the expected patterns. In some cases, we see far weaker versions. In some cases, we do not see anything like what we expected. What literally everyone expected. (And I mean literally literally. 100%. Absolutely everyone. Not a single person predicted what this data shows. Not one.)

Again, go read Chalkbeat’s coverage. And if you want more, there’s EdWeek’s coverage.

The Deeper Problem

No one assaults NAEP as being bad data. It is the gold standard. Those of us who decry the low quality of many state assessments and decry bad analysis of quantitative data point to NAEP’s quality. Those with more confidence in quantitative assessment results look to NAEP as the benchmark.

But suddenly, in light of these shocking results, people are making excuses. Because the 2022 NAEP results do not show what everyone expected, people are…behaving differently.

The real value in data and research is not in finding support for what you already believe. The real value is in helping you to figure out what is true. For those with intellectual integrity, it is more important to learn than it is to convince others. It is more important to be right tomorrow than to appear right all along.

NAEP is telling us that we were wrong. That all of us were. Now, from a Bayesian perspective, the strength of our prior belief should make us less open to countervailing evidence. It should. That is OK. But the strength of NAEP as the highest quality evidence should make us question any prior belief. That is what NAEP is for. That is how everyone who knows about NAEP views it.
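That Bayesian tension can be sketched with a toy Bayes’ rule calculation. Every probability below is invented purely for illustration — none of these numbers is an actual estimate of anything:

```python
# Toy Bayes' rule update. All numbers are invented for illustration only.
# H = "extended building closures caused markedly worse score declines"

prior = 0.95                # very strong prior belief in H
p_data_if_h = 0.05          # the observed NAEP-like pattern is surprising if H is true
p_data_if_not_h = 0.60      # the pattern is unsurprising if H is false

# P(data) = P(data|H)P(H) + P(data|not H)P(not H)
evidence = prior * p_data_if_h + (1 - prior) * p_data_if_not_h

# P(H|data) = P(data|H)P(H) / P(data)
posterior = prior * p_data_if_h / evidence

# Even this strong a prior falls from 0.95 to roughly 0.61:
# high-quality, surprising evidence should move us substantially.
```

The point of the sketch is only that a strong prior legitimately resists weak evidence, but strong surprising evidence should still shift belief a great deal.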

So, I have to ask: if NAEP does not shed valuable light on this question, what is it ever useful for? If this is not the absolute best case use of NAEP data, then what is? And if NAEP is not useful, how is any achievement data ever useful, or any on-demand evaluation of student knowledge, skills and/or abilities — be it standardized or not?

Or, if NAEP remains credible, what does that imply about the value and nature of teaching and the classroom? What does this say about natural cognitive development, as opposed to intentional learning? What does this say about the potential for additional use of remote schooling and how we might reshape childcare structures in this country?

Integrity in the Future

What is not acceptable is to simply ignore this year’s NAEP results.

I need to re-evaluate my confidence in NAEP more broadly. That’s my next step. I am comfortable saying that I would rather find problems with NAEP than have to devalue teachers and the new 3 R’s. I’ve not really dug into the mechanics and methodology of NAEP in a long time. And I’ve never subjected NAEP items to RTD’s level of item examination. At the same time, I also need to rethink the potential of…oh my god it hurts to type this…cyber charter schools. Oh the pain! The pain! But I was wrong about something: either NAEP, or the nature of teaching and the importance of teachers.

As I look around this week, I do not see this kind of soul searching. I do not see acknowledgements of the importance of this moment for education researchers, educational policy practitioners, in-school educators and assessment experts.

That worries me.

Vertical Scales and Unexamined Assumptions about Unidimensionality

Just this week, Chalkbeat’s Matt Barnum asked about the meaning of NAEP’s apparent use of a single scale to report all of its test results. This topic — vertical scaling — is rife with problems, and this example makes them easy to see.

What is a Vertical Scale?

While the same grading scales are reused across grades (e.g., either the A-F system or the 100-point scale), this is not always done with reporting on standardized tests. Though people understand that a student who just earned a B+ in 10th grade knows much more than a student who just earned an A- in 5th grade, some people want to highlight that there is this longer continuum across the grades. They even want to compare the performance of students (or collections of students) across grades. That is where vertical scaling comes in.

With vertical scaling, we do not have to reset our understanding of the reporting scale for each grade. Instead, the scale just keeps going up. So, the average 2nd grader might score in the 140’s, an average 3rd grader in the 160’s, an average 4th grader in the 190’s, and so on and so on, all the way up to the average 11th grader in the 620’s. It’s a VERY long scale, with lots of overlap between grades.

There are generally defensible techniques for doing this — though they rely on problematic assumptions. Vertical scales are very important to support various policy goals and evaluation approaches. More simply, though, they support more kinds of comparisons — even comparisons of how much a single child learned one year vs. another year or how much two children in two different grades learned.

The key to vertical scaling is the use of anchor items. Anchor items allow the linking of two tests — across multiple forms of a test, across different years, across different grades. By reusing a handful of items on each test, they can act as a kind of splice that enables comparisons across tests. That is, comparisons of items across tests. So, if they quantify the performances of test takers on those anchor items on each test, they can use them as a common baseline to link performances across all the items on each test to each other — regardless of which test the items are on.

In the context of vertical linking, they take some of the harder items on the lower test and some of the easier items on the higher test and make sure they are all on both tests. (They do not have to be the easier/harder items, but I think the logic works better when they are.) These shared anchor items provide the psychometric bridge to create a single reporting scale for both tests. Do that with all the gaps between each grade and you can get single scales for the entire span of K-12 education.
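The splice logic can be sketched with a minimal toy example. This is just one simple linking approach (a mean-shift on separately estimated item difficulties); real equating methods are far more sophisticated, and every item name and number below is hypothetical:

```python
# A minimal sketch of common-item (anchor) linking between two adjacent
# grade tests. All item names and difficulty values are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Item difficulties from two SEPARATE calibrations (arbitrary units).
# "anchor_a" and "anchor_b" are the same physical items on both tests.
grade3_items = {"g3_easy": -1.2, "g3_mid": 0.0, "anchor_a": 0.8, "anchor_b": 1.1}
grade4_items = {"anchor_a": -0.9, "anchor_b": -0.6, "g4_mid": 0.2, "g4_hard": 1.4}

anchors = ["anchor_a", "anchor_b"]

# The same items received different numbers in each calibration; the
# average gap on the anchors tells us how to shift one scale onto the other.
shift = mean([grade3_items[a] for a in anchors]) - mean([grade4_items[a] for a in anchors])

# Re-express every grade 4 item on the grade 3 scale.
grade4_on_grade3_scale = {item: b + shift for item, b in grade4_items.items()}
```

After the shift, the anchors land at (nearly) the same values on both tests, and every other grade 4 item rides along on the same splice — which is exactly why everything depends on those few shared items behaving well.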

Unfortunately, I don’t buy it.

Unidimensionality’s Basic Falsehood

Unidimensionality is the idea that whatever it is that we are measuring, we really are measuring just one thing. That is, if this is a math test, then we are measuring math. We can basically treat each item as contributing equally to the score because each item measures one unit of math. We can summarize performance with a single score on this 3rd grade math test because 3rd grade math is just this single homogenous thing.

The problem is that 3rd grade math is not a single homogenous thing. 3rd grade math is MANY things. Common Core has 5 different domains in 3rd grade math, comprising 25 different Math Content Standards. If one counts all the individually broken down subparts of CCSS’s 3rd grade math standards, you get 33. Of course, there are also the eight Standards for Mathematical Practice.

How can we report 3rd grade math as a single score when it has all those different parts? We know the parts are different because the content experts tell us so. We know that different kids have trouble with different parts. We know that they are different grain sizes — even just between the Standards for Mathematical Practice and the Content Standards.

The Reporting Compromise and Its Unexamined Assumption

There is such utility in reporting performance unidimensionally that we simply have to find a compromise. Now, this is a compromise that we have all long been comfortable with. After all, we accept report cards that give students a single grade for math, a single grade for science, and a single grade for each course they take. We accept that in test reporting as well.

The compromise is to acknowledge that there are different standards, so the reported score is a composite score. 4 parts this domain, 3 parts that domain, 6 parts this other domain. It is like a teacher who says that grades in their class are made up of:

  • 30% homework

  • 30% projects

  • 20% tests

  • 20% class participation

Because standardized test reporting impacts so many thousands or millions of students, those composites should be designed very carefully. They should properly weight the different elements of the entire content domain, because different weightings will yield different results. Different weightings will encourage teachers to focus on different parts of the curriculum. Different weightings will favor or disfavor different students, different teachers, different schools and different instructional approaches.
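To make the arithmetic concrete, here is a minimal sketch of a weighted composite, using the hypothetical teacher’s weights from above. The category scores are invented, and the second weighting exists only to show that the same performance yields a different result under different weights:

```python
# Weighted composite score: the hypothetical teacher's scheme from above.
# The category scores (0-100) are invented examples.

weights = {"homework": 0.30, "projects": 0.30, "tests": 0.20, "participation": 0.20}
scores  = {"homework": 90,   "projects": 80,   "tests": 70,   "participation": 100}

composite = sum(weights[k] * scores[k] for k in weights)
# 0.3*90 + 0.3*80 + 0.2*70 + 0.2*100 = 27 + 24 + 14 + 20 = 85

# The SAME student under a hypothetical, more test-heavy weighting:
weights_alt = {"homework": 0.10, "projects": 0.20, "tests": 0.50, "participation": 0.20}
composite_alt = sum(weights_alt[k] * scores[k] for k in weights_alt)
# 9 + 16 + 35 + 20 = 80
```

Nothing about the student changed between the two composites; only the blueprint did. That is the sense in which weighting decisions favor or disfavor particular students and instructional approaches.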

Thank god, the developers and sponsors of standardized tests know that the weightings matter. They try to be thoughtful about them. However, they may not be thoughtful enough. They may be too driven by convenience and too accommodating of the limitations on the tests (and of how those limitations drive the weightings). But no one takes designing a test blueprint lightly. Nonetheless, there is always something arbitrary about the weightings, as there is no definitively correct answer and there are so many factors that influence blueprint design that have nothing to do with the content domain itself (e.g., item type limitations, scoring budgets, seat time limitations, etc.).

Unfortunately, the real unexamined assumption is that the items themselves actually measure what they purport to measure. There is very little work on making sure that items do not individually produce false positive or false negative results. That is, whether students can solve them without using the targeted standard or might fail to solve them for reasons other than lack of proficiency with the targeted standard.

This lack of care with item validity (i.e., items that elicit evidence of the targeted cognition for the range of typical test takers) undermines the thoughtful work of designing the composite that a test’s blueprint promises. If the items don’t measure what they purport to measure, the elements of the composite are not properly weighted. Some elements might not even be represented at all!

This leads to scores whose meanings are uninterpretable — unless we just accept that the blueprint and the details of the composite’s weights do not really matter. After all, 3rd grade math really is just one thing, right?

Problematically Assuming Unidimensionality for Vertical Scaling

Vertical scaling necessarily assumes unidimensionality. It has to. Even if the composite were crafted incredibly wisely and every item actually were perfectly valid, successive grades would have different composites. Some subdomains are more important in 3rd grade math and others more important in 4th grade math. Eventually, lower level content is taken for granted so that higher level content can be focused on. For example, while arithmetic is always important, the importance of integer addition on tests fades as more advanced arithmetic is covered, and eventually the importance of arithmetic fades as algebra and other more advanced topics gain focus.

  • If the composite changes, what does it even mean to link scores between them?

  • If we acknowledge that the summative score is made up of different subdomains, how many anchor items do we need to link the subdomains across grades?

  • If a new subdomain appears at some grade, what does it do to the very idea of linking scores across grades?

The only way to resolve these (and other) issues is to hand wave them away and assume unidimensionality.

Back to NAEP’s (facially) Vertical Scale

The National Assessment of Educational Progress — “the nation’s report card”!! — makes no such claim. It does not claim to be a vertical scale. It does not claim that 4th grade scores can be compared to 8th or 12th grade scores. It does not claim that a two-point increase in 8th grade means the same thing as a two-point increase in 4th grade. It does not claim that high enough performance on the 8th grade test would mean more advanced average proficiency than a very low performance on the 12th grade test.

Not at all. It is not a vertical scale. But the three grades are reported in a way that looks like it might be a vertical scale.

But here is how we know it could never be a vertical scale: You cannot anchor items between two levels so far apart. If the items on the 4th and 8th grade tests each actually represent appropriate grade-level standards, we should not expect that any decent number of 8th graders would get the 4th grade items incorrect. Nor should we expect sufficient 4th graders to get any 8th grade items correct. Certainly not enough to splice the two tests’ scales together.

This is not about how smart the 4th graders are. Rather, it is that they simply have not been exposed to the 8th grade content, yet. Any signal (i.e., information about 8th grade math skills) in that data would be overwhelmed by noise (e.g., test taking savvy). Similarly, 8th graders who get 4th grade items incorrect might be far more likely to do so because they misread the item, rushed or were sloppy than because they lack the content expertise. Again, the noise of construct-irrelevant factors would overwhelm any signal of some 8th graders’ lack of proficiency with 4th grade content.

You simply cannot link tests that are so far apart because you cannot ask these students the same kinds of questions.

The Point?

Well, I see two important takeaways.

First, I find Matt’s question disturbing because he works for a very good education-specific news site and his beat includes both education policy and education research. Among scholars I respect, he is well thought of. No question, he knows a lot for an education journalist.

And yet, even Matt did not understand this. I’ve no idea how many times he has reported on NAEP scores, and the use of testing has been one of the dominant themes in education policy for decades. If Matt does not understand this, then what does that say about the rest of the media? What does this say about our elected leaders, about parents and about voters?

Second, whenever I challenge psychometricians about their assumptions of unidimensionality, they retort that their methods are robust to some amount of multi-dimensionality. They report that their statistical methods do not break down when faced with data that is not strictly unidimensional. Of course, I accept that. But that does not mean that the results they report mean at all what they think they do. Validity is about “interpretations of test scores for the proposed uses of tests” (The Standards for Educational and Psychological Testing, 2014, p. 11). Even if the statistics yield a result, the use and acceptance of vertical scales — even if only the suggestion of a vertical scale with NAEP — shows how little consideration psychometrics gives to validity.

I suppose that there’s a third takeaway, though it is less far-reaching. Matt’s question about NAEP scores has long since been addressed. In 2012, David Thissen wrote about the question of NAEP and vertical scales. “The conclusion of this essay will be that evidence can and should be assembled to support, and make more precise, interpretations of the first kind (“one year’s growth”), while interpretations of the second kind (cross-group comparisons across four-year spans) should be discouraged.” This work was done under contract with the publishers of NAEP, and yet they have taken up neither of his suggestions. They should do better.

Excellence is Multi-Dimensional

My high school experience back in the 1980’s was a bit odd, in quite a few ways. For one, it was an almost brand new school when I got there. It was a new public exam/magnet school and, for various reasons, the district decided to just let in one class at a time. So, the first year, there were just freshmen. The second year, that first class rose to be sophomores and my class joined. It wasn’t until its fourth year that we had seniors, and that first class was the top class their entire high school careers.

I was on a competitive team from my freshman year, and there were two real stars in the class above me, but they took very different paths with very different strengths. One was rock steady, always doing what he could do, without mistakes. The other was more mercurial, with more brilliant moments mixed in with too frequent mistakes.

Now, both of them were excellent. But one was steady at a high level, and the other had more variation from meet to meet. Sometimes James exceeded Peter, but sometimes James fell short.

Throughout our high school years, Peter raised his level. He remained consistent, not making mistakes. But he did that at a higher level of performance each year. Through those years, he nearly caught up to James’s peaks. Similarly, James also improved. But for James, improvement had to mean addressing those mistakes. Through those years, he nearly caught up to Peter’s consistency.

Back then, I thought that I was more like James. I wanted to be more like James. I wanted to reach those heights, and I did not yet realize that James and Peter were converging. I saw them embodying two contrasting archetypes. And I certainly did not appreciate the value of consistency or of simply not making mistakes.

It was not until late in college that I really started to appreciate that James was not better than Peter. I had not understood the value of reliability — particularly when that reliability comes with a high level of performance. Yes, I still see value in moments of peak brilliance, but I value consistency far more than I used to.

Consistently avoiding the mistakes that you are individually capable of avoiding requires a kind of focus that I did not have as a teenager. While I have gotten better, it is still sometimes hard for me. Whatever the reasons, it does not come easy to me in any domain.

As an adult, I see incredible value in avoiding downsides, potholes and mistakes. I see it in reliable contributions from colleagues, reliable friends and reliable recipes. The staples of our lives, of our work, of our pantries are so under-appreciated. They deliver every day, and being able to count on them makes everything else so very much easier.

This was true on my high school math team. The most thoughtful football analysts say it is true of running backs, too. It is an under-appreciated kind of excellence.

Who Makes Decisions about Goals and Resources?

Recently, someone tweeted to me, “I have lots of faith in teachers to implement learning properly. I have less faith in schools and admins to set the proper goals and resource appropriately.”

We are in an era of decreasing trust in teachers and schools. Of course, we are in an era of increasing distrust of all institutions, so this shouldn’t be so shocking. And while trust in teachers remains quite high, it has declined a little bit in recent years. Teachers now trail only nurses and medical doctors, but they used to rank higher. (They are still far ahead of police officers, judges and bankers. Local office holders and members of congress are, on net, a little distrusted and very much distrusted, respectively.)

Nonetheless, it is quite striking that someone would distrust schools and administrators to “set the proper goals and resource appropriately.” These simply are not the jobs of teachers or school administrators.

Educational goals are laid out in state learning standards. These state standards are developed by educational professionals, researchers and policy-makers, and then customized for various states. Finally, these customized standards are ratified and endorsed by state legislatures. For example, Florida customized the Common Core State Standards and the Next Generation Science Standards and calls them the Sunshine State Standards.

Educational goals are not set by individual teachers, individual schools, districts or their administrators. Educational goals are set by state legislators.

Educational resources are similarly out of the hands of schools and educational administrators. States are the primary determiner of educational resources — again, through acts of state legislatures. Local municipalities also contribute to educational resources through local government budgets. Again, it is local elected officials who make these decisions. In some areas, the school district has the authority to levy taxes, instead of the general local government. But this is done through elected school boards. In none of these cases are schools or administrators responsible for these decisions. In all of these cases, it is elected officials.

Of course, the federal government contributes ~10% of school resources. Here, it is Congress that decides. Again, elected officials.

To be fair to all of those legislative bodies, their acts usually have to be signed by an executive. Thus, it is not the legislatures alone who set standards or set resource levels. But they are all elected officials.

Now, where I live, we actually vote on the town budget every year. My local town government does not have the power to set budgets. Rather, its elected officials put together a budget for the citizens of the town to vote on. Occasionally, a town budget somewhere does not pass, and the town government must put forth a new proposal for citizens to vote on. This American Life recently did a piece on a contentious effort of citizens to radically alter a school budget. But nowhere in any of this do schools or school administrators set budgets.

It is incredible that people distrust teachers and administrators to do things that they’ve not been responsible for in generations.

Better Tests, Not Lesser Tests

Standardized tests and the uses to which they have been put have a very troubled history — and in many ways that is still true today. One very common response to this situation has been attempts to marginalize or eliminate standardized tests, or at least any meaning that might provide a foundation for decision-making.

And yet, teachers should still be accountable to principals, parents and students, schools accountable to communities and school boards, and school districts accountable to communities and various levels of governmental oversight. 

There has been this idea that standardized tests are responsible for bad decisions that have used them as justification. This idea persists, even though poor school funding and marginalization — within our schools! – of low performing students and populations go back as long as any concept of schooling has existed. 

There has been this idea that if we can protect students from the evil tests which come from those unknown strangers, then those who know and love them best will do right by them. And I agree that that is the best case. That is what I want teachers, schools and school districts to do.

But actual history shows us that that is often not the case. We have too often settled for unacceptably low performance from some children and expected even less from others. Too often, educators and policy-makers have been blinded by the soft bigotry of low expectations. Dumbing down assessments so every kid will score well on them does a disservice to the very populations and communities that our educational systems have so long failed to do right by.

Now, I am the first one to assail the quality of our standardized assessments. They really do need to be changed. But the answer cannot be to make them so easy that they are incapable of providing any meaningful information. That perpetuates the false sense of complacency that this child is being well-served and that community is having its educational needs met. It lowers the bar on what we can expect from our schools, and I find that entirely unacceptable.

On the other hand, simply making tests more difficult is no better an answer. It is trivial to accomplish, but it too fails to provide useful information. 

Rather, each state has standards that define what students should learn in each grade. Some set of standards has been endorsed through our democratic processes in each state (i.e., by state legislatures and signed onto by governors). We know what the children should be learning — at least academically. Our standardized tests must do a better job of assessing those goals, so that parents, communities, school boards and other levels of government can make appropriate decisions about how to better support our children.

While these academically-focused standardized tests should not be the only basis that policy-makers use to make decisions about our schools, and not the only basis by which community members evaluate their schools, they deserve better information about this core function of schools, not lesser.

Copyright, Fair Use and Plagiarism in Assessment Development: Part II

Last week I wrote about plagiarism and how the concept applies in assessment development. Plagiarism is about using others’ ideas or words without giving appropriate credit (i.e., citation). But appropriate varies from context to context, and in assessment development the only things that are generally credited are excerpts from previously published work. That is, quite extended quotations — which are credited to their original creators or copyright holders.

This week, I address copyright infringement and fair use, in the context of assessment development.

What Is Copyright?

Copyright is grounded in the US Constitution, which gives Congress the power to “secur[e] for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” That covers copyright and patents, respectively.

Copyright only protects expression, and not ideas. So, it protects this blog post as I wrote it, but not the ideas or organization of the ideas. It protects my words. Copyright protects visual media, as well. That’s photographs, illustrations, paintings, drawings, film/video, etc. Again, it protects the actual exact thing, not the ideas behind it. You can rip off a plot without violating copyright.

Copyright does NOT protect physical things. It does not protect inventions (i.e., that’s patent law). It does not protect designs for clothing or handbags, even though their look is part of the point. Should it? Well, it doesn’t. That’s how it works. It protects computer software because…because it does. Because software is written in a programming language, and back in the day lawyers convinced courts that that was the best way to think about computer software.

Copyright allows the copyright holder — usually the creator, unless they have transferred their copyright to someone else — to decide how the work may be used. It’s up to them. And it lasts for a limited amount of time, though that period keeps getting extended longer and longer. It is supposed to expire, eventually.

What About Fair Use?

Fair use is the big exception to copyright. The copyright holder gets to control how and in what conditions the work can be used, except for fair use. People talk about parody and satire, but what they really are talking about is a particular kind of fair use.

Fair use is not just an opinion. It is a technical term and the law (i.e., Section 107 of the Copyright Act) defines a four factor test to determine whether something is fair use.

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

  2. the nature of the copyrighted work;

  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

  4. the effect of the use upon the potential market for or value of the copyrighted work.

These four factors are all relevant. Each must be examined and then the results are weighed out.

Factor 1: Purpose and Character of the Use

Some people falsely think that if they are not making money when they violate copyright then it is ok, that it is fair use. But they are generally wrong. Yes, that is relevant, but it is not enough. It is just one part of this one factor.

As one weighs out the four factors, it does matter if the goal of the enterprise that is using the work is to make money. That is, use by a commercial entity in the furtherance of its goals and operations — even if the reproduction is not actually being sold — is commercial use. Even if the use does not make money itself, it can still weigh against fair use. On the other extreme, non-profit educational use is the best case for fair use. A for-profit educational organization is close to that extreme, but not all the way there. A non-profit that is not educational is not all the way there, either. A random book publisher or restaurant? Those are just straight commercial uses.

Quoting a work to comment upon it, like in a book review or various forms of scholarship? That leans heavily toward fair use. Some might even consider that an educational use.

Regardless of the context, when use of a prior work transforms it in some way, fair use is a more likely conclusion. This is usually the point of parody and satire. The more they transform the original work, the more likely they are to be seen as fair use. Summarizing a work may include all of its ideas, but it entirely transforms the expression of those ideas. How transformative a use is can be a matter for debate, but there’s no question that greater transformation is more likely to be fair use.

Factor 2: Nature of the Copyrighted Work

Not only does the use matter, but so does the nature of the original work being copied.

Creators should generally have control over the first publishing of their work, so courts are more protective of unpublished work than published work. A work being unpublished thus weighs against a finding of fair use.

Copyright is generally understood to be more focused on encouraging creativity than other sorts of endeavors, so fiction and other literary work is generally more protected than work aimed at being informative. This blog post? Informative. The 5pm newscast on your local TV station? Informative. That hit movie or novel? Literary.

Where does documentary film fit in that? It is a certain kind of creativity to record, edit and put that together. It is supposed to be entertaining, in addition to being informative. So, the courts might give medium weight to that, rather than maximal or minimal.

Once again, however, this is just one of four factors, and they all must be weighed against each other. There is no condition of any factor that guarantees a particular final ruling on the question of fair use.

Factor 3: Amount and Substantiality of the Portion Used

This is likely the easiest factor to understand, of the four.

If you are using a small piece of the original, then it is more likely to be fair use. The more you use, the less likely it is to be fair use. You copying the whole thing? That’s gonna weigh heavily against you.

Note that this is not about the absolute quantity used, but rather the relative quantity. Copying all of a 4-line poem is using the whole thing, while copying four lines of a 100-line poem is only using a small piece. 200 words from a 20-page short story is very different than 200 words from a long novel.

Of course, this poses a problem: how do you define the original work? Is it five minutes from an episode of a television show (i.e., nearly 25%)? Or, is it five minutes from a whole television series (i.e., less than 1%)? Obviously, the copyright holder would want to claim the former, and the person seeking fair use would claim the latter. It can sometimes be quite a challenge to figure out how to evaluate even a single one of the factors.

Factor 4: Effect of the Use Upon the Potential Market

This factor ought to be quite easy to understand, but it is far too often ignored.

If the use would tend to decrease the market for the original work, that counts against fair use. So, a teacher who makes copies of a story to distribute to their class, so that the school does not have to pay for the book in which the story appears? Sure, that’s an educational use, but it directly harms the market for the original work. Without making those copies, the school would have to pay for the book.

In theory, some uses may encourage the market for the original work. For example, movie reviews bring free publicity for films. Now, this — like the photocopied story — is hard to disentangle from the purpose and character of the use. But it can be pretty easy to recognize when the use is primarily to avoid having to pay for the original.

On the other hand, when a work is out of print and unavailable on the market, there is less likely to be an impact. That doesn’t mean it has to be available in the form you would prefer (e.g., streaming). If it is only available in some other form (e.g., BluRay or DVD), that’s still available.

Applying the Four Factors to Test Development and Publishing

The assessment industry has settled on some patterns of practice in building stimuli for items that can be examined through these four factors to determine whether they are fair use. These stimuli may include reading passages or other works that could be under copyright.

  1. While professional licensure exams certainly have a commercial purpose, K12 assessments are intended for educational and public policy use. Test development vendors that are non-profit organizations are, therefore, engaged in non-profit educational work. For-profit test developers are not quite as well off, in this regard. But all of them tilt at least a little towards fair use.

  2. Original works may be more creative (e.g., poetry or short stories) or more informative (e.g., journalism or scholarship). If they are previously unpublished, they are always commissioned by the test developers, who therefore own the copyright. But other works — if the copyright has not expired (i.e., they have not entered the public domain) — are quite often in the more creative realms. That argues against fair use.

  3. Tests may include whole poems or articles, or may be limited to excerpts from larger works. It really runs the whole range. In some cases, this argues for a determination of fair use, and in others it argues against. Most commonly, though, they are excerpts.

  4. It is not likely that any use on a large scale assessment would lessen the demand for the work in the market. That argues for fair use.

Taken together, this very straightforward analysis suggests that most use of potentially copyrighted works would pass for fair use. Excerpts used by a non-profit company to put out a product for educational use are usually going to be fair use. A for-profit company using an entire poem or an original photograph is far less likely to be considered fair use, regardless of its impact on the market for the work.

Additional Considerations

Fair use does not matter if the user (e.g., a test developer or client) is willing to pay a licensing fee agreeable to the copyright holder, and works that are in the public domain have no copyright claims (by definition). This is why most test developers go with three options.

  • Try to find public domain works

  • Permission existing works (i.e., pay a licensing fee)

  • Commission works that they can own the copyright to.

Even more broadly than that, many test developers make use of ideas that they find elsewhere. But ideas are not copyrighted. Summaries, reimaginings, simplifications and adaptations are so transformative as to not even constitute use of the original expression, and therefore do not raise questions of fair use vs. copyright violation.

*********************

This entry does not address any of those other intellectual property areas (e.g., trademark, trade dress, trade secrets, patents), as they do not apply at all to questions of test content — though they certainly are interesting in their application elsewhere, even in the context of assessment development organizations.

Copyright, Fair Use and Plagiarism in Assessment Development: Part I

Large scale assessments do not exist in a vacuum and often rely on using the work of others — even feature the work of others. This can lead to concerns about copyright infringement and plagiarism.

Copyright is a legal construct. It is mentioned in the US Constitution. Article 1, Section 8 says, “The Congress shall have Power” to do a whole bunch of things, and Clause 8 lists, “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;” But copyright is the focus of the next blog post. Today, I am focusing on plagiarism.

Plagiarism is not a legal issue. There are no hard and fast rules on plagiarism because there is no central authority who gets to decide on such things. But generally, plagiarism is inappropriately using someone else’s work — usually without proper attribution.

In the previous paragraph, “inappropriately” and “proper” are carrying a lot of weight. No one can give a general definition of plagiarism that is more specific than that because standards for plagiarism are contextual. That is, they vary from one context to another.

Example #1: Shakespeare

The great Broadway musical and film West Side Story rips off Shakespeare’s Romeo and Juliet. We all know that. Two warring groups. A teen from each meets at a party; they immediately fall in love and find a way to be together — even to marry in secret. He kills one of her cousins, but she marries him anyway. Through tragic misunderstanding and mistakes, they die in the end. Well, he dies. In West Side Story, she lives. Everyone knows that West Side Story is Romeo and Juliet.

So, is that plagiarism? It’s never credited. The story is changed, but it is barely changed. The setting is updated and it’s turned into a musical, but the plot and main characters are basically the same.

There is this book from the 1990’s by Jane Smiley, A Thousand Acres. It’s King Lear on a 20th-century American farm. The same plot and characters — whose first initials match their analogues in King Lear.

There’s this new play, Fat Ham. It’s Hamlet. Well, it’s not quite Hamlet. There are a LOT of changes. But the basic setup is Hamlet. And the main character slips into some of Shakespeare’s lines from time to time — from Hamlet, of course.

None of that is plagiarism. We are ok with movies, books, plays and television reusing old ideas, be they famous or more obscure. We do not expect them to be attributed to the original author, and certainly do not require it. Often, knowing about the original work can deepen our appreciation of the new work. It is not the changes from the original plots or settings that keep them from being plagiarism. It is just that the expectation in this context is that ideas are reused all the time, even with rather little change.

In this context, it’s just how it works. Heck, Shakespeare himself plagiarized almost everything. Sure, he added a twist here and there — brilliant alterations to make for a better story — but he based his work on the work of others without ever giving any credit for it. Scholars have basically figured out what books he owned because there are particular details in different plays that make clear whose versions he based his own work upon.

Example #2: Academic Writing

Academic writing is “the worst” (Miranda, 2015). The expectation is that you have to attribute every idea that was not originally yours to the proper source. That is, “Using the ideas, data or language of another without specific and proper acknowledgement” (Vice Provost for Student Affairs, 2020). Like, if I wanted to mention chocolate chip cookies, I should credit Ruth Wakefield, who invented them in the 1930’s.

Whether I am using someone else’s words or someone else’s ideas, in academic writing I have to credit them. “Specific and proper acknowledgment,” says the Vice Provost of Teachers College.

Now, even in academic writing, as careful as it is about plagiarism and credit, there is room for judgment. No, no one would expect you to cite the inventor of the chocolate chip cookie. And these days, no one cites Watson and Crick for their 1953 discovery of the structure of DNA, either. Ironically, if a work is important enough — foundational enough — it transcends the need for credit. Watson and Crick (1953) have only been cited 16,000 times, even though vastly more work builds on their ideas. And when I wanted to use the phrase “the worst,” though I knew that what I heard in my head was from Lin-Manuel Miranda’s use of it in Hamilton, I didn’t really need to cite him.

But if I were to talk about the need to consider the details of things and understand how it feels to be in them, and also to consider the big picture using the idea of the balcony and the dance floor, I would have to cite Heifetz. This video is all about that, and particularly ripping off how he used that metaphor, its purpose and context and lessons. There is nothing wrong with writing about this, even writing that much, but they really should have cited Heifetz. That video does not meet academic standards for citation, but it is not from an academic institution.

Example #3: Blogs

There is no rule or expectation for blogs. Blogs are a little microcosm of the world, in this regard. Some blogs cite sources more, and some cite them not at all. With blogs, links often serve as citations.

Both academia and link-heavy blogs are about the conversation and the connection of ideas. There are other reasons (e.g., credibility) to cite and link, but a big part of it is just to continue to be a part of a larger and ongoing conversation. But that’s voluntary, when it comes to blogs.

Most blogs are not trying that hard to give credit. Their authors want to feel more ownership of their ideas. And adding all those citations and/or links actually makes it harder to read. Academic writing is really hard to read, and one of the contributors to that difficulty is all of those citations. Blogs want to be more accessible than that.

Plagiarism and Assessment Development

Plagiarism is about failing to give appropriate credit for using someone else’s words and/or ideas. And what constitutes appropriate credit varies by context. What is appropriate in the context of large scale assessments can be observed by looking at what large scale assessments have historically done.

Assessments do not credit originators for ideas, not generally. Excerpts from previously published works are generally credited to their authors and/or their copyright holders. Generally. And that’s it.

That is how it works.

One might argue that it should work otherwise. One might argue that it should be more like academic standards for plagiarism. I would respond that the audience (i.e., the test takers) are likely not prepared for academic levels of citations and that they certainly do not expect it. But should there be more citation than there has been? Well, some might think that, but it is just their view in the context of established expectations that are quite different. They can try to convince people, but there is no authority they can cite that makes it mandatory or even appropriate.

They might argue that to do otherwise is copyright infringement. In fact, citation is no protection against copyright infringement and what they have a problem with is likely not copyright infringement, in the first place. But that’s for Part II.

Why Not Speak Up?

One of our colleagues pointed out to us last year that while humility — one of our core principles — is important, it is also important to recognize that a lack of confidence can be a real problem. Appropriate confidence in one’s own expertise is critical to successful collaborative work, too.

Thus, we dove back into thinking and came up with Expertise, Confidence & Humility. We are pretty happy with it, but we did not dive into the messier aspects of why people might not speak up when they should. We focused on the importance of appropriate confidence in one’s expertise, but not on reasons why someone with such confidence still might not speak up.

In that piece, we acknowledge that there are gender issues here around internalized and externally imposed societal expectations, but we did not address the social expectations that the expert in question might have for others — often well grounded expectations.

As we mentioned, one of us has had too much experience not being expected to have something worthy to contribute. If this happens enough — being faced with others not having confidence in your own expertise — it obviously gets more and more difficult to speak up. Why bother when you know that no one will listen, anyway? This is not a matter of confidence in one’s own expertise, but rather confidence in others’ lack of respect. Yes, this is a real problem.

Unfortunately, it gets worse. Members of less powerful or prestigious groups (e.g., women, members of underrepresented minorities) can face real negative consequences for speaking up. For not knowing their place. For being — for lack of a better word — uppity.

And even worse, being right can make these consequences even more severe. People who resent the uppity voice will very much want to reinforce their own dominance, perhaps making sure to point out when that voice is wrong. But if that voice is not wrong, their need to reinforce dominance will seek other outlets — and perhaps require even more substantial efforts to enforce a desired hierarchy.

We don’t have an answer to this. We know that within our own teams, such dynamics should never happen. That violates the norms we try to establish and maintain. And we hope that the larger organizations in which our teams work are similarly disapproving of such attitudes.

But we know that this horrific dynamic exists broadly, and even within our own organizations and teams, there are people who are nervous to speak up because they have learned these problematic lessons elsewhere.

Of course, all of us still have to be mindful of when we are relatively more expert or relatively less expert in a room. But one’s ability and readiness to offer contributions is complicated by doubt about whether others will listen, and fear of backlash simply for speaking.

We wish we had an answer to this, but we do not.

Innovations and Citations

As an academic researcher and a dissertation coach, I am very familiar with the importance of citations. When explaining this to people, I say that what makes scholarship different is participation in what I call “the scholarly conversation.” That is, by positioning their research in the context of what came before, the scholar credits those who came before, demonstrates understanding of what came before and frames their new offering as they wish it to be framed for their readers.

My wife is an attorney and often a litigator. When filing briefs and motions with a court, she has to play a similar citation game. That is, the credibility of her arguments is increased because she demonstrates her understanding of what courts, legislatures and regulators have decided in the past, and she similarly frames her arguments as she wishes her readers (i.e., a judge and their clerks) to understand them.

Both of these uses of citations are intrinsically conservative. They look back in time for wisdom and authority from those who came before.

In matters of law, this makes a great deal of sense. We do not want the law to change frequently. Stability and predictability of the law and of the outcomes of judicial decisions is generally a good thing. In matters of scholarship, it makes a great deal of sense, as well. It allows scholars to build on a huge amount of previous — often complex and subtle — work of others without having to review all of it in depth. Rather, the scholar tells the reader where they can find support for a point or full explanation of something or others’ evidence for an idea.

However, conservatism is at odds with innovation. Innovation looks forward to something new, while conservatism looks backward for the wisdom of the ages. To get around this, innovators often have to claim that they are restoring the true wisdom of the past that has been missed or misunderstood. I just had to do this, to some degree. Efforts to expand civil and legal rights to new groups of people in this country often have to attempt a similar strategy (i.e., no, you’ve underread the 14th Amendment; it actually suggests that we should…).

The problem with this conservative approach is that it stifles innovation. This may be a good thing. We do not want radical changes in contract law. We do not want to change what basically works and is widely depended upon. But it depends on the assumption that things do basically work and that when they could bear to be improved upon, there is relevant wisdom in the ancients that we can depend upon.

This is a challenge to the Rigorous Test Development (RTD) project. We have always said that content development is a black box that researchers and the literature have ignored. There simply is very little work on what the practice of assessment development is about, and virtually nothing about content development. ECD (Evidence Centered Design) really stops short of content development — or rather, it works around it, looking at test design and psychometrics, but not item development.

So, I wonder about assumptions about the value of citation-based arguments and the values they are based in. If things are not – in fact – on the right track, then is this conservative approach an obstacle to necessary improvement?

Reading Skills, Making History and the 3rd Amendment

(Many do not understand this, but Common Core did not just include standards for Math and English classes. The Literacy standards are for English class, but even on the title page it is clear they are also for “History/Social Studies, Science, and Technical” classes. Anywhere that one reads to learn information.)

Common Core’s (CCSS’s) sixth reading standard says, “Assess how point of view or purpose shapes the content and style of a text.” This skill is applied to reading literary works and to reading for information — and even to speaking and writing. Instruction in this skill begins in kindergarten, and awareness of the author and the fact that authors have a point of view begins in third grade. By high school exit, students should be able to, “Determine an author’s point of view or purpose in a text in which the rhetoric is particularly effective, analyzing how style and content contribute to the power, persuasiveness, or beauty of the text.”

The Third Amendment to the Constitution of the United States says, “No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.” The Bill of Rights only has ten amendments, and two proposed amendments were rejected. This issue of quartering soldiers was clearly very important to our Founding Fathers. The 9th Amendment says this is not an exhaustive list and there are other rights (“The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”), but they listed a whole bunch of particular rights that they wanted to specifically enumerate.

Why did the quartering of troops make that list when things like marriage or travel did not? Heck, voting is not even listed — not enumerated.

That important sixth CCSS standard is quite relevant here. As adults we should be able to consider the point of view and purpose of the authors of Bill of Rights when thinking about what was included and what was not.

The political leaders of the new states and nation had just gone through quite a trauma. They had particular grievances with their old king (e.g., “Quartering large bodies of armed troops among us,” “He has plundered our seas, ravaged our Coasts, burnt our towns, and destroyed the lives of our people.”) and wanted to make sure that the new central government did not repeat those offenses. They listed the issues that were on their minds, because they had just gone through something. They did not wish those offenses repeated.

And they knew there were others (i.e., see the Ninth Amendment), but they needed to make sure about that recent stuff not happening again.

******************************

There is a great quote of ambiguous meaning, “Well-behaved women seldom make history.” Many people take this to mean that women should not be concerned with being well behaved, as behaving well would prevent them from great accomplishments. That is a wonderful interpretation, and historian Laurel Thatcher Ulrich applauds the sentiment.

However, when Dr. Ulrich first wrote those words, she was lamenting the difficulty of finding historical records of the lives of women who were not remarked upon for their misbehavior — unlike, say, “witches.”

It simply is difficult to find the concerns of well behaved women in all those written documents. Historians know that. Women had less access to power, to education, to quill and paper. Their letters were far fewer and their direct participation in matters of state was virtually nil.

******************************

Taken together, this should not be hard to understand.

Our Constitution and laws were focused on the concerns of men and they were focused on the recent offenses of their former colonial overlords. They certainly did not see any need to protect — or even address — the concerns and habits of the women around them who were engaged in the normal lives of society’s women.

Should we take the absence of a right to reproductive freedom for free white women in our founding documents as a sign that it did not exist? As a sign that pre-quickening abortion was rare or socially unacceptable? Or should we take it merely as an indicator that the issue was not threatened or on the minds of the holders of political power?

That’s not actually a hard question at all.