Interim assessments that follow the science of reading comprehension can enhance teaching and learning.
American teachers and students are captives of a broken assessment system.
Interim reading assessments are intended to provide useful information about student progress and help teachers target instruction throughout the year. Instead, they frustrate teachers and students, devalue what students are learning, and have not moved the needle on reading proficiency or on reducing inequities, as new NAEP reading results confirm.
Today, we’re issuing a clarion call to assessment stakeholders at all levels: Do better for teachers, so they can do better for students.
Right now, periodic reading tests prompt students to “find the main idea” or identify a “point of view” — discrete standards and skills that don’t add up to reading comprehension. They are misaligned with the research on how kids learn to read well and ignore the foundational role of knowledge in reading comprehension. Reading is a meaning-making endeavor, and comprehension is an outcome that occurs when readers apply a dynamic set of reading processes and knowledge to a text.
But that’s not what we’re measuring. Consider this fourth-grade Reading Standard 3 for literature:
Describe in depth a character, setting, or event in a story or drama, drawing on specific details in the text (e.g., a character’s thoughts, words, or actions).
Students could miss a test item tied to this standard because of weak decoding skills, insufficient vocabulary, difficulties parsing syntax or transitions, or insufficient background knowledge. Often, it’s a combination of these factors, not misunderstanding the standard itself, that contributes to a wrong answer. But the interim assessments we give students today can’t identify what went wrong.
Reporting test results by standards, strategies, genres or any single construct confuses cause and effect. Answering a question based on a standard is an effect of comprehension, not a cause. And a student's response to any one question tied to a standard does not predict how well that student will do on a similar question using a different text.
It’s time to transform. Few schools — or teachers — will move to text-focused classrooms and abandon using standards as the organizing force for daily lessons if the assessments they’re provided use an outdated, ineffective approach. It’s a vicious and damaging cycle. There’s a better way.
Transforming Assessment Questions and Classroom Conversations
We need new assessments that reflect the research base and diagnose the degree to which actual reading comprehension is occurring.
Assessments should focus students on the most challenging sections of a text and pose questions that can determine whether students navigated the passage for meaning. Questions also should address what world knowledge can be learned from reading the text carefully. And, questions should focus on challenging vocabulary or phrases to see if students understand the contributions that vocabulary makes to meaning. Only then should tests feature standards-based questions that fit the text to determine if students’ comprehension reflects the depth and complexity called for by the standards. (For an example of this approach, see the Case Study Asking Better Questions of Texts.)
Such assessments would provide more meaningful information and play a more powerful role in the classroom. Rather than issuing reports on mastery of this or that standard, assessment developers need to release their passages and items in full, along with guidance on how to discuss the results with students. Then teachers could use interim assessments to deconstruct student thinking in class, by revisiting reading assessment texts and asking students to share their thoughts, passage by passage, about each question they encountered and explain why they answered questions as they did.
This is a low-tech, labor-intensive, and high-impact way to use interim data to inform instruction. We learn from our mistakes, and in the case of comprehension questions, the richest discoveries will come not from asking which items students missed, but by asking why. Students can go astray for a variety of reasons, and the best way to identify the path they followed, or where comprehension broke down, is to ask them what they were thinking. The challenges any text presents will vary, but the number and types of obstacles are not infinite. As obstacles are revealed, teachers — and eventually, students — can lead discussions that explore how best to overcome them. This collaborative approach enhances comprehension for all students, expanding their understanding by recognizing how ideas, language, and vocabulary interact with knowledge to make meaning.
Deconstructing assessments with students connects instruction directly to the science of reading comprehension rather than treating reading as a disjointed series of atomized elements. Teachers might find that what they are already doing to support students’ reading comprehension is on the right track, but they need to do more of it, or some areas require less attention. Over time, teachers and students will recognize the nature of the various obstacles that complex text presents and how these can be addressed. In other words, assessments can do what is intended of them: inform instruction.
Teachers face a learning curve, and these candid, text-driven conversations take time to do well. However, it is hard to imagine a more powerful way for teachers to support students in learning about texts, probing their thinking, tackling common challenges, deepening comprehension, and exploring the suite of constructs known as literacy.
Contextualizing Assessments Is Key
An even more enduring and essential reform is to ensure tests actually measure what students are learning. Better interim reading assessments, then, would not only reflect the science of reading comprehension but also be grounded in curriculum, connected to the books and topics students study in class.
This vision rejects the false premise that reading comprehension is a content-neutral skill that can be taught and tested in the abstract. Rather than asking students to address items tied to random passages they may know nothing about, a contextualized approach to reading assessment would offer a multidimensional view of students' reading comprehension. It would be fairer, more authentic and more equitable, and would more accurately mirror the literacy tasks students will encounter after graduation.
It’s time to invest genuine energy and resources into creating interim assessments that provide actionable insights and align with research and the real world. Current assessments are standards-specific and knowledge-agnostic — the inverse of what research and experience tell us teachers and students need. This approach is a closed loop that is steering teachers and students off-course.
Rather than fueling the familiar routine — assessing frequently, studying error patterns in data meetings, mapping those errors onto discrete skills or standards, isolating those standards, and instructing teachers to repurpose reading as relentless practice of said standards — interim assessments, whether created by assessment providers or curriculum publishers, must focus on the real and varied causes of breakdowns in comprehension.
Developers need to revamp their tests to tackle the challenges inherent in content-rich text. They need to abandon the practice of reporting by state standards, strategies or any other atomized element. They need to release items that allow teachers and students to thoroughly analyze and comprehend what students are learning.
Designing the right tests will empower and incentivize the right teaching and make reading tests genuinely valuable to educators and students. The responsibility and power rest with interim assessment providers and publishers, as well as the state and local leaders who procure them. Test developers, hear our call: We need an interim assessment do-over.
Susan Pimentel is co-founder of StandardsWork, a nonprofit education consultancy that sponsors the Knowledge Matters Campaign. She was the lead author of the Common Core State Standards for English/language arts literacy and led development of the Knowledge Matters Review Tool.
David Liben has worked with schools and districts nationwide to improve student learning for over 20 years. He is the former principal of a high-performing school in Harlem and is the co-author of two highly acclaimed books on reading.