Note to Test Designers: Bad Questions are not the Same as Hard Questions
Thursday, August 22, 2013 - 04:00 AM
We have been hearing recently about how “hard” the recent state tests were. We are told that they had to be “hard” because we have to raise the standards in order to make sure all our students are college-ready.
Having proctored the fifth-grade tests this spring, I have to wonder where the folks at testing company Pearson went to college. From my vantage point, the tests suggested that the designers have had little direct experience with literature, history, or math. The tests were hard in the sense that they were hard to do well on. But they were hard to do well on because they were poorly designed, with little connection to the work that students should be doing in school, at the fifth grade or at college.
Here are some key problems I encountered with the latest state tests:
Problem #1: There’s not one right answer in literature
When I teach fiction, I emphasize that different interpretations and opinions are fine, as long as students can support their ideas with evidence from the text. Yet for three days in April, fifth graders were asked to read poems and stories and identify the one and only theme of the selection. Not a main idea - I believe it is possible to pinpoint one main idea in a reading selection - but one theme, as if a story can’t be about both friendship and trying new things.
This could be fair if three of the four choices provided were clearly wrong, yet they weren’t. As a savvy test-taker, I could usually identify which choice Pearson considered right. But as I watched students make their “wrong” choices, I could see how they could make a beautiful argument with evidence from the text to support their answer. And sometimes I couldn’t choose. One question had two choices that were different ways of saying the exact same thing and another had two choices that seemed equally possible, no matter how many times I reread the poem in question.
As a teacher, I know how to prepare students for college and I know how to assess their reading; this test did not do either.
Problem #2: Expository texts require visuals
Reading expository texts is challenging for many students at any level, including college. Good texts are filled with maps, graphs, photos, charts, etc. to help the reader visualize and organize new information. In fact, good teaching involves teaching students how to read all those visuals and integrate the information provided with what is in the text, and we should be assessing whether students can do that. Yet Pearson expected fifth graders to comprehend an article about Jefferson’s vision for westward exploration that ultimately led to the Lewis and Clark expedition while providing no visuals.
There was no map to help students understand that “The United States” was just a tiny sliver of the land between the Atlantic and Pacific Oceans, or what “The Louisiana Purchase” referred to. There was no timeline to help students grasp the time span covered in a few short paragraphs or to keep track of the different generations of Clarks. This was not an example of challenging text; this was an example of poor text, making it a poor assessment.
Problem #3: Math should make sense
We hear a lot about students who can perform the basic operations in math but can’t apply them. Presumably the new test would ask students to apply their math skills to real-life situations. Instead, Pearson’s goal seemed to be to come up with situations that would never happen in real life to test students’ stamina and ability to complete nonsensical operations.
A person has change in her pocket. She spends varying amounts at various places. In the end she has 1/45 left; how much money does she have? 1/45? When in college or in life does anyone have occasion to work with forty-fifths? In another question, we learn how long it takes kids to do something a certain number of times, and the question is how long it takes them to do it once. The answer, which all the fifth graders I was with could figure out, was four minutes. The problem? Pearson wanted the answer in hours, and since the question was not multiple choice, it was anyone’s guess what answer the company was looking for. I hoped that the kids who wrote “1/15 hour” got it right.
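For what it’s worth, the conversion the test demanded is simple once you see it as a fraction of an hour - a quick sketch (the four-minute answer is from the problem above; the code itself is just illustrative):

```python
from fractions import Fraction

# The kids' answer: 4 minutes. The test wanted it in hours.
# 4 minutes out of 60 minutes per hour reduces to 1/15.
minutes = 4
hours = Fraction(minutes, 60)  # Fraction reduces automatically
print(hours)  # 1/15
```

Which is exactly why “1/15 hour” should have been scored as correct - the math is right; only the unit was unusual.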
Question after question, either the scenario described made no sense or the math required made no sense. I like math. I think it can be fun and I know it is useful. But this test was like math through the looking glass because there was no point to any of it, least of all to assess students’ math skills.
If Pearson stood by its tests, it would release them to the public and let the public be the judge. And I would challenge any adults who think these tests reveal something about children’s learning to take the tests themselves. I would love to see those scores.
Editor's Note: The state education department released a sample of test questions on its website. You can see them here.