Winds of Change in the Assessment World?
by Kathleen Blake Yancey
Sometimes signs of change are hard to see. I think most of us, if asked about assessment and testing, would say pretty much the same thing: we have more tests and more bad tests than ever before. And what’s worse is that such tests—in the time they take away from teaching and in the messages they send to students about what’s important and what’s not—prevent us from helping students as we could.
This fall, visiting with preservice teachers in Michigan and with practicing teachers in Pennsylvania, I heard the same story over and over again. In fact, we’re not alone in this view. Writing in The New York Times, for example, Brent Staples voices the kind of sentiment we do when he says in referring to the testing industry, “What we need now is a revolution in writing instruction, not just another test prep exercise.” Such a revolution, of course, won’t happen as long as regressive tests dominate the educational landscape.
In my most recent Council Chronicle article ("Assessment Models Worth Sharing," November 2008), I suggested that not all assessments are regressive and that there are some assessments we might endorse. I pointed to a portfolio assessment in a Virginia elementary school as an assessment that documented what kids can do (rather than what they cannot); that was useful to teachers in the classroom; and that showed that these ESL kids were succeeding.
I also pointed to the Insight Resume, used as one college admission piece at Oregon State University, which counts on students to tell their own stories, and which has opened the doors to students previously excluded from OSU. (And let me note the smart title of this assessment—the Insight Resume—which seems to assume that students do have insights worth hearing about!)
In looking at some of these practices and one other that I want to share here, I think I see an emerging trend in assessment, still very much in process, but one that (1) we should know about; and (2) we may want to think about collectively, especially as members of NCTE. Put differently, I’d like to share with you what I see, and what I think it means, and then raise some questions we might want to consider.
One place I go for news is Inside Higher Ed (IHE), an online newspaper focused on higher education but covering K–12 activities as well. In September of this year, in a story titled “If You Can’t Beat 'Em, Join 'Em,” IHE reported on a shift in the national thinking about college admission, even on the part of the College Board. The story begins with this paragraph:
For decades, critics of standardized testing—and especially of the SAT—have said that these examinations fail to capture important qualities, resulting in admissions systems that favor certain groups over others, while failing to represent test takers’ full identities. And generally, these critics have said, the qualities that the SAT is best at identifying are those that wealthy white students are more likely than others to possess.
Indeed. What’s interesting is that the story doesn’t stop with this critique. Instead, it talks about the role of “non-cognitive” traits in accounting for student success in college and about the need to include such traits in the college admission process.
These traits, which are the kind of characteristics that the Oregon State Insight Resume already includes, are exactly the kinds of traits that we teachers have historically valued—traits like artistic and cultural appreciation, multicultural appreciation, leadership, interpersonal skills, perseverance, and integrity. Moreover, based on recent research, these traits are the kinds of personal characteristics that successful college students demonstrate.
So if you value such traits, how do you assess them? The current thinking, led in part by an inter-collegiate group housed at Michigan State called GRASP, is that the best way to assess these traits is twofold: first, by asking students to describe their experiences in writing, much as in the Insight Resume; and second, by providing students with scenarios to which they respond. A sample scenario looks like this:
What would the student do if, when assigned a group project, all the group members sat down and no one said anything? Choices would include:
• Look at them until someone eventually says something.
• Start the conversation yourself.
• Get to know everyone first and see what they are thinking about the project to make sure the goals are clear to everyone.
• Try to start working on the project by asking everyone’s opinion about the nature of the project.
• Take the leadership role by assigning people to do things or ask questions to get things rolling.
Now this model is something of a mixed bag, to be sure. When I look at this scenario, I’m not sure what I would do; for me, it would depend on the situation. (And in theory, I am a leader.) So this looks a bit odd to me. But what I do like about this model of assessment is that in the earlier part, where students are asked about their own experience, the measure positions them as authorities. It validates their own experience, as described in their own words, and it counts such experience in a serious way in an important decision. Equally important, the research shows that the use of this kind of measure rewards students who historically have been excluded from higher educational institutions, especially students of color. Both of those changes are, in my view, good.
But as is typically the case with assessment, the news isn’t all good. Two issues are central here. One has to do with how such a measure might be standardized—or whether it should be at all. Some say that to work, this new measure requires standardization; and, no surprise, the loudest voices here come from testing companies. I’m not sure that this claim is valid, however, given the experience at Oregon State.
A second concern has to do with gaming the system. As an administrator at Purdue articulated the issue,
If these become “high stakes admissions instruments,” she said, it’s only a matter of time until the test-prep industry materializes and offers to coach students on how to answer. The whole system is based on students answering questions honestly, she said, and its value would be “quite diminished” if students figured out how to answer to get high scores in various categories.
But what’s interesting here is the connection between standardization of this measure and gaming. The Oregon State folks have been using their measure for three years, and they’ve had no problem with gaming at all. So the concern about gaming seems to be directly tied to standardizing.
I share all this today for the following reasons.
- If the College Board is interested in non-cognitive abilities and is working on an assessment measuring them, we can expect that this measure may soon come to a school near you.
- These traits, which can be described and evidenced in writing, are traits classroom teachers see daily. Do we need a standardized measure?
- Given the relationship between college admission testing (e.g., the SAT, the ACT) and what schools teach, is it possible that non-cognitive abilities will be tested in schools?
- Suppose we thought outside the testing box and that, instead of giving students tests, we helped them create print or digital portfolios that not only described these traits but also showed evidence of them?
- Is this move to non-cognitive trait measuring an issue that NCTE should address? And if so, how?
Thanks for reading, and please feel free to let me know what you think: firstname.lastname@example.org
Kathleen Blake Yancey is 2007–2008 NCTE President and is Kellogg W. Hunt Professor of English and Director of the graduate program in Rhetoric and Composition, Florida State University, Tallahassee.