Gettysburg Address Flunks Robograder
Computer-graded writing continues to spread nationally, but at what cost?
Student writing requires detailed feedback and a careful eye, and the national introduction of Common Core English standards tests in 2014 expands that need. Standardized writing scores carry great weight and must be assigned with care; yet a limited pool of graders combined with an expanded slate of examinations creates a problem. Testing companies think they have the solution. While computer essay graders can score thousands of papers in seconds, the machines' practical shortcomings outweigh the benefits of expediency.
The National Council of Teachers of English opposes the use of machine scoring in the assessment of writing, arguing that such grading devalues and erases the social, contextual nature of the art. As NCTE member Les Perelman found, the machines are fallible: on a scale of 1 to 6, the Gettysburg Address earned a 2. Perelman also found that the machines overvalue magniloquence, rewarding inflated vocabulary over clear argument.
"While (robograders) may promise consistency, they distort the very nature of writing as a complex and content-rich interaction between people," CCCC Writing Assessment: A Position Statement
The Conference on College Composition and Communication, a constituent group of NCTE, supports human assessment. Assessment that isolates students and forbids discussion and feedback from others conflicts with what we know about language use. We write for social purposes, so it follows that only a human can accurately judge communication.
"E-Rater doesn’t care if you say the War of 1812 started in 1945," Les Perelman in The New York Times
Despite what the grading companies say, their products are far from perfect. Perelman found several ways to game the machines, as outlined by NPR. Machines may grasp how to grade great grammar; they cannot feel the emotion of an evocative essay or spot the sparks of subtle smarts.
"If a student's first writing experience at an institution is to a machine, this sends a message: writing at this institution is not valued as human communication -- and this in turn reduces the validity of the assessment," CCCC Position Statement on Teaching, Learning, and Assessing Writing in Digital Environments
The movement against robograding must start at the top. If colleges and universities begin using computers to grade student writing, high schools will begin preparing their students for computer-graded writing. As Crispin Sartwell notes, writing to computers could reduce all writing to templates.
Anne Herrington and Charles Moran took an extensive look at computer graders and noted several problems, chief among them the logic of scale. If a computer can grade thousands of papers at once, why not put all those students in one class? Why not have one teacher record an online lecture for thousands of students? Gone would be the demand for teachers and the diverse voices they offer.
Computers may be cheaper and faster than humans, but they do not possess the requisite skills of comprehension to accurately grade student writing.