Peggy O'Neill, Loyola University Maryland, on the Limits of Machine Scoring of Writing
Peggy makes the case that machines can score only one limited kind of writing, and that kind is not the complex writing expected in college and the workplace.
Chris Anson, North Carolina State University, on What Machine Scoring Can't Do
Chris concedes the value of machines for analyzing texts but points out what they cannot do effectively in evaluating student learning and writing.
Carolyn Calhoon-Dillahunt, Yakima Valley Community College, Washington, Tells a Story about How Machine Scoring Was Useless in Administering Placement Exams
Carolyn goes into persuasive detail about the limits of the e-Write scoring program. She says it rewards students for writing mechanical pieces and that it isn't reliable: human scorers in her department disagreed with the machine's scores.
Les Perelman, Massachusetts Institute of Technology, Bluntly Explains Why Automated Scoring Doesn't Work
Les discusses the artificiality of automated essay scoring. He says the SAT essay is a race to see how many words a student can get on the paper within the time limit, and he points out that machine scoring tends to be biased against ELLs because it over-emphasizes errors with prepositions and articles.
For more on the SAT, see NCTE's The Impact of the SAT and ACT Timed Writing Tests.