National Writing Project

Always Already: Automated Essay Scoring and Grammar-Checkers in College Writing Courses

Publication: Machine Scoring of Student Essays: Truth and Consequences
Date: August 9, 2012

Summary: While automated grading is often dismissed outright, researcher Carl Whithaus argues for a shift in how we think about technology as an assessment tool. "If our practices combine software's functions as media and tools," Whithaus says, "then we need to reformulate our conceptions about machines reading and assessing students' writing."


Excerpt from Article

In practice, software is used both as a medium for communication and as a tool for assessment and response. I am arguing for a conceptual shift within composition studies—if our practices combine software's functions as media and tools, then we need to reformulate our conceptions about machines reading and assessing students' writing. The tradition of rejection, reaching back to Ken Macrorie's (1969) critique of Ellis Page's work (Page and Paulus 1968), needs to be revised in favor of theories and practices of writing assessment that acknowledge the range of software's influence as responsive evaluative agents. Acknowledging this range will make it possible to evaluate the validity as well as the reliability of automated essay-scoring systems, not because the systems are valid in and of themselves, but because—drawing on Lee Cronbach's (1988) notion of validity as argument—the use to which the software agents or other forms of writing assessment are put is appropriate. For instance, the writing component on the new SAT exam is not a valid measure of a high school junior's or senior's overall writing ability, but it is a valid measure of how that student writes on a twenty-five-minute timed, impromptu writing exam. Will this exam tell us all we want to know about incoming students' writing abilities? Hardly. But it does give a snapshot of a student's ability for one particular moment and for one particular form of writing. Predictions based upon the writing component of the SAT, then, will be most accurate for this form of writing; the scores will have less validity as students move on to other, more complex writing tasks. Similarly, in carefully defined writing activities, software can be effectively used to assess short, close-ended responses from students, to quickly respond to surface features of student writing, and to offer the potential for students to develop metacommentary or reflection on the paragraph level.

Copyright © 2006 Utah State University Press. Posted with permission.
Whithaus, Carl. 2006. "Always Already: Automated Essay Scoring and Grammar-Checkers in College Writing Courses." In Machine Scoring of Student Essays: Truth and Consequences, edited by Patricia Freitag Ericsson and Richard Haswell, 166–176. Logan, UT: Utah State University Press.
