National Writing Project

Book Review: (Re)Articulating Writing Assessment for Teaching and Learning, by Brian Huot

By: Douglas James Joyce
Publication: The Quarterly, Vol. 25, No. 2
Date: Spring 2003

Summary: Douglas James Joyce reviews (Re)Articulating Writing Assessment for Teaching and Learning by Brian Huot.


(Re)Articulating Writing Assessment for Teaching and Learning
Written by Brian Huot. Utah State University Press, 2002. $21.95; 216 pages. ISBN 0-87421-449-1.

Writing assessment is a recursive practice. Like those mesmerizing fractal images used to illustrate chaos theory—in which every part contains the whole, and the whole is a never-beginning, never-ending transformation of some profoundly simple equation like Zₙ₊₁ = Zₙ² + C—writing assessment requires that we continually revisit where we were, take that product, add something to it, and see where it takes us next.
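(A purely illustrative aside, not from Huot's book: that feedback loop can be sketched in a few lines of Python. The function name and choice of constant are mine.)

```python
# Iterate the simple feedback equation z -> z*z + c, starting from zero,
# to show how each output becomes the next input.
def iterate(c, steps=10):
    """Return the first `steps` values of the orbit of 0 under z -> z*z + c."""
    z = 0
    values = []
    for _ in range(steps):
        z = z * z + c
        values.append(z)
    return values

# For c = -1 the orbit settles into the repeating cycle -1, 0, -1, 0, ...
print(iterate(-1, 6))  # prints [-1, 0, -1, 0, -1, 0]
```

Each value is nothing but the previous value, transformed and fed back in—the same revisit-and-add motion the review attributes to writing assessment.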

But if you've ever watched a computer in the process of creating a fractal image, you know how exceedingly dull it can be to just sit there and watch it. I mean, the first minute-and-a-half is interesting, captivating even, but then it's pretty much the same thing over and over. And over and over. And over and over. Better to go get lunch, come back in an hour, and see if it's "done," which, of course, it never is, but it may have reached a point where you can call it good enough and print the image.

Brian Huot's 2002 book, (Re)Articulating Writing Assessment for Teaching and Learning, shares many of these same qualities. His conclusion is superb, but watching it develop is mind-numbingly redundant. The book had its genesis in 1996 as an article in College Composition and Communication, titled "Toward a New Theory of Assessing Writing," which Huot revised and included here as the fourth of seven chapters, the centerpiece, if you will. But he begins with a narrative on the hand-wringing he went through to come up with a book title. Shall it be called Reclaiming Assessment for the Teaching of Writing? No, that would imply that assessment had been previously claimed. How about Re-Imagining Assessment for the Teaching of Writing? No, that's too highfalutin, but clearly it must begin with Re-something. Re-, Re-, Re-. . . Well, let's see, it's basically a rehash of others, especially Edward White and Kathleen Yancey, who first articulated writing assessment—that's it! (Re)Articulating Writing Assessment. Now we can all (re)think assessment (again?) without having to reread all those old books and articles.

Huot then (slowly, recursively) moves on to discuss "Writing Assessment as a Field of Study," that is, the history of writing assessment. He describes the problems associated with assessment's two main foci—reliability and validity—when applied to writing assessment, concluding that past attempts to increase interrater reliability have led to indirect measures that are actually deleterious to the learning of writing. Further, Huot concludes that our concept of validity must evolve from a simple correlation—does the test measure what it purports to measure?—into a lens through which researchers reflect on their own theory and practice (23-51).

From there, Huot moves into "Assessing, Grading, Testing, and Teaching Writing," in which he reveals the assumptions behind his approach to writing assessment. He writes, "One assumption is that in literate activity, assessment is everywhere. . . . The second assumption is that being able to assess writing is an important part of being able to write well" (61-62). In other words, every part contains the whole, and the whole is a never-beginning, never-ending transformation of the part. Yet, our current notions of assessment do not live up to their pedagogical potential. As Huot points out, "The kind of assessment that exists outside of a context in which a student might improve her work can be labeled summative, whereas those judgments that allow the student to improve are called formative" (65). But Huot would take us even further, calling for what he terms instructive evaluation, which "requires that we involve the student in all phases of the assessment of her work" (69). He then goes on to recite a wonderful argument for the use of portfolios, in which students self-assess their work as they select and revise those pieces that go into their portfolios, thus learning to develop "the critical consciousness necessary for a developed, evaluative sense about writing" (79). Okay, that's great, but how can we use portfolios to assess our writing programs? Not so fast—Huot's fractal image program still has a long way to go.

He moves into the central chapter of the book, "Toward a New Theory of Writing Assessment," in which he argues that "writing assessment has always been a theory-driven practice" (82). While Huot offers no Theory (with a capital T) of Writing Assessment, he does attempt to move us in that direction with some "Principles for a New Theory and Practice of Writing Assessment"; that is, it should be "Site-Based," "Locally-Controlled," "Context-Sensitive," "Rhetorically-Based," and "Accessible" (105). He concludes that "We need to begin thinking of writing evaluation not so much as the ability to judge accurately a piece of writing or a particular writer, but as the ability to describe the promise and limitations of a writer working within a particular rhetorical and linguistic context" [italics added] (107). Ah—now we're starting to see the picture.

But then Huot goes into "Reading Like a Teacher," which essentially rehashes everything you've ever read about genuine reading of genuine writing, and how to respond to that writing. He concludes:

If we can change the ways in which we respond to our students in our classrooms and the ways in which we think and write about response in our scholarly literature, then we can harness the power of reading and writing to teach writing to our students, instilling in them the same wonder and struggle that guides all of us who work with language. (136)

Now, don't you just hate it when people, even fellow teachers of writing, insinuate that the only thing standing between your students and writing excellence is you, their teacher? While I agree that we must all continually assess and improve our teaching methods, I can tell you that there are a number of factors inhibiting my students' writing excellence that have nothing whatsoever to do with me, their teacher. Enough said.

Huot then discusses "Writing Assessment as Technology and Research," which seems out of place; it probably belongs earlier, right after "Writing Assessment as a Field of Study." Still, it does propel us ever so slowly toward "real change in the ways we think about writing assessment and the positive role assessment can play in the teaching of writing and the administration of writing programs" (150). As Huot notes, "What writing assessment culture does exist often revolves around a sense of crisis, in which assessment is cobbled together at the last minute in response to an outside call that somehow puts a program at risk" (150). Yes! That's why we're thinking of buying this book! But hold on, Huot must first make some interesting connections, such as validation as research, and an aside in which he puts assessment into rhetorical terms—seeing validation as argument, which necessitates that we always consider our writing assessment audience, a form of rival hypothesis testing (157-158). All of which very nearly completes Huot's picture of writing assessment; all we need now is a good model.

Which Huot provides admirably. His model for writing assessment comes from his own institution, the University of Louisville, where he is professor of English and director of composition. There, "[w]hile the composition program goals stipulate the number of formal papers for each class and differentiate between a writing process orientation for 101 and a research focus for 102, they do not dictate a specific curriculum or text" (183). All instructors are grounded in the goals of the program, but diversity among pedagogical approaches is actually encouraged. All course syllabi are collected and read every semester to make sure they conform to the program's general guidelines, and all non-faculty instructors maintain a teaching portfolio containing syllabi, assignments, and other instructional materials. As Huot notes, "These portfolios allow the Composition Program to know what's going on in various classrooms, while at the same time providing instructors with freedom in course design and curriculum" (184). Note that what is important is whether program goals are being met, not how they are being met. Finally, instructors are regularly observed in the classroom, and the observation process is used as an opportunity for self-assessment on the part of the instructor.

Next, the whole picture becomes crystal clear with the component of program assessment that focuses on student writing. Huot writes:

Because evaluating student writing is something that requires some effort and expense, we do not assess student work every year. In addition, because we are looking to evaluate the program and not individual students, it's not necessary that we assess every student's writing. We choose to look at about ten percent of the students' writing in each of the courses that constitute the two-course sequence required of most students. Because we are looking at a limited amount of student writing, we can choose to look at it in some depth. (185)

And that depth is deep indeed, incorporating a three-tiered assessment process that becomes progressively more public with each succeeding tier. The first tier is composed of three-teacher teams that meet to read portfolios from each other's classes, discuss their readings, and evaluate each of the selected portfolios. While each teacher assigns a grade, only the student's instructor of record assigns the grade that will count; however, the program collects a list of all of the teachers' grades for all of the portfolios reviewed. Because the University of Louisville also has access to high school writing portfolios, a small number of these are compared to their respective writers' college portfolios to assess improvement between high school and college. This last step clearly shows how much progress students are making through the program itself.

The second tier is composed of a campus-wide committee that evaluates fifteen sets of high school and college writing, assigning grades to the college portfolios, and "characterizing the qualities of writing for each grade" (186). The committee also discusses the similarities and differences between high school work and subsequent college writing. All discussion is carried out via an Internet listserv, which allows automatic archiving and provides a convenient medium in which busy personnel can conduct the assessment.

Finally, the third tier is composed of "writing assessment and program professionals," brought together on a listserv (186). The same fifteen sets of high school and college writings that went through the second tier are examined by experts from around the country, providing valuable public insight into how the program is doing. After this last assessment, a report is compiled and distributed to the participants of all three tiers. From these data, "course goals, faculty development opportunities, grading procedures, and other program guidelines and policies" may be assessed and revised as needed. The picture is now complete.

In the end, Huot's concept of writing assessment is sound; indeed, his model is highly appealing. But now that you've seen it, you probably don't need to read the whole book.

About the Author: Douglas James Joyce teaches English composition and literature at McCook Community College in McCook, Nebraska. He was a teacher-consultant with the Denver Writing Project in Colorado.

