Improving Students' Academic Writing: Developing New Knowledge about Teaching and Assessing for Improvement
By: Jayne Marlink
Date: January 27, 2010
Summary: Jayne Marlink, who directs the California Writing Project, details how an assessment regimen known as "forced choice" gave readers the language to name the improvements they saw when comparing pre- and post-test essays.
"Out of the [Bay Area Writing Project's first summer institute] grew a greater awareness of student writing problems and the need for a composition program in the high schools that would lead students from writing about themselves to writing about concepts and ideas."
– Jim Gray, founder of the National Writing Project, in California Monthly, 1974
From the beginning: taking on the challenge of teaching the writing about ideas
The educational forecast in the early 1970s was dire. Newspaper and periodical headlines declared with certainty that Johnny couldn't read or compute, and he certainly couldn't write. The news out of UC Berkeley, the home campus for the soon-to-be fledgling writing project, was equally gloomy. In 1973, 50 percent of freshmen admitted to a University of California campus failed the Subject A Examination, the University-wide writing placement exam. For students, failing the exam meant taking a required remedial class, also known as Subject A, until they demonstrated that they could write at an acceptable level for college coursework.
The Subject A Examination (now renamed the "Analytical Writing Placement Examination") requires that students write an essay in response to the ideas and issues presented in a published nonfiction passage, one that might be read in an introductory college course across the disciplines. Passages for the exam are drawn from authors who are historians, psychologists, anthropologists, biologists, and occasionally essayists whom students might encounter in English classes, such as Jamaica Kincaid or bell hooks. In response to plummeting Subject A scores, including those at UC Berkeley, Jim Gray noted that "Berkeley's freshmen were bright students, but in the early 1970s, most had limited experience writing papers about ideas" (Gray, 2000).
Because Jim thought one reason for the problem was that university and high school teachers were not talking to each other, he brought together classroom teachers and UC Berkeley instructors to pinpoint reasons for the decline in student writing, especially the plummeting pass rate on the Subject A exam, and to discuss possible solutions. "Blame for the sorry state of affairs was lobbed, like a hand grenade, back and forth across the table" (48). The discussion did not go well, and a second meeting was just as unproductive.
In contrast, the first Bay Area Writing Project Invitational Summer Institute, held just months later, brought together successful teachers of writing, middle school through university, as colleagues with a shared purpose—improving their teaching of writing. In addition to sharing their teaching practices and questions, everyone wrote—in many genres, for many purposes—and they wrote and revised often, creating for many their first-ever community of writers. In addition to writing to topics of their choosing, they wrote "an assigned piece that moves the writer from a personal experience to an essay about some idea in the initial experience" (85). They wrote to a Subject A exam topic and composed a position paper or policy statement on the teaching of writing. From the very beginning, teacher-leaders in the writing project experimented with genre, all to explore what it means to write about self and about ideas and how to write about experience, observation, and learning in an analytical context. What they learned from their writing informed what they planned to do as teachers of writing.
It comes as no surprise then that the work of every California Writing Project (CWP) site has included programs focused on improving the teaching and learning of academic writing, in particular the analytic writing and critical reading that are so important for success in college. CWP has a rich history of such programs, for example, the UCLA Writing Project's Teaching Analytical Writing Program and the Area 3 Writing Project's Transition to College Program. Over our 35-year history, the purpose of these programs has remained the same—to increase teacher and student expertise in analytic writing, the writing about concepts and ideas.
A new approach: CWP's Improving Students' Academic Writing
During CWP's first 25 years, programs with a transition-to-college focus waxed, often in response to University of California and California State University outreach initiatives, and then waned because of decreased funding for them. In 1999, California was in the midst of a new outreach cycle, supported by significant state funding that targeted transition-to-college programs with an embedded research and evaluation component. Taking advantage of this opportunity, CWP launched a new effort, Improving Students' Academic Writing (ISAW), built on our 25-year foundation.
Participating enthusiastically in the first year of ISAW were 54 teachers representing 15 writing projects and 18 high schools that reflected California's cultural, linguistic, and economic diversity. The goal of ISAW was to conduct a statewide study of high school students' progress in academic writing and reading—using the Subject A Examination as the measure of achievement—and provide ongoing professional development for teachers to improve their teaching of analytic writing.
Enthusiasm would be an expected response from teachers involved in sustained professional development that included several weekend working retreats, along with school and site team meetings, all in the company of like-minded teachers. But that enthusiasm was replaced by wary anticipation regarding the evaluation component of the study—administering pre/post Subject A exams to our high school students and then turning over the scoring of those exams to a group of independent, experienced Subject A readers.
More than an evaluation study: teaching for improvement
Every evaluation study has embedded inquiry questions. At the start, ISAW's inquiry questions included the following:
- Will students of CWP teachers in ninth through twelfth grades make significant improvement in Subject A Examination test scores from a fall pre-test to a spring post-test?
- How will we know? What will improvement look like in their writing?
- What teaching strategies or approaches are most effective in helping students improve their academic writing and critical reading?
- How will we help students recognize their own writing improvement and growth?
As our study progressed, however, the word "improvement" took on increasing importance and weight. Our professional development meetings centered on how to begin to teach for improvement. We wrote and revised essays in order to understand, as writers and teachers, what improving analytical writing entails. We designed and scored writing and reading assessments to identify instructional needs; we developed instructional materials and assignments, and assessed and documented the strategies and approaches that proved most effective with our students.
The more we learned, the more hard-edged and urgent our inquiry questions became:
- As we assess our students' writing, how can we do more than diagnose the problems that students are having?
- How do we help them name their next steps?
- How do we help students build their skills? Would it help if we shared smaller writing tasks, the informal writing we ask students to do, and the early writing assignments that help students work up to the analytical work of Subject A-like writing?
- How do we make sure, as we develop lessons and units together, that we are not just preparing students to take a test?
- How does our school team move a curriculum that has been mostly literature-based toward incorporating more nonfiction? What are some good, interesting, yet challenging nonfiction pieces to use?
- If we want to help students write more analytically, how can we find readings that are great analytical essays and can serve as examples of the writing we are asking our students to improve toward?
Underpinning our questions was the need for a clearer understanding of what improvement in analytic writing looks like for high school students; at the same time, we were grappling with what teaching for improvement meant for us, their teachers.
An assessment problem: scoring and documenting improvement
As we moved closer to scoring the pre/post essays, especially once we began to develop a set of rangefinders for the independent scoring session, we knew we had a new improvement problem to solve. The only rubric we had in hand at the time was the Subject A scoring guide, a holistic rubric used for evaluative purposes, for sorting out passing papers from failing ones. For such an evaluative purpose, it worked well and provided an efficient way to score up to 20,000 essays at a time. But as anyone who has studied matched pairs of student essays scored with holistic rubrics can say amen to, analyzing what's improved from a pre-test scored a "low 3" to a post-test scored a "mid 3" is not very informative for teachers and is even less so for students. Using an evaluative rubric was probably not going to tell us much about the specific improvements of our students.
Another improvement problem for us was that the Subject A scoring guide was written in the way all evaluative rubrics are—descriptions of passing scores are written in positive terms and those of failing scores are described in terms of deficits. In short, passing papers do; failing papers don't. The language used is not a fault of the scoring guide or rubric. We needed a scoring guide that assessed improvement in analytical writing, not passing or failing, and because we did not have such a tool, we needed to create one.
Forced choice reading: collecting the language to describe students' developing progress
We made the decision to ask our essay readers to do double the work. In addition to the scoring of the pre/post essays with the Subject A scoring guide (we needed those scores for our evaluation study), we conducted a second, "forced choice" reading of the papers. Readers received pairs of pre/post essays, and not knowing which essay was the pre or post, they were asked simply to read the pair, decide which paper was the better essay, and then list what made one better than the other. We knew in some cases the better paper would be the pre-test. That happens for some students when they write for high-stakes purposes; they don't improve. Interestingly, the percentage of papers chosen as better through the forced-choice reading mirrored the percentage of papers that were given better holistic scores.
The purpose of the forced-choice reading, however, was to see if we would collect language that could help us name the improvements these experienced readers saw in the better essays, most of which were the post-tests. The readers did not disappoint.
- The better paper shows a developing understanding of the analytical task. It is more of an essay. The weaker paper is more a narrated, parallel example of the passage.
- The better paper summarizes the passage with a purpose. The weaker paper is an extended restatement, a retelling of the passage (fairly accurately though, it should be said).
- The better paper critiques ideas in the passage from her perspective as a reader and anticipates our needs as readers of her response.
- The better paper shows an understanding of control—from overall organization to the sentence level.
- The better paper has better grammatical errors.
- The better paper is just more confident. The writer has a lot to learn, but seeing the two papers together shows how far he has come. Will he get to see what I saw here?
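The mechanics of a forced-choice reading are simple enough to sketch in code. The following is a minimal illustration in Python, not CWP's actual scoring apparatus: the data structures and function names are invented for this sketch, but the flow matches the description above: present each pair blind, record the reader's choice and reasons, then check how often the forced-choice winner matches the essay with the higher holistic score.

```python
import random
from dataclasses import dataclass

@dataclass
class EssayPair:
    student_id: str
    pre_text: str
    post_text: str
    pre_holistic: float   # Subject A holistic score for the pre-test
    post_holistic: float  # Subject A holistic score for the post-test

def present_blind(pair: EssayPair) -> tuple[list[str], list[str]]:
    """Shuffle the pair so the reader cannot tell pre from post.

    Returns the two essay texts in random order along with the hidden
    labels, so the reader's choice can be decoded afterward.
    """
    essays = [("pre", pair.pre_text), ("post", pair.post_text)]
    random.shuffle(essays)
    labels = [label for label, _ in essays]
    texts = [text for _, text in essays]
    return texts, labels

def record_choice(pair, labels, chosen_index, reasons):
    """Decode which essay the reader judged better and keep the reasons.

    The listed reasons are the raw material for the improvement
    descriptors the article describes collecting.
    """
    return {
        "student_id": pair.student_id,
        "better": labels[chosen_index],  # "pre" or "post"
        "reasons": reasons,              # the reader's own language
    }

def agreement_with_holistic(pairs, choices):
    """Fraction of pairs where the forced-choice winner is also the
    essay with the higher holistic score (ties are skipped)."""
    by_id = {p.student_id: p for p in pairs}
    hits = total = 0
    for c in choices:
        p = by_id[c["student_id"]]
        if p.pre_holistic == p.post_holistic:
            continue
        holistic_winner = "post" if p.post_holistic > p.pre_holistic else "pre"
        total += 1
        hits += (c["better"] == holistic_winner)
    return hits / total if total else float("nan")
```

A high agreement rate from the last function corresponds to the observation above that the forced-choice winners mirrored the essays with better holistic scores.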
A new assessment tool: creating the ISAW Improvement Scoring Guide
From the forced-choice reading, we took away over 300 pages of descriptors of what mattered and counted as improvement. With that information as a starting point, ISAW teachers, community college instructors, and university composition faculty launched an exciting five years of knowledge development—the creation of the ISAW improvement scoring guide. We met regularly to create this guide in the way rubrics were originally developed—by reading and discussing student writing.
As we developed the improvement scoring guide, we drew on recommendations from Richard Haswell, who asserts that if a group believes that "the essential function of a writing course is to foster improvement in writing," then using what he terms a "paired comparison" method will give much more information about how much the student has progressed during the course (Haswell, 1988). Based on some of the paired comparison formats Haswell suggested, we decided that we would write descriptors for four stages of improvement, the first being the beginning steps students were taking in writing more analytically, the fourth being where we wanted them to progress.
We then created seven improvement categories: Response to the Essay Topic; Understanding and Use of Text; Development; Organization; Word Choice and Sentence Structure; Grammar, Usage, and Conventions; and Anticipating Readers' Needs. While that list may not seem unusual, perhaps some of the 18 dimensions across the categories will surprise—Reasoning, Employing Sentence Structure to Convey Ideas, Using Grammatical Relationships.
What may be more unusual is that the ISAW Scoring Guide uses no deficit language. Read the three writing dimension bands below from left to right and then imagine that in addition to the teachers using the improvement guide in writing conferences, students have illustrations of what these improvements look like in student essays, including their own. Imagine too that their teachers help them keep track of their improvements, remind them that working to improve one area might mean a step back in another, point them to their first essay attempts so they know how far they've progressed, and celebrate their improvements at the end of the year. Finally, imagine that the scoring guide is a living document that has been refined through 34 revisions, based on its use in assessing the writing improvement of over 19,000 students and on ongoing suggestions from an ISAW professional network that now includes over 400 teachers.
In a meta-study of the early years of ISAW, Laura Stokes of Inverness Research Associates writes that ISAW's "collective effort ultimately produced an instrument that captured the range of writing characteristics students exhibit and developmental pathways they take as they develop academic literacy before college.... ISAW participants wanted the rubric to reflect their grounded knowledge of how these skills evolved, and they wanted the rubric's language to be of practical instructional use for themselves and their students. In effect, they were building a new grounded theory of academic writing development, grades 9-12" (Stokes, 2008).
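For readers who think in data structures, the shape of such a guide can be sketched: dimensions grouped under categories, each dimension described across four improvement stages, every stage written in the language of growth. The Python sketch below is hypothetical; the category names and "Employing Sentence Structure to Convey Ideas" come from the article, but the dimension "Summarizing with a Purpose" and all stage descriptors are invented placeholders, not ISAW's actual text.

```python
# A hypothetical, heavily abbreviated rendering of a guide shaped like
# ISAW's: dimensions grouped under categories, each described across
# four stages written in the language of growth, not deficit.
# The stage descriptors are invented placeholders, NOT ISAW's text.
SCORING_GUIDE = {
    "Understanding and Use of Text": {
        "Summarizing with a Purpose": [
            "retells the passage",                                # stage 1
            "selects details that matter to the writing task",    # stage 2
            "summarizes in service of a claim",                   # stage 3
            "weaves the passage into the writer's own argument",  # stage 4
        ],
    },
    "Word Choice and Sentence Structure": {
        "Employing Sentence Structure to Convey Ideas": [
            "relies on one or two sentence patterns",
            "experiments with new patterns",
            "varies structure for emphasis",
            "matches structure to the complexity of the idea",
        ],
    },
}

def next_step(category: str, dimension: str, current_stage: int) -> str:
    """Describe where the writer is now and what the next stage looks
    like, so feedback names growth rather than deficits."""
    bands = SCORING_GUIDE[category][dimension]
    now = bands[current_stage - 1]
    if current_stage >= len(bands):
        return f"Now: {now}. This is the target stage; keep consolidating."
    return f"Now: {now}. Next: {bands[current_stage]}."

# Example: a writer currently at stage 2 in one dimension.
print(next_step("Understanding and Use of Text",
                "Summarizing with a Purpose", 2))
```

Representing the guide this way makes the no-deficit principle concrete: every band, including the first, says what the writing does, and feedback points forward to the next band rather than backward to a failure.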
Assessment-focused instruction: improving writing and teaching
What began as an evaluation and research opportunity more than ten years ago has developed into a network of classroom and school communities focused on improving analytical writing. The ISAW knowledge we have constructed has yielded instructional resources and assessment tools that help teachers and students recognize and document specific improvements in academic writing, demystify for students what to work on next, and give teachers assessment information on which to base sound instructional decisions. The ISAW community of teachers is using that assessment information to accelerate the writing improvement of all students—the college-bound, English learners, struggling writers, and special education students—and prepare more of them for the writing of college and the writing about ideas.
More to the point: does ISAW make a difference for students? During the last two years, CWP embarked on a new ISAW effort, a Local Site Research Initiative study supported by the National Writing Project, comparing the writing achievement of high school students whose teachers are participating in ISAW programs to that of students whose teachers are not. The study is evaluating the performance of 3,600 students in the classrooms of 87 program and comparison teachers from 18 high-needs schools. An independent group of readers evaluated students' improvement across ISAW's 18 dimensions of writing and found that students in ISAW classrooms outpaced their comparison counterparts in all 18 dimensions; evaluators found these differences to be statistically significant.
An invitation: co-constructing new knowledge
ISAW is not a closed, one-size-fits-all community, and we have much more to learn. If you are intrigued by the knowledge we have constructed and especially if you would like to help us make new knowledge, we invite you to join us in Los Angeles on February 11, 2010 for the CATE Pre-convention Day—Improving Students' Academic Writing: Traveling the Road to Success.
Works Cited
Gray, J. (2000). Teachers at the center: A memoir of the early years of the National Writing Project. Berkeley, CA: National Writing Project Corporation.
Haswell, R. (1988). "Contrasting ways to appraise improvement in a writing course: Paired comparison and holistic." Paper presented at the Annual Meeting of the Conference on College Composition and Communication, St. Louis, MO.
Stokes, L. (2008). "The national writing project: Anatomy of an improvement infrastructure." Meta Study Case Report supported by The John D. and Catherine T. MacArthur Foundation and The Spencer Foundation.
Copyright © 2009 California English. Reprinted with permission.
Marlink, Jayne. 2009. "Improving Students' Academic Writing." California English 15 (2): 6–9.