
CCCC 2006 in Review

E3 Closing the Gap Between Process and Product: Building Bridges with Assessment Rubrics

This session featured two presenters from the University of Miami-Coral Gables and one from Arizona State University, each discussing their experiences with developing rubrics for the assessment of writing.

Paul Conner, "Reconciling Product and Process: Constructing a Sound Theoretical Foundation for Creating Assessment Rubrics"

The first speaker, Paul Conner of Miami, covered his composition program's attempt to ground the construction of an assessment rubric in theory. He said he and his colleagues had studied Brian Huot's (Re)Articulating Writing Assessment for Teaching and Learning (Utah State UP, 2002) and Ed White's Teaching and Assessing Writing: Recent Advances in Understanding, Evaluating, and Improving Student Performance (Jossey-Bass, 1994). He also mentioned that he had read (though admittedly incompletely) Bob Broad's What We Really Value: Beyond Rubrics in Teaching and Assessing Writing (Utah State UP, 2003). He said that a useful rubric must be both reliable and valid, and he quoted Huot and White, respectively, on what the latter term means. He described Broad's method of "dynamic criteria mapping," as opposed to rubric construction, as interesting but overly complicated.

Darrel Elmore, "Blueprint for a Rubric: The Key Components of Assessment"

Conner's colleague from Miami, Darrel Elmore, described the process of developing their composition program's rubric for instructor and peer reviews of first drafts. He recounted the multiple drafts the program went through in developing this rubric, and he showed the final product: a five-level rubric that attempted to assess drafts in four areas, with "critical thinking" and "content" constituting "Higher Order Concerns" and "language" and "organization" constituting "Lower Order Concerns." Each of these terms was defined for users of the rubric—both instructors and students—in footnotes. The same rubric would be used to evaluate final drafts of papers written in first-year composition, and presumably students could improve their writing by having the rubric applied to first drafts. Elmore also offered a selection of student comments about the rubric, most of which suggested that this method had deepened student thinking about their writing, although a few students offered "anti-rubric sentiments" that suggested they were still product-oriented in their thinking. He also presented the first draft of a student paper and invited the audience to apply the rubric to it.

Zachary Waggoner, "Assessing Our Own Assessment: The Rubric as a Tool for Self-Reflection"

Waggoner described a collaborative project undertaken by three public universities in his state—his own, the University of Arizona, and Northern Arizona University—to assess their assessment methods. He said there were many variables in the kinds of composition courses taught at the three schools, but that the dialogue the project entailed was more valuable than what any one institution learned about its assessment methods.

I found this session of very little use. My own "project" for CCCC this year was to attend as many sessions on assessment as I could, inasmuch as I am beginning an assessment of the WAC program at my own institution. The information provided by the first speaker, I thought, was elementary. I doubted its value to anyone but the most neophyte assessor. The U. of Miami rubric was marginally interesting, but I thought what was said about it was also elementary. The narrative of the Arizona project was mostly anecdotal, without much in the way of useful theoretical or practical conclusions.

— Joel Wingard

For more information on the CCCC 2006 conference, visit the NCTE Web site at http://www.ncte.org/profdev/conv/cccc/.