
CCCC 2006 in Review

This session was a report of an ongoing collaborative assessment effort among five institutions from across the country. I'll list the presenters and their affiliations first, because although they mostly spoke in sequence, the nature of the program on which they were reporting made the overall presentation a collaborative one, as opposed to five separate and perhaps more loosely related presentations. They were Neil Pagano of Columbia College, Chicago ("Assessing Student Writing Collaboratively: The Process and the Results"); Linda Rowland of Florida Gulf Coast University ("The Programmatic and Institutional Benefits for Collaboration in Assessment"); Amy Heckathorn of California State University, Sacramento ("The Struggles and Benefits of Collaborative Work"); Stephen Bernhardt of the University of Delaware ("Outcomes of Inter-Institutional Writing Assessment"); and Marsha Watson of the University of Nebraska at Omaha ("The Results of an Inter-Institutional Assessment Project").

Pagano outlined the assessment effort, explaining that it was a two-phase project, the first phase of which looked at changes in student performance over one semester in first-year writing and math courses, and the second of which would study changes in students' diversity awareness. Only the writing assessment was presented here. Watson provided a general description of the writing assessment, saying that it used a pre/post model of comparing student writing; that it involved shared, course-embedded assignments (with some flexibility built in so that different institutions could work within the local contexts of their first-year writing courses); that it was trying to assess both writing-course and institutional effectiveness; that it involved the development of a scoring rubric that could be used at each institution; that the participants had engaged in norming sessions both via email and at CCCC 2005; and that results would be shown graphically in this presentation.

Rowland then took the microphone to describe the development of the collaborative, criterion-referenced rubric that the project participants used to rate student writing. The team used a 6-point scale by which they attempted to measure "task responsiveness," "engagement with texts" that students had read in connection with the assigned writing, "development" in student writing, "organization" within that writing, and "control of language." She also discussed how her own institution used the results of the assessment, which measured changes in ratings of student writing, pre and post, in all the areas of the rubric. Each speaker referred to a graph of changes, shown numerically and in percentages, that applied to his or her own institution. There were, of course, variances in both starting points and rates of change, reflecting the differing compositions of the student bodies at the different schools. For instance, Cal State, Sacramento, with a large proportion of students whose first language was not English, exhibited the greatest gains in "language use."

Bernhardt discussed the factors involved in an inter-institutional assessment effort and its benefits, and Heckathorn wrapped up with a discussion of some of the struggles of such an effort.

In the question portion, the first questioner scolded the panel for what she called "contradictions" in their methods, a "tendency to blame adjuncts" for relatively low gains in some areas, and for "confusing skills with knowledge." The question was quite aggressive and, to me, seemed intended to denigrate the whole project, or at least its research methodology and results. Some of the panelists seemed shaken, and when another questioner suggested that assessment was something "done to" students, Watson strongly denied this, as well as the imputation that the assessment did not benefit students. Apparently as a result of the tone of these questions, Pagano seemed wary when I went up to present him with a reviewer's card. He may have felt that the project was going to get trashed in my review. However, I did not see the problems that the first questioner identified, nor do I believe that assessment, at least the kind of carefully thought-out assessment described in this session, is "done to" students in any sense that casts them as victims.

— Joel Wingard

For more information on the CCCC 2006 conference, visit the NCTE Web site at http://www.ncte.org/profdev/conv/cccc/.