
CCCC 2006 in Review

M9 Building Community Through Writing Program Assessment

At this session, two faculty each from two large public universities in the Midwest—Indiana University-Purdue University Indianapolis (IUPUI) and Eastern Michigan University (EMU)—described some of their experiences with writing program assessment projects. Then a communications professor from an Australian university reported on "Assessing the Multi-genre, Multi-modal Paper," an assignment given to fourth-year students in the program in which she teaches.

Both pairs of American faculty—Susanmarie Harrington and Scott Weeden of IUPUI ("Representing the Center: Voices, Maps, or Rubrics") and Linda Adler-Kassner and Heidi Estrem of EMU ("Assessing the Multigenre, Multimodal Paper: Risks and Values Explored")—had based their assessment projects on Bob Broad's What We Really Value: Beyond Rubrics in Teaching and Assessing Writing (Utah State UP, 2003). So instead of devising rubrics for the assessment of student writing, both pairs used "dynamic criteria mapping" and/or focus groups. In her presentation, Harrington said that rubrics may hide the complexity of assessment, a complexity that Broad's book lays out. She said that rubrics may be popular because they may build community in the very act of hiding conflict and difference within an assessment team or across groups of stakeholders. At her university, the writing program assessment was part of a revision of first-year comp. She and her colleagues developed what she called an "un-rubric" to use in their assessment. Her colleague, Weeden, described five areas of conflict in their program that their assessment strategy revealed:

  1. the role of the composition course in students' lives, which varied greatly;
  2. teachers' desire for structure in student writing, when that writing didn't always provide it;
  3. teachers' uncertainty as to what counts as passing as they developed their "un-rubric";
  4. teachers' differences on the importance of grammatical correctness in student writing;
  5. and teachers' excitement at using dynamic criteria mapping, despite their uncertainties about other values in the assessment.

He said that the conversations that had occurred among and across faculty had involved productive talk about the values in writing and had, in fact, built community by putting these issues on the table instead of obscuring them.

Adler-Kassner took the floor to say that EMU had tried to develop an assessment of first-year writing that would be both locally contextualized and framed within the larger disciplinary conversation of rhet-comp. Estrem then went into some detail about the dynamic criteria mapping that groups from various campus constituencies had engaged in. She said that they were "still mired in the process" because this assessment had begun only last semester and that they hadn't yet been able to get much critical distance from their work. But she thought that the process had served to build community because of the dialogue about good writing that it had fostered.

Finally, Claire Woods of the University of South Australia, Magill, described the framework for assessment of writing done by fourth-year honors students in her program, which is situated in the School of Communication, Information and New Media at her university. These student projects are "fictocritical" writing, blurring the differences among several genres. Her handout offered a formula for fictocriticism, as follows: "Critical/Theoretical + Research/ethnography/exposition + Creative/fictional/poetic + Self Reflexive/autobiographical + ludic/playful practice + rhetorical practice + tekhne = hybrid text/research paper as creative practice/praxis." It struck me as she was talking that her formula might well describe much of Patricia Williams's talk from the previous afternoon. Anyway, her framework for assessment of this kind of project involved three sets of criteria: the thesis examiner's implicit criteria for assessing fictocritical writing; criteria for reviewing creative analytic process ethnography (drawn from L. Richardson and E. Adams St. Pierre's essay, "Writing: A Method of Inquiry," in N. Denzin and Y. Lincoln's edited collection, The Sage Handbook of Qualitative Research, 3rd ed., Sage Publications, 2005); and "the micro, macro, and meta approach to editing and assessing the text." These "levels" look at, respectively, language and stylistic matters; organization and rhetorical issues; and the theoretical, the critical, and the creative, asking of the writing, "What metageneric learning or knowledge is on display?" Woods said that this framework for assessment points toward a need to reconceptualize the assessment of writing as genres are increasingly blurred, as performative texts are increasingly visible, and as new technologies affect the shape of texts.

I thought this session was "messy," but in a stimulating and engaging way. Conclusions were tentative at best, in keeping with the ongoing, inventive, and unconventional types of assessments that were described.

— Joel Wingard

For more information on the CCCC 2006 convention, visit the NCTE Web site at http://www.ncte.org/profdev/conv/cccc/.