
WAC and Assessment: Activities, Programs, and Insights at the Intersection

Introduction: Writing Across the Curriculum and Assessment

The issue of assessment is not going to go away,
and we need to be smart about how we address it in our WAC programs.

—Sue McLeod

Art Young likes to tell a story about how assessment was responsible, at least in part, for the beginnings of a curriculum at Michigan Tech in 1977-78—what we now know as Writing across the Curriculum, or WAC. The quick version of the story is that Art was approached by a dean of engineering who, dissatisfied with student writing, believed that a test would improve it. Art suggested a curriculum instead, and the rest, as they say, is history; assessment helped to motivate the Michigan Tech version of WAC. In this narrative, of course, we also see what has been an historic tension between curriculum and assessment, especially an assessment of individual students, a tension that many believe has been exacerbated by the No Child Left Behind Act (Yancey, 2009). At the same time, as Sue McLeod suggested in her plenary address at the 2008 WAC conference in Austin, Texas, assessment—at least program assessment—has from the beginning of WAC been an integral part of the "WAC conversation." As McLeod (2008) says,

Many WAC programs, my own first program included, started with grant money and had to have an assessment component to show the granting agency that we did what we said we would do. Let me state up front that I am for assessment; when I began consulting at other institutions that wanted to start WAC programs, I always included assessment as part of what I recommended they should do, a feedback loop into the program that would let them know what they were doing well and where they needed to improve. I recommended, and still recommend, gathering data of all sorts—numbers of students and faculty involved, specific changes made to assignments and syllabi, documents produced by faculty to explain the writing conventions of that discipline (like those at Oregon State and George Mason University), evaluations of faculty workshops, and so on.

In this account of the relationship between WAC and assessment, we see a different version of assessment, one that is not in tension with learning, but rather one vested in two kinds of learning: (1) the learning engaged in by faculty and administrators who conduct program assessment, and (2) the learning of students that is enhanced by such assessment.

It is in this second spirit of assessment that we introduce this special issue of Across the Disciplines focused on WAC and Assessment, the first compilation of work on this topic since the 1997 volume Assessing Writing across the Curriculum, and we're pleased to include such a rich variety of approaches to assessment-that-fosters-learning. Our first article, for example, begins inside the classroom with a focus on student self-assessment as a programmatic activity. The second, located in a WAC assessment program already in place, moves outside the individual classroom to consider what happens when another part of the program, in this case a first-year composition program, is put in dialogue with the WAC program, thus enlarging both purview and process. The third details a surprising addition to WAC assessment in an account of a WAC-supported effort to include quantitative reasoning (QR) as part of the genre of rhetorical argument that is at the heart of its WAC program. Our fourth article, also connected to an established program—this one a Communication across the Curriculum (CAC) program—details a departmental "profile" process in which faculty collaborate to calibrate curricular goals, outcomes, challenges, and difficulties, working in an iterative process where faculty questions lead to curricular enhancements. Our fifth article moves outside the institution as it connects with alumni whose views of what's important in professional success motivate a new CAC program. In our sixth article, again focused on an established program, we learn about iterations of assessment that are linked to research; in such activities, the combination of "thick descriptions and supportive statistics" helps the leader of WAC and the campus constituents pursue two kinds of inquiry: (1) learning themselves about what learning is taking place, and (2) considering how to stage increased learning opportunities. And in our seventh and final article, we look to the future with a design piece articulating the various components—a first-year composition program; writing-to-learn strategies; writing-in-the-majors courses; a new studio; and an electronic portfolio—of a WAC program for the 21st century.

We talk in the field about multiple perspectives and stakeholders; within these articles, we see them all as valuable in creating a culture of writing and a culture of learning.

Themes across the Curriculum

In reading across the articles, we saw patterns as well, five of which we highlight here.

One recurring theme is a shared interest in documenting the value of programs. More specifically, as the interest in assessment of higher education programs in general has grown, stakeholders in WAC and CAC programs increasingly feel the need to identify and quantify or otherwise document the value that these programs have for students. Such stakeholders assess and describe how the programs are currently supporting and extending the learning process for their students. In so doing, of course, faculty and administrators also can pinpoint the ways in which these programs might be re-envisioned, revised, and/or reformulated to provide students with even greater learning opportunities and growth in the future. Since WAC and CAC pedagogical approaches don't always lend themselves to standard systematic assessment methods, as Hilgers and Stitt-Bergh suggest, developing appropriate and useful assessment practices for WAC or CAC programs is a task many schools are now wrestling with. The articles in this special issue of ATD thus show how a number of different schools have faced this challenge, have used a diversity of methods to meet their particular WAC/CAC assessment needs, and have reported the results of such efforts.

A second theme has to do with the variety of options available to WAC programs: they seem to be increasing, even if the relationship among them is still in progress. In this special issue of ATD, because each school's reasons for grappling with assessment were so different and because there is such variety in the settings and school cultures where the assessment solutions were put into place, we have something of a portrait of current approaches to WAC/CAC program assessment. Oregon State's piece shows how the focus of assessment can be directed at individualizing instruction and empowering students by giving them a voice in decisions about their writing and learning. Mason's piece shows how collaboration of the various stakeholders, when developing an approach to assessment, can help to maintain the integrity of a WAC program already designed to individualize instruction and empower students. Carleton's shows that WAC assessment methods that respect the integrity of the program can open the way to embrace an even larger concept of learning to write and writing to learn than has generally been recognized. NC State's article details how employing a departmentally-specific approach to assessment that complements a departmentally-centered CAC program energizes faculty across the disciplines about CAC program goals and reinvigorates faculty efforts to focus on them in their own classrooms. MIT's shows how an assessment that includes information from alumni can be used to develop a program that better meets the real-world writing and communication needs of present-day students. Hawai'i's demonstrates how using multiple methods can provide a clearer map with which to chart program improvements than any single assessment method could hope to furnish. And Virginia State's illustrates a kind of curricular and programmatic assemblage: how a school can pull from what other schools have already done to develop something that meets its own particular needs.

A third theme has to do with a consistent and shared set of practices: although the models of WAC and assessment, like the institutions that house them, vary considerably, they also share a set of features and practices. In each case, for example, the effort was intentional, focused on a question relevant to the institution, and developed in a staged, discussion-rich process. Regardless of the specific effort, the process seems to follow the same basic steps: a question is posed, evidence is gathered and discussed, and the results are used to inform the curriculum.

What's particularly important in this process, of course, is twofold: first, the intentionality of the effort; and second, the fact that the results have been used to enhance curriculum—a claim that surprisingly few assessment programs can make.[1]

Fourth, we see here something of a shift in writing assessment. For decades, scholars in writing assessment have argued that a direct measure is preferable to an indirect measure (Yancey, 1999); it was this argument, for example, that often motivated the shift from the indirect measure of a multiple-choice test to the direct measure of an essay exam or portfolio.[2] In the articles here, however, we see, if not a shift to indirect measures, at least new uses of them. At North Carolina State, for example, the interviews with faculty provide primary data for understanding the curriculum that students experience. Likewise, at MIT, alumni perceptions function as a kind of check on how the curriculum has worked: in this regard, alumni not only have motivated change but also continue to provide a kind of reality check. Perhaps as important, they are helping the other stakeholders in the MIT program understand and connect in-school academic literacies with the literacy demands of the workplace.

Fifth and finally, we see here a new valuing of the native informant—be that informant a student (as at Oregon State), a faculty member (as at North Carolina State), or an alumna or alumnus (as at MIT). In each case, informants are understood to have an appropriate expertise, and tapping such expertise is understood as one important means of learning about the effects of a WAC program and then of enhancing it. Furthermore, when the perceptions of native informants are combined in an iterative process of program assessment, as at the University of Hawai'i, the result can be a robust, sustained WAC curriculum.

The Articles Within

In our first article, "The Writer's Personal Profile: Student Self Assessment and Goal Setting at Start of Term," Tracy Ann Robinson and Vicki Tolar Burton of Oregon State University first describe a self-assessment and goal-setting survey they call the Writer's Personal Profile (WPP), and then show how students' completion of it can contribute to better teaching, more directed learning, and enhanced programs. More specifically, the survey invites students to engage in a three-part process: (1) reflect on their college writing experiences; (2) identify their strengths and weaknesses as writers; and (3) set personal writing goals for the forthcoming course. For students, the survey assists in linking the writing goals of school with post-graduation writing goals, and it provides a baseline that can be useful in survey follow-up activities promoting students' responsibility for their writing and learning. For faculty, the WPP results can guide writing instruction toward the needs of a given set of students. For WAC administrators, WPP results can direct the design of faculty development. Not least, this assessment activity has raised interesting questions about the lore of what's valuable in WAC, especially about certain writing-to-learn strategies, and about whether and/or how these writing practices in fact foster writing development, questions that can lead directly to research.

An assessment team from George Mason authors our second article, which tells the story of designing and implementing an assessment of first-year composition in the context of a process of WAC assessment already in place and in response to a state mandate for "value-added" writing assessment. Using an assessment process based on the WAC workshop-based process Mason has used for several years, team leaders Terry Myers Zawacki, Shelley Reid, Ying Zhou, and Sarah E. Baker, in concert with first-year composition (FYC) faculty, developed and implemented an assessment of research-based essays from FYC. This process—paralleling the discipline-focused, course-embedded, and workshop-based assessment process George Mason has been implementing successfully since 2002—provided the data required by the state but, more importantly, provided opportunities for conversations about pedagogical practices, quality of writing, and the role of first-year composition in beginning a vertical curriculum. As important to the team is how this process has fostered the kinds of cross-disciplinary conversations that help them sustain and enhance their programs, and how these conversations model the spirit of negotiation and cooperation that has likewise sustained the culture of writing at Mason.

In our third article, "Pairing WAC and Quantitative Reasoning through Portfolio Assessment and Faculty Development," Carol Rutz and Nathan D. Grawe explain the relationship of numerical evidence to rhetorical argument and outline how the Carleton WAC program has provided a platform for a recent initiative in quantitative thinking in the context of writing. Because Carleton College has historically linked faculty workshops and educational reform, it made sense to continue that linkage. Thus, building on the portfolio success that is a signature of that program, WAC and quantitative reasoning (QR) partnered in providing joint faculty development opportunities, where the rhetorical power of numbers in teaching argumentation and the design of assignments that would foster such argumentation held center stage. In this article, Rutz and Grawe trace the history of WAC and QR at Carleton, describe the faculty development and assessment features, and argue that the combination of WAC and QR serves two important goals of liberal education: precision in language and ethical argumentation. In addition, as we saw in the assessment activity described earlier, this program assessment activity helps us understand writing more fully and raises questions whose pursuit can continue that process.

In the fourth article, "Profiling Programs: Formative Uses of Departmental Consultations in the Assessment of Communication Across the Curriculum," Chris Anson and Deanna Dannels turn their attention to departmental faculty as they explain a new process for the review and reinvigoration of a continuing communication across the curriculum program. This process—the departmental profile—is a research-informed, departmentally-based methodology for the formative assessment of CAC programs within academic disciplines. While the process itself includes several sources of data, the authors report that thus far faculty prefer interviews that the CAC consultants structure but that permit both dialogue and thinking aloud. Like the Oregon State model, this model of assessment sees formative assessment as a key to institutional change. In the North Carolina State model, the status report, an outcome of the profile, leads to a suite of options connected to identified communication outcomes for departments to consider. Drawing on one departmental profile to illustrate this process, Anson and Dannels explore ways in which the method can map a department's progress toward CAC implementation and thereby reinvigorate its attention to CAC as a sustained element of its teaching mission.

Les Perelman, in our fifth article, uses an historical lens to contextualize MIT's long history of integrating writing instruction throughout its undergraduate curriculum. This article, "Data Driven Change Is Easy; Assessing and Maintaining It Is the Hard Part," focuses on two studies, the first completed in 1995 when a special faculty committee was charged with evaluating the effectiveness of MIT's then-current writing requirement in teaching students to write well. The data suggested that students did not prioritize writing as an essential skill during their undergraduate career, an observation confirmed by a second study, completed in 1997, that asked alumni to rate various abilities (e.g., leadership, writing) in two dimensions: (1) the level of significance to them now; and (2) MIT's contribution to their acquisition. The data from this survey showed that while MIT prepared students well for the intellectual challenges they would face as engineers and scientists, the Institute did not prepare them to be effective communicators or leaders. These studies motivated a new "Communication-Intensive" curriculum that required every undergraduate at MIT to take one communication-intensive (CI) class in each of their four undergraduate years. As a measure of success, a new survey will tap the perceptions of more recent graduates (those involved in the new curriculum) about how successfully this new program has been implemented. In sum, this article speaks to the power of post-graduate student voices both in helping motivate a curriculum and in determining its efficacy.

In "Program Assessment: Processes, Propagation, and Culture Change," long-time WAC leaders Monica Stitt-Bergh and Thomas Hilgers make a strong case for the value of the "in-house" approach when the goal is to create a culture of assessment keyed to increasing learning opportunities for students. At the University of Hawai'i at Mānoa (UHM), a relatively large research-extensive university, the approach to both WAC and assessment is faculty-informed, and the history is one moving from consideration of inputs to gathering of outputs all targeted to two goals: broadened program ownership and ongoing program improvement through increased faculty involvement. Of particular value, these authors find—like their colleagues at North Carolina State—is the interview. Although Stitt-Bergh and Hilgers note the need for direct evidence of learning—"Because interviews are typically self-reports, they ultimately need to be supplemented by direct assessment of student learning"—they also outline how important interviews can be in shaping both the interviewed and the interviewee. Put differently, they see the interview as a site for reflection leading to advanced understanding, a claim made also by researchers at the University of Washington's longitudinal Study of Undergraduate Learning (reported in Inside the Undergraduate Experience). As the UH study attests, then, the methods we choose construct us as much as the program of review itself.

Last but not least is the Virginia State University plan for a 21st century WAC program. Authored by Freddy Thomas, this article reports first on previous efforts that were unsuccessful, then on the current plans, tied to an accreditation-related Quality Enhancement Plan (QEP). Central to the plan are several components—the first-year composition program and writing-in-the-majors courses, for example—as well as a culminating activity, the electronic portfolio. In addition, while keyed to the WPA Outcomes Statement, the VSU writing outcomes are also keyed to local concerns, such as the need to include the cultural heritage of African Americans, particularly given that VSU is an HBCU. In sum, such a plan builds by design the vertical curriculum that we know students need to develop throughout their collegiate years, and in providing for technology and reflective practice, it offers much for those of us who care about WAC and assessment to consider.

Absences, Lingering Questions, and a Note of Thanks

As rich as this range of approaches is, and as many proposals as we received for this special issue—and there were 31 of those—there is still much work to be done, much learning to be documented and shared. Among those 31 proposals, for instance, we found not one from a community college. Among them, we found not one from a Hispanic-serving institution or a tribal college. And among them, we found very few that integrated digital technologies into their programs in any explicit way. On the plus side, however, we saw several very interesting approaches we'd call WAC-complementary, focused on critical thinking, on information literacy, or on integrative learning. As Carol Rutz and Nathan Grawe claim, WAC continues to provide a testbed for educational reform of various sorts, much as Barbara Walvoord urged over a decade ago.

New work awaits, of course; on the basis of the articles in this special issue, we identify five questions that can guide this work.

Finally, we close our introduction with a note of thanks. We thank first our co-editors—Emily Baker, Scott Gage, Jill Gordon, and Rory Lee—who worked with us to read all proposals, select the articles, and communicate with and provide guidance to our authors. Second, we thank Michael J. Cripps, whose coding has made all the texts wonderfully legible—and interactive ;). And third and not least, we thank Michael Pemberton, whose kind invitation initiated this special issue and whose encouragement throughout was all we could have asked for.

References

Beaufort, Anne. (2009). All talk, no action? Or, does transfer really happen after reflective practice? Conference on College Composition and Communication, San Francisco.

Beyer, Catherine Hoffman, Gillmore, Gerald M., & Fisher, Andrew T. (2007). Inside the undergraduate experience: The University of Washington's study of undergraduate learning. Hoboken, NJ: Jossey-Bass.

Hilgers, Thomas L., Hussey, Edna L., & Stitt-Bergh, Monica. (1999). "As you're writing, you have these epiphanies": What college students say about writing and learning in their majors. Written Communication, 16, 317-53. Retrieved from http://www.mwp.hawaii.edu/epiphanies.htm

Li, Xiaoli. (2008). A Conversation with a WAC colleague: An interview with Art Young. The WAC Journal, 19. Retrieved from https://wac.colostate.edu/journal/vol19/li.pdf

McLeod, Susan. (2008). The future of WAC: Plenary address, Ninth International Writing Across the Curriculum Conference, May 2008 (Austin, Texas). Across the Disciplines, 5. Retrieved from https://wac.colostate.edu/atd/articles/mcleod2008.cfm

O'Neill, Peggy, Schendel, Ellen, & Huot, Brian. (2002). Defining assessment as research: Moving from obligations to opportunities. WPA: Writing Program Administration, 26(1-2), 10-26.

Walvoord, Barbara. (1996). The future of WAC. College English, 58(1), 58-79.

Yancey, Kathleen Blake, & Huot, Brian (Eds.). (1997). Assessing writing across the curriculum: Diverse methods and practices. Greenwich, CT: Ablex.

Yancey, Kathleen Blake. (2009). The impulse to compose and the age of composition. Research in the Teaching of English, 43, 316-338.

Yancey, Kathleen Blake. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483-504.

 

Notes

[1] The National Institute for Learning Outcomes Assessment has just been formed in an effort to focus institutional efforts not only on outcomes but also, and more particularly, on the use of assessment data to enhance programs: what we see in this special issue in these two regards is precisely the kind of effort they are highlighting. See http://www.learningoutcomeassessment.org/

[2] Moreover, this is an argument that we have successfully made to the larger academic community. In the case of the Voluntary System of Accountability, for example, many have chosen the essay-format CLA as a measure of value-added assessment rather than the MAPP, which is a multiple-choice test.

Complete APA Citation

Kistler, Ruth, Yancey, Kathleen Blake, Taczak, Kara, & Szymanski, Natalie. (2009, December 3). Introduction [Special issue on Writing Across the Curriculum and Assessment]. Across the Disciplines, 6. Retrieved from https://wac.colostate.edu/atd/assessment/kistleretal.cfm