
WAC and Assessment: Activities, Programs, and Insights at the Intersection

Pairing WAC and Quantitative Reasoning through Portfolio Assessment and Faculty Development[1]

Abstract: Writing across the curriculum has been a pedagogy associated with faculty development since the earliest days of the movement. Carleton College, an early adopter of WAC pedagogy and faculty development, has, in the last decade, added portfolio assessment to the combination with positive results. Among the unexpected consequences has been a partnership with a curricular initiative in quantitative reasoning (QR), which has taken advantage of portfolio assessment as well as joint faculty development opportunities to successfully argue for the rhetorical power of numbers in teaching argumentation. We trace the history of WAC and QR at Carleton, describe the faculty development and assessment features, and argue that the combination of WAC and QR serves goals of liberal education: precision in language and ethical argumentation.

Introduction

Numbers serve rhetorical functions: providing context, making evidence specific, showing change over time, imparting precision in language, and authorizing confidence in writers and respect on the part of readers. Even well-prepared student writers need practice with these uses of numbers, because much of their experience with numbers is limited to formal situations that require them to solve problems with correct answers. Using numbers to reason and persuade, in contrast, draws on skills that are less mathematical and more a function of logic.

For the purposes of this article, we will employ the definition of quantitative reasoning (QR) used by the National Numeracy Network, which seeks to have students acquire "the power and habit of mind to search out quantitative information, critique it, reflect upon it, and apply it in their public, personal and professional lives" (http://serc.carleton.edu/nnn/about/index.html). The QR skill set includes nuanced understanding of ratios, percentages, and averages rather than advanced mathematics. In fact, as Lynn Steen (2004) has argued, QR is "sophisticated reasoning with elementary mathematics more than elementary reasoning with sophisticated mathematics" (p. 9). Furthermore, QR is far less abstract than higher mathematics, its skills must be implemented in context (Bok, 2006, p. 129), and the results must be effectively communicated, which is where the marriage with WAC is most immediately advantageous.

We will argue that the lessons of WAC-driven portfolio assessment—a partnership between faculty development and assessment that promotes curricular change and improved student learning—can be adapted to other general education objectives, such as QR, with advantages for both programs. At Carleton College, we now have five years of experience drawing upon required writing portfolios submitted by all sophomores for evidence of QR in the context of written argument. Whereas other colleges and universities with QR programs have tended to require exams and/or specific courses (see, for example, Wellesley College and Yale University), Carleton has adopted an infusion strategy based on WAC principles. One result has been the rapid emergence of a committed core of faculty from the social sciences and humanities who endorse the benefits of combining WAC and QR to further improve their students' communication.

We will begin with some institutional history about WAC and portfolio assessment at Carleton. We will show how careful introduction of portfolio assessment, supported by a rich faculty development program characterized as a curriculum for faculty, set the stage for a similar program directed at QR, even in the absence of a curricular requirement. Finally, we will argue that combining WAC and QR goals serves larger goals of liberal education: precision in language and ethical argumentation. On our campus, that rationale has recently produced a new graduation requirement for QR, which we will also describe.

Context: WAC at Carleton

Elaine Maimon's notes on the origins of WAC ("Beaver College," in Fulwiler and Young, 1990) credit Harriet Sheridan, a former dean at Carleton College, with inventing the faculty workshop as well as the idea of writing fellows. Both inventions spoke to Sheridan's notion in the mid-1970s that writing could be taught effectively in courses throughout the curriculum. However, delivering writing instruction across the curriculum required faculty to look beyond assigning writing to intentionally teaching students to write for the academy, especially when writing proficiency is a graduation requirement. Sheridan wisely reasoned that faculty are adept learners as well as teachers, and she organized what has become a staple in faculty experience: the pedagogical workshop. To supplement the faculty training, she turned to undergraduates chosen for their strong writing skills to serve as laboratory assistants in courses employing this new approach (not yet named WAC).

Some 25 years later, as described by Rutz (2007), Carleton refreshed its vintage WAC program with portfolio assessment. At the end of sophomore year, students are asked to submit a collection of between three and five papers that address specific rhetorical tasks—observation, analysis, interpretation, appropriate use of sources, and thesis-driven arguments—along with a student-written reflection that argues for the writer's proficiency based on the contents. (See http://apps.carleton.edu/campus/writingprogram/carletonwritingprogram/ for more detail.) Before instituting the portfolio, Carleton had no way of describing what WAC actually meant on campus.

In 1999, when faculty morale regarding student writing was at its ebb, the college was invited to compete for regional grants dedicated to faculty development. An alert associate dean, worried about the possible collapse of WAC on campus, led the effort to obtain planning funds and, eventually, a full grant for three years that put portfolio assessment at the center of a faculty development program for WAC. The combination was essential: assessment had a dubious odor on campus, but a long, if faltering, WAC tradition and a thriving learning and teaching center spoke to positive attitudes toward faculty development.

Essential features of the faculty development programming that brought portfolio assessment into the culture at Carleton included:

  1. outside experts who gave talks once per term on writing assessment, pedagogy, and research;
  2. a consultant with extensive experience in both faculty development and writing assessment;
  3. course releases for three senior faculty in humanities (classics), social science (economics), and science (physics & astronomy) departments to assist in programming;
  4. summer grants for curricular development related to courses or assignments appropriate for the writing portfolio;
  5. funding to improve library resources related to WAC;
  6. support for faculty and administrators to attend and present at relevant conferences;
  7. support for an annual three-day workshop on a WAC-related topic led by an outside facilitator; and
  8. stipends and materials to support the reading of pilot portfolios from Carleton students.

Taken together, this coherent and regularized programming constitutes a curriculum for professional development.

Faculty participation in this varied, iterative program created a group who brought their understanding of writing assessment up to date and were willing to adjust their pedagogy as well as their expectations to pilot portfolio assessment. In particular, the direct participation of senior colleagues in classics, economics, and physics & astronomy proved to be influential upon the faculty as a whole. This small group of senior professors took seriously their responsibility to guide the development of a writing assessment system that all faculty could implement, regardless of discipline. In addition, the design of the portfolio reflects the input of scientists and social scientists in one respect that proved crucial for the unforeseen future partnership with a curricular initiative on quantitative reasoning: In two of the rhetorical categories students must include in their portfolios, the criteria read as follows:

This language opened the door to a broad, multi-disciplinary basis for assessing student writing regardless of home department, genre, or any dominant set of rhetorical conventions. Once that door was open, however, fears emerged among faculty, and they eloquently voiced their worries.

Faculty concerns about portfolio assessment

Two objections on the part of faculty dominated initial campus debate on the merits of portfolio assessment. First, faculty expressed reluctance to read and evaluate student work outside of their fields, an objection we call the qualification argument. ATD readers will recognize immediately that the qualification argument represents a fundamental misunderstanding of portfolio assessment. Admittedly, Carleton faculty had no experience rating a body of work that had been previously graded in the context of courses and then resubmitted for a new purpose. Consequently, they brought to portfolio assessment the expectation that they would be required to grade each item in the portfolio and come to a conclusion based on those combined ratings. Faculty did not yet see the benefit of reviewing a broad sample of student work, even when the subject matter was relatively unfamiliar to the reader.

Part of the hesitation to assess work outside their fields was also based in respect for their colleagues' expertise, as suggested by a typical question: "Why would I pretend to properly understand an assignment in chemistry from my standpoint as a professor of literature?" This objection echoes a complaint that has consistently plagued WAC programs: "I am not trained to teach writing. Grading writing is too subjective. I am a chemist, not an English professor." Overlooking the reality that chemists (like all disciplinary scholars) write in a field with well-defined rhetorical features, the issue of who is qualified to assess writing comes partly out of a concern for understanding the disciplinary genres and conventions of another field.

Another facet of the qualification argument lies in faculty amnesia. Professors were once undergraduates who successfully completed courses—including writing assignments—in a variety of disciplines. A reminder of that personal history can help faculty relax a bit and also think more empathically about the challenges student writers face as they negotiate a curriculum designed for breadth in the liberal arts as well as depth in a major.

Finally, the qualification argument overlooks the level of student work to be assessed. For the Carleton portfolio, assignments are drawn from the first two years, with introductory courses representing 35% of the papers collected in a typical year, intermediate courses providing 50% of those papers, and a mere 15% coming from advanced courses (Rutz, 2007). Although these percentages could not be known in advance, faculty came to understand that sophomore writing was probably within their reach. For the rare cases in which a particular assignment was too technical for one reader, another reader could be consulted.

The second faculty concern about portfolio assessment we call the exposure argument, often phrased something like this: "First- and second-year students are not good writers in many cases, and my colleagues will be tempted to judge my assignments, my courses, and me according to my students' work. I don't want immature writing to reflect badly on me and my teaching." The exposure argument speaks eloquently to a traditional notion that one's classroom is a sacred place set apart from distractions, where visitors, including one's colleagues, are admitted only by invitation and then rarely. While conversations about teaching are common at Carleton, classroom visits tend to be reserved for high-stakes occasions bearing on tenure and promotion. Over the past decade, this tension has relaxed somewhat, and colleagues are much more likely now to share student work with one another, and, to some extent, visit classes. Nevertheless, ten years ago, exposure felt threatening. We find it interesting that in these initial conversations fears of negative critique eclipsed hope for useful feedback or compliments.

These two concerns—qualification and exposure—were not dismissed. After debate, reassurance, and the reminder that a pilot would help us test these objections as we assessed student work together, nervous faculty agreed to give portfolio assessment a try.

Assessment as faculty development

Faculty who participated in workshops and attended talks on writing pedagogy, theory, and assessment quickly abandoned the stance of blaming students for being bad or unwilling writers. A keener appreciation of developmental considerations (e.g., Haswell, 1991) and assignment design (e.g., Bean, 1996) gave faculty a way of thinking about student writing as a matter of intervention and encouragement rather than despair. Portfolio assessment gave them the means to look at the assignments and course designs that produced the best work as well as the ways that students failed to meet expectations—and, more important, to speculate about the reasons for their difficulties.

Along the way, both the qualification and exposure arguments against portfolio assessment evaporated. Faculty found that they were indeed qualified to make holistic assessments of student work they had not assigned, regardless of the subject matter. In the rare cases where they felt overmatched, they conferred with colleagues and were able to make decisions. And as for exposure, faculty consistently reported one of two experiences. They either marveled at the creative, thoughtful assignments that their colleagues were using and asked permission to adapt them for their own courses, or they were in the position of responding to requests for adapting their own assignments and course designs. Not only were colleagues collaborating as readers, but they were actively learning from one another through the lens of student work.

Post-rating session discussions emphasized a correlation between thoughtful assignments and successful student responses, an insight that has informed faculty development ever since. To date, all WAC workshops, whatever the advertised topic, take a curricular approach, covering course goals, assignment design, and responding to student writing—all of which have taken on increased importance among faculty who regularly read portfolios. The proportion of faculty who have participated in portfolio reading or related faculty development events since 2000 stands at 52.8% of those teaching over the past eight years. This figure includes visiting faculty as well as tenure-track and tenured faculty, which gives us hope that programming on our campus has potential effects on behalf of WAC elsewhere as visiting faculty move on to more permanent positions.

WAC informs QR

As portfolio assessment gained traction, a parallel conversation at Carleton developed around faculty worries that students were not as literate in quantitative reasoning (QR) as they should be. Unsatisfied by anecdotal evidence of QR shortcomings, we wondered how we might learn about the true nature of the problem, if there were one. A geologist participating in the discussions pointed out that every Carleton sophomore already had to submit a writing portfolio that could—and often did—include material that featured the use of data. She suggested we might better understand both what kind of QR we want to see and how students are using QR by reading a sample of papers drawn from archived portfolios. After reading student papers, the group set four learning goals with nine associated outcomes. (See Table 1.)

Table 1: Goals and Outcomes for Quantitative Reasoning in Student Writing

I. Thinks quantitatively
  1. States questions and issues under consideration in numerical terms.
  2. Identifies appropriate quantitative or numerical evidence to address questions and issues.
  3. Investigates questions by selecting appropriate quantitative or numerical methods.

II. Implements competently
  1. Generates, collects, or accesses appropriate data.
  2. Uses quantitative methods correctly.
  3. Focuses analysis appropriately on relevant data.

III. Interprets and evaluates thoughtfully
  1. Interprets results to address questions and issues under consideration.
  2. Assesses the limitations of the methods employed, if appropriate to the task or assignment.

IV. Communicates effectively
  1. Presents and/or reports quantitative data appropriately.

An initial read of sample student papers in 2004 identified areas of concern and led to the creation of a rubric for assessing QR in writing. More important, the experience shaped our understanding of QR and its relation to the construction of argument. While traditional programs have emphasized QR's connection with mathematics and statistics, Carleton's working group began to appreciate the relationship between QR and argument. These observations, developed through portfolio reading, were confirmed in the theoretical QR literature summarized in Steen's Achieving Quantitative Literacy (Mathematical Association of America, 2004). Moreover, portfolio reading provided evidence that a broader conception of QR could benefit students across the curriculum: assessment revealed that QR was employed roughly 70% of the time when assignments specifically required it. However, in nearly 90% of cases where QR would have been appropriate to set the context for an argument (cases drawn from all four curricular divisions of the college), students failed to supply specific numbers, dates, comparisons, ratios, and the like.

The experience of reading portfolios coupled with our ongoing reading of the QR literature led us to a conclusion we would not have predicted at the outset of our work: quantitative reasoning on our campus makes the best sense when it is done in the context of written argument and in cooperation with our Writing Program. These insights gleaned from the integration of QR and portfolio assessment supported a FIPSE grant for a program of faculty development along the lines of WAC programming to infuse QR across the disciplines, particularly in the context of written argument.

The complementarity of WAC and QR was quickly reflected in our programming in two ways. First, the Quantitative Inquiry, Reasoning, and Knowledge (QuIRK) initiative created a professional development curriculum that mirrored the model of the Writing Program: annual assessment sets an agenda for follow-up professional development workshops and brown-bag discussions that equip faculty to pursue curricular revisions funded over the summer, which in turn generate student work that is subsequently assessed through portfolio reading. Second, as we implemented this program, we looked for ways to combine QuIRK events with those of the Writing Program. To do so, we had to find convincing links between reasoning with numbers and rhetoric. Toward that end, we organized a workshop titled Writing With Numbers that drew over 30 faculty and staff from 13 disciplines (plus the library, institutional research, and the writing center) to work on the usual cluster of WAC issues (course design, assignment design, and response strategies) with the additional emphasis of using numbers rhetorically. Assignments drafted in that workshop (and subsequent ones) are mounted on a web site, along with informative pages on quantitative writing authored by John Bean (http://serc.carleton.edu/nnn/quantitative_writing/index.html).

QR informs WAC

To demonstrate an important way that numbers behave rhetorically, we offer two examples of prose containing quantitative terms that convey context and precision. While these examples represent only two of many genres found in academic writing—the book review and the analysis of a social phenomenon—they effectively demonstrate the generalizable QR habit of mind that asks, "What do the numbers show?" First, a passage of contextualizing information for a review of two books on the deepest parts of the oceans:

Only the uppermost part of the oceans—the top two hundred meters—bears any resemblance to the sunlit waters we are familiar with, yet below that zone lies the largest habitat on Earth. Ninety percent of all the ocean's water lies below two hundred meters, and its volume is eleven times greater than that of all of the land above the sea. This great realm is divided into a twilight zone—between two hundred and one thousand meters deep—and a zone of total darkness, which is itself varyingly subdivided. Below six thousand meters lies a region known as the hadal zone (a term coined only in 1959 from the French Hadès); in the Marianas Trench off the Philippines it is 11,000 meters deep. Ships plying the waters over the trench glide as far above Earth's surface as do jet aircraft crossing the face of America.

The hadal zone with its freezing water, heavy pressure, and darkness is seemingly harsh, but some of the imagined hardships are illusory. The freezing water, for example—which comes from the Antarctic seas—carries oxygen necessary for life. Were it much warmer the oxygen content would be insufficient to support fish and giant squid. And while the pressure is extreme (at just four thousand meters deep it is equivalent to that of a cow standing on your thumbnail), the creatures of the hadal zone don't feel it, because the pressure inside their bodies matches that without. And while there is no sunlight, light from luminescent creatures abounds (Flannery, 2007, emphasis added).

In this evocative passage, the writer conveys the scope of the deep sea environment with specific measures that address depth, volume, temperature, chemical composition, and water pressure in absolute and comparative terms. Quantitative language offers efficient, precise wording for the phenomena under discussion as well as context for vivid metaphors, such as ships and jets with equal "altitude" or a cow standing on a thumbnail to express a painfully clear message about pressure.

Our second example comes from The New Yorker, from an article about the decline of reading among adults:

In 1937, twenty-nine per cent of American adults told the pollster George Gallup that they were reading a book. In 1955, only seventeen per cent said they were. Pollsters began asking the question with more latitude. In 1978, a survey found that fifty-five per cent of respondents had read a book in the previous six months. The question was even looser in 1998 and 2002, when the General Social Survey found that roughly seventy per cent of Americans had read a novel, a short story, a poem, or a play in the preceding twelve months. And, this August, seventy-three per cent of respondents to another poll said that they had read a book of some kind, not excluding those read for work or school, in the past year. If you didn't read the fine print, you might think that reading was on the rise.

You wouldn't think so, however, if you consulted the Census Bureau and the National Endowment for the Arts, who, since 1982, have asked thousands of Americans questions about reading that are not only detailed but consistent. The results, first reported by the N.E.A. in 2004, are dispiriting. In 1982, 56.9 per cent of Americans had read a work of creative literature in the previous twelve months. The proportion fell to fifty-four per cent in 1992 and to 46.7 per cent in 2002. Last month, the N.E.A. released a follow-up report, "To Read or Not to Read," which showed correlations between the decline of reading and social phenomena as diverse as income disparity, exercise, and voting. In his introduction, the N.E.A. chairman, Dana Gioia, wrote, "Poor reading skills correlate heavily with lack of employment, lower wages, and fewer opportunities for advancement." (Crain, 2007, emphasis added)

This writer not only uses numbers to report research findings, but he also effectively contrasts quantitative sources, demonstrating that seemingly clear information about reading among adults is open to interpretation. Both of the examples quoted above rely on quantitative imagery as much as quantitative reasoning, reflecting Jane Miller's (2004) contention that "Even for works that are not inherently quantitative, one or two numeric facts can help convey the importance or context of your topic" (p. 1).

We have found that this contextual use of QR is a compelling way for faculty to grasp the potential for QR across the curriculum, including traditional non-quantitative disciplines.

QR assessment meets WAC

WAC and QR integration began with assessment. With a better sense of what we hoped to develop in terms of student outcomes, we began work on a rubric for assessing the relevance, extent, and quality of QR in student writing. Like the Writing Program, Carleton's QuIRK initiative gathers faculty and staff each summer to read portfolio papers. However, because QuIRK is not interested in student evaluation so much as program evaluation, QuIRK reads only a random sample of roughly 400 papers drawn from portfolios. (A detailed description of our assessment protocol and our current rubric can be found at http://serc.carleton.edu/quirk/Assessment/index.html.) Following the example of the Writing Program, QR assessment sessions conclude with a discussion of what faculty readers observe after examining a sample of student work. These conversations guide topical programming for professional development workshops and Learning and Teaching Center brown-bag seminars during the subsequent year. Equipped by this training, faculty members are given small stipends to create new courses and/or assignments in the following summer. Over time, student work from these new assignments will show up in the writing portfolio for assessment, thus bringing faculty development and assessment together, a process known in assessment lingo as closing the loop. We believe the FIPSE initiative's success in motivating curricular change in all divisions of the College followed directly from the integration of the WAC and QR programs. Support for this novel partnership continues with funding from the W. M. Keck Foundation and the National Science Foundation.

This combination of faculty development and assessment can be considered a curriculum of sorts, one in which assessment provides research questions to be tested and improved through programs for professional development. In contrast to typical faculty development sessions offered, say, to new faculty at the beginning of their employment, a curricular approach assumes that 1) pedagogy can be taught to active practitioners; 2) faculty members are willing to exercise the habit of lifelong learning they hope to inspire in their students by sharing their expertise and gaining new skills; 3) faculty members are the smartest, most exciting students that anyone could ever hope to teach—and learn from; and that 4) pedagogy is best evaluated in the context of student work assessed by those invested in student learning: faculty.

Within a cordial partnership, at times QuIRK and the Writing Program operate independently. For all we have in common, we each also have initiative-specific objectives. For instance, QuIRK has sponsored mini-workshops introducing faculty to basic statistics, whereas the Writing Program has sponsored reading groups on topics such as academic honesty or approaches to teaching large writing assignments. But the dominant theme has been cooperation rather than competition or even independent co-existence.

Benefits of a WAC/QR partnership

Our experience suggests that cooperation between QR and WAC programs can yield mutual benefits. What follows summarizes an argument made to QR professionals in the journal Numeracy, offered here with permission of Numeracy's editors (Grawe & Rutz, 2009; http://services.bepress.com/numeracy/vol2/iss2/art2/).

  1. Cooperation improves writing instruction

    Readers of mainstream media from the New York Times to USA Today to the CNN web site know that arguments that rely on numbers, charts, graphs, tables, and maps are so common that they are taken for granted. However, incorporating such evidence needs to be specifically taught, for students tend to lack experience with the rhetorical use of quantitative evidence.

    Over the past decade, a substantial literature has developed around these QR-specific issues in writing. (For example, see Few, 2004; Miller, 2004; Tufte, 2001; and Wainer, 2005.)

  2. Cooperation captures facets of QR that are easily overlooked

    Student experience of using numbers in academic situations can be limited to solving mathematical problems, that is, to seeking correct answers. However, the most interesting problems require careful appraisal of the rhetorical situation as well as the data themselves. In that respect, QR is as subjective as literary analysis: a data set (or text) is subject to a variety of responsible interpretations.

    When students tell us that data do not speak for themselves—that data are chosen, arranged, and interpreted by human beings—we know that their thinking is changing and the habit of mind we hope to encourage is developing in the context of solving rhetorical problems.

  3. Cooperation naturally defuses objections to "remedial" or "inoculation" models

    Historically, WAC has offset thinking that assumes that students can be inoculated with good academic writing habits through one course, ideally in the fall term of the first year. WAC and WID approaches act out the reality that writing is not mastered in one course or in the context of one discipline; it is a more developmental undertaking (for a thorough treatment of developmental theories and college writers, see Haswell, 1991).

    Similarly, QR programs in higher education are too often founded on the assumption that they are entirely remedial (Madison and Steen, 2009, p.8). Steen (2004) responds: "Although the basic elements of reading and writing are part of the K-12 curriculum, continued growth in both is universally recognized as an essential aspect of college education" (p. 3). As is the case with the expectation of continued growth in reading and writing, we can assume that quantitative reasoning at the tertiary level is every bit as sophisticated and subtle as other subjects that students study in higher education (pp. 16-17).

  4. Cooperation surmounts hurdles of institutional culture

    Once faculty give up the idea that QR belongs to a narrow disciplinary range of inquiry, e.g., mathematics, the possibilities for inter-disciplinary QR offer a challenge to faculty development programs. As we have seen with WAC, QR fits best as an overlay across the curriculum rather than in a departmental silo. Steen (2004) argues that isolation within a single discipline can be very dangerous as students come to see QR as "something that happens only in the mathematics classroom" (p. 18). ATD readers know well that a similar complaint gave rise to WAC approaches a generation ago. Students who believed that writing only "counts" in English courses had to accommodate expectations for their writing from everyone, not just English professors. Like Steele and Kiliç-Bahi (2008, pp. 2-3), we have found that a well-established WAC program prepares the way for a cross-cutting QR initiative; the combination is more powerful than either initiative on its own.

    At Carleton, this played out in three ways. First, any cross-cutting initiative has to make a place for itself in the minds of faculty before it can set up shop in the curriculum. On a WAC campus, that work has been done, and QR can piggyback on the infrastructure.

    Second, WAC folks have seen that an initiative that is owned by everyone can default to orphan status without continual stewardship through faculty development. In Carleton's case, steady faculty support for WAC invited QR to colonize the programming, framing the QR project as one of improving written argument through the use of data. Everyone wants students to make good arguments, and faculty understand the value of data in their own professional arguments. Ergo, a QR/WAC combination not only made good sense, but helped promote WAC among the more quantitative disciplines.

    Less than ten years ago, the QR conversation on campus was limited to a few die-hards. We now have evidence that the QR/WAC connection has proven attractive to a critical mass of faculty. During academic year 2007-2008, QuIRK events attracted 92 unique faculty participants—over 50% of the college's teaching full-time equivalents (FTE). As one might expect, scientists and social scientists were overrepresented in this group, with 57 participants (or 67% of FTE from those divisions). However, faculty from the arts, literature, and humanities were also well represented (35 participants or 41% of FTE). Counting last year and the first quarter of 2008-2009, QuIRK has involved 61% of FTE—72% in the sciences and social sciences and 51% in the arts, literature, and humanities. There is no way such a rapid launch would have been possible without cooperation with the Writing Program.

    Finally, collaboration with the Writing Program gave QuIRK a funding boost as well as pedagogical compatibility. Lacking a dedicated budget line, QR faculty were able to show early results in grant applications, thanks to the WAC assessment archive of student work and WAC-funded faculty workshops and speakers. More recently, QuIRK has become well funded, and WAC grants have expired, which has produced a reciprocal relationship that continues to benefit everyone involved.

Implications for Graduation Requirements

Like so many other institutions of higher education, Carleton has moved toward following Bok's (2006) suggestion of a QR graduation requirement. A writing requirement has been in place for decades, and the nature of our new QR requirement differs substantially from the models adopted elsewhere, reflecting the integration with WAC. Our interdisciplinary and multidisciplinary collaboration has led us to re-imagine core skills and outcomes—an important and largely unforeseen result of the Carleton WAC/QR experience to date. This is well reflected in recent work to revise the Carleton curriculum.

For context, the most common type of QR requirement asks students to take a course in mathematics, statistics, or other algorithmic problem solving. In some programs, this is coupled with a second QR applications course in which students analyze and manipulate data in the context of a real world problem. When asked to recommend a QR requirement for Carleton, the QuIRK steering committee seriously considered this model. However, the group soon decided that having emphasized the contextual and rhetorical aspects of QR for almost four years, it would be difficult to support a more narrowly defined skills requirement. In particular, the committee worried that this traditional model would relegate QR to mathematics and a few natural and social sciences (where the applications classes would presumably fall) to the exclusion of nearly all courses in the arts, literature, and humanities. This result seemed entirely at odds with a program that had come to see a deep connection between QR and rhetorical argument throughout the academy. Instead, the committee recommended that students experience three Quantitative Reasoning encounters. An "encounter" would include any course with a substantial assignment or module designed to teach at least one of six learning goals:

Upon completion of the general education requirements, all Carleton students should:

By recognizing the interdependence of effective QR and WAC, these learning goals make it possible to envision QR encounters in the English, Religion, and History departments. QuIRK further initiated a conversation with the WAC committee about revising the College's writing portfolio requirement. Recognizing the importance of QR to effective writing in the 21st century, QuIRK suggested that we require students to include one paper in their sophomore writing portfolio demonstrating their ability to "write with numbers." This change would reflect the full integration of QR and WAC on campus: just as our understanding of QR reflects the relevance of rhetoric, so too our understanding of writing proficiency now points toward the importance of QR. What began as something of a marriage of convenience is now a respectable union.

Conclusion

Reflecting on 10 years of portfolio assessment tied to faculty development, we are pleased to conclude that what began as a rather fractious process fraught with objections has evolved into a genial status quo. Furthermore, while we salute the ability of faculty to cooperate with programming and understand the benefits of combined faculty development and assessment, we are delighted to see creative applications of the principles that WAC has stood for nationally over the past 30-plus years. WAC ideas about faculty as learners, about locating assessment in student work, and about the appropriateness of teaching rhetorical approaches regardless of disciplinary boundaries have all contributed to the success of the QuIRK initiative. WAC has been the platform; the growing appreciation for rhetorical numeracy sits comfortably atop that foundation.

As we have prepared this argument, we have been keenly aware of our own obligation to write with numbers, using data as evidence with as much precision as possible. One of us, an economist, would perhaps have done so without having participated in Carleton's recent history of building strong ties between WAC and QR. The other of us, a rhetorician, might have reached for data, but perhaps not as eagerly as she now does as a member of a teaching faculty that has undergone a sea change by accommodating QR into a WAC environment.

Whether the Carleton approach to faculty development and assessment makes sense on other campuses is of great interest to us. We would like to believe that our success is exportable, and we are eager to hear from campuses with similar programs. Thanks to current funding for the QuIRK program, we also have the luxury of running feasibility studies on several campuses to test our QR rubric on samples of student work elsewhere. If we find, as we hope to, broad agreement that a natural affinity exists between writing and numbers, we can look forward to learning more about smart ways to design faculty development and appropriate assessment that will advance this work nationally. Our students are counting on us.

References

Bean, John. (1996). Engaging ideas: The professor's guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco: Jossey-Bass.

Best, Joel. (2008). Birds—dead and deadly: Why numeracy needs to address social construction. Numeracy, 1(1): Article 6. Retrieved from http://services.bepress.com/numeracy/vol1/iss1/art6

Bok, Derek. (2006). Our underachieving colleges: A candid look at how much students learn and why they should be learning more. Princeton, NJ: Princeton University Press.

Crain, Caleb. (2007). Twilight of the books. The New Yorker, December 24, 2007. Retrieved from http://www.newyorker.com/arts/critics/atlarge/2007/12/24/071224crat_atlarge_crain

Few, Stephen. (2004). Show me the numbers: Designing tables and graphs to enlighten. Oakland, CA: Analytics Press.

Flannery, Tim. (2007, December 20). Where wonders await us. The New York Review of Books, 54(20). Retrieved from http://www.nybooks.com/articles/20897

Fulwiler, Toby, & Young, Art (Eds.). (1990). Programs that work: Models and methods for writing across the curriculum. Portsmouth, NH: Boynton/Cook Heinemann.

Grawe, Nathan, & Rutz, Carol. (2009). Integration with writing programs: A strategy for quantitative reasoning program development. Numeracy, 2(2): Article 2. Retrieved from http://services.bepress.com/numeracy/vol2/iss2/art2

Haswell, Richard H. (1991). Gaining ground in college writing: Tales of development and interpretation. Dallas: Southern Methodist University Press.

Madison, Bernard L., & Steen, Lynn Arthur. (2009). Confronting challenges, overcoming obstacles: A conversation about quantitative literacy. Numeracy, 2(1): Article 2. Retrieved from http://services.bepress.com/numeracy/vol2/iss1/art2

Maimon, Elaine. (1990). Beaver College. In Toby Fulwiler & Art Young (Eds.), Programs that work: Models and methods for writing across the curriculum. Portsmouth, NH: Boynton/Cook Heinemann.

Miller, Jane E. (2004). The Chicago guide to writing about numbers. Chicago: University of Chicago Press.

National Numeracy Network. Vision statement. Retrieved from http://serc.carleton.edu/nnn/about/index.html

National Council on Education and the Disciplines. (2001). Mathematics and democracy: The case for quantitative literacy. Washington DC: Mathematical Association of America.

Rutz, Carol. (2007). Delivering composition at a liberal arts college: Making the implicit explicit. In Kathleen Blake Yancey (Ed.), Delivering college composition: The fifth canon. Portsmouth, NH: Boynton/Cook Heinemann.

Rutz, Carol. (2007). Learning from the writing portfolio: Assessment yields research. Unpublished handout for a session sponsored by the Perlman Center for Learning and Teaching, Carleton College, April 10, 2007.

Steele, Benjamin, & Kiliç-Bahi, Semra. (2008). Quantitative literacy across the curriculum: A case study. Numeracy, 1(2): Article 3. Retrieved from http://services.bepress.com/numeracy/vol1/iss2/art3

Steen, Lynn. (Ed.). (2004). Achieving quantitative literacy. Washington, DC: Mathematical Association of America.

Tufte, Edward R. (2001). The visual display of quantitative information. Cheshire, CT: Graphics Press.

Wainer, Howard. (2005). Graphic discovery: A trout in the milk and other visual adventures. Princeton, NJ: Princeton University Press.

Yancey, Kathleen Blake. (Ed.). (2007). Delivering college composition: The fifth canon. Portsmouth, NH: Boynton/Cook Heinemann.

Notes

[1] Parts of this work are supported by generous grants from the National Science Foundation (#DUE-0717604) and the W. M. Keck Foundation.
