Abstract: This study reports on learning outcomes of one-credit writing-intensive (W) courses in the disciplines at a large public university where three-credit W courses are the norm. An evaluation of 210 final papers from four departments—Allied Health, Animal Science, Economics, and Nutritional Sciences—revealed that writing outcomes, as defined and measured by faculty and doctoral students from the four participating departments, met expectations for junior/senior-level writing in their respective fields. Results suggest that well-designed one-credit Ws are viable, albeit with two significant qualifications: student motivation and performance are better when one-credit Ws are tightly aligned with a companion 2- or 3-credit course (as opposed to when they stand alone); and one-credit Ws can unintentionally trigger troubling labor issues.
Over the last several decades some colleges and universities have experimented with one-credit writing-intensive courses—some keyed to first-year composition and others to advanced courses in the disciplines, some freestanding and others linked to a two- or three-credit course. Very little has been published on these courses, which are sometimes called studios, labs, or supplemental instruction, and most of what is available describes curricular models for basic writing and first-year writing (for example, Tassoni & Lewiecki-Wilson, 2005). However, Joan Graham (2000) does offer a helpful taxonomy of variable-credit general education and WAC/WID possibilities, distinguishing among what she labels writing components, writing adjuncts, and writing links and following that with snapshots of how those models are manifest at six universities. Over the years a few exchanges on WPA-L have turned on how such arrangements can either affirm or undermine WAC and WID. For example, responding to a 2002 thread ("seeking WPA advice!" 8 July 2002), Clyde Moneyhun writes, "In theory, the WAC element is great, the possibility of team teaching is great, etc. In reality, there's a wave of really block-headed thinking going across the country that involves reducing 'the writing part' of a course to a one-credit add-on, for example, so that a prof gets to lecture (business as usual) while somebody else has to 'correct the writing.' If this kind of stuff catches on, the revolution in writing instruction will be over." Bill Condon replies, "But has anyone taken up the challenge? I mean, this could be an opportunity for terrific interdisciplinary teaching…. It's time WAC stopped being so, well, territorial. After all, circling the wagons in the face of an initiative from outside the WAC program sort of goes against the very nature of WAC, doesn't it?"
Nine years later, another thread surfaced that drew posts from Chris Thaiss, Doug Hesse, Martha Townsend, and David Schwalm, who debated the relative merits of one-credit W courses, with some worrying that an "add-on" mentality would prevail over an "integrated" approach ("One credit of writing instruction" 14 July 2011). While such informal exchanges raise vital concerns, and while published descriptions of curricular models are useful, I have found no empirical studies that investigate the efficacy of advanced one-credit courses in the disciplines by examining the student writing that emerges from them.
This article reports on a study of upper-division, one-credit writing-intensive (W) courses offered by four different departments at a public university where the dominant mode of meeting the writing-in-the-major requirement is the three-credit W course. Student outcomes, as measured in 210 final papers through rubric scoring and several other qualitative measures, affirm the promise of one-credit W courses. Indeed, the quality of the papers emerging from one-credit Ws was roughly commensurate with outcomes of three-credit W courses at the same university (those had been assessed in previous years).
The largely favorable outcomes evident in the one-credit Ws come with an important qualification: the one-credit Ws studied here worked better not as freestanding courses but when tightly aligned with a companion course in the same field. They also come with a major caution: the one-credit-hour designation can mask the substantial out-of-class grading and conferencing involved in writing-intensive pedagogies, potentially leading to unfair labor practices. In other words, department chairs and upper administrators, accustomed to assigning teaching loads by credit hours, need to be attentive to the labor-intensive nature of one-credit Ws when determining teaching assignments.
Beyond delivering empirical evidence that thoughtfully designed one-credit Ws are viable, this article features one new instrument for assessing how student writers select and deploy sources in their papers, what we call a deep audit. This may prove useful to those who undertake future outcomes-based assessments of WAC and WID.
This research took place at the main campus of the University of Connecticut, where two institutional factors set the stage for the current study: longstanding W requirements that are the centerpiece of a relatively healthy WID program; and a multi-year initiative co-sponsored by the University Senate and the University Writing Center that invites departments to engage in outcomes-based, direct assessment of student writing in W courses.
UConn requires students to take two W courses, and at least one of those must be in the major. This means that every department must offer at least one W course, and criteria for such courses are typical for WID programs (Townsend, 2001): a minimum page count (15 pages); a mandate that revision and feedback happen; an enrollment cap (19 students); and a stipulation that students must pass the writing component in order to pass the course (for details, see the "Writing Competency" section of UConn's General Education Guidelines). New course proposals must travel through a rigorous approval process, and faculty may not convert a non-W course into a W on the fly by simply adding a paper. The University Writing Center provides ongoing support: voluntary orientations and workshops for faculty, mandatory orientations for TAs involved in W instruction, individual consultations, and peer tutorials for students. University guidelines state that W courses have no credit hour restrictions—departments even have the option of designing an alternate route for meeting the W-in-the-major requirement (for example a portfolio system, though to date none have opted for an alternate route). The default and dominant practice has been the three-credit W course, with some departments placing it at the front of the major in the form of a methods course, others placing it at the back end of the major as a capstone, and others offering a number of introductory, mid-level, and/or advanced W options from which students may choose.
Shortly after the latest W requirements were set in 2004, two units secured approval for one-credit Ws, but they took two very different approaches. The School of Business opted for a freestanding course, Effective Business Writing, which focuses on workplace genres and is staffed with lecturers who teach multiple sections. The much smaller Department of Animal Science opted for a one-credit W companion to a three-credit endocrinology course. That course, Scientific Writing in Endocrinology of Farm Animals, is typically taught by the same professor who teaches the three-credit course, although some semesters a graduate student has been assigned to work with the faculty member. This linked, co-requisite section, which Joan Graham (2000) would call a "writing adjunct," meets weekly and guides students through researching, composing, and editing a literature review on a topic of their choice related to animal endocrinology. A handful of departments, all in the applied sciences, have followed Animal Science and created similar one-credit W companion courses. Only one department, Economics, has followed the School of Business in creating a freestanding one-credit W course.
Organizers of an ongoing UConn writing assessment project, myself among them, noted this slow but significant increase in one-credit W courses over ten years and decided to make them the focus of assessment efforts in 2013-14. The larger writing assessment project had begun much earlier, in 2008, and was focused on learning outcomes as evident in the final papers submitted for three-credit W courses. In several rounds of three-credit W course assessment, as well as in the most recent round of one-credit W assessment, we have been attentive to the complex nature of writing—that is, we approached writing not only as a set of sub-skills but also as a context-dependent mode of learning and communicating that is intertwined with reading, research, disciplinary content, and information literacy. Three departments volunteered to participate in the first round of three-credit W assessment: Art History, Human Development and Family Studies, and Political Science. In years that followed, Nursing, Freshman English, Mechanical Engineering, and Electrical Engineering opted in. Study methods are detailed in this report and findings for each of the participating departments are detailed in the four separate reports linked above.
Those first four waves of assessment focused exclusively on three-credit W courses, and a meta-analysis of them, "Summary Report on the Assessment of Academic Writing at the University of Connecticut, 2010," reveals the most significant cross-departmental findings, although it omits discussion of department-specific strengths and weaknesses, which are in many ways the heart of the project. Some of the most important cross-departmental findings were:
Through several rounds of assessment, we had learned a great deal about our three-credit W courses. What we did not know was how the one-credit Ws were faring, and therefore we turned our gaze to them in 2013-14.
The one-credit study maintained the core practices of collecting, scoring, and discussing student writing in disciplinary cohorts. However, we stopped collecting grades and demographic information and discontinued a self-efficacy questionnaire; these changes allowed us to revise the IRB protocol to omit the informed consent process, which had led to about one-third of students opting out. For this round, we collected all final papers directly from instructors. The changes meant omitting some kinds of analysis, but they allowed us to maintain the core of our work (attending directly to student writing), collect a more complete set of UConn student writing from each department, and determine whether past findings were influenced by self-selection bias.
During the fall of 2013 I invited several departments that offered one-credit W sections to participate—some declined. I worked with the department heads of those that accepted—Allied Health, Animal Science, Economics, and Nutritional Sciences—to recruit a faculty coordinator for each department, offering a stipend as incentive. We collected final student papers from across W sections in fall 2013 and spring 2014 (some departments offered them only one semester, others both). Papers were stripped of identifying information and assigned a code. During spring 2014 I asked each of the four faculty coordinators to develop a ten-item rubric: the first six items on each rubric needed to reflect the writing priorities of each department, and the final four items on each rubric were held consistent across departments: editing/mechanics, style, citations, and holistic score (see Results section for these rubrics). We required that each discipline-specific rubric needed to be approved by either the whole department or its curriculum committee.
The bulk of the assessment work took place during eight days in the summer, and the faculty coordinators and readers were compensated with stipends. I began with a one-day orientation to WAC assessment through a discussion of several readings. We also read and discussed, as a group of fourteen (four teams of three from departments, plus me, plus a doctoral student in rhetoric and composition who was assisting with the project), several student papers that had been collected during past assessments. And we reviewed the rubrics, in some cases making last-minute adjustments. On day two we moved on to scoring in departmental cohorts. To improve reliability, the three-person faculty/doctoral student team from each department engaged in a calibration process of scoring four to eight practice papers (the number depended on how quickly they arrived at consistent agreement). Once reliability was reasonably established, two readers scored each paper independently; in cases where the readers did not agree, a third reader (the faculty coordinator) scored the paper and the trio reconciled differences. By the end of this process, which took two days, each paper had a master score on each of the ten rubric items.
We scored 37 final papers/literature reviews from Nutritional Sciences sections and 53 from Animal Science, representing all the papers submitted in those departments, with the exception of papers from one section in Nutritional Sciences whose instructor opted not to participate. We collected well over 100 papers from Allied Health and Economics, both larger departments, then selected 60 at random from each department.
In addition to the rubric scoring, we employed four additional qualitative components:
A significant shortcoming of the study is that it does not include student perspectives (beyond what is evident in their final papers); nor did the methods include gathering feedback from faculty beyond the six professors and six doctoral students involved in the assessment project (although the final step of the process involved them taking reports back to their home departments for discussion and action).
While the study focused on student outcomes as evident in their papers, we also learned something about the nature of the courses from which the papers emerged. They typically involved rigorous assignments and robust revising processes. All sections demanded long, source-driven final papers, involved deliberate stages of drafting and revising (most included peer review as well), and featured discipline-specific research and writing. They clearly met internal UConn guidelines for W courses, and the genres assigned were fitting for undergraduate capstone writing experiences: two departments (Animal Science, Nutritional Sciences) required a long literature review; one (Allied Health) required a research proposal with a literature review embedded in it; and one (Economics) required an open-topic, thesis-driven research paper. The one-credit structure guaranteed weekly dedicated class time for lectures and activities keyed to research, writing, and/or peer review. All this was affirming, although perhaps not surprising, because the departments that opted to participate in the assessment were among the more proactive about teaching on our campus—indeed, they saw the project as an opportunity to further improve W courses that they had designed carefully and staffed with committed faculty and teaching assistants.
What follows are four sections that describe what the rubric scoring, deep audits, and qualitative discussions revealed about the student papers from each department. Also included is what the study recommended to each department.
The Animal Science one-credit Ws were among the earliest approved at UConn and have always been tightly integrated with a companion three-credit course. Although there is some variation by section in assignment expectations, all assign the same genre: a long literature review requiring at least ten sources. The assignment is designed to engage majors with published research. Students must find, summarize, and synthesize multiple sources to discover the research consensus on a topic of their own choice. The tight integration of the W with the companion three-credit course, the use of core faculty to teach most sections, and cycles of faculty and peer review of drafts built into the writing process produced good student outcomes.
We collected all 53 final papers from four sections. On nine of ten rubric items—including the holistic score—the mean and median scores for literature reviews fell in the range of moderate proficiency for advanced undergraduate writing in the major. In this study, scorers set a fairly high bar for moderate proficiency. When students did the major elements of the assignment competently, they were scored as minimally proficient; moderate and excellent were reserved for work that went beyond the basics. About one-third of the literature reviews were rated as excellent overall, which is a higher rate than we have found in other departments.
As the rubric scores suggest, Animal Science majors demonstrated strengths in several higher-order concerns: development, structure, and conclusions. This was affirmed in qualitative discussions, which also led to comments on other strengths: students selected fitting topics, and they drew on good peer-reviewed journal articles. Papers cited an average of fourteen sources per paper, going beyond the required ten; the deep audit process also showed that students were using sources for a wide variety of purposes: offering context, supplying evidence, and introducing dissenting points of view. Scores for sentence-level editing were high.
The lowest mean rubric item was "Objective of Paper," which was probably the result of assignment variation across sections: only one of the three sections required that students include an explicit statement of objectives. This raised the question of whether the department should continue to include this item on its rubric. (Students scored fairly well on "Conclusions," which suggested that they had an adequate grasp of their objectives by the end of the review.)
A theme that emerged in discussion was the need for student writers to better integrate and synthesize their sources (even though the median rubric score for "Use of Sources" was moderate proficiency). Many students moved from source to source, discussing each in turn as a series of summaries, whereas the strongest reviews progressed topic by topic, with a cluster of sources discussed under each topic. Deep audits of eight randomly selected literature reviews offered a more nuanced view of how students were using their sources—and revealed some systemic problems that were not visible during rubric scoring. The Animal Science team discovered that too many papers masked serious research, writing, and ethical problems. One of the eight papers deep audited was judged as committing gross plagiarism/intentional fraud: a student closely mimicked the content and conclusions of a published meta-analysis but hid that fact, leaving the reader to think that he or she had actually reviewed the primary sources covered in the meta-analysis. While that was an exception, the norm was a pattern of lesser but still serious source attribution problems: six of the seven remaining papers that faculty audited featured one or a combination of transgressions that could technically be defined as plagiarism but that were judged as unknowing, unintentional, or careless source misuse (i.e., tracking one or two sources too closely without making that clear to readers, paraphrasing improperly, not attributing occasional paraphrases to the original author, making questionable omissions). To put this in a wider context, some recent studies of student writing (e.g., the Citation Project) suggest that this is fairly common. Still, the Animal Science team concluded that these patterns merited attention and could not be addressed with one-shot solutions such as plagiarism software.
They recommended more comprehensive responses, such as sending a "quality over quantity" message by adapting the assignment to require fewer sources, including more explicit instruction on source use, and requiring students to reflect explicitly on their research and writing methods.
These W sections required students to undertake a particularly challenging writing task: compose a 15+ page research proposal that includes a literature review. Paired with a two-credit lecture course on research, the one-credit W sections were taught by advanced doctoral students who coached students through the literature review/proposal writing process, which involved a series of drafts and cycles of instructor feedback. As judged by their final submissions, students performed well in meeting departmental writing expectations.
We collected all final papers from 2013-14 W sections and randomly selected 60 to include in this study. On eight of ten rubric items—including the holistic score—the median scores for literature reviews fell in the moderate proficiency range for advanced undergraduate writing in the major. In this study, scorers set a fairly high bar for moderate proficiency. When students did the major elements of the assignment competently, they were scored as achieving minimal proficiency; moderate and excellent were reserved for work that went beyond those basics. While Allied Health students scored well on nearly all rubric measures, very few papers were rated excellent overall, perhaps because, some speculated, the assignment was designed to prepare students for research but most Allied Health majors did not intend to move directly on to graduate-level research.
Rubric scoring revealed that students were strongest in selecting sources (relevant, recent peer-reviewed articles), in composing "Predictions/Discussion," and in doing sentence-level editing. Through qualitative discussions we noted other patterns of strength as well: students understood the assignment well and dutifully adhered to the expected format (following the samples provided to them); most used more than the required five sources and many papers ran longer than twenty pages, which showed student investment (although papers with more sources and more pages were generally not better in quality than the shorter ones); most attempted some critique of the literature; and most attempted to integrate/synthesize their sources, using topic subheadings to prompt comparisons of two or three articles.
"Style" was among the minimally proficient subskills, indicating that too many students' prose came across as loose and opinionated rather than scientific. The Allied Health team emphasized that this did not suggest that instructors should focus more on grammar or mechanics—in fact, "Editing" [for correctness] scores were fine at moderately proficient. Instead, they suggested that TAs might devote a lesson or two to teaching novice writers to adopt an appropriate scientific voice. "Citations" had the lowest mean among rubric items, but the team interpreted this not as students being unaware of the need to cite their sources, but instead as their not strictly following APA documentation conventions.
Deep audits of a subset of eight papers revealed that most students read journal articles all the way through and used them purposefully. Although there were a few cases of improper paraphrasing and absences or misplacements of appropriate in-text citations, there were no cases of gross/intentional plagiarism. On the whole, Allied Health students were found to use sources more effectively and ethically than majors in the other departments assigning literature reviews that we studied. This suggests not only that instructors were careful to teach students sound research and writing practices but also that they were probably wise to have the assignment require five sources/articles, not the ten required in some other UConn one-credit Ws studied in this round.
The main shortfall, which was captured in discussions rather than by the rubric scoring, was that students often did not link their literature reviews to their proposals closely enough—that is, they often did not draw on the articles from the literature review when formulating the objectives for the study design. The Allied Health team concluded that students need to better understand how one aspect of the paper feeds the others and create a consistent thread that runs through the whole paper. This may be a symptom, they speculated, of how the assignment was taught section by section, and might be addressed by directing students to consider the alignment between the two parts of the project midway through the writing process and focus their final round of revisions explicitly on synthesizing the literature review and proposal sections.
Economics changed its W curriculum more recently than the other departments involved in this study, offering its first one-credit Ws in 2012-13. The economics Ws differed from the others in this study in two other significant ways: (1) they were not attached to a companion two- or three-credit lecture course; and (2) the core assignment was a thesis-driven paper rather than a literature review. Students selected a topic of interest, engaged in research, and composed an argument or analysis grounded in sources. A professor from the department coordinated the course and oversaw a cohort of graduate students who taught most of the sections, coaching students through the researching, drafting, and revising processes in weekly sessions.
We collected all final papers from 2013-14 sections and randomly selected 60 to include in this study. The median for most rubric items was 2, or minimally proficient, for advanced undergraduates in the major.
Students scored well on "Structure" and "Presentation," which were points of emphasis in the course. The relatively strong showing on structure was encouraging because students were not given a format to follow; instead each had to decide on a structure appropriate to his or her argument. The relatively high scores for "Presentation" (their language for "grammar" or "editing") were consistent with findings from earlier rounds of W assessment that show UConn students are generally more proficient in mechanics than in higher order concerns such as argument and analysis.
Economics majors also proved relatively strong in selecting a topic, articulating a clear thesis, and setting the context for that thesis. Proficiency fell off, however, when it came to developing, supporting, and sustaining that thesis. Indeed, the lowest rubric scores were in "Depth of Argument" and "Use of Sources," both of which were good predictors of the holistic score.
As for "Depth of Argument," most students showed minimal proficiency in sustaining an extended, research-driven thesis, with a quarter of papers rated as unsatisfactory in this area. An important contributing issue was identified through qualitative discussions: when students argued for or against a particular public policy, they did not ground their analyses enough in economic theories that they should have learned or be learning in their economics courses. The original expectation for the one-credit W course was that students would transfer what they had learned in other economics courses to their W papers, but that generally did not happen. Most students did not seem to perceive this course as an extension of earlier courses. In future iterations of the W course, the Economics team recommended, instructors might coach students more explicitly on how to bring specific theories learned in other economics courses to bear on their arguments; the assignment could even require that one section of the paper name and discuss which particular economic theory or theories would serve as the foundation for the paper's argument.
Where students seemed to use sources best was at the front end of the paper to set up the background for the thesis, but overall source use was a relative weakness, and one third of the cohort scored unsatisfactory in this area. The low scores on this subskill were due both to the kinds of sources students selected and to how they brought them to bear on their arguments. While students in the other three one-credit W departments relied almost exclusively on peer-reviewed journal articles, economics majors drew more on popular press sources, as well as on journal articles outside economics (healthcare, human rights, political science). This habit contributed to the phenomenon discussed in the "Depth of Argument" paragraph above: too often the sources, while trafficking in economic issues, were outside the mainstream of the discipline. A related issue was that students often failed to consider the essential economic thinkers on their respective topics. These patterns in source use were largely confirmed when doctoral students did deep audits of seven randomly selected papers. Of those, one was found to have used sources impressively and one showed evidence of serious plagiarism, but the other five hovered in the low-middle range, achieving minimal proficiency in source use.
When Nutritional Sciences moved to a one-credit W requirement, it followed the model of Animal Science by offering a one-credit W tightly integrated with a companion three-credit course. All sections assigned the same genre: a literature review on a topic of the student's choice (but within the scope of the course content) that required at least ten sources. Students were charged with finding, summarizing, and synthesizing multiple sources to discover the research consensus on their topic. The assignment engaged majors with published research in ways that would prepare them for graduate study or professional life. The department saw the W and its companion course as a capstone experience.
We collected 37 final papers from three sections. On eight of ten rubric items—including the holistic score—median scores for literature reviews evinced moderate proficiency for advanced undergraduate writing in the major. The subskill scores suggested that students were doing fairly well across the board.
Students performed particularly well in "Use of Sources" and "References." This was affirmed in our broader discussion of the papers—most agreed that Nutritional Sciences students were performing up to or beyond expectations in identifying relevant topics, using appropriate databases, selecting good sources (thirteen per paper, on average), reading their sources all the way through (not just relying on the abstract), using sources purposefully, and articulating the aim of the paper.
The high mean for "References" affirmed that students documented their sources according to disciplinary conventions. However, when doctoral students conducted extensive deep audits of eight randomly selected papers, the impressive rubric scores on Use of Sources turned out to mask some problems. Of the eight papers source-checked, two included occasions of gross plagiarism and five featured one or a combination of transgressions that could technically be defined as plagiarism but that were judged as unknowing, unintentional, or careless source misuse (most common for Nutritional Sciences was copying relatively short passages from articles without attribution). After performing the deep audits, doctoral students scored two of eight papers as "poor" in source use; however, these same papers scored "moderately proficient" on source use during the initial rubric scoring. As one scorer reflected, "Some of the papers that were most impressive on the surface turned out to be using sources badly, and some of the papers that were not polished on writing turned out to be using sources most honestly." Nutritional Sciences majors were performing well in finding and selecting appropriate sources; they also seemed to know the basic purpose and expectations for the literature review genre. But when the Nutritional Sciences team looked more closely at how students translated their sources into a review, they saw them falling short of intellectual and ethical expectations for advanced undergraduates. Despite delivering relatively refined prose, too many were fudging sources in ways that could get them into trouble in graduate studies or professional life. One curricular response they suggested would be to send a "quality over quantity" message by revising the assignment to require fewer sources, which would allow faculty more time to teach students how to use sources more responsibly.
The lowest rubric items—although both still crossed the threshold of minimal proficiency—were "Quality of Analysis" and "Conclusions and Implications." The "Quality of Analysis" issues were of two main kinds: students were not fully comprehending individual research articles (even though they seemed to be finding good sources, reading them through, and documenting them correctly); and students were not synthesizing studies to articulate the body of evidence or consensus on a given topic. Some scorers suggested that to address reading comprehension, the whole class might read and interpret one article together (this might need to be done in the three-credit companion course). As discussed above in the Animal Science section, one practical way to steer students away from moving from source to source, discussing each in turn, would be to teach them to progress instead from subtopic to subtopic, with a cluster of sources discussed under each subtopic heading. Given the relative deficit in synthesis, many students were not prepared to articulate conclusions or implications. Conclusions/implications had not only the lowest mean score but also the largest number of students who scored "unsatisfactory." This suggested that instructors should do more modeling of how to synthesize research, state conclusions with confidence, and articulate implications.
The Nutritional Sciences team identified four additional areas for improvement (albeit less pressing ones) during our qualitative discussions: (1) students often used scientific terms—prove, correlation, accuracy, validity—wrongly or imprecisely; (2) most did not see the need to define key terms to establish scope and ensure consistency (for example, defining "obesity" when setting up the review on that topic); (3) they did not seem to understand the genre of the review article (as compared to the journal article); and (4) they did not have a clear sense of their audience. Although our study revealed several areas of concern, we also noted that a 15-page literature review with 10+ sources is a challenging assignment, particularly for a one-credit course, and that students were achieving moderate proficiency in most areas.
This study takes the department as the unit of analysis. Indeed, most of each rubric reflects discipline-specific (and genre-specific) priorities of each respective discourse community; moreover, the scoring cannot help but reflect the habits of each departmental trio (calibration was done within, not across, department cohorts). This means that there was no neat or universal measure for comparing writing performance across the four departments, although this circumstance was mitigated to a significant degree by the common training that participants received, the shared paper reading we did at the start of the project, the common four-point scoring scale, and the cross-departmental reading of papers we did at the end. Another common denominator is that I was present, as project coordinator, to coach the groups in parallel ways, as well as for all iterations of the research across several years, facilitating the discussions, training readers, observing the scoring, and reading selected papers from all the departmental batches. Several conclusions emerged from tallying the rubric scores and reviewing qualitative data, all of which were raised as tentative conclusions during the final day of our summer session in 2014 and discussed by project participants.
The driving question for this study was whether—at least in the context of UConn—students in one-credit Ws were achieving academic writing outcomes commensurate with those of students in three-credit W courses. We concluded that outcomes for the one-credit W sections that were part of this study were reasonably similar. In fact, the collective outcomes proved uncannily consistent with some patterns summarized in the 2010 meta-analysis of the first several years of three-credit W assessment. The two most significant overlapping outcomes:
Among the one-credit phenomena that we found mostly consistent with earlier three-credit outcomes, but that proved more intense or prominent in the one-credit courses, were the following:
One outcome was not relevant to three-credit Ws but proved especially relevant to one-credit Ws: The more explicit the integration of the one-credit course/lab with a companion three-credit or two-credit course—and/or with the major curriculum—the better the outcomes. Stand-alone one-credit courses seem to invite motivational problems (several instructors reported that many students take a one-credit course less seriously); they also present structural challenges (most students compartmentalize their learning, thinking course by course, focusing on grades, but the assignments in these W courses were long and difficult, requiring a synthesis of research skills, content knowledge, and theory from previous and concurrent courses in the major). If students are not coached explicitly on how to draw on previous and/or concurrent coursework to develop such major research/writing projects, most will not make those connections on their own (a similar breakdown can happen when there is little explicit coordination between lecture and lab sections of a science course). The three departments in this study that have linked courses fared fairly well in making such connections visible to students, but even when considering those linked courses as a set, instructors noted that sections worked better when they highlighted connections and employed some common vocabulary for research and writing across the companion lecture and W sections (in-class activities to help students recognize and/or forge such connections might also have helped). The stand-alone economics course presented bigger challenges, and as noted earlier, students too seldom brought theories from their other economics courses to bear on their papers. Any floating one-credit W course that is not explicitly linked to another course—or that does not devote time to showing students how to make such links—will likely encounter similar problems.
I am more optimistic about one-credit W courses that are co-requisite with a lecture or methods course—what Graham calls "adjunct" writing components—than I am about stand-alone one-credit Ws.
As noted in the introduction of this article: the one-credit designation can mask the amount of instructional labor involved in teaching these courses. Thinking through the lens of credit hours can turn pernicious if department heads or administrators reason, "OK, now that I know that well-designed one-credit Ws are viable, I can convert one three-credit W into three one-credit Ws and get three times as many students through the writing requirement for the same labor costs." Such logic does not accord with the realities of teaching writing sections that involve drafting, revision, and substantial out-of-class grading and conferencing responsibilities. Ignoring those realities could lead to exploitative labor conditions. Fortunately, the departments involved in this study were duly aware of such factors and kept teaching loads for both faculty and TAs reasonable; they also mentored and supported their teaching assistants. All the department chairs understood that teaching a one-credit W section adds up to more work than one-third of a traditional three-credit course, and more work than leading a traditional (non-writing-intensive) discussion section of a lecture course. When considering teaching loads, the departments wisely thought more in terms of covering sections or labs than in terms of compensation by credit hour. Enrollment caps are also important. Our university strictly enforces its 19-person enrollment cap on all Ws, but some departments opt to set even lower W section caps of 15 to ensure robust instructor/student interaction and careful attention to drafts.
As for the faculty development consequences of this kind of assessment, the faculty and doctoral students who participated in this project affirmed not only that they would bring data-driven findings back to their home departments but also that the process of assessment itself had widened and enriched their own thinking about writing in the disciplines. While they testified to learning much about writing assessment and pedagogy, they also had ownership of much of the process—developing the department-specific rubrics, doing the scoring, driving the qualitative discussions. It is rare for instructors from a range of disciplines to sit down with others for nearly two weeks in the summer to discuss readings, score student papers, engage in semi-structured discussions, and share thoughts, jokes, frustrations, coffee. It was assessment, sure, and it was research, but it was also faculty development. Given the intensive, collaborative, and dialogic nature of this research, it is perhaps no surprise that all the participants came to reflect anew on how they teach writing in their own courses.
Bean, John C., Carrithers, David, & Earenfight, Theresa. (2005). Transforming WAC through a discourse-based approach to university outcomes assessment. WAC Journal, 16, 5-21. Retrieved from http://wac.colostate.edu/journal/vol16/bean.pdf
The Citation Project: Publications. Retrieved from http://site.citationproject.net/publications-and-presentations/publications/
Graham, Joan. (2000). Writing components, writing adjuncts, writing links. In Susan H. McLeod and Margot Soven (Eds.), Writing across the curriculum: A guide to developing programs (pp. 78-93). Newbury Park, CA: Sage (Original work published 1992). Retrieved from http://wac.colostate.edu/books/mcleod_soven/
Holmes, Colette O., & Warden, Joseph T. (1996). CIStudio: A World Wide Web-based, interactive chemical information course. Journal of Chemical Education, 73(4), 325-331.
Howard, Rebecca Moore, Rodrigue, Tanya K., & Serviss, Tricia C. (2010). Writing from sources, writing from sentences. Writing & Pedagogy, 2(2), 177-192.
Minerick, Adrienne R. (2011). Journal Club: A forum to encourage graduate and undergraduate research students to critically review the literature. Chemical Engineering Education, 45(1), 73-82.
Murphy, Sandra, & Yancey, Kathleen Blake. (2013). Construct and consequence: Validity in writing assessment. In Charles Bazerman (Ed.), Handbook of research on writing: History, society, school, individual, text (pp. 365-386). New York, NY: Routledge.
Nelson, Jennie, & Kelly-Riley, Diane. (2001). Students as stakeholders: Maintaining a responsive assessment. In Richard H. Haswell (Ed.), Beyond outcomes: Assessment and instruction within a university writing program (pp. 143-160). Westport, CT: Ablex Publishing.
Tassoni, John Paul, & Lewiecki-Wilson, Cynthia. (2005). Not just anywhere, anywhen: Mapping change through studio work. Journal of Basic Writing, 24(1), 68-92.
Thompson, Leigh, & Blankinship, Lisa Ann. (2015). Teaching information literacy skills to sophomore-level biology majors. Journal of Microbiology & Biology Education, 16(1), 29-33.
Townsend, Martha A. (2001). Writing intensive courses and WAC. In Susan McLeod, Eric Miraglia, Margot Soven, and Christopher Thaiss (Eds.), WAC for the new millennium: Strategies for writing-across-the-curriculum programs (pp. 233-258). Urbana, IL: NCTE.
White, Edward M. (1984). Holisticism. College Composition and Communication, 35(4), 400-409.
Yancey, Kathleen Blake, & Huot, Brian. (1997). Introduction—assumptions about assessing WAC programs: some axioms, some observations, some context. In Kathleen Blake Yancey and Brian Huot (Eds.), Assessing writing across the curriculum: Diverse approaches and practices (pp. 7-14). Westport, CT: Ablex.
Young, Mark R., & Murphy, J. William. (2003). Integrating communications skills into the marketing curriculum: A case study. Journal of Marketing Education, 25(1), 57-70.
 There is a small strand of publications—most of it descriptive—on one-credit undergraduate courses in the sciences that aim to cultivate skills in writing-related areas such as information literacy (Thompson & Blankinship, 2015; Holmes & Warden, 1996) and reading research articles (Minerick, 2011). Young and Murphy (2003) describe a set of six one-credit communications modules to complement a marketing/business curriculum, one of which is specifically keyed to writing. Nelson and Kelly-Riley (2001) describe and assess a one-credit course that helps students who fail a university-wide portfolio assessment to recover.
 Participants were asked to read the following before the first day of the summer meetings: pp. 7-13 of Yancey and Huot (1997); Bean, Carrithers, and Earenfight (2005); Murphy and Yancey (2013); and White (1984).
 A similar phenomenon was noted in a 2010 round of assessment of UConn's Mechanical Engineering senior design W courses: students writing up their senior design projects seldom explicitly drew on earlier coursework in their reports. Research on transfer of learning, most of which endorses devoting more attention to meta-cognition, helps to explain such gaps and lends support to approaches that emphasize reflection, recursivity, and explicit instruction.
Deans, Thomas. (2017, March 29). One-credit writing-intensive courses in the disciplines: Results from a study of four departments. Across the Disciplines, 14(1). Retrieved November 18, 2017, from http://wac.colostate.edu/atd/articles/deans2017.cfm