Not at all! If you assign write-to-learn tasks, you won't want to mark any grammatical flaws because the writing is designed to be impromptu and informal. If you assign more polished pieces, especially those that adhere to disciplinary conventions, then we suggest putting the burden of proofreading squarely where it belongs—on the writer.
Editing write-to-learn (WTL) responses is counterproductive. This kind of writing must be informal for students to reap the benefits of thinking through ideas and questioning what they understand and what confuses them. Moreover, most WTL activities are impromptu. By asking students to summarize a key point in the three minutes at the end of class, you get students to focus on ideas. They don't need to edit for spelling and sentence punctuation, and if you mark those errors on their WTL writing, students shift their focus from ideas to form. In other words, marking errors on WTL pieces distracts students from the main goal—learning.
Disciplinary writing or formal documents do need to be edited, but not by the teacher. The most efficient way to make sure students edit for as many grammatical and stylistic flaws as they can find is to base a large portion of the grade on how easy the draft is to read. If you get a badly edited piece, you can just hand it back and tell the student you'll grade it when the errors are gone. Or you can take 20-30% off the grade. Students get the message very quickly and turn in remarkably clean writing.
Some teachers think that basing 20-30% of the grade on grammatical and stylistic matters is unfair unless they mark all the flaws. We approach this issue from the perspective of readers. If I review a textbook and find editing mistakes, I don't label each one and send the text back to the publisher. No, I just stop reading and don't adopt the textbook. Readers who are not teachers don't keep reading if a text is too confusing or if errors are too distracting. Readers who are teachers are perfectly justified in simply noting with an X in the margin where a sentence gets too confusing or where mistaken punctuation leads the reader astray. Students are resourceful (they can get help from our on-campus Writing Center or Writing@CSU) and will figure out the problem once a reader points out where the text stumbles. That's really all it takes.
Perhaps the most helpful tool in getting clean, readable drafts from students is the peer editing session (not to be confused with peer review sessions that focus on substantive issues for revision). Most students are better editors of someone else's draft than proofreaders of their own, so having students exchange drafts and look for flaws helps them find many more glitches than they'll find on their own.
If you feel compelled to mark grammatical and stylistic flaws, work out a shorthand for yourself and give students a handout explaining your marks. Most teachers can get by with one symbol for a sentence that gets derailed or confused, another for faulty punctuation of all sorts, and a third for inaccurate words (spelling or meaning). Save your time and energy for commenting on substance rather than form.
In the basic advice above, we noted that marking errors, particularly on writing-to-learn pieces, is counterproductive. Many writing teachers argue strenuously that marking errors even on formal writing can distract writers from focusing on larger issues in their writing—attention to audience and writing context, the overall claim or thesis, detailed development, and coherent organization. We have worked with hundreds of teachers who have insisted that marking errors in student writing—all student writing—is their obligation and responsibility. The resulting burden on these teachers typically led them to assign few, if any, writing assignments in their courses. Rather than fall into the "all or nothing" trap, we offer these sources as particularly useful in explaining why marking errors is not the best use of time for teachers responding to students' writing.
First, let's address in more detail the need to respond to the substance of students' thinking in WTL writing. Peter Elbow (1997b) explains in detail how low-stakes writing, including informal, in-class responses to WTL prompts or out-of-class electronic forum or discussion postings, functions most effectively. He argues that low-stakes writing works toward several goals: it lets students think through ideas on the page without the pressure of evaluation.
If teachers respond to this preliminary, low-stakes writing with editing corrections, students will perceive the writing as high-stakes and will adopt less productive strategies that involve much less learning and critical thinking. And, Elbow argues, we certainly create much more work for ourselves.
He argues in this text and a follow-on piece (Elbow, 1997a) that low-stakes writing needs low-stakes response to maintain its effectiveness—zero response (teachers don't even collect the writing), a completion response (acceptable if turned in), a two-level response like satisfactory/unsatisfactory, or a three-level response such as excellent/OK/weak (128). For high-stakes writing, he maintains that grading need not rely on carefully parsed, multiple levels that do not necessarily convey meaning to students but that do increase the time required for response:
There is a traditional and crude distinction between form and content that many teachers use quite successfully (despite some criticism of it as old fashioned or even theoretically suspect). For example, one might explain one's criteria for a high stakes essay in a large course as follows:
I will grade these important essays on a three-level scale, unsatisfactory, satisfactory, excellent. I will count roughly two-thirds for content and one-third for form. By content, I mean thinking, analysis, support, examples [or one might talk about specific concepts or issues in the topic]. By form, I mean clarity and correctness.
Teachers sometimes break out these two broad criteria into four more explicit ones: correct understanding of course material, good ideas and interesting thinking, clarity, mechanics. These are traditional but sturdy, workable criteria. (136)
Elbow is particularly helpful for teachers across disciplines as he fully explains how and why grades often interfere with learning, and he points to other key pieces of research that further underscore why marking errors is problematic. Most important in his view is that students often do not understand our feedback (see Hodges, 1997, for more details). As Sommers (1982), based on her research, explains:
...[W]hen teachers identify errors in usage, diction, and style in a first draft and ask students to correct these errors when they revise, such comments give the student an impression of the importance of these errors that is all out of proportion to how they should view these errors at this point in the process.... It would not be so bad if students were only commanded to correct errors, but, more often than not, students are given contradictory messages: they are commanded to edit a sentence to avoid an error or to condense a sentence to achieve greater brevity of style, and then told in the margins that the particular paragraph needs to be more specific or to be developed more. (150)
We would add that students are particularly challenged to understand our cryptic notations about style or mechanics, even when those notations point students toward handbooks or other tools. (For a cognitive psychology perspective on how and when to mark student errors, see also Perrault, 2011.)
Interestingly, much research on error marking points to an inescapable conclusion—teachers themselves often inconsistently note errors (see, for instance, Williams, 1981; Connors & Lunsford, 1988; Lunsford, 1997), both because our notion of what an error is changes over time and because our understanding of what a student needs in response to a particular paper is contextual. Connors & Lunsford (1988) are particularly helpful in explaining the former issue: "Most early [1915-1935] error research is hard to understand today because the researchers used terms widely understood at the time but now incomprehensible or at best strange.... Small-scale studies of changes in student writing over the past thirty years have shown that formal error patterns have shifted radically even since the 1950s" (397). Error, then, is contextual, and our cultural attitudes toward error spotlight those flaws that rise to the level of obligation to note the error. (Consider, for instance, the use of "their" to follow "everyone"; when Kate began teaching, this error was one that she and her colleagues tried to mark, but now few readers would even notice that a plural form is serving as a substitute for a singular form. It has, in fact, become widely accepted.)
Given student reactions to marked errors (most simply tend to accept the correction without troubling themselves to understand why the correction improves the text), teachers who mark all errors turn themselves into editors for their students. Students tend to learn little, if anything, from the marked-up text, and teachers feel unnecessarily burdened by the student writing they pick up.
A host of experts (see Huot, 2002, for his review of the literature) argue that instead of marking errors, giving students feedback about substantive issues in their writing can and does help students learn about the content and about writing more generally. However, in a replication of the Connors & Lunsford 1988 study, Stern & Solomon (2006) found that substantive response is remarkably skimpy compared to editing comments. These researchers collected and coded all teacher comments on 598 papers. They found that 61% of papers had at least one comment of overall assessment, but only 15% of papers had comments addressing "the sufficiency or quality" of evidence or supporting material (35). Over 40% of the papers had one request for clarification of ideas, and 26% of the papers had multiple requests, but relatively few papers included comments on organization beyond a particular paragraph. Stern & Solomon conclude:
Are faculty providing feedback consistent with effective feedback principles? After reviewing the findings of this study, we can conclude that they are not. The first principle, to provide more positive feedback to students, was underutilized. The bulk of comments were negatively valenced. The second principle, to use selective marking tied to student learning goals, does not appear to be utilized much. Although we cannot answer this question soundly, we can speculate that the main learning objectives of these 598 papers were not to "use correct grammar and spelling," which would coincide with the majority of comments being technical corrections to writing such as grammar, spelling, and comma usage. The third principle, to provide comments that identified patterns of weaknesses and strengths were almost absent in the data. Indeed, only a handful of comments explicitly identified patterns of errors and none of them explicitly showed a student (in the comment) how to correct the pattern. (38)
One solution to the dilemma of marking error shows up in the disciplinary literature as exemplified by Rodgers' description of his holistic scoring approach in "How holistic scoring kept writing alive in chemistry" (1995). Rodgers notes that he sees the value of assigning formal reports but that the 15-20 minutes per report for "grading" was overwhelming. He adopted a rubric (see p. 21) with 21 criteria, of which four attend to issues of grammar, usage, format, or presentation. The rubric uses a 2/4/6 range of scores to rate the reports on each criterion. As he notes, "My old method of analytical grading took fifteen to twenty minutes per report. Using two student assistants to holistically score 256 reports in fall 1990, I realized a mean time of 3.5 minutes to grade a report" (21). For those teachers who feel compelled to correct errors in grammar, mechanics, and usage, holistic scoring may offer an approach different enough to move teachers into a new way of looking at student writing.
Connors, R., & Lunsford, A. (1988). Frequency of formal errors in current college writing, or Ma and Pa Kettle do research. College Composition and Communication, 39(4), 396-409.
Elbow, P. (1997a). Grading student writing: Making it simpler, fairer, clearer. In M.D. Sorcinelli & P. Elbow (Eds.), Writing to learn: Strategies for assigning and responding to writing across the disciplines (pp. 127-140). New Directions for Teaching and Learning, 69. San Francisco: Jossey-Bass.
Elbow, P. (1997b). High stakes and low stakes in assigning and responding to student writing. In M.D. Sorcinelli & P. Elbow (Eds.), Writing to learn: Strategies for assigning and responding to writing across the disciplines (pp. 3-15). New Directions for Teaching and Learning, 69. San Francisco: Jossey-Bass.
Hodges, E. (1997). Negotiating the margins: Some principles for responding to our students' writing, some strategies for helping students read our comments. In M.D. Sorcinelli & P. Elbow (Eds.), Writing to learn: Strategies for assigning and responding to writing across the disciplines (pp. 77-89). New Directions for Teaching and Learning, 69. San Francisco: Jossey-Bass.
Huot, B. (2002). (Re)Articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.
Lunsford, R.F. (1997). When less is more: Principles for responding in the disciplines. In M.D. Sorcinelli & P. Elbow (Eds.), Writing to learn: Strategies for assigning and responding to writing across the disciplines (pp. 91-104). New Directions for Teaching and Learning, 69. San Francisco: Jossey-Bass.
Perrault, T.S. (2011). Cognition and error in student writing. Journal on Excellence in College Teaching, 22(3), 47-73.
Rodgers, M.L. (1995). How holistic scoring kept writing alive in chemistry. College Teaching, 43(1), 19-22.
Sommers, N. (1982). Responding to student writing. College Composition and Communication, 33(2), 148-156.
Stern, L.A., & Solomon, A. (2006). Effective faculty feedback: The road less traveled. Assessing Writing, 11(1), 22-41.
Williams, J.M. (1981). The phenomenology of error. College Composition and Communication, 32(2), 152-168.