Abstract: This study examines the writing of 30 engineering students, faculty response, students' reading of the response, subsequent revision, and faculty evaluation to ask what factors contribute to constructive conversation about writing. It affirms previous research that suggests engineering faculty do not provide the facilitative commentary widely believed to promote revision, but argues that faculty commentary may be more appropriate than it appears, especially if complementary activity in the larger activity system is taken into consideration. It further suggests that to teach any genre well, faculty need to reinforce general principles in slightly altered contexts, exposing both choice and constraints. Offering directive comments in slightly altered contexts over time may be more important than simply offering facilitative commentary in stand-alone writing classes. Surprisingly, even very directive commentary can elicit rhetorical thinking in a robust, disciplinary activity system.
"Sir, I am so sorry to tell you that the Geo-Wall reports (task 2) are too bad," emailed a graduate teaching assistant charged with commenting on a dozen group reports in a civil engineering design class. He implored the professor to have the students re-do the reports before wasting valuable time on detailed commentary. Neither the professor nor the GTA questioned the value of faculty response, but they did question not only the kinds of commentary needed, but also the conditions under which those comments would be most constructive. Under what conditions does faculty response best facilitate revision, particularly in undergraduate engineering classrooms? Are existing conditions conducive to a meaningful conversation between faculty and students? What kind of conversation about writing is optimal? These are hardly new questions, and there is a body of literature about best teaching practices for responding to student writing in and outside of engineering (Anson, 1989; Bazerman & Prior, 2004; Beason, 1993; O'Neill & Fife, 1999; Gottschalk, 2003; Leki, 1995; Poe et al., 2010; Prior, 1994; Rutz, 2006; Smith & Patton, 2003; Sommers, 2006; Straub, 2000; Straub & Lunsford, 1995; Tardy, 2009—see especially Haswell's 2006 extensive bibliography). However, surprisingly few studies trace a classroom conversation about writing from student writing through to faculty response, student reading of the response, subsequent revision, and faculty evaluation.
This IRB-approved collaborative study of faculty comments and student revision traces the conversation associated with the writing of 30 engineering students at two public, land-grant research universities, one a medium-sized university located in the Southeast of the US and one a large university in the Midwest. According to the Carnegie Foundation for the Advancement of Teaching, both universities are ranked "research extensive," have "more selective" student populations, and have well-established colleges of engineering. The universities were ranked 64th and 97th respectively in America's Best Colleges 2012 (2012). This study builds on a pair of earlier studies: One study noted a disjunction between faculty preferences and actual comments (conducted by both collaborators), and another study documented student mis-readings of faculty comments (conducted by one of the collaborators).
The collaborators used emergent design as described in Creswell's 2007 book on qualitative inquiry to follow 1) the comments of ten engineering faculty (five at each institution) who responded to a call for participation (making it a convenience sample), 2) the reading of those comments and the revised writings of 30 students (15 students at each institution), and 3) the faculty reflections. Participants represented civil, computer, environmental, industrial, and mechanical engineering. First, early and revised iterations of an engineering assignment were collected and coded, and then both faculty and students were interviewed. The collaborators conducted both student and faculty interviews with the relevant student's paper and revision in hand and referred to the papers throughout, modifying questions as new ones suggested themselves. Student interviews entailed getting each student's sense of the assignment and then exploring each change in the revised paper with questions such as this: "I noticed that you changed this aspect of your paper…can you tell me why?" and so on for each change. The collaborators sought the faculty perspective with parallel questions about the assignment and purpose of revision and the particular changes in the papers at hand: "The student made this change…did it achieve what you intended?" The collaborators also asked faculty about expected changes that did not appear. In keeping with emergent design, the inquiry process was adapted as needed, particularly with additional coding.
Emergent design is particularly appropriate for studies grounded in a socio-historic view of writing as situated activity. Socio-historic views of writing draw on the early work of Lev Vygotsky (1978) and more recent work by Charles Bazerman and Paul Prior (2004), among others, and accept Carolyn Miller's definition of genre as "a typified response to a recurring situation" (Miller, 1984). Socio-historic theorists ask the strange question: What does writing do—how does it work and what constrains it? In this study then, the perspective widens from just a textual focus on faculty written comments and student written revision to a broader view of the feedback loop, one that includes many other stakeholders and layers of indirect feedback. Genres understood by socio-historic theorists, then, are not static formats existing in isolation (progress reports always have XYZ parts), but rather are dynamic and evolving processes functioning in complex systems (progress reports accomplish work by repeating some actions common to most progress reports as well as by responding selectively to particular needs of the complex situation at hand). As Poe et al. (2010) say, genres "travel together in sets and even operate in entire systems, all supporting a human activity" (12). A writing assignment might be in a particular genre (progress reports), but the genre is not just the assignment; rather, it is a larger, more complex process. 
Writing understood this way is not simply a textual product but rather an activity that emerges from the intersection of many things, including not only students and teachers, but also department chairs, husbands, university administrators, clients, writing requirements, accrediting agencies, non-academic work schedules, background knowledge, personal motivation, peers, access to technology, career aspirations, and more (Evans, 2003; Gottschalk, 2003; Haswell, 2006; O'Neill & Fife, 2001; Patton, 2011; Poe et al., 2010; Prior, 1998; Russell, 1995; Tardy, 2009). Writing understood this way is messy, complex.
Although all of the student writing included in the study was in some sense "school writing," the range of genres suggests that the walls separating "school" and "work" are porous and the relationships between and among students, peers, near-peers, graduate students, faculty, and outside practitioners of wide-ranging experience are layered and complex. For example, assigned progress reports ranged from assignments aimed at introducing the genre, among other things (limited to a student-teacher conversation), to reports about a campus-based educational project (involving a more layered conversation with individuals outside the classroom) to numerous client-based reports about issues ranging from the design of a new indoor community basketball facility, to the re-design of a bridge needing to accommodate a third rail line, to the design of a bio-reactor landfill (all of which involved complex conversations extending over time and situation). Progress reports for these projects were technically addressed to the teacher but involved a broad audience on and off campus with a range of experience, both technical and non-technical.
Even when considering a particular genre, such as the progress report, the conditions shaping the assignment and complementing the written commentary varied enormously. For example, the teacher of one capstone class was a seasoned civil engineer hired as a temporary "visiting instructor." His handling of the assigned progress reports was perfunctory, but he regaled his students with tales of entering the profession, including memories of drafting reports that were circulated around his engineering firm and then eviscerated by senior engineers, illustrating the layers and layers of expertise and authorship involved when writing in engineering. The students in his class received little constructive written feedback on their progress reports (predictably, every "that" was crossed out, whether it was needed in the sentence or not) but they benefited from his stories and from both oral and written feedback offered by the clients.
Other genres included short writing-to-learn assignments (for example, a three-page memo meant to help "confused customers" understand the crystal orientation of silicon wafers), long reports for the teacher only, often embedding numerous genres along the way (for example, a proposal to design and fabricate a neonatal ventilator that could be used in developing countries) as well as many client-based reports (including those already mentioned, usually involving a series of progress reports, an executive summary, and the report turned in piece by piece over the semester).
Socio-historic research understands "text" as activity that may be interpreted differently by different readers (by different agents functioning in different contexts), so text-based coding schemes must be used cautiously. Quantitative data by itself can be deceptive, so qualitative narrative is important to both qualify and justify the particular interpretations. The boundary between "results" or data and "discussion" or interpretation is therefore less clear-cut than it is in conventional scientific and social scientific research. Nonetheless, a text-based coding scheme can still provide a useful starting point, especially if the resulting data are understood in the context of other emerging criteria as well as the larger narrative.
This study counted the number of faculty comments on first and final submissions as well as the number of student-authored revisions. Additionally, a coding scheme was used that distinguished between directive and facilitative faculty comments, to be discussed below. It should be noted early on, however, that three conditions emerged in the research process: first, the importance of distinguishing between comments that facilitated choice and comments that constrained it; second, the importance of external, non-text-based constraints on faculty commentary; and third, a tragic circumstance that radically limited the possibility of quantitative analysis of some of the original commentary, the premature death of one of the collaborators.
To report as responsibly as possible the data that could be collected, one table summarizes the use of directive and facilitative comments, another table summarizes the use of comments that facilitate and constrain choice, while two case studies present subsets of data with much more quantitative and qualitative detail.
Although this study examines what faculty comments do within the context of the larger activity system, it begins by coding both directive and facilitative faculty comments, using the scheme Straub and Lunsford derived from the commenting styles of twelve well-known writing scholars (1995). In this scheme, the directive-leaning styles function to correct, critique, or direct students with the teacher in an editor mode, while facilitative-leaning styles (including advisory, Socratic, dialectic, and analytical styles) function to guide, prompt, question, or reflect on the writing with the teacher in a reader's role. Directive commentary tends to be interventionist and product-oriented, while facilitative commentary tends to be "maturationist" and process-oriented. Examples of directive comments in this study, often expressed as imperatives, include "Add your results," "Check decimal" and "Wrong!"; examples of facilitative comments include statements that are reader responses such as "This sentence is confusing to me" and questions such as "Why only these?" or "Why can we assume that the thrust is constant through each stage?" Straub and Lunsford acknowledge the virtues of the full range of commenting styles in various contexts but nonetheless suggest that more process-oriented, readerly, and facilitative comments help students develop as writers, a view that has continued to be widely accepted by other writing scholars.
Given preliminary findings of this study, showing an overwhelming use of "directive" comments, many of which "constrained" the writers' choices, a related question arose about the relative importance of "choice" and "constraint" in student writing.
In this study, none of the ten faculty wrote mostly facilitative comments, and three of the faculty made no facilitative comments whatsoever (see Table 1). Even the seven faculty members who wrote some facilitative comments still wrote primarily directive comments (over 80%). These findings could be read as depressing, suggesting that more faculty development workshops on responding to student writing are needed for faculty in the disciplines. However, two points might be made. First, writing in the natural and applied sciences, including engineering, typically functions in more tightly constrained genre systems than does writing in the humanities. Not only are some directive comments necessary to foreground the factual and situational constraints, but directive comments may also elicit more rhetorical thinking than humanists assume, especially when the feedback loop in the larger activity system is layered and complex. Second, even in more traditional "English" writing classes, feedback is seldom limited to written commentary, a point made well by O'Neill and Fife (2001). Without disputing the basic wisdom of encouraging facilitative commentary, it might behoove writing scholars to look more closely at the conditions under which less facilitative commentary is fruitful.
F1 = Faculty member 1, F2 = Faculty member 2, F3 = Faculty member 3, etc.
In this study, directive comments that constrained choice were explicit and dominant. Although only one faculty member (F6) made more than two or three facilitative comments explicitly inviting choice, most of the others made comments that implicitly invited choice—or that somehow prompted rhetorical judgment in the students, as revealed in the interviews. The exception was the one visiting instructor whose only written comments were striking out instances of "that" in his students' progress reports. The other nine faculty members offered comments that at least implicitly mixed choice and no choice. Moreover, one or two well-selected facilitative questions or comments about the main claim could ripple through an entire project, suggesting that the kind of facilitative comment matters more than the number.
How might a directive comment, which seemingly demands compliance, implicitly invite choice or implicitly facilitate thinking? Consider comments such as "First describe problem" and "Explain why this is important." The comments are directive—they are imperatives—but they position the student to make important rhetorical judgments. One professor who anticipated that his students would have problems thinking critically about their data said this: "Students at this level will either not describe the results at all and start to make conclusions, or they describe the results and don't make the conclusions. So if they can do a little more of each, that's great! . . .if I can at least guide them on the path so when they get to analyze, using the model, they have a much better chance of doing it well . . .I needed more than numbers." On some students' papers, this professor simply directed them to "add results" or directed them to "explain" them, but he was doing so to set the students up to make more substantive interpretations.
Just as some directive comments facilitated rhetorical thinking, some facilitative comments were processed as if they were orders. F6 wrote facilitative comments on several papers asking students how they planned to justify some of their assumptions. One student described deepening her discovery process as a result: "We needed more justification about our assumptions . . .so we did more research and realized that this would be a pretty big force on a rocket as it was going into orbit and, well, we needed to add in . . .whenever we were doing our assumptions he had talked about the justification of kind of 'why can we assume that the thrust is constant through each stage?' And so I think we put this in there to kind of provide more justification for that . . .he had commented on our assumption 'reasonable assumption' in the previous interim memo." However, another student receiving similar facilitative comments confessed, "I only put in some more justification because I had to."
Table 2 offers a slightly qualified reading of Table 1 based on interview data. All ten professors in the study made many explicit directive comments that left no room for argument, so the "no choice" row in the middle of the table has "explicit" for each subject. One professor, F6, made facilitative comments that explicitly encouraged thinking and choosing, so the "choice" row running across the table features "explicit" for F6. With the exception of F3, all of the other professors made at least some comments that, however directive, motivated students to find out why something was wrong or how it could be written differently, as revealed in the interviews. Students were learning to make informed judgments when interpreting data, but those judgments needed to be constrained in particular ways; even directive comments, then, were helping them with interpretation.
F1 = Faculty member 1, F2 = Faculty member 2, F3 = Faculty member 3, etc.
The number of faculty comments on first submissions ranged widely, from four to 91, but the number of textual changes in the student's second submission did not range as widely: Students typically made between 10 and 20 textual changes, even though the lengths of their papers varied enormously. In most cases, students directly attributed whatever changes they made to faculty comments. For example, one professor made seven comments on a student's first submission, his student made a dozen textual changes in the second submission of the paper, and the student attributed each of the textual changes to one of the faculty comments. This was the pattern overall: Seldom did the number of changes the student made in the second submission match the number of comments the professor had made, but nearly all of the changes that the student made were triggered by a particular faculty comment. This suggests that written comments do matter, even if they are not the only agents for constructive change.
Few faculty reported dramatic revision. Perceptions ranged from "I'd say about 25% had really significant revision, but about half of the class didn't do much" to "Oh gosh! She got it right the first time but not the second!" Other perceptions include ones such as these: "We tried to give them examples, but they have a hard time because they don't understand all the relationships in the writing" and "She improved, but her draft was totally incomplete." This underwhelming response to the student revision could be very depressing if we take a short-term view of revision. Indeed, the "glass half-empty" view of faculty response and student revision may be warranted by these data, as will be explained below.
Surprisingly, though, most students described thinking and learning when responding to the written comments, even when the faculty commentary was problematic or when the revised product did not provide evidence of fully understanding the commentary. One woman who was deeply engaged in a client-based project but who was understandably skeptical about some of the particular stylistic feedback she had received was still motivated to think professionally about her readers and to pay more attention to the weight of each word. "I found myself, you know, typing and then saying, 'Well, this word and this word don't really add any value to the point I'm trying to get across,' and, you know, I'd come back and take those out and, again, it just kind of shortened and consolidated, condensed what I was trying to say." She was responding to the spirit, not the letter, of her mentor's feedback. This suggests that even very directive commentary has the potential to elicit rhetorical thinking, something that was evident in many other cases.
What may have elicited rhetorical thinking most was the combination of the commentary (however directive) and other conditions in the genre system, such as having real-world problems and concerned readers who could be affected by what was said. Put another way, the meaningfulness of real-world, problem-based projects and the authority (or ethos) of the faculty may trump good, facilitative commentary, narrowly understood.
To further explore some of the variations in commentary and the conditions under which they were made, one collaborator re-examined the data from two sets of participants: in one set are two faculty members with substantially different commenting styles who nonetheless worked together closely and team-taught a large writing-intensive course; and in the second set are faculty who made the most and least "facilitative" comments.
In this mechanical engineering class, two seasoned faculty members team-taught a large, writing-intensive course and used mostly "writing-to-learn" assignments geared to help students think more deeply about key principles in the class. For the first assignment, students were asked to imagine that they worked for a silicon wafer producer and needed to clarify for a confused customer the difference between 111-oriented crystals and 110-oriented crystals. The professors shared many beliefs about teaching and learning and worked closely together on whole-class handouts, but they differed considerably in the ways in which they put their beliefs into practice.
Each professor read half of the class set of papers, met to discuss them and to create a whole-class handout with 24 prioritized (directive) comments, and then commented individually on half of the papers (mostly directive). Faculty member 1 was very selective in his comments, choosing to have a few individually written comments (seven on student F1/S1's paper) supplement the whole-group handout. Faculty member 2 was less selective, making 46 comments on one student's paper (F2/S1-a) and 40 on another (F2/S2). This professor also made 91 comments on a different assignment (F2/S1-b) that was not going to be revised. Unfortunately—and predictably—the student author reported not even reading those 91 comments since the paper was not going to be revised. Both professors' individually written comments were mostly directive, even though some of the comments still elicited rhetorical thinking, as was evident in the student interviews. Imperfect as the process was and varied as the comments were (a range of 7 to 46 for one assignment), both teachers and students reflected on the importance of writing and revision in helping students think.
One student (F1/S1) received only seven comments but made 12 textual changes and attributed all of them to one of the comments. The other two students, both students of F2, received 46 and 40 comments respectively but made similar numbers of textual changes (12 and 15), attributing most but not all of them to the professor's comments. The professors found the revision significant in the revised writings of F1/S1 and F2/S2 (among the best in the class) but not of F2/S1. This nearly controlled study of commenting affirms previous research suggesting that there may be a point of diminishing returns with the number of written comments (Harris, 1979; Harvey, 2003; Lunsford, 1997).
As already noted, both professors in this case valued using writing to help students think, although F1 may have had more realistic expectations and less frustration than F2 did, as might be evident in the following (long!) passage:
The main thing is that if you teach somebody, you learn yourself. We can't really have our students teach the material, but we can make them think and writing makes them think and, I suppose, teach themselves. If they have to write about the problem, it's more holistic . . .they can't just give piecemeal facts. Writing makes them think, and they have to come up with a more cohesive whole . . .and revision is important . . .I revise, say, any proposal I'm working on and the first draft is always a bit more just a sequence of discrete pieces put together in spite of your best efforts. . .Then, as I revise it, the pieces start to gel and everything comes together. . .You know, when I was a PhD student, my advisor told me to write a draft of a paper and I took it to him. He just looked at it for maybe thirty seconds and said "revise it." And I was so mad—he didn't even read it and he was asking me to revise it! [laughs] I went back and started revising and found a lot of things I could revise. Then the second time he read it and put a lot of ink on it, you know, and then [laughs] and so, he'd make us write like four or five drafts. And that time—later on—it helped a lot. I believe it matters, and I think they appreciate it later…We wanted them to get the science right—that's the first thing, the most important…second they need to know—this is a more difficult thing to do—to try to change their style of writing. But at least we try—we try to point out things—and there are certain things they can change. (F1 personal interview)
The other professor (F2) had a somewhat narrower and shorter-term perspective of revision, even though he believed in the importance of "writing-to-learn." Consequently he was disappointed in some of his student's work, as suggested here about F2/S1:
She improved but her draft was totally incomplete. But this is the trouble: sometimes they sort of do what you tell them, but the writing still needs revision. She halfway responded to comments 9-15 …Oh, gosh, she got it right the first time but not the second! (F2 personal interview)
The less successful student still talked about learning from the many red-ink comments:
I didn't know what he wanted the first time. Other than a lab report, this is my first time writing anything like it. I didn't do well, and when I saw all the red I was overwhelmed, but I found it helpful. Felt like I gained more. English 1000 [FYC] didn't prepare me for this class. These comments helped me know the style… He said I didn't have a thesis. It made me go back and look and see I had just jumped in. I realized I had none. (F2/S2 personal interview)
The same student, however, did not bother to read the 91 comments on a paper that was not to be revised. Problematic as required writing can be, including required second drafts, enforced revision can still serve a purpose if only to sustain the dialogue, to motivate students to read the comments on the first draft. As a professor of another class remarked about revision: "I guess it's valuable because it does force them to read the comments we made and, you know, look and see what corrections we made and then they can go back and make corrections. So, it's valuable in that sense—in that it kind of forces them to look at our feedback." (F5 personal interview)
Interestingly, the student who was most successful revising his paper for F2 made some of his most significant thinking and revision in response to a comment directing him to characterize the angle:
The other areas . . .he left it open for me to figure how to fix it, even though I don't think I exactly got it right… This is where I spent the vast amount of my time—was fixing that figure and the corresponding explanation with it…okay, so I put a second picture showing the angle and reference. So I edited figure F-a. And he also—and I also redrew Figure F-b because the triangular direction that it's supposed to point toward, the reference lab, it wasn't technically pointing toward the reference lab, so it was technically wrong and would have confused the reader because I was saying one thing but showing something else. So, now it's pointing in the right direction toward the reference flat and it's facing the reader and it helps the reader understand the directions . . .I spent like five hours on the rough draft figures and that took a while to figure out. (F2/S2 personal interview)
This student reported spending another five hours thinking about and revising another angle in the diagram. Throughout his interview, he described the importance of getting it right and understanding why. This suggests that commentary for many writing-to-learn assignments in the natural and applied sciences needs to invite and constrain choices, to be both directive and facilitative.
The previous case compares and contrasts two faculty who taught the same course but who commented more and less extensively on their student papers. This case compares and contrasts the commentary of three faculty who made the most and least facilitative comments. Two of the professors (F6 and F8) made some facilitative comments (12%); one professor (F3) made none. All of the students featured in this case attributed all of their textual changes to faculty comments, even though the quality of those comments varied significantly. The least facilitative commenter, described earlier, limited his written comments to the issue of "that," even though he offered extensive oral feedback. These three professors had significantly different experience and levels of understanding about writing and, yet, their students all grew as writers, even when the written commentary was impoverished.
Even though the students in the least facilitative commenter's class did develop as stylists and writers, partly because of oral feedback from their real-world clients, their professor lacked the nuanced view of writing and learning that the most facilitative commenters had.
Even the most facilitative commenters wrote mostly directive comments, but the facilitative comments they did make were powerful and encouraged their students to think:
Explain, perhaps an example.
Why only these? Justify.
Explain to the reader how these are connected . . . it is unclear how they function together.
Why a C-s? Are you modeling [?] landing gear as well?
That professor, F6, understood the limitations of required revision but also believed that it prompted students to pay close attention to his comments. He paced his students, he honored the need to provide timely feedback, and he had a clear sense of priorities, which he expressed in a rubric. He anticipated that his students would struggle with priorities, with discriminating between most and least important details, and believed that a rubric was a useful even if imperfect way to communicate his priorities to students who were just beginning to figure out their own priorities. He, like most of the other faculty, recognized the rhetorical dimensions of visual as well as verbal communication and the need to make important judgments about what to communicate to particular readers. He said this about his rubric:
Let me pull up my rubric, because everything's going to come back to that. Well, that's not necessarily true. I make a lot of comments, but when I evaluate the grade, I come to the rubric… Students struggle with that, what's important. So, they usually go into lots of detail, perhaps you can discuss the color, but that's not relevant to this class… So I think the most important information feedback I give them is I try to point them in the direction where they should go from here. I thought they met the criteria here very well, and now I just ask them some questions to help guide their thought…this is an open-ended project, so the teams struggle a lot, in defining the problem, in describing their system. I really want them to take some independent initiative to try to tackle some problem, and it's challenging. And so I suggested that they indicate the critical components because the reader is overwhelmed with this information. And so here they tried to highlight some information. (F6 personal interview)
Not surprisingly, his students referenced the rubric when describing their revision process:
He follows his rubric…a lot of these things were straight from his rubric that we just didn't do. They were helpful. …we needed more justification about our assumptions …we did more research and realized that this would be a pretty big force on a rocket as it was going into orbit that we needed to add in…whenever we were doing our assumptions he had talked about the justification of kind of "why can we assume that the thrust is constant through each stage?" And so I think we put this in there to kind of more justification for that…he had commented on our assumption, reasonable assumption, in the previous interim memo… We did more research and decided this was a more effective way to model. (F6/S2 personal interview)
Nearly all of the faculty interviewed asserted the belief that "writing is thinking," but F6 was able to point his students to different places requiring judgment at different points in the process. He had a sense of their point of readiness.
I gave a lot of feedback on their interim memos, and I spent a lot of time because I think it's much more valuable at that time. So, in the final here, I was reading with really a big picture…and I want to just read it overall for technical content at this point. (F6 Personal interview)
Students described reading the commentary purposefully, with the intention of understanding why comments were made. Even the most directive comments could elicit rhetorical judgment and thinking, as suggested in this interview:
I didn't just say, "Oh, he wants this!" so I put that. I made sure I read his comment and said, "Okay, well, that makes more sense. I can see why he said this way versus that way"… He had said… "no chemical subject," so I went and looked more and found out that having no chemical subject meant it was essentially not regulated under this. So, instead of just trying to explain that…I just cut that out and said it's not regulated. (F8/S2 personal interview)
The same student described extensive research in response to other comments:
I looked at it more to try to find more information or to elaborate on why it wasn't a hazardous chemical. But I ended up finding out it just didn't have a permissible exposure, which was the same. Instead of writing it doesn't—they say it's not hazardous because it doesn't have this—I just said it doesn't have one of those, so, therefore, it wouldn't be hazardous… So, instead I went back and found out like intoxication from this range can cause this. This range can cause this and this and this, because he wanted more specific things, so just went ahead and found that there were essentially like three levels they broke it down into and just gave him the span on what that span could cause… I went back and read the sentence…I went back and I don't know what I was thinking when I wrote it the first time. (F8/S2 personal interview)
Similarly, most of the students in the least facilitative commenter's class reported re-thinking elements of the project, especially in terms of readability for a given audience.
Even when faculty had the best of intentions for writing constructive comments, a number of issues got in the way, some of which were extra-textual, rooted elsewhere in the larger institutional genre system. One unfortunate force was the overburdening of a limited number of faculty with heavy paper loads and unreasonable expectations—which translated to less desirable commentary, even when the intent was otherwise. At one institution, the five professors interviewed taught writing intensive classes, and several of them had offered their WI courses at least twenty times and their classes had grown from ones initially capped at 20 students to classes now enrolling 50, 60, and 70 students. Another force we witnessed was the gap between actual and professed commenting practices, affirming previous research findings (Taylor & Patton, 2006). One mechanical engineer said that revision was important because it required students to think, yet few of his 91 comments on one student's paper explicitly called for re-thinking. Instead, he corrected factual and mechanical errors. Another consequence was offloading the activity onto teaching assistants, many of whom had little authority, training, or experience in commenting. Even if a teaching assistant's intentions and intuitions were good, students often failed to take the teaching assistant's comments seriously because the teaching assistant did not carry the same authority or ethos. But the overarching negative force impeding ideal commentary was a failure on the part of administrators and department chairs to admit that rhetorical knowledge requires reinforcement just like mathematical knowledge does. Lip service was given at both institutions to the importance of writing, but too little was done to reward teachers for teaching writing well, to cap enrollments, or to offer students multiple opportunities to write to reinforce key principles in slightly different contexts.
Writing instruction needs to be integrated and reinforced throughout the curriculum—and it simply was not at these two institutions. All of these negative forces need to be addressed, but addressing them requires long-term effort, most of which is extra-textual. Under current conditions, even the best intentions are unlikely to be fully realized.
Another force affecting ideal commentary is experience. As Luis Godoy (2006) argues about peer review, there is a learning curve for making constructive comments, even comments for peers submitting articles to peer-reviewed journals. Often novices begin with more directive, editorial commentary, while experts tend to offer more facilitative, readerly commentary. For novices to develop into experts, they need opportunities to practice.
Similarly, some forces that affect student reading of faculty comments can be understood developmentally (Lillis, 2003; Patton, 2011; Tardy, 2009). Before students are socialized into a discipline or activity system, they may be inclined to see written feedback literally, out of situated context, and independent of a longer-term process of professionalization. Understood developmentally, it is somewhat natural that students have a narrower and more superficial motivation for revising papers early in their careers. Paul Prior finds that procedural revision—just following a sequence of activities and instructions—typically precedes meaningful revision, although he refrains from overstating the case, noting that it is important to understand human variation and the range of socio-historic forces that complicate any stage or developmental model of writing (1994). In the short term, such literal-mindedness and superficial engagement might be seen as negative; however, in the long run, most students mature and develop more intrinsic and more socially and cognitively complex reasons for revision. They also simply become more familiar with the genres that, for many of them, were completely new not long before. Many of the students in this study reported things like this: "I've never written a real technical report before."
For students to develop and mature over the long run, once again, they need to have multiple opportunities to purposefully engage in writing. If an institution requires one or two standalone writing courses only and does not reward faculty for assigning some writing throughout the curriculum, then it is less likely that students will develop their potential as writers. Other negative issues that arose in student interviews are listed in Table 5, but many of them are not so negative if understood developmentally.
If the professor is inviting thinking, then he or she is exposing students to the unknown and may be letting students risk being wrong as they attempt to respond. The challenge for the professor is to push the students to the edge of their competence, to their zone of proximal development, without paralyzing them, but finding that edge requires getting to know the students.
All of this suggests that some of the "negative" forces affecting faculty commentary, student (mis)readings, and subsequent revision are negative only if students do not have an opportunity to experience writing in varied contexts with varied purposes and audiences. When students are exposed to a range of writing situations, then they can do the rhetorical work of making inferences—why this person directed me to do this and why that person directed me to do that. In this study, students at both institutions had more than two standalone writing courses; however, students would have benefited from a much richer writing culture throughout their courses of study.
Interviews suggested that those students who had encountered similar directive advice in slightly different contexts did ask themselves how and why they should apply the directive advice in the given situation. Perhaps they had heard before that a diagram typically is labeled in standard ways. Why, then, does this diagram need to have a particular feature highlighted to be understood by the audience? To really understand what is variable and what is not, students need a range of opportunities to practice. However, for students to have an optimal number of writing contexts, administrators need to take more seriously the need to cap class enrollments and spread the responsibility of teaching writing throughout a faculty and to reward faculty for time invested in teaching writing.
By suggesting that we re-evaluate our profession's norms for text-based response and the potential of some directive commentary to be positive, we are not suggesting that the practices we found in this study were ideal. Much could be done to improve the effectiveness of text-based response. Most importantly, students could have had more opportunity to have rhetorical principles reinforced throughout the curriculum. Less important but perhaps more "do-able," writing consultants could invite faculty to read papers with illegible handwriting and brainstorm alternatives for commenting, including using track-changes and computerized comments. Illegible handwriting was a pervasive problem in this study, as was also the case in Taylor's research on student mis-readings of faculty comments (2011). Over and over, students confessed that they could not read some of the faculty comments. Other "fixable" problems were failure to respond in a timely manner, failure to prioritize comments, failure to qualify comments (and consequently overgeneralizing context-specific advice), and, most importantly, failure to have a long-term view of revision. All of these can be addressed to some degree in faculty development workshops and in brown bag lunches.
If we adopt a narrow, short-term, text-based perspective, this study affirms the "glass half-empty" view of faculty response and student revision suggested by numerous other studies. The collaborators concur that, ideally, written commentary should include some facilitative comments and, ideally, revision should reveal significant changes from earlier drafts of a document, and those findings clearly were not dominant in this study. Most of the written faculty response on the thirty students' papers was directive, not facilitative, and only a fraction of the revised documents revealed dramatic revision. In the short run, the written conversation between faculty and students was problematic and needs to be remedied. Faculty would benefit from learning how to design more meaningful and engaging assignments, how to provide more facilitative feedback, and how to check with students to see if students understand both the assignment and the feedback, among other things. Also, students would benefit from learning that those who do make the most productive revision and receive the best grades tend to invest many hours up front in planning and drafting the first submission. Moreover, teachers of stand-alone writing courses such as freshman English would be well advised not to over-generalize their advice, some of which is only marginally applicable to writing in non-humanistic disciplines.
So, to use the familiar Burkean metaphor of conversation (1941), the written dialogue between professors and students in this study was problematic. Neither professor nor student felt completely "heard." In this regard, the study affirms what several decades of research in composition and technical writing have suggested about gaps between ideal best practices and actual activity in the classroom.
However, if we shift the perspective from a narrow to a broad view of the "conversation," from a textual focus on faculty written comments and student written revision to a broader view of the feedback loop, one that includes many other stakeholders and layers of indirect feedback, then the picture is not nearly so grim, not as problematic. What is problematic is not directive commentary, but rather the limited opportunity students have to write, over and over again.
An activity perspective of writing reminds us that we should attend to non-textual factors affecting the production of texts. For one, important as written faculty feedback is to significant revision and growth in writing, it is not the only medium for feedback affecting revision, especially in professional disciplines such as engineering, law, nursing, education, and business. When other feedback forces are at work, there is less pressure on written faculty feedback to achieve some of the same goals. Among the other feedback forces are oral comments from other stakeholders in the activity system, including teaching assistants, clients, other professional engineers, the professor, and even peers. Even when the classroom activity is very conceptual or theoretical, faculty can complement written feedback with oral commentary, both individual and group, with examples of their own writing and growth, with examples and counterexamples of the target genre, and with engaged conversation about the problems that gave rise to the written document.
The picture is not entirely problematic from an activity perspective for another reason. Faculty often unwittingly adopt a narrow, short-term view of writing instruction, but, when prompted with a few questions, most faculty agree that learning to write is a long-term process that requires lots of varied opportunities to write in somewhat different conditions. Faculty in the disciplines can become stronger advocates for improved conditions for writing in the larger activity system if they are asked questions that make them reflect on their tacit beliefs. Learning to write takes time and involves many mentors.
To address writing instruction in the long-term from an activity perspective, more is needed than attention to pedagogy. Needed are politically and institutionally savvy writing administrators who, together with informed faculty in the disciplines, can press for more funding and rewards for the teaching of writing, partly by educating (or reminding) higher administration of the need to integrate writing into the whole academic culture, not to offer standalone writing classes only. Small classes should be the norm (National Council of Teachers of English, 2002). Additional feedback and assessment policies might be adopted, such as establishing discipline-based review boards staffed by alumni or other interested parties. If the classes and departments in which students write are also evaluated by academic peers and accrediting agencies with evidence that includes student writing, then more pressure can be brought to bear on the larger writing culture. Although others have made similar claims, it is important here to consider the implications: Textual commentary is not the only medium for stimulating thinking about writing. Textual feedback matters but may be secondary to other feedback mechanisms.
A long-term perspective of writing instruction also presses writing administrators to consider the relationships between and among designated writing classes, including first-year composition and writing-intensive courses. Although there has been considerable attention to transfer of writing skills from first-year composition to other courses, there has been less attention to what teachers of first-year composition can learn from studies such as this and from faculty in the disciplines. In this study, most (not all) writers felt there was a significant gap between their first-year composition and subsequent technical communication or writing-intensive courses. It might behoove teachers of first-year writing to learn more about writing in the disciplines and to offer less unqualified humanistic advice about writing.
That is, written feedback in first-year composition may be entirely appropriate for the assignments at hand but may be counter-productive in the long run if the feedback is not sufficiently qualified. For example, teachers of first-year composition would benefit from learning that writers in many fields outside the humanities tend to invest proportionately more time on first submissions and less time on later submissions, partly because the locus of thinking is elsewhere in the activity system, not primarily in the written documents (Patton, 2011). Haswell, too, argues eloquently that greatly contributing to the "complexity of responding" are the "complexities of disciplinarity" (2006, p. 4). So, again, teachers of first-year composition would benefit from having a more nuanced understanding of the constraints (factual and otherwise) on writing in other disciplines so that general writing advice is not overstated.
Teachers of first-year composition also might do more to help writers unpack complex assignments, to help students "learn to read" the comments, and to encourage students to communicate their confusion when they do not understand. Additionally, first-year composition teachers might do more to reinforce their written comments with oral discussion and whole-class handouts. Most importantly, they might do more to heighten awareness of making informed choices within constraints, since very few of their students' future writing situations will be unconstrained.
As other research does (Evans, 2003; Fife & O'Neill, 2001; Gottschalk, 2003; Haswell, 2006; O'Neill & Fife, 1999; Patton, 2011; Russell, 1995; Tardy, 2009), this study directs attention to extra-textual conditions that help shape the conversation about writing. In this study, some of those extra-textual conditions are dynamic and positive, suggesting that the glass for these college writers may be more than half full.
That is, the study suggests that more important than simply offering a particular kind of facilitative commentary is meeting certain conditions (offering comments in slightly altered contexts over time, exposing both choices and constraints). Surprisingly, even very directive commentary (not typically considered a best practice) can elicit rhetorical thinking. Perhaps good revisers become good writers when they are exposed repeatedly to variations within constraints and have to make rhetorical inferences.
The distressed teaching assistant quoted at the beginning of the article argued that his students were not ready to receive his written feedback. Whether or not he was right about that, we applaud the teaching assistant for considering extra-textual issues of motivation and readiness and for resisting a narrow and formulaic view of the written conversation.
Anson, Chris M. (1989). Response styles and ways of knowing. In Anson, Chris M. (Ed.), Writing and response: Theory, practice and research (pp. 332-366).
Bazerman, Charles, & Prior, Paul. (2004). What writing does and how it does it. Mahwah, NJ: Lawrence Erlbaum Associates.
Beason, Larry. (1993). Feedback and revision in writing across the curriculum classes. Research in the Teaching of English, 27(4), 395-422.
Burke, Kenneth. (1941). The philosophy of literary form. Berkeley: University of California Press.
Creswell, John W. (2007). Qualitative inquiry & research design: Choosing among five approaches (2nd ed.). Thousand Oaks, CA: Sage.
Evans, Kathryn. (2003). Accounting for conflicting mental models of communication in student-teacher interaction: An activity theory analysis. In Charles Bazerman & David R. Russell (Eds.), Writing selves / writing societies: Research from activity perspectives. Fort Collins, CO: WAC Clearinghouse, https://wac.colostate.edu/books/selves_societies/
Fife, Jane M., & O'Neill, Peggy. (2001). Moving beyond the written comment: Narrowing the gap between response practice and research. College Composition and Communication, 53 (2), 300-321.
Gottschalk, Katherine K. (2003). The ecology of response to student essays. ADE Bulletin, (no. 134-135), 49-56.
Godoy, Luis A. (2006). Differences between experts and novices in the review of engineering journal papers. Journal of Professional Issues in Engineering Education and Practice, 132 (1), 24-28.
Harris, Muriel. (1979). The overgraded paper: Another case of more is less. In Stanford, G. (Ed.), How to handle the paper load: Classroom practices in teaching English 1979-1980 (pp. 91-94). Urbana, IL: National Council of Teachers of English.
Harvey, Gordon. (2003). Repetitive strain: The injuries of responding to student writing. ADE Bulletin, (no. 134-135), 43-48.
Haswell, Richard. (2006, November 9). The complexities of responding to student writing; or, looking for shortcuts via the road of excess. Across the Disciplines, 3. Retrieved October 31, 2012, from https://wac.colostate.edu/atd/articles/haswell2006.cfm
Leki, Ilona. (1995). Coping strategies of ESL students in writing tasks across the curriculum. TESOL Quarterly, 29, 235–260.
Lillis, Theresa. (2003). Student writing as 'Academic Literacies': Drawing on Bakhtin to move from critique to design. Language and Education, 17 (3), 192-207.
Lunsford, Ronald F. (1997). When less is more: Principles for responding in the disciplines. In Sorcinelli, Mary D., Elbow, Peter (Eds.), Writing to learn: Strategies for assigning and responding to writing across the disciplines (New directions for teaching the learning, No. 69) (pp. 91-104). San Francisco, CA: Jossey-Bass.
Miller, Carolyn R. (1984). Genre as social action. Quarterly Journal of Speech, 70, 151-167.
National Council of Teachers of English. (2002). More than a number: Why class size matters. http://www.ncte.org/positions/more_than_a_number.shtml
National Universities Rankings. (2011). America's Best Colleges 2012. U.S. News & World Report. September 13, 2011. Retrieved September 25, 2011.
O'Neill, Peggy, & Fife, Jane M. (1999). Listening to students: Contextualizing response to student writing. Composition Studies, 27 (2), 39-51.
Patton, Martha D. (2011). Writing in the research university: A Darwinian study of WID with cases from civil engineering. Cresskill, NJ: Hampton Press.
Poe, Mya, Lerner, Neal, & Craig, Jennifer. (2010). Learning to communicate in science and engineering: Case studies from MIT. Cambridge, MA: MIT Press.
Prior, Paul. (1994). Response, revision, disciplinarity: A microhistory of a dissertation prospectus in sociology. Written Communication, 11 (4), 483-533.
Prior, Paul. (1998). Writing/disciplinarity: A sociohistoric account of literate activity in the academy. Mahwah, New Jersey: Lawrence Erlbaum Associates.
Russell, David. (1995). Activity theory and its implications for writing instruction. In Thomas Kent (Ed.), Post-process theory: Beyond the writing-process paradigm (pp. 80-95). Carbondale, IL: Southern Illinois University Press.
Rutz, Carol. (2006). Recovering the conversation: A response to "Responding to student writing" via "Across the drafts." College Composition and Communication, 58(2), 257-262.
Smith, Summer. (2003a). The role of technical expertise in engineering and writing teachers' evaluations of students' writing. Written Communication, 20, 37–80.
Smith, Summer. (2003b). What is 'good' technical communication: A comparison of the standards of engineering and writing teachers. Technical Communication Quarterly, 12(1), 7-24.
Sommers, Nancy. (2006). Across the drafts. College Composition and Communication, 58 (2), 248-257.
Straub, Richard. (2000). The practice of response: Strategies for commenting on student writing. Cresskill, NJ: Hampton Press.
Straub, Richard, & Lunsford, Ronald F. (1995). Twelve readers reading: Responding to college student writing. Cresskill, NJ: Hampton Press.
Tardy, Christine M. (2009). Building genre knowledge. West Lafayette, IN: Parlor Press.
Taylor, Summer Smith. (2011). "I really don't know what he meant by that": How well do engineering students understand teachers' comments on their writing? Technical Communication Quarterly 20 (2), 139-166.
Taylor, Summer S., & Patton, Martha D. (2006). Ten engineers reading: Disjunctions between preference and practice in civil engineering faculty responses. Journal of Technical Writing and Communication, 36, 253–271.
Vygotsky, Lev S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
 I had the good fortune to collaborate on two projects with Summer Smith Taylor, one from 2003 to 2006 and another from 2009 until her untimely death. Although we lived hundreds of miles apart, we communicated often, especially when setting up the second study. Summer had just emailed me files with transcripts of her interviews and most of her study data when suddenly our correspondence ended. Soon afterward, I received an email from her chair, who had the unpleasant task of informing me and others that she was in intensive care and would not be teaching the following semester. Summer died on February 15th, 2011, from complications of Acute Respiratory Distress Syndrome. I was still transcribing my interviews, and we had not begun asking ourselves what our data meant. I couldn't even verify that I had everything she intended to send me. Here I wish to give Summer all due credit without, on the other hand, putting words in her mouth that she might not have voiced. I have tried my best to honor her perspective to the extent I could know what it was. --mdp
Patton, Martha Davis, & Taylor, Summer Smith. (2013, May 21). Re-evaluating directive commentary in an engineering activity system. Across the Disciplines, 10(1). Retrieved from https://wac.colostate.edu/atd/articles/patton_taylor2013.cfm