The recent influx of electronic media into the traditional humanities curriculum has provided a plethora of new opportunities for "making knowledge" in the way we teach, the way we publish and share our ideas, and the way we perform "service" to the academy at large. I expect to have some opportunity to discuss these opportunities, and how they differ from the traditional ones, in the other two questions I will answer this week; of course, I will also be happy to discuss them face to face.
Answer Deery/Zappen | Answer Porush
Exam Questions | Introduction
However, for the purposes of this response, I am going to posit this "plethora" as true; as the question states, questions are being raised about how to value such work in the academy, so I believe I am safe in assuming that the work is at least viewed as somehow different. If the work is, in fact, "different," then perhaps the old standards of evaluation do not apply; this is a slippery slope away from recognition for the tenure-track "technorhetorician" (the term is Eric Crump's), however. If we cannot slot our work into the traditional valuation standards, there is some likelihood that it will simply not be valued at all. (This point is discussed at length in the Spring 1997 _Kairos_ Coverweb, various authors, forthcoming.)
Therefore, I think it is vital that we re-imagine tenure/promotion (T/P) documents as documents for rhetorical invention. By "rhetorical invention," I am referring to the process of creating knowledge through argument (in the classical sense of the word); rhetorical invention is, as LeFevre points out in _Invention as a Social Act_, a collaborative effort engaging the knowledge of a discourse community in new ways. It is precisely this definition that not only allows but requires that we treat T/P documents as inventional.
Most department-level T/P documents -- I have looked at about 30 -- are in the habit of *weighting* the expectations the community (the department, I suppose also representative of academia as a whole) presents to the prospective member. For instance, the documents at Bradley University and Kent State University both explicitly state that, while all three are necessary, excellence in teaching is valued more highly than consistent publication, which in turn is weighted more heavily than that most nebulous of requirements, "service to the academy." Since these expectations are weighted, it is critical that technorhetoricians be able to explain and defend what they are doing in terms of these labels.
The classic example of how this can be problematic is this: last year, three junior faculty members at different universities performed what is nominally the same task of electronic scholarship: building a "homepage" for the department on the World-Wide Web. One of these faculty members was granted a course release in return for the work; another was told it was "department service"; the third was told to include it in the tenure dossier as a "non-refereed, invited publication." Similar disjointed approaches to "counting" electronic scholarship have been noted in, for instance, design and upkeep of a computer classroom (teaching? service?); leading a national MOO-based colloquium later archived and available on the WWW (presentation? publication? service?); and (ahem) editing a web-based academic journal which advertises itself as "peer-reviewed" but means something quite different from the standard print publication.
If technorhetoricians want to get traditional recognition leading to tenure and promotion (another debate entirely), then it is our responsibility to learn the language of the document representing our discourse communities (departments) and attempt to explain what we are doing *on the terms of those already in the discourse community.*
For this reason, I have found myself drawn to Porter's (1992) introduction of the "forum analysis" as a tool for analyzing and perhaps entering into a discourse community. The forum analysis was designed to provide a *heuristic* for prospective members of a discourse community. It borrows from Foucault's earlier "archaeological analysis" in that it encourages a reader/author to approach a text as a set of discursive practices rather than as directed at some imagined "real reader" -- it is in this way that the forum analysis differs from more standard, traditional audience analyses. Engaging a forum analysis means examining a text to discover the terminology that is most rooted, seemingly most "solid," within the boundaries of the discourse community which the text represents. The prospective contributor to this community identifies and then questions the very foundations of the communal ethos as represented textually; Porter stands with Foucault in believing that we cannot understand the future of a discourse community, or how we may be able to contribute to it, without first questioning and understanding its birth, past, predispositions and presumptions, and how that history intersects with the present and potential futures.
Examining the standard, accepted terminology (we might suggest the term "topoi") of a community can allow us to identify both the boundaries those terms delimit, and the gaps they allow -- this latter means the kinds of knowledge perhaps presumed by the community but not yet articulated by it. Articulation of this knowledge in new ways -- filling the gaps in the discourse -- is the "entry point" of the prospective member of the community.
Unfortunately, even in his book, Porter's examples of forum analyses are all performed on discourse communities to which the analyst already belongs; that is, there is a mode of discovery engaged, but not toward the goal of joining a community. The only other published instance of a forum analysis (Porter admits he does not know of any either) is Berkenkotter's (1991) analysis of the journal Reader.
This does not in any way negate the usefulness of the tool; in fact, it is Porter's engagement of classical rhetorical terminology throughout the book that led me to re-think "forum analysis" precisely as a form of *topical* analysis. (So, the short answer to "C" in this question is "yes, it is appropriate" -- here I will demonstrate a particular way this is true.) I have already noted above that it is reasonable to think about "deep terms" of a community as their "topoi"; now I will briefly discuss the way I am conceptualizing "topoi" and how it relates back to the forum analysis, particularly in the case of analyzing T/P documents on technorhetorical terms.
Topoi are, from tradition, conceptual starting places; they represent communal predispositions. According to Perelman, the topoi of a community are rooted in and reflect the ethos of the community. They are preformed (and in a sense performed) arguments which, due to their communal grounding, can be (to engage the electronic metaphor) cut and pasted from one situation to another. As Lanham has noted, of course, repeating an argument without changing the object repeated nonetheless changes it intrinsically, based on the context in which it is presented; therefore, topoi are (as above) *starting* places, but in no way should they be viewed as *conclusive.* Lauer has posited that the original Aristotelian topoi may have served any or all of three purposes -- they may have been useful mnemonics; they may have been epistemic; or -- and most useful to this exercise -- they may have been heuristic, that is, shared communal tools for creating probable knowledge. (This relates directly back to the presented definition of rhetorical invention.) This concept of topos-as-heuristic is what makes Porter's forum analysis so useful in the context I am creating.
Topoi are engaged in both the act of reading and the act of writing (or, perhaps, on both sides of the one act); we bring our predispositions to the reading, and we must recognize the predispositions which have entered into the act of writing the document at hand. However, when we are first coming to a discourse community, it is clearly the act of reading that is (or should be) foregrounded.
Given this, it seems it might be most appropriate to do a "reading analysis" of a document, such as Brent's "Dialogic Criticism" developed in his "Reading as Rhetorical Invention." This approach evaluates a series of documents and what rhetorical approaches worked well and why; however, it is nearly impossible to perform this type of analysis on a single top-down text like a T/P document. Brent has suggested to me that this kind of approach might be interesting and useful, but to do so we would need all drafts of a T/P document, and some record of the memos or conversations that went into re-creating the drafts; the document may be deeply intertextual (as all texts are), but the "dialogic" aspect is (obviously) hidden from analysis. (I would like to say that I am currently developing a controlled study for my dissertation which will use parts of Brent's work interacting with parts of Porter's; I will be happy to discuss this face to face.)
So if we are left with only one document to analyze as supposedly representative of a discourse community, dialogic criticism is not possible. However, if we understand (as above) that the initial reading of a document is not "reading to respond" but rather "reading to analyze and understand," we are brought back precisely to Porter's forum analysis. A forum, according to Porter, is a "trace" of a discourse community, one of its texts (not always written, but in this case, obviously so); a forum analysis demands that the writer begin by "listening to the audience" to determine how and why they say what they say and do. Presuming that the junior faculty member wants to "write herself into the community" (by assembling a defensible tenure portfolio), then she must begin by reading the singular text presented.
To this end, a T/P document can be treated precisely as a list of topoi (or commonplaces; the terms are not always used interchangeably, but I am doing so in this instance) which a prospective member of the "academic humanities" discourse community must read, understand, and use as a basis for presenting her own arguments (new knowledge) into the community's lexicon. This task is especially important if the "new knowledge" does not easily fit into the pre-established community ethos; in this instance, for example, bringing a dedication to electronic scholarship to a traditional humanities department. Only when the new rhetor has learned the "topoi" and is able to re-articulate her knowledge back to the community in ways translatable, accessible, and understandable to the "established" members of the community can she be sure of being heard, and (perhaps) accepted.
The one problem with treating a T/P document as a forum/trace, and attempting to analyze its keywords/topoi, is that those department-level documents I have seen have been presented as authorless and authoritative; it is not truly possible to begin answering the questions Porter says are key: who writes? who is given voice? what are their credentials? Furthermore, this anonymous author construct allows the T/P document to build an "ideal reader" -- the Eager Junior Faculty Member. (Ong says the writer's audience is always a fiction; in this presented-as-authorless case, it seems more like the writers' audience is an impossible fantasy!) Of course, it is the responsibility of the reader to decode and respond to the document, not the responsibility of the community to make it easy and accessible to do so.
Fortunately, there has recently been an influx of "meta-documents" into the academy, those sponsored and authored by organizations such as MLA, CAA, NCTE, CCCC, and others, which purport to be documents which departments can reference -- "guidelines" -- in order to make decisions about how to count electronic scholarship.
I have spent some time with three of these documents (attached) -- the recently-published MLA document, the year-old CAA document, and the most recent draft of the currently unpublished CCCC/NCTE effort. (It's interesting to note, I think, that this collaborative effort is unofficial, and, given the slightly but importantly varied audiences of the two groups, perhaps somewhat inappropriate.) I have found that the three documents all share in common two characteristics: 1) they identify by name the actual authors of the document, and in two cases even provide contact information; and 2) all three strictly adhere to the three "special topics" of T/P documents -- teaching, publication, and service.
Other than these similarities (and even these are slightly different, as I will show), the three documents take wildly different approaches to completing the presumed task at hand; they all share in their titles the word "Guidelines," yet in fact only one of them truly begins to present any. In examining these three documents I developed a shorthand approach to forum analysis (due to the time constraints) in which I attempted to answer the following questions (all of which are representative of Porter's original heuristic):
Time permitting, I will examine all three documents. First, a brief compare/contrast of them.
The MLA document seems to have as its primary approach "departments should re-write their T/P documents in order to recognize electronic scholarship," apparently leaving the specifics to the individual department. While this is a defensible approach (sort of a Republican "power in the hands of the local authorities" mantra), I have seen at least three instances where the department decided to address the need to alter their T/P document by *attaching the MLA Guidelines* -- which essentially accomplishes nothing!
The CAA document is a brilliantly-authored argument from pathos (upon my first reading, I admit I was laughing so hard I was crying) which serves to inform administrators about just what electronic scholarship in the arts is all about, without actually providing any specific guidelines on how to recognize it.
The CCCC/NCTE draft(s), I would argue, begin to make a concerted effort to employ the traditional Aristotelian topos of "Comparison" (similarity, difference, degree) in such a way that, by engaging analogy, T/P boards might be able to understand specific instantiations of electronic scholarship. While this last is probably the best approach of the three -- I hope to demonstrate why -- it is also the least developed, as the document is still in unpublished draft stage.
Each of the documents presents an authorship -- or so I thought, until I returned to the copies Karen has provided me! The MLA document I had been working with was printed out from the WWW and contained the names of nine co-authors (at the end, what would have been p. 8). I note with interest that in this copy, the one printed in _Profession_, all those names are removed. Therefore, we cannot answer the question "what are their qualifications?" The NCTE/CCCC document lists two authors, Rebecca Rickly and Traci Gardner, both of whom I know well and know to be qualified to approach the task; however, neither one is identified by affiliation (unless you count the second part of their email addresses), and there is mention in the document of several participants in collaborative sessions who are not named. Later (this is a web document, so I define "later" as "after the front node") Eric Crump and Judi Kirkpatrick are also listed (p. 3), with affiliations, and all the people listed on this website have direct contact information. (An aside: Karen has asked for clarification of the abbreviations at the beginning of this document: ITC is the Instructional Technology Committee, a subgroup of NCTE; CCCCCCC, or 7C, is the CCCC Committee on Computers in Composition.) The CAA document lists four authors, without contact information (p. 12); since this group is outside my personal domain of knowledge, I cannot speculate on their qualifications. I do note that the last one listed, Sokol, has a title nearly identical to Gardner's and Rickly's. The point of listing these names is simple: we can engage a truly archaeological/forum analysis much better when we can name and engage the authors and their backgrounds and predispositions.
All three sources list "external links" to legitimize their work. Interestingly, the MLA authors are apparently comfortable enough with their own ethos (again, it is presented anonymously) that the only document cited is a previous MLA document, published in 1993 (p. 7), to which this document is named as a supplement. The NCTE/CCCC document has links (literally, since it is a web document) to the MLA document, which sits above it on the web, perhaps adding a bit of hierarchy to the documents' acceptance. And the CAA document announces that it is "Unanimously adopted by the CAA Board of Directors." (p. 9) However, I am left wondering whether this organizational approval might be counter-productive. I have the impression that the CAA is akin to the ACW (Alliance for Computers and Writing) to which I belong, which is seen somewhat skeptically (and perhaps rightfully so) in the larger academy as *biased* toward the subject of electronic scholarship. A T/P statement from the ACW would have, I suspect, somewhat less implicit value in the profession than would a statement from NCTE or MLA, which is not specifically geared toward technorhetoricians.
The MLA document explicitly nods to the three T/P topoi early on, noting that "Departments should ensure that computer-related work can be evaluated within their tenure and promotion procedures" (p. 8) and providing a bulleted list of the kinds of work that need to be assessed. But again, nowhere in the document do the author(s) present "Guidelines" (as the title would suggest) for doing so.
The one place where the MLA document seems to be ready to make a significant point -- "Documentation of projects might include ... reviews and citations of work in print or in electronic journals" (p. 8) -- is quickly negated. Where this phrase seems to begin collapsing the perceived boundary between the print journal and the electronic journal that exists in academia (which would indeed be a significant "guideline"), two paragraphs later the document lists 17 journals from whose editorial boards departments might ask for peer review or expert advice in evaluating electronic scholarship. All 17 are *print* journals, implying again that the technorhetorician must be capable of defining her work in terms of print standards. Now all she needs is some guidelines on how to do so; this document provides none, and we are back where we began. (I also find it somewhat unsettling that the point of the document seems to be "Go ask people if it's good. Here are some lists of people you might ask.") The other major point of the MLA document is that the candidate must be prepared to discuss her work in terms of what theory informs it, why it's useful, and with evidence of rigor and intellectual content (bulleted list, p. 8); this is, of course, no different from what any academic, in any milieu, would be expected to do, electronic or otherwise. At the very least, however, this does support the idea that the work must be "translated" into terms the traditional academy can understand. This is something that the NCTE/CCCC document starts to do, and to do very well.
The document (or, more realistically at this point, the series of nodes which will eventually make up the document) never explicitly lists the three T/P topoi (as MLA does), but the recognition that they exist and must be worked within is clearly implied throughout. The authors of this document have clearly succeeded as readers in decoding the T/P document genre for its presumptions, and are responding by using the same starting places to enter the discourse with new knowledge.
The very best examples of this are places where they utilize analogy as a starting point -- a place where a topos of the T/P document is *read* to include certain things, and then the new knowledge of the technorhetorician is in turn *written* into the conversation via comparison and contrast. See for instance p. 3 (this is the first example available; there are others), where under the bullet "Create evaluative paradigms ..." the authors claim explicitly, "Non-traditional forms of publication and research should be presented as extensions of the traditional system for sharing findings in the academy." Then, if we can recognize that participation in a professional conference fits within the T/P topos "Service to the Academy," and the authors of this document offer the specific analogy that "listserv participation and MUD discussions should be compared to ... [participation in] a professional conference," that is a precise and useful starting point for offering guidelines to those who wish to evaluate electronic scholarship. Significantly, they also note that "the comparison should also explore the differences." A MOO colloquium, as mentioned earlier, is very like participation in a conference, but may also be logged and made accessible via WWW, which is more publication-like. We have a useful conceptual starting point for evaluating the scholarship.
Later in the document draft(s) (p.5) the authors begin to offer a more comprehensive list of activities that will need categorizing and analogizing. Unfortunately, since this is an early draft, the "Defining Terms" section is not yet developed.
I see that my two hours have ended; I am sorry that I will not be able to discuss the CAA document, which in many ways is the richest of the three, though partly because it is the least effective. I will be happy to use that document as a point for discussion in the oral defense. The meta-document authors have all recognized a need for guidelines; they have all stayed within the premise of the three T/P topoi. But only in the one which begins to question the terms, and to use them as starting points for re-defining the "new knowledge" technorhetoricians bring to the academy -- to see the T/P demands as *topoi* and the reading of the T/P document as what I am calling a topical forum analysis -- only in that case do we actually begin to see useful translations and guidelines for evaluating electronic scholarship in the humanities.
As a technorhetorician, I am perhaps naturally drawn to treating document analysis in a playful, heuretic way; I would be tempted, for instance, to play with acronyms like ToPoI (Tenure, or Promotion, or Instructorship) or ToPoS (Tenure, or Publication, or Service) ... but that would not be appropriately engaging the language of the discourse community at hand. If I want to gain admittance to the discourse community "academia," I am bound to the traditional topical analysis as presented above, and may find it useful to utilize a version of this analysis -- the forum analysis -- in learning to re-articulate my "new knowledge" in ways acceptable to the standards presented by the trace of the community that is the T/P document. This is traditional rhetorical invention; it is collaborative and communal in that it forces me to learn a language precisely so I can introduce new knowledge into the community -- identify and fill some of those Foucauldian gaps. I can engage the ethos of the community by studying and questioning its topoi. If I am successful in doing so, I have the opportunity to be accepted as a member of the discourse community.