The Gen/ReGen Log: Refining the Rhetoric of Structured Prompts

Brian Gogan
Western Michigan University

This assignment offers students an exploratory space—literally, a tabular document accompanied by an iterative process—within which they can refine the rhetoric of structured prompts and experiment with prompt engineering. As students focus on discrete rhetorical concepts, honing prompts to yield better results, they further explore the relationships between production and reception, intent and interpretation. The assignment may be adapted for use in a single class or iterated across a weeks-long portion of a course to allow for differentiated learning. Weeks after having completed their last Gen/ReGen Log, my students still reference this assignment. Some of my students—themselves college or high school educators—report success using Gen/ReGen Logs with their own students.

Learning Goals

Students will...

  • Craft GenAI prompts that foreground specific rhetorical concepts (e.g., diction, detail, imperatives, persona, style, sentiment, audience, purpose, genre, context, constraints)
  • Compare the intent behind their prompting with the completion produced by the GenAI tool
  • Evaluate prompt completions for effectiveness by applying expertise from a job, field of study, hobby, or passion  
  • Develop awareness of structured prompting strategies

Original Assignment Context

This assignment originated as a recurring assignment that spanned the first seven weeks of a fall 2023 course titled “AI Writing: Prompt + Response,” which enrolled undergraduate, graduate, and non-degree-seeking students. Our first iteration of the assignment focused on diction and detail.

Materials Needed

  • Any generative artificial intelligence platform capable of completing a prompt and outputting text, image, or other content
  • A blank copy of a Gen/ReGen Log (columns for intent, input/prompt, output/completion, interpretation/response; rows for sequential attempts)
  • A sample of a completed Gen/ReGen Log (optional)

Time Frame

~30 minutes for focused versions that address one rhetorical concept and are taught in class

~7 weeks for extended versions that address more than one rhetorical concept and are assigned as homework across classes


In early 2023, a number of my colleagues at Western Michigan University were discussing ways to engage with GenAI platforms before discounting their value. From these discussions, a new university initiative, called AI @ WMU, and a new Department of English course offering, titled “AI Writing: Prompt + Response,” emerged. I became the instructor of record for the new “AI Writing” course.

The course sought to enhance what we might call prompt engineering pedagogy by placing the field of rhetoric and writing studies into conversation with GenAI technology. In particular, I sought to connect rich research traditions on writing assignment prompt design, rhetorical theory, and feedback literacy with nascent discussions of prompting—hard and soft—and of prompt engineering, optimization, and fine-tuning. Although I was sure that rhetoric and writing studies had something to offer GenAI prompting, and I was willing to bet that this connection would make for a provocative course, I was much less sure of the audience for the new course. My colleagues were likewise unsure.

Considering that this course was brand new and focused on a newfound technology, my colleagues and I decided to cross-list the “AI Writing” course so that undergraduate students, graduate students, and even non-degree-seeking students could enroll in it. We scheduled the course to convene once weekly, deciding to offer it with maximum flexibility and deliver it in the HyFlex modality—our department’s first such offering. This modality presented students with the choice to participate either in person through a face-to-face class option or online through a remote synchronous class option.

As a result of our choices in course listing and course modality, the “AI Writing: Prompt + Response” course welcomed an incredibly diverse range of 27 students. Students participated from as far east as Maine and as far west as New Mexico. The class enrolled both graduate students and undergraduate students, as well as tenure-line faculty members, staff members, and non-degree-seeking working professionals. Some of the students had been teaching college-level classes for 20 years. Some of these students had not taken a college-level class in 20 years. And, 20 years ago, some of these students were not much older than one year old.

My challenge was to design the “AI Writing” course and its assignments so that all students would find value in their educational experiences and be able to apply coursework to their individual pursuits. Consequently, I decided to split our fourteen-week semester into roughly two seven-week segments. The first segment invited students to explore prompting, while the second segment tasked students with designing a research project that studied an applied case of prompting.

The Gen/ReGen Log assignment provided a framework for the first seven weeks of the class, when the main objective was for students to explore prompting in depth. The log is something like a tabular journal or a structured field notebook, and it serves much the same purpose as some of the journals profiled by Toby Fulwiler—namely, to add “a personal dimension to keeping records” and to give students “a place to make connections between one observation and the next” (1982, p. 22). The “Gen” of the log’s title signals its use to record the content generated by one prompting attempt. While the “ReGen” of the log’s title is a playful nod to the regeneration function on ChatGPT or the “Retry” button on Claude 2, the “ReGen” of the log’s title also signals that exploration of prompting will take many repeated attempts. The assignment, therefore, involved iterative prompting.

I folded the Gen/ReGen Log assignment into the opening seven weeks of the “AI Writing” course and used the assignment variably. Early in the course, I would share my own computer screen and demonstrate the log process by crafting my own prompt and describing my intentions behind that prompt. Put differently, I would explain what kind of response I sought from the GenAI platform, or describe the completion that I intended to receive. I would, then, prompt a GenAI platform, read the completion, and interpret the generated text all in front of my students.

At other times, I asked small teams of students to work collaboratively to complete a brief log as an in-class activity. These prompting teams would generate, regenerate, and regenerate again, attempting many related prompting strategies and discussing whether or not they had landed on a more generalizable, perhaps even transferrable, prompting strategy.

More strategically, though, I assigned a Gen/ReGen Log as homework during five of the first seven weeks of the “AI Writing” course. Although each log was structured identically, the rhetorical focus of each log shifted: from week to week, our class focused on different rhetorical dimensions of prompting, and the Gen/ReGen Log assignment allowed each student to experiment with those concepts in their prompting outside of class. Our conceptual clusters included:

  • sentiment and persona
  • directives and questions
  • purposes, context, and constraints
  • genre and pattern
  • audience and voice

On account of these different rhetorical foci, each log felt new and different from its predecessor. Perhaps most interestingly, each student’s logs seemed to layer upon one another in additive fashion. Students carried forward their prompt engineering techniques and their rhetorical knowledge to the next Gen/ReGen Log. The result was that the logs became progressively more complex, more rhetorically sophisticated, and, ultimately, more effective.

During class meetings when a Gen/ReGen Log was due to be submitted for evaluation, students would share prompting strategies with one another, approaching these strategies as pieces of advice that might be useful to other prompt engineers. Frequently, students would post these pieces of advice to a digital whiteboard application. The activity would most often culminate in a whole-class discussion of the prompting strategies alongside the previous week’s rhetorical foci.

Deep into the term, weeks after having completed their last Gen/ReGen Log and ensconced in their research studies, the “AI Writing” students still reference their Gen/ReGen Logs. Some have even mobilized versions of these logs as part of their studies—as a method to gather data on how study participants interact with particular prompting strategies. More encouraging still are the “AI Writing” students who are, themselves, teachers—in college or in high school—and who are assigning their own students Gen/ReGen Logs.



This assignment offers you a chance to explore GenAI prompting, as applied to an area in which you have some expertise. For this assignment, expertise should be understood as a kind of knowledge derived from your experience: Expertise is something that you know how to do well because you have studied it or, better yet, done it before. Your choice of an expertise area could connect to your job, your field of study, your hobby, or your passion.  

Your exploration of prompting will focus on two ways in which you use language to make meaning in your expertise area: (1) your choice of words used to construct a communication; and, (2) the level of detail you include in a communication.

One persistent challenge of any communicative act is matching your word choice and level of detail with your audience. This assignment allows you to explore tactics for best communicating about your expertise area with what John R. Gallagher calls an “algorithmic audience” (2017, p. 26). More specifically, this assignment asks you to log your engagement with a GenAI platform using a five-column table. As you complete this Gen/ReGen Log, ask yourself: How does expertise, as represented in word choice and detail, influence the effectiveness of GenAI prompting?


To complete this assignment, you will need a blank, digital copy of the Gen/ReGen Log and access to an artificial intelligence platform that can generate text, such as ChatGPT or Claude 2.


After completing this assignment, you should be able to:

  • Craft GenAI prompts that foreground the rhetorical concepts of diction and detail
  • Compare the intent behind your prompting with the completion produced by your chosen GenAI platform
  • Evaluate prompt completions for effectiveness by applying your expertise in a professional domain, field of study, passion area, or hobby
  • Develop your awareness of structured prompting strategies


To complete this assignment, follow these steps.

  1. Identify an area (job, field of study, hobby, passion) in which you possess expertise.
  2. Choose a particular task or activity in your area of expertise, focusing on how GenAI might produce content related to this area.
  3. In the “Prompt” column of your Gen/ReGen Log, write a prompt that engages your chosen GenAI platform with that task or activity in your area of expertise.
  4. As you are crafting your prompt, complete the “Intent” column of your Gen/ReGen Log, explaining the decisions behind your prompt engineering choices and what completion you expect to receive.
  5. Enter the prompt into your chosen GenAI platform.
  6. Copy-and-paste the completion in the “Completion” column of your Gen/ReGen Log.
  7. Interpret the completion, drawing upon your expertise to assess its effectiveness and note its deficiencies. Write this interpretation and these notes in the “Interpretation + Notes” column.
  8. Repeat this process, again and again, trying other prompts and varying choices of diction and levels of detail. For each attempt, complete a row of the log by entering the required information into each column associated with the row.
  9. Add attempt rows to your log, as needed.  
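
If you prefer to keep your log in a spreadsheet-friendly format, the iterative process above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration: the `generate` function is a stand-in for whatever GenAI platform you actually use (replace it with a real API call), and the file name, column names, and sample prompts are illustrative assumptions, not part of the assignment.

```python
import csv

# Stand-in for a call to your chosen GenAI platform (hypothetical).
# Replace this with an actual API call to, e.g., ChatGPT or Claude.
def generate(prompt):
    return f"[completion for: {prompt}]"

# Each attempt is one row of the Gen/ReGen Log. Here, the second attempt
# varies the diction and level of detail of the first.
attempts = [
    {
        "intent": "Get a beginner-friendly overview; plain diction, low detail.",
        "prompt": "Explain how to tune a guitar.",
    },
    {
        "intent": "Add expert diction and more detail to sharpen the completion.",
        "prompt": "Explain how to tune a six-string guitar to drop D, "
                  "using an electronic tuner, for an intermediate player.",
    },
]

with open("gen_regen_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["attempt", "intent", "prompt", "completion",
                    "interpretation_notes"],
    )
    writer.writeheader()
    for i, row in enumerate(attempts, start=1):
        completion = generate(row["prompt"])
        writer.writerow({
            "attempt": i,
            "intent": row["intent"],
            "prompt": row["prompt"],
            "completion": completion,
            "interpretation_notes": "",  # filled in by hand after reading
        })
```

The "Interpretation + Notes" column is deliberately left blank by the script: interpreting the completion against your own expertise is the part of the log that only you can complete.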


The audience for your Gen/ReGen Log is primarily you. You’ll use this log to develop an awareness of how prompting that uses a certain choice of words and a particular level of detail may differ from prompting that uses other word choices and selects other details.

Secondary audiences for your log include your classmates and your instructor, all of whom might study the log to trace your prompting and attempt to discern effective tactics or strategies. 


Your Gen/ReGen Log will be assessed pass/fail for completion. No set number of attempts indicates completion of this assignment; rather, the expectation is that you will spend two hours before the next class experimenting with different prompting strategies. Completed attempt rows should include thorough entries for each column of the log.

Gen/ReGen Log: Focus on Diction and Detail

Attempt | Intent | Prompt | Completion | Interpretation + Notes
--------|--------|--------|------------|-----------------------
1       |        |        |            |
2       |        |        |            |
3       |        |        |            |

(Add attempt rows as needed.)

This assignment emerged from helpful conversations with members of the Teaching and Learning team within WMUx, Western Michigan University’s innovation hub: Dr. Gwen Athene Tarbox, Professor of English and Director of the WMUx Office of Faculty Development, Alyssa Moon, Associate Director of Instructional Development and Design, Megan Hess, Senior Instructional Designer, and Anders Christensen, Instructional Designer.

The assignment extends the longstanding use of learning logs and journals in the writing classroom for use with emerging GenAI technologies. For example, see Toby Fulwiler’s “The Personal Connection: Journal Writing across the Curriculum” in the 1982 collection Language Connections: Writing and Reading Across the Curriculum, which Fulwiler edited along with Art Young.

Fulwiler, T. (1982). The personal connection: Journal writing across the curriculum. In T. Fulwiler & A. Young (Eds.), Language connections: Writing and reading across the curriculum (pp. 15-31). National Council of Teachers of English.

Gallagher, J. R. (2017). Writing for algorithmic audiences. Computers and Composition, 45, 25-35.