Introductory Activity for Generative AI

Josh Anthony
Gonzaga University

The basic mechanics of this activity are simple: choose a reading from class and try to manipulate the LLM into recreating it as closely to the original as possible. As a rule, I generally try to produce and run activities that are modular, interchangeable, and adaptable. This activity is no different. In one course, we used the prompt and only the prompt—no scaffolding. In other courses, we scaffolded the activity: we discussed how LLMs function, explored how prompt engineering might work, and worked as a class to construct the prompt we thought would produce a suitable outcome.

For students, the limitations of the LLMs become clear almost immediately. In non-scaffolded classes, students will often attempt to put in the name of a text and instruct the LLM to recreate it—often with hilarious results. Even with scaffolding and carefully crafted prompts, students begin to recognize the limitations. These might involve length, depth, or even the understanding of an abstract concept.

The most unexpected outcome was how deeply the students had to re-engage with content from the class. Of course, we have had discussions about voice, style, tone, message, etc. But, in this case, they felt compelled to clearly articulate these ideas to the LLM.

Learning Goals

  • The original, stated goals were to understand how prompt engineering works and how LLMs respond to input. After running this activity a few times, a hidden goal emerged: to re-engage with content from the class in a radically different way.
  • Introduce students to LLMs and AI generative software.
  • Discover how the LLM understands and responds to prompts.
  • Improve our understanding of terms like tone, style, or voice.

Original Assignment Context: Writing 101 course, mid-semester

Materials Needed: An accessible LLM text generation program (e.g., ChatGPT)

Time Frame: Modular, but generally run as a 50-minute activity


The idea for this assignment came about while reading and discussing LLMs with friends and colleagues. At the time, there was discussion about downloading LLM software to your own computer, feeding it your own interests, and then, so the idea went, receiving content perfectly curated for your tastes.

This had me considering what current LLM software could really produce that I might want to engage with. On my own, I began experimenting, engineering prompts to coax the software into creating something worth reading. At some point, with a little frustration, I simply prompted: “Write Woven by Lidia Yuknavitch,” a text I use in class.

Obviously, the software failed to reproduce the essay. But it was here that I saw the activity.

I’ve run this assignment nearly eight times over two semesters, always in Writing 101, our introductory college writing course. For me, the most fascinating part of the assignment is seeing students organically develop prompt engineering skills.


In my classes, we had done a little scaffolding around LLMs and generative writing. We had discussed some of the limitations, the ethics, and how the software really functions. We then worked on and agreed to an addendum to include in our syllabus:

ChatGPT and other AI writing software will not be used to create and populate the complete content of our essays, writings, paragraphs, and thoughts. 

ChatGPT and other AI writing software may be used as we might use a peer. It may be used as we use many other writing tools. For example, it may be used to gain ideas and feedback. 

We understand that content generated from ChatGPT and other AI writing software is not our own. If we use ChatGPT and other AI writing software in an appropriate way, we will make an effort to document this and include ChatGPT and other AI writing software as a collaborator on the writing. 

Failure to follow this policy will result in charges of plagiarism and/or unethical academic use.

However, as a class we hadn’t engaged with the software in a meaningful way. This agreement and the ideas behind it existed only in the abstract.


When I first ran this activity, I gave little to no scaffolding. I asked students to choose a text they enjoyed from class. Next, I asked them to form groups. Finally, I instructed them to open a generative AI LLM and prompt it to recreate the chosen text.

Most groups began by doing what I had: “Write X by Y.” This always and immediately fails. Next, students will generally try to write a summary in the prompt and have the LLM rewrite the text by way of the summary. Here’s a student example attempting to rewrite The Trash Heap Has Spoken by Carmen Maria Machado: “Write an 8 page story that discusses issues and stereotypes of female obesity and the power that comes with it. Reference Ursula from the little mermaid, a grandma, and Marjory from the muppets with her sidekicks Philo and Gunge.”

Once this, too, fails, the students can feel stumped. They begin to see some of the limitations of the software. They say, “This doesn’t sound right” or “It has the ideas, but not the feeling.”

At this point, they begin dissecting the original piece. They see that it isn’t an essay but a personal narrative, so it needs to be written in first person. They see the original has an ambiguous introduction: how can I prompt that? They see a need for a catalyst. They prompt toward voice and tone. They want to give the author some life.


What originally began as an activity to understand the mechanics and function of LLMs and generative writing quickly became a tool for in-depth discussion of our texts. Ideas like tone and style were no longer abstract questions to answer but concrete pieces of information that needed to be articulated.

With this assignment, I find the most value in its modularity. This activity can fit into any class at almost any time. Scaffolding helps but isn’t always necessary, and students build that scaffolding themselves while attempting the activity. It can be used as an introduction to LLMs or as a capstone to a discussion. The activity can be shortened to 20 minutes or lengthened to a week or more. Mostly, I want to share this activity with those in the field and see what might be done with it.


This activity is simple, modular, and open to experimentation (it’s actively encouraged).


  • Choose a reading from class.
  • Form small groups.
  • Use a generative AI LLM to attempt to recreate the chosen reading.


I would like to thank the English Department at Gonzaga University for giving me the space to attempt activities like the one discussed. More specifically, I’d like to thank Chase Bollig for his encouragement in submitting this activity. I wouldn’t be attempting this without his support.