by Carly Schnitzler and Annette Vee
Ahead of the coming academic year, we are so excited to share another 15 open-access assignments in our August 2024 edition of TextGenEd: Continuing Experiments. These assignments were developed and tested after the watershed moment of ChatGPT's release in fall 2022, and all feature large language models (LLMs) as the text generation technology of choice. In the January 2024 edition of Continuing Experiments, we noticed a new emphasis in the submissions on prompt engineering: how to engage with these technologies in a creative, humanistically informed way. In this edition, we noticed another trend: increased attention to the ethics and philosophical status of AI. Prior assignments had, of course, wrestled with human versus machine writing. But this time hits different. We're moving beyond comparisons and into deeper questions of what it means to be human and how we situate our human and writerly selves amongst these brave new writing machines.
We nominate Shannon Vallor as the philosopher of the moment. Her recent Noema article nails the implicit conflict in viewing AI as potentially “superhuman.” How can we call AI superhuman when AI has few of the characteristics of humanity? It is not embodied or emotive; it cannot experience love, joy, or fear, and it has no self-awareness. It can answer our questions, but it is “dark inside.” Vallor highlights a public disagreement she had with prominent AI researcher Yoshua Bengio, noting that their “disagreement was not about the capabilities of machine learning models at all. It was about the capabilities of human beings, and what descriptions of those capabilities we can and should license.” AI can write and draw and compose music much faster than humans, but that's not the point, Vallor says. She writes:
The most ordinary human does vastly more than the most powerful AI system, which can only calculate optimally efficient paths through high-dimensional vector space and return the corresponding symbols, word tokens or pixels. Playing with your kid or making a work of art is intelligent human behavior, but if you view either one as a process of finding the most efficient solution to a problem or generating predictable tokens, you’re doing it wrong.
Similarly, teaching isn’t about efficient solutions. We want students to do it right, where right is an expansive, ever-shifting, human goal connected to their own desires, skills, and the world beyond the classroom. We want writers to use their embodied, emotive selves and experiences to inform the writing and thinking that they do. And we want the technology that they use in their writing to support, not supplant, their humanity.
The assignments in this collection use LLMs to do just this. Lessons on how LLMs operate demystify their technical capabilities and help separate what these tools do well from what writers do, through fine-tuning models and writing with them (Gonçalves and Young) and through visualizing token use (Wells). Investigations of the sociotechnical terroir of various popular LLMs (Poncin Reeves) and critical annotations of AI media coverage (Gegg-Harrison) situate these tools culturally and ethically. A number of assignments focus on distinct stages of the writing process, emphasizing that valuing process over product is a uniquely human, uniquely writerly way of doing things and that targeted use of LLMs can support these stages (Velasquez). When used carefully and critically, LLMs can support the writerly process at every stage: from reading (Tan), to researching (Moore), to understanding genre (Taylor), to inventing and brainstorming (VanProoyen), to drafting (Meskin and Harding), to speaking (Girdharry and Wynstra), to translating (Xu). And LLMs can, even if by negative example, help students articulate and understand their own voices and experiences, something we explore in our own contribution to this collection (Schnitzler and Vee).
There is now a wealth of guides for teaching with AI, including collections like this one, books on teaching with AI, and online courses and workshops. So much is out there now that teachers are turning to listservs and Facebook groups to navigate it all. We're proud of our sense of kairos: TextGenEd was the first out of the gate, and we're capturing kairos again by slowing down production moving forward. We plan to publish the next edition of TextGenEd: Continuing Experiments in August 2025, and we hope to feature an array of assignments across the curriculum at that time. In the meantime, let's continue to share our work navigating AI together.
The AI literacy grouping helps students to develop a crucial suite of critical thinking skills needed to work with emerging technologies: functional awareness, skepticism about claims, and critical evaluation of outputs.
Create Your Own ChatGPT, by João Gonçalves and Sarah Young
Using AI as a Tutor for Writing Initiation, by Victoria VanProoyen
Emphasizing Process with AI-Augmented Writing, by Elizabeth Velasquez
Learning about AI Token Use through Essays and Prompt Responses, by Joshua J. Wells
Creative explorations play around the edges of text generation technologies, asking students to consider the technical, ethical, and creative opportunities and limitations of using these technologies to create art and literature.
Illuminating Manuscripts: Words, Images, and AI, by Amy Anderson
Ghostwriting: Your Voice in the Machine, by Carly Schnitzler and Annette Vee
ChatGPT and Poetic Mechanics, by Tristan B Taylor
In the ethical considerations category, assignments are split between two primary foci: the first engages students in the institutional ethics of using LLMs in undergraduate classrooms, and the second attends to the ethical implications of LLMs and their outputs.
Critical Media Analysis Project (CMAP), by Whitney Gegg-Harrison
Critical AI Analysis Presentation, by Margaret Poncin Reeves
Finding Forgotten Chinese Women: Critically Engaging with ChatGPT-Generated History, by Shu Wan
This category reflects the continued importance of iterating prompts and platforms to achieve writing goals with generative AI, across genres and writing contexts.
Where Worlds Entwine: The Generative AI Poetry Exercise, by Aaron Meskin and Lindsey Harding
Bilingual Genre Redesign with AI, by Wei Xu
These assignments ask students to consider how computational machines have already become, and will continue to become, enmeshed in communicative acts, and how we work with them to produce symbolic meaning.
Embodying Rhetoric: Quick Scripts and ‘Acts’ of Persuasion, by Kristi Girdharry and Beth Wynstra
They Say, I Say, Robots Play, by Emily Moore
Can AI Read for You? Teaching Rhetorical Reading, by Xiao Tan