Mike Frazier, Michigan State University
Lauren Hensley, Ohio Wesleyan University
This assignment proposes integrating generative AI tools, such as ChatGPT, into a college learning and motivation strategies course, with the dual focus of enhancing metacognition and promoting ethical AI use. Students engage with AI-generated artifacts, compare the outputs with their own work, and reflect on the implications of AI for their academic and professional lives. The approach can be adapted for various courses, encouraging a critical examination of AI's role in learning and its potential impact on future careers.
Paul Fyfe
North Carolina State University
This assignment asks students to use an accessible language model to write their term papers, with the goal of fooling the instructor. While initially framed as something sneaky or as a shortcut for writing, the assignment makes students confront and then reflect upon the unexpected difficulties, ethical dimensions, and collaborative possibilities of computationally assisted writing. It works with any web-based text-generating platform, can be adapted to various courses, and requires no significant technical knowledge.
Christopher D. Jimenez
Stetson University
This interactive survey assignment prompts upper-level humanities students to reflect on their social and cultural identities in relation to the textual inputs and outputs of large language models such as ChatGPT. Implemented successfully, the assignment can deepen students' understanding of the relationship between textual meaning and personal identity, as well as the ways in which AI text-generation models may reproduce biases depending on prompt design and the method of data curation.
Jentery Sayers
University of Victoria
This low-tech, tool-agnostic, small-stakes assignment prompts students to attend to issues of power and governance in artificial intelligence (AI), with an emphasis on what students do not know and may thus want to learn about algorithmic decision-making. Students first consider a hypothetical scenario where AI is assessing university entrance essays. They then consult publications on “algorithmic accountability” to articulate questions they would want to ask key decision-makers about the AI decision-making process. They conclude the exercise by reflecting on what they learned about algorithmic accountability, transparency, and social responsibility.
Marc Watkins
University of Mississippi
This chapter discusses the integration of generative AI (GenAI) in education, particularly in first-year writing courses. Recognizing the transformative potential of GenAI, the chapter proposes framing principles to guide students toward ethical and responsible AI use in an assistive role. Two assignments, developed with AI-powered tools upgraded to GPT-3.5 or GPT-4, help students explore research and counterarguments.
Zach Whalen
University of Mary Washington
Computational text generation is having a moment right now, with large language models at the forefront of what many people have in mind when they think about computer-generated text. A major shortcoming of these approaches, including ChatGPT, Bard, and similar systems, is their opacity. It is difficult, and probably impossible, to explain the origins of any specific textual prediction these systems generate, so writers working with them have to think carefully about the ethical implications of any text produced. The exercise below is, in contrast to the AI language models currently in vogue, minimalist and fully transparent in its operations. Students working through this beginner-level programming exercise in repetition can, despite the nominal simplicity of the prompt, produce computational literary works that surprise and delight. The exercise also offers an opportunity for students to learn how other poets have used repetition in their work, and by asking students to explain or defend their choices, the activity can open a discussion about the ethical decision-making involved in curating data for LLM training.
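As a hypothetical illustration only (the abstract does not specify the exercise's actual prompt or programming language), a beginner-level repetition generator of the kind described might look like the following Python sketch, in which every output line traces back to a phrase list the student curates:

```python
import random

# A small, fully transparent text generator built on repetition:
# every line comes from a fixed list of phrases the student chooses,
# so the origin of each output line is always explainable.
phrases = [
    "the rain counts the windows",
    "the windows count the rain",
    "nobody counts the hours",
]

def repetition_poem(lines=12, seed=None):
    """Repeat and lightly vary the curated phrases to build a poem."""
    rng = random.Random(seed)
    poem = []
    for i in range(lines):
        phrase = phrases[i % len(phrases)]   # deterministic repetition
        if rng.random() < 0.3:               # occasional, visible variation
            phrase = phrase.upper()
        poem.append(phrase)
    return "\n".join(poem)

if __name__ == "__main__":
    print(repetition_poem(seed=1))
```

Because the phrase list and the variation rule are both visible in the code, students can account for exactly where each line of the resulting poem came from, which is the transparency the abstract contrasts with large language models.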