AI Literacy: Real-World Cautionary Tales

Maureen Gallagher
University of Pittsburgh

This assignment asks students to engage critically with several publicized cases involving misuse or controversial use of Generative AI (often ChatGPT). While many students have considered ChatGPT within classroom contexts, often as a forbidden technology associated with cheating, this assignment about “real world” uses of AI, in specific rhetorical situations beyond the classroom, offers students a new, valuable perspective from which to develop their AI Literacy. This assignment encourages students to reflect on the ownership of their writing and intellectual labor, as well as ethical considerations of rhetorical situations within the classroom and beyond.


Learning Goals

  • Students will critically engage with several publicized cases involving misuse or controversial use of Generative AI through Large Language Models (LLMs). Students will better understand how LLMs work, with a focus on the limitations and risks of using Generative AI for writing in public-sector and workplace contexts.
  • Students will learn about considerations related to ethics, rhetoric, privacy, and reliability when integrating AI text for use in real-world contexts. Students will also apply that understanding as they make choices in their writing for college courses.

Original Assignment Context: First-year composition courses and intermediate-level course in public and professional writing

Materials Needed: Internet connection, Canvas (or similar course software), and subscription to media publications, as needed (example: The New York Times)

Time Frame: ~ 1 week

Overview

This assignment includes a selection of four real-world cautionary tales involving generative AI, as reported in various news outlets. In Fall 2023, I assigned these readings at the very beginning (week 1 or 2) of two types of courses: an intermediate course in writing for the public and first-year composition courses, and students responded on a Discussion Board. An in-class discussion followed.

Based on my observations in classroom discussions and students’ written responses, some benefits of starting the semester by reading and discussing cautionary tales include:

Heightened Engagement: This assignment features cases with a sensationalistic, “ripped-from-the-headlines” dimension, in which professionals face consequences for decisions they made while adopting generative AI in the early months of ChatGPT as an emerging technology. Many students clearly enjoyed discussing the foibles of professional adults. Class discussion was spirited, as some expressed shock and even moral outrage at professionals’ perceived improper use of “shortcuts” in high-stakes contexts.

AI Literacy: Some students articulated that they were learning, for the first time, the serious limitations of using an LLM such as ChatGPT as a search engine for research (e.g., “hallucinated” sources and facts). Several students also expressed that they had not previously known that any text or data they input into an LLM is no longer exclusively private. These cases also address the use of LLMs in coding as well as text generation. Finally, the framing of reflection responses and classroom discussions reinforced ChatGPT and other LLMs as “chatbots,” which potentially discourages the tendency to anthropomorphize the technology.

Ethical and Rhetorical Considerations: Students argued (strongly!) that there are certain contexts and rhetorical situations for which human-to-human communication is highly valued, even indispensable. Many asserted that there are some cases where use of generative AI text can be considered a breach of trust or care between rhetor and readers.

Enhanced Sense of Ownership of Written Work: I found long-term benefits of introducing considerations of AI literacy early in the semester. For example, for later written assignments that allowed the option to use generative AI, many students wrote reflections that expressed their ambition to complete assigned tasks without consulting an LLM or “chatbot.” Also, some students remarked that they chose not to use LLMs on written assignments precisely because they did not want to share their labor with a for-profit company.

Critical Thinking and Peer Modeling: This assignment enables students to serve as models of critical thinking for their peers. In the discussion board responses and class discussion, many students modeled what I judge to be a healthy skepticism of emerging technologies for peers who hadn’t had much experience with LLMs, or who perhaps had not considered potential downsides.

Citation of Generative AI: Several students posed questions about how to cite AI-generated text within academic settings. Many students expressed strong interest in avoiding plagiarism.

In Class Discussion

In the beginning of the semester, I aimed to set a framework of trust rather than emphasize concerns of “cheating.” To establish this framework in class, I concluded the class discussion about these readings with an interactive mini-lecture and slide show, in which I drew from students’ own observations and questions. I responded to the reality of possible unauthorized use of LLMs by:

  • Framing concerns in rhetorical and ethical terms: trust is the fundamental basis of rhetorical situations.
  • Communicating the centrality of trust within learning contexts. I clarified my default position as one of trusting students, respecting their writing choices, and taking seriously their reflections and rationale for those choices.
  • Indicating how trust is fundamental in their work with one another. Students will participate in the course community both as writers (as they draft written work) and as readers (as they engage in peer review). How might unacknowledged use of generative AI impact their relationship with fellow members of our learning community?
  • Addressing students’ citation questions by clarifying the expectation of citation, attribution, and explanation of AI-generated text in our course, for assignments where its use is authorized.

Adaptability of Cautionary Tales Assignment

This assignment is a portable module that can be completed within one week. Instructors can update it with newer cases that emerge as text generation technologies proliferate and as misuses and missteps in various industries invariably become publicized. The assignment was beneficial for developing AI Literacy at both introductory and intermediate levels and can be tailored for a variety of courses, according to specific disciplines or areas of professional development. I assigned responses on a Canvas discussion board, but the format could be readily adapted to other learning platforms and formats.


Assignment

Materials

I delivered this assignment on Canvas and linked to electronic texts. This assignment includes several web-based news sources. A subscription to The New York Times is needed to access several of the articles.

Discussion Board Directions

Select ONE of the cases below and write 1-2 paragraphs in response in your own words, without using generative AI. If you want to quote a passage from an article, put all exact wording in quotation marks and refer to the author (for example: according to Weiser and Schweber, "..."). Make sure to include responses to the following questions: 

1. Identify the case, title of the article, and name of author(s) that you are referring to. 

2. In a sentence or two, explain the details of the case in your own words, as if you were telling your curious Aunt Flo about it: who was involved and what happened? (Address the 5 W’s) 

3. Identify: what was the issue? what were the harms or potential harms in this case? who was concerned and why? 

4. Why are you interested in this case? For example, what about your personal interests, experience, background, or professional aspirations leads you to be interested in this case?

5. What detail(s) about this case did you find particularly revealing, interesting, or strange/surprising? What is your perspective about the stakes and the gravity of the issue?

6. These cases can serve as cautionary tales. What takeaways do you have as considerations to keep in mind, for yourself, now, as a college student, and/or as you professionalize in a future career? 

7. Optional: What questions does this case lead you to have?


Case 1: Vanderbilt University – Peabody College Office of Equity, Diversity and Inclusion

Topic: ChatGPT and Public Relations

Case 2: National Eating Disorders Association

Topic: Generative AI and Mental Health Counseling

Case 3: Samsung

Topic: Proprietary Data, Coding

Case 4: Lawyer Sanctioned in ChatGPT Legal Case

Topic: Legal Issues; LLMs and Research


Acknowledgements

Annette Vee, director of the Composition Studies Program in the Department of English at the University of Pittsburgh, has been a key influence on this work. In May 2023, I also participated in the Pitt Research, Ethics, and Society Initiative (RESI) Interdisciplinary Faculty Seminar on Generative AI. Professor Vee and fellow faculty presenters introduced me to the fundamentals and key limitations of Large Language Models and helped me think through the centrality of ethics, trust, and communication for teaching about, and with, text generation technologies.