Emily Gillo
University of Memphis
This assignment asks students to critically analyze and annotate a Generative Artificial Intelligence platform’s Privacy Policy or Terms of Service (TOS) Agreement. Students then remediate a selection of those annotations to better align with their own personal ethics surrounding data privacy, authorship, and intellectual property rights. Analyzing the TOS challenges students to reflect on the data and privacy they surrender when interacting with these platforms. Asking students to revise the agreement and compare their revisions to revisions suggested by the GenAI platform gives them the opportunity to examine the bias that is often reflected in GenAI output.
Learning Goals
Original Assignment Context
I am a First-Year Writing instructor and PhD student. This assignment originated as a discussion activity that I led, as a student, in a graduate course. The purpose was to pilot the idea with my classmates and professor and to receive feedback before leading the activity in my own First-Year Writing course in Spring 2024.
Materials Needed
Terms of Service and Privacy Policy statements from each of the provided AI tools (provided to students as links, as these are ever-changing documents). Students will also need access to computers and to each of the listed tools for the AI-generated remediation component.
Time Frame: A full class period, or approximately 60 minutes.
Overview
This assignment asks students to rhetorically and critically analyze and annotate a Generative Artificial Intelligence platform’s Privacy Policy or Terms of Service (TOS) Agreement. Students then remediate a selection of those annotations to better align with their own personal ethics surrounding data privacy, authorship, and intellectual property rights. Students also prompt the GenAI platform to remediate the same annotations and examine the output for potential bias. Students then compile the three versions of the statement (the original, their own remediation, and the AI-generated remediation), swap with a partner, and review each version with a focus on data security, privacy, autonomy, and bias. Finally, students discuss and reflect on how this analysis may shape their future online behavior and GenAI platform use.

This activity, which lasted approximately one hour, has been conducted once so far, in a graduate-level seminar. The students who participated were engaged with the privacy impact assessment portion, often pursuing additional research questions beyond the initial list, which generated a livelier discussion among the class. During the post-activity discussion, students reported that this immersive activity helped them better understand the importance of critically analyzing TOS Agreements. Students also reported being surprised by the intricacy and detail of these agreements and stated that conducting a privacy assessment gave them additional insight into important ethical considerations surrounding privacy concerns and user consent.
Recommended Readings
Collins, Cory, and Kate Shuster. “Learning the Landscape of Digital Literacy.” Learning for Justice, Southern Poverty Law Center, 6 Nov. 2017, www.learningforjustice.org/magazine/publications/learning-the-landscape-of-digital-literacy.
Paris, Britt, et al. “Platforms like Canvas Play Fast and Loose with Students’ Data.” The Nation, 22 Apr. 2021, www.thenation.com/article/society/canvas-surveillance/.
Students also choose one of the following, depending on which policy or agreement they analyze:
Klosowski, Thorin. “How to Quickly Read a Terms of Service.” Lifehacker, 12 Mar. 2012, lifehacker.com/how-to-quickly-read-a-terms-of-service-5892422.
“How to Read a Privacy Policy.” State of California Department of Justice, 11 Oct. 2012, oag.ca.gov/privacy/facts/online-privacy/privacy-policy.
Choose one of the following AI tools’ Privacy Policy and/or Terms of Service statements:
Part 1: Review
Part 2: Remediate
Part 3: Reflect
Optional Part 4 (Group Component)
Byrd, A. “Truth-Telling: Critical Inquiries on LLMs and the Corpus Texts that Train Them.” Composition Studies, vol. 51, no. 1, 2023.
Stommel, Jesse, et al., editors. Critical Digital Pedagogy: A Collection. Hybrid Pedagogy, 17 July 2020.
Woods, Charles. “The Rhetorical Implications of Data Aggregation: Becoming a ‘Dividual’ in a Data-Driven World.” The Journal of Interactive Technology and Pedagogy, 11 May 2021.
Woods, Charles. “Privacy Policy Genre Remediation Assignment.” The Digital Rhetorical Privacy Collective, 2021, drpcollective.com/assignments-activities.