GenAI Platform Privacy Impact Assessment & Remediation

Emily Gillo
University of Memphis

This assignment asks students to critically analyze and annotate a Generative Artificial Intelligence platform’s Privacy Policy or Terms of Service (TOS) Agreement. Students then remediate a selection of those annotations to better align with their own personal ethics surrounding data privacy, authorship, and intellectual property rights. Analyzing the TOS for these platforms challenges students to reflect on the data and privacy surrendered when interacting with these platforms. Asking students to revise the agreement and compare their revisions to revisions suggested by the GenAI platform allows them the opportunity to examine bias that is often reflected in GenAI output. 

Learning Goals

  • Develop critical analysis skills necessary for safe and secure online behavior.
  • Gain a deeper understanding of the data collection practices of online tools, specifically Generative Artificial Intelligence platforms.
  • Gain insight into the ethical considerations surrounding data privacy and user consent.
  • Evaluate how popular GenAI tools address user privacy concerns. 

Original Assignment Context

I am a First-Year Writing instructor and PhD student; this assignment originated as a discussion activity that I led in a graduate course in which I was a student. The purpose was to pilot the idea with my classmates and professor and to receive feedback before leading the activity in my own First-Year Writing course in Spring 2024. 

Materials Needed

Terms of Service and Privacy Policy statements from each of the listed AI tools (provided to students as links, since these are ever-changing documents). Students will need access to computers, as well as to each of the listed tools for the AI-generated remediation component.

Time Frame: A full class period, or approximately 60 minutes. 


Assignment Description

This assignment asks students to rhetorically and critically analyze and annotate a Generative Artificial Intelligence platform’s Privacy Policy or Terms of Service (TOS) Agreement. Students then remediate a selection of those annotations to better align with their own personal ethics surrounding data privacy, authorship, and intellectual property rights. Students also prompt the GenAI platform to remediate those annotations and examine the output for potential bias. Next, students compile the three versions of the same statement (the original, their own remediation, and the AI-generated remediation), swap with a partner, and review each statement with a focus on data security, privacy, autonomy, and bias. Finally, students discuss and reflect on how the activity may shape their future online behavior and GenAI platform use.

This activity, which lasted approximately one hour, has been conducted once so far, in a graduate-level seminar. The students who participated were engaged with the privacy impact assessment portion, often pursuing additional research questions beyond the initial list, which generated a livelier discussion among the class. During the post-activity discussion, students reported that this immersive activity helped them better understand the importance of critically analyzing TOS Agreements. Students also reported being surprised by the intricacy and detail of these agreements and stated that conducting a privacy assessment gave them additional insight into important ethical considerations surrounding privacy concerns and user consent. 


Recommended Readings

Collins, Cory, and Kate Shuster. “Learning the Landscape of Digital Literacy.” Learning for Justice, Southern Poverty Law Center, 6 Nov. 2017.

Paris, Britt, et al. “Platforms like Canvas Play Fast and Loose with Students’ Data.” The Nation, 22 Apr. 2021.

Additionally, each student chooses one of the following, depending on which policy/agreement they choose to analyze: 

Klosowski, Thorin. “How to Quickly Read a Terms of Service.” Lifehacker, 12 Mar. 2012.

“How to Read a Privacy Policy.” State of California Department of Justice, 11 Oct. 2012.

Choose the Privacy Policy and/or Terms of Service statement of one of the following AI tools:

  • OpenAI’s ChatGPT  
  • Google’s Bard  
  • Grammarly 
  • Sudowrite 

Part 1: Review

  • Analyze and annotate the TOS statement or the Privacy Policy statement from one of the GenAI platforms listed above, thinking about your own personal ethics and beliefs surrounding data privacy, surveillance, and authorship/intellectual property rights, as well as bias and/or hallucinations in the AI platform’s training data, output, and/or moderation. Use these questions to guide your analysis: 
    • What data is collected from users? 
    • How is the data stored and secured? 
    • Who has access to that data? 
    • Are users informed about data collection and its purpose? 
    • What is the noted purpose of data collection? 
    • Is consent obtained from users by the company? 
    • Are there options for users to control how much/what data is being collected? 
    • How long is user data retained? 
    • Does the platform address user privacy concerns? 
    • Is ownership of data transferred from user to platform? 
    • Is there any type of remediation in the event that the user changes their mind?  
    • How is authorship/ownership addressed? (Do you own the input and output, does the company, etc.?) 
    • How, if at all, does the platform address or acknowledge bias in their output, training data, or moderation? 
  • Do a quick Google search to see if you can find any recent news about the company. This may include data breaches, public outcry, legal cases, privacy upgrades, public statements from the company, etc. 

Part 2: Remediate

  • Self-Remediation: From your annotations, choose 2-3 “red flag” statements/policies/agreements to revise on your own so that they better align with your beliefs about security, privacy, and related topics.   
  • AI-Generated Remediation: Prompt the platform you chose to revise those same statements. You can write this prompt however you see fit; you can prompt it to revise based on your own ethics, or you can experiment to see what the tool may “find” to be a more ethical revision.  

Part 3: Reflect

  • How are your statements more ethical than they were before?  
  • What are the implications of the current policy as-is vs. your remediation?  
  • What are some thoughts on the revision provided by the AI tool?  
  • How do your revisions compare to the AI tool’s revisions?  
  • Did you notice any bias in the AI tool’s revisions? If so, what are possible implications of this bias? 

Optional Part 4 (Group Component)

  • Choose one statement, personal revision, and AI-generated revision (unlabeled) and swap with a partner.
  • Without having the three statements labeled, which are you most likely to agree to as a consumer/user of this tool? Why? 


References

Byrd, A. “Truth-Telling: Critical Inquiries on LLMs and the Corpus Texts that Train Them.” Composition Studies, vol. 51, no. 1, 2023.  

Critical Digital Pedagogy: A Collection. Edited by Jesse Stommel et al., Hybrid Pedagogy, 17 July 2020.

Woods, Charles. “The Rhetorical Implications of Data Aggregation: Becoming a ‘Dividual’ in a Data-Driven World.” The Journal of Interactive Technology and Pedagogy, 11 May 2021. 

Woods, Charles. “Privacy Policy Genre Remediation Assignment.” The Digital Rhetorical Privacy Collective, 2021.