Critical AI Analysis Presentation

Margaret Poncin Reeves
DePaul University

This small-group presentation assignment aims to further students’ critical AI literacy (Bali, 2023) by introducing them to the affordances of various AI tools, while also raising awareness of their risks and limitations. Each group is responsible for researching and testing a different AI platform and sharing their findings with the class. These presentations focus on the function of the tools, but also turn a critical lens onto AI companies’ profit motives, data transparency, and privacy protections. 


Learning Goals

  • Identify different types of genAI applications and evaluate their affordances and constraints as writing tools
  • Discuss how economic and social forces influence platform design, user protections, and AI output

Original Assignment Context: The first few weeks of a 10-week (quarter-long) course open to undergraduate students across the university, titled “Writing with AI”

Materials Needed: Access to a variety of genAI tools, a computer with a projector

Time Frame: ~3 weeks 

Overview

I taught this assignment in WRD 242: Writing with AI, an undergraduate course that was developed in Winter 2024. The main goals of the course were 

1) to learn how to effectively use artificial intelligence tools to assist the writing process, and

2) to explore how economic and social forces are shaping and being shaped by AI development. 

Because the course was cross-listed in other departments and fulfilled my institution’s Social, Behavioral, and Cultural Inquiry “learning domain” requirement, enrolled students represented a wide range of majors and disciplines, with varying degrees of comfort around AI technologies. 

Before beginning the assignment, students read A People’s Guide to AI (Onuoha & Nucera, 2018) and discussed it in class. Then, they divided into small groups, with each group selecting a different AI tool and learning how to use it, including its strengths and weaknesses. They also found out what they could about the company that developed the tool, including how the company is funded and its terms of service. Students then shared their findings with the class in ~20-minute presentations.

While this activity left many students more skeptical of AI companies, I also noticed that it increased their willingness to play with new tools and techniques, creating a disposition toward experimentation that’s essential when using a technology that is rapidly evolving. 


Assignment

1.  Choose an AI tool to analyze. You may choose a so-called “foundational model,” like ChatGPT, Bard or Gemini (Google’s AI), Claude, LLaMA (Meta’s AI), etc. Or you may choose a tool that’s built on one of these foundational models, like Grammarly, QuillBot, My AI on Snapchat, etc.

2.  Research the tool online and then try out the tool. Find reviews, read through the company’s documentation, and play with the tool to find out:

1.  Function: What is the tool good for, what is it not? Specifically, what tasks would you recommend it for?

2.  Function: What are the various affordances of the interface? Who benefits from its use and how do they benefit? What are the limitations of the interface? What and whom does it leave out? (Questions taken from Sano-Franchini, 2018)

3.  Function: What is the interface like? How does that interface influence how people interact with the tool?

4.  Profit motives: According to A People’s Guide to AI, “[W]hile machine learning applications might have a specific goal like suggesting the next song you’ll want to listen to or choosing the film you want to watch next, they also have bigger goals. Those goals are usually in line with the company’s profit needs. These are things like getting you to stay longer on a website, or getting you to buy more of a product. These larger aims are the direct result of who has control and power in the space. In other words, lots of modern-day AI is driven by the needs of large corporations, not the needs of the people.” Does the company seem to have any larger goals in creating the product? How does the company make money?

5.  Transparency: Can you find out any information about how it works or where its data comes from?

6.  Privacy: What personal information does the tool collect? How is the company using our personal information?

7.  Societal impact: Proponents of genAI argue it can have an equalizing effect, providing access to language and writing resources for people who previously lacked them. Detractors say it will exacerbate inequality by deepening the digital divide. Where do you think this tool falls?

3. Create a presentation for the class. Give us both practical information about how to use the tool as well as your critical analysis of its privacy, transparency, and profit motives. I recommend integrating visuals of the platform into your presentation, either through screenshots of the interface or by pulling up the site itself.

AI Use

You are welcome to use AI tools to create your presentation. You’ll obviously want to test the tool yourself, and I’d like to encourage experimentation with AI for organizing your presentation, designing your slides, or creating visual aids. However, let’s get into the practice of doing so ethically: add a note at either the beginning or end of your presentation explaining how AI was used.

One additional caveat: Because of LLMs’ propensity to hallucinate, be cautious about asking AI for the information you need to gather. The exception is internet-connected AI tools (like Copilot/Bing, Bard, or ChatGPT Plus), which still hallucinate, but can be asked to provide links to their sources so you can quickly fact-check.

Grading Criteria

Successful presentations will

  • Provide the audience with a practical understanding of the function of the AI tool
  • Provide critical insight into the AI company’s profit motives, data transparency, and privacy policies
  • Explain the group’s opinion of the overall societal impact of the tool they analyzed
  • Demonstrate effective choices in organization, design, and delivery of the content

Acknowledgements

Inspired by the Civics of Technology EdTech Audit and developed from Sano-Franchini’s (2018) critical interface analysis.

References

Bali, M. (2023, April). What I mean when I say critical AI literacy. Reflecting Allowed. https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy  

Civics of Technology Project. (n.d.). Conduct an EdTech audit. Civics of Technology. https://www.civicsoftechnology.org/edtechaudit

Onuoha, M., & Nucera, D. (2018). A people’s guide to AI. Retrieved April 5, 2023, from https://alliedmedia.org/resources/peoples-guide-to-ai

Sano-Franchini, J. (2018). Designing outrage, programming discord: A critical interface analysis of Facebook as a campaign technology. Technical Communication, 65(4). Retrieved May 24, 2024, from https://www.stc.org/techcomm/2018/11/08/designing-outrage-programming-discord-a-critical-interface-analysis-of-facebook-as-a-campaign-technology/