AI Literacy

 

Testing ChatGPT Response Variety to Introduce Natural Language Processing

Elisa Beshero-Bondar
Penn State University

This sequence of assignments progressively introduces students to natural language processing (NLP) through repeated prompt experiments with ChatGPT. The students are beginners learning Python and NLP. Accessing ChatGPT and writing successful prompt experiments gave them the basis for investigating the cosine similarity of word embeddings across multiple responses to the same prompt. These assignments introduced students to NLP through short generated texts before they began experimenting with larger text corpora.
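For readers unfamiliar with the comparison students make, a minimal sketch follows, assuming (for illustration) spaCy's medium English model as the source of word vectors; the response strings are invented placeholders, not actual ChatGPT output.

```python
# A rough sketch of comparing two ChatGPT responses to the same prompt.
# Assumes spaCy with the en_core_web_md model, which provides word vectors;
# the response strings below are invented placeholders, not real ChatGPT output.
import spacy

nlp = spacy.load("en_core_web_md")

response_a = nlp("Natural language processing helps computers read text.")
response_b = nlp("NLP lets machines interpret written language.")

# Doc.similarity() returns the cosine similarity of the averaged word vectors,
# so higher values indicate responses with more similar wording.
print(response_a.similarity(response_b))
```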

Understanding Markov Chains

Gabriel Egan
De Montfort University, UK

In this undergraduate assignment, students use a manually applied algorithm to generate a Markov Chain from a given short extract of language. Included here are precise instructions, with diagrams, for two activities in which students develop structures to generate text based on probabilities. Through these game-like activities, students discover that Markov Chains efficiently embody a writer's preference for following one particular word with another, which lays the foundation for discussion of how probabilistic language-generation models work. The assignment gives students a concrete way to explore and visualise the building blocks of various language models and to understand their implications for linguistics. Any student able to distinguish the essential parts of speech, such as verb, noun, article, adjective, and relative pronoun, should be able to complete the assignment with proper support. (All students able to speak English will already have learnt the meaning of these terms at some point, but a short refresher may be needed to bring everyone up to speed in identifying examples of them in practice.) The assignment has been used to help Creative Writing students understand how Artificial Intelligence is able to produce writing that sounds as if it came from a human. The ‘Follow Up’ section offers suggestions for how more specialist linguistic teaching can be built on this basis, including an exploration of the competing theories of how humans generate new sentences.
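A compact programmatic counterpart to the paper exercise described above might look like the sketch below; the sample extract is an invented placeholder rather than material from the assignment.

```python
# A compact programmatic counterpart to the manual activity: build a
# first-order Markov Chain from a short extract and use it to generate text.
# The extract below is an invented placeholder.
import random
from collections import defaultdict

extract = "the cat sat on the mat and the cat saw the dog"
words = extract.split()

# Map each word to the list of words that follow it in the extract, so that
# pairings the writer prefers are proportionally more likely to be chosen.
chain = defaultdict(list)
for current, following in zip(words, words[1:]):
    chain[current].append(following)

word = random.choice(words)
generated = [word]
for _ in range(10):
    if word not in chain:
        break
    word = random.choice(chain[word])
    generated.append(word)

print(" ".join(generated))
```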

Neuroqueering AI: The Text Generator as Emergent Collaborator

Natalie Goodman

This assignment first tasks students with creating their own text generator using a premade module and then asks them to reflect on the experience of directing an LLM-generated composition. Students will choose a dataset to train their LLM, examine its output to identify patterns and new meanings that may emerge, and write a reflective essay that critically considers the affordances, challenges, and generative potential of LLMs. Originally taught in an upper-level writing and media class, this project is designed to accompany a theoretical exploration of disability studies and queer theory, but could be adapted for other contexts and disciplines. While a background in computer science is not necessary for students or teachers, this assignment will require enough time for trial and error as students troubleshoot their LLMs. 
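The abstract does not name the premade module. As one illustrative stand-in for the workflow of training a generator on a chosen dataset and examining its output, the sketch below uses the markovify library (a Markov-chain text generator rather than a true LLM); the corpus filename is a placeholder for whatever dataset a student selects.

```python
# Illustrative sketch only: the assignment's actual module is not named in the
# abstract. markovify (a Markov-chain generator, not an LLM) stands in for the
# general workflow of training on a chosen corpus and sampling its output.
# "my_corpus.txt" is a placeholder for a student's chosen dataset.
import markovify

with open("my_corpus.txt", encoding="utf-8") as f:
    corpus = f.read()

model = markovify.Text(corpus)

# Generate a handful of sentences to examine for patterns and new meanings.
for _ in range(5):
    sentence = model.make_sentence()
    if sentence:
        print(sentence)
```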

Transforming Writing Assignments with AI

Daniel Hutchinson (History) and Erin Jensen (English)
Belmont Abbey College 

This assignment asks first-year undergraduate history and English students to use AI writing models to aid in accessing and understanding readings on specific topics. Students used AI to make sense of the texts they were reading, including the Declaration of Independence and readings on rhetorical analysis. They asked the AI questions about the texts and evaluated how it created academic citations. In doing so, students relied on AI to understand the readings while also thinking critically about the use of AI itself.

Rhetorical Analysis of Predictive LLMs

Alan Knowles
Wright State University

This assignment asks students to train a large language model (LLM) to generate Twitter posts in the style of specific accounts via a process known as few-shot learning, in which the LLM is conditioned on a small number of sample posts. Students use the trained LLM to generate tweets and then rhetorically analyze the generated tweets. The assignment was originally developed for an entry-level Professional and Technical Writing (PTW) course but can be easily adapted to other disciplines and course levels.
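A minimal sketch of the few-shot setup follows, assuming (for illustration only) the openai Python client's v1 chat interface; the sample tweets, account name, and model name are invented placeholders rather than the assignment's actual materials.

```python
# A minimal sketch of few-shot prompting for tweet generation, assuming (for
# illustration only) the openai Python client's v1 chat interface. The sample
# tweets, account name, and model name are invented placeholders.
from openai import OpenAI

sample_tweets = [
    "Just shipped the new release. Coffee count: 4.",
    "Reminder: documentation is a love letter to your future self.",
    "Debugging is staring at the screen until the screen blinks first.",
]

# Few-shot learning here means conditioning the model on a handful of example
# posts inside the prompt, rather than updating the model's weights.
prompt = (
    "Here are example tweets from the account @example_dev:\n"
    + "\n".join(f"- {t}" for t in sample_tweets)
    + "\nWrite three new tweets in the same style."
)

client = OpenAI()  # expects an OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```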

Learning about Text Technology through the LLM Generation of Papers

Nick Montfort
Massachusetts Institute of Technology

Students are assigned to generate a paper about a highly specific, recent text technology using a free Large Language Model, and then to reflect on the process. Our goals: (1) highlight new aspects of the writing process, (2) see how text technologies prior to LLMs have influenced writing, and (3) encounter LLMs. While many more students have now heard of LLMs and have tried them out, it may actually be more helpful, now and in the future, to have an assignment that introduces a “raw” LLM (without the additional structures of ChatGPT and Bard).
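One way to put a “raw” LLM in front of students is a base model through the Hugging Face transformers pipeline; the sketch below assumes GPT-2 as the free model and uses a placeholder prompt, since the abstract does not specify which model or topic the assignment uses.

```python
# A minimal sketch of prompting a "raw" LLM with no chat scaffolding, assuming
# (for illustration) GPT-2 via the Hugging Face transformers library. The
# abstract does not specify which free model the assignment actually uses,
# and the prompt below is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "This paper examines a recent text technology:"
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```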

Critical Assessment and Analysis Exercise

Nathan Murray, University of Mississippi
Elisa Tersigni, University of Toronto Mississauga

This assignment asks first-year critical writing students to evaluate the reliability, factuality, and internal reasoning of three anonymized texts that present conflicting opinions or information, one of which was written by AI. By considering the strengths and weaknesses of these texts independently of contextual information, students are encouraged to develop critical reading skills as well as an awareness of the prevalence of misinformation from both human-generated and AI-generated sources online today.