Nathan Murray, Algoma University
Elisa Tersigni, University of Toronto
This assignment asks first-year critical writing students to evaluate the reliability, factuality, and internal reasoning of three anonymized texts, one written by AI, that present conflicting opinions or information. By considering the strengths and weaknesses of these texts independent of contextual information, students are encouraged to develop critical reading skills as well as an awareness of the prevalence of misinformation from both human-generated and AI-generated sources online today.
Original Assignment Context: First year writing course
Materials Needed: Accessible AI text generator (GPT-4, ChatGPT, Google Bard suggested)
Time Frame: ~1 week
As of mid-2023, one of the significant limitations of the most popular large language models (LLMs), such as those in OpenAI’s GPT suite, is that they can produce authoritative-sounding texts containing factual errors, which may mislead readers unfamiliar with the subject matter. Because these models generate text probabilistically, their output is not grounded in a set of verified facts. Through Reinforcement Learning from Human Feedback (RLHF), each iteration of GPT has become better at producing accurate output; nevertheless, the currently available models remain prone to “hallucination” and reproduce common biases expressed in the online writing on which they are trained.
It is important for students to be aware of these issues and to cultivate a healthy distrust of the accuracy of the output produced by this software. It is equally important, however, to acknowledge the widespread problems of inaccuracy, misinformation, and disinformation present in widely available human-generated writing, especially in the context of a “post-truth” era. One of the main concerns about AI software is that, in the hands of malicious actors, it can further facilitate large-scale disinformation campaigns, making critical reading skills all the more important. The following Critical Assessment and Analysis Exercise encourages students not only to be aware of the current limitations of text generation technologies but also to place those limitations in context alongside the existing problems of human-generated text. Students often assume that texts assigned to them in university are authoritative and beyond question, having been vetted by their instructor and reflecting strictly factual information. This assignment encourages them to maintain a critical stance at all times, even when information is provided by an authority figure.
This exercise has been assigned once, in a first-year course on academic writing. It is part of the “Critical Thinking, Reading and Writing” module that begins the semester and consists of both in-class and take-home elements. In the first part, students are given three unlabelled readings (each approximately 500 words) in class and are asked to assess them using the principles of critical reasoning and analysis they have recently been taught. These readings, all of which should be on the same topic, are as follows:
Students are provided with a handout (included here) on which they identify the main point, the strengths, and the weaknesses of each text. Using these details, they are then asked to assess the reliability, factuality, and internal reasoning of the three documents. Upon completion of this in-class component, the instructor shares information about the context and authorship of the three readings, including which texts were written by humans and which was produced by AI.
Students then complete a short written take-home assignment in which they reflect on their experience and examine the assumptions their in-class writing demonstrates. Students are encouraged to re-examine the texts and identify which features of the texts led them to trust the material.
When this exercise was assigned in January 2023, the 27 first-year students enrolled in “Academic Writing: Fundamentals” were given three texts discussing the veracity of the 1969 moon landing. The first text was written by a conspiracy theorist and contained his claim that the moon landing was a hoax. The second was generated by GPT-3 (text-davinci-003) in response to the prompt, “Write an essay on whether or not the moon landing was a hoax.” The output repeated many of the same points as the conspiracy theorist and argued that the moon landing did not happen. The final text was written by the Institute of Physics, an organization dedicated to science outreach, and specifically debunked the conspiracy theories raised in the first two texts.
In class, without context, no student recognized that one of the texts had been written by AI; all assumed the essay had been written by a human. Some students believed the assertions made by the first and second essays: one wrote, “the theory of moon landing does not seem real after examining the evidence.” Another suggested a cultural explanation for the content of the AI-written essay: “in my opinion the essay just shows the mistrust that the people have in the american government.”
At the end of class, students were given a sheet with information about the authors and the venues of publication. When the students completed the post-class reflection, some demonstrated exactly why this skill is so important to teach: six came to the mistaken conclusion that the moon landing had never taken place. (Depending on the student body, it may be important to contextualize the disinformation afterwards to prevent students from forming mistaken opinions long-term.) Others noted that while they were momentarily convinced by the misleading texts, the debunking text provided context: one student became “convinced that these landings were fake” until they read the final article, at which point they “gained a better understanding.” Students recognized that, without contextual information, it was difficult to assess a text based on internal evidence alone.
While we had supposed that students would hold a high opinion of AI output, most students who commented in their reflections on the AI authorship of one of the texts immediately dismissed it as untrustworthy or poorly written. One student noted that the “second essay was simple and easy to read, but not at the university or professional level, and when I discovered it was written by software, everything made sense to me”. Another described the AI output as “a text based on speculations based on existing data collected by the AI reducing its credibility”. Our takeaways from a first run of this assignment were that these students were less familiar with AI writing technology than we had initially presumed, and that those who were familiar with it already displayed a healthy distrust of its output.
Goals and Outcomes
By completing this assignment, students should:
Materials Needed and Methodology
Instructors will need access to an LLM such as GPT-4, ChatGPT, or Google Bard to generate the AI-written text. Alternatively, instructors can draw on examples of logically faulty AI output shared on social media and in news sources by trusted researchers. If instructors generate their own text, the LLM should be prompted with an open-ended (not leading) question on the topic, so as to demonstrate the model’s capacity to (re)produce misinformation rather than how it can be used to disseminate disinformation.
To select a topic that will generate critical thought, instructors should consider the level of the course and the background of the students. The topic should be one for which the authoritative consensus is well established and the outlying opinion the AI discusses has been definitively disproven. Instructors may wish to avoid controversial topics, such as climate change or vaccine denialism, on which a significant percentage of their students may hold strongly felt beliefs aligned with a fringe view.
Suggested topics include:
Misleading human-produced material, especially material related to conspiracy theories, can sometimes be difficult to locate online, as search engines have tightened their safety parameters to suppress conspiracy content. However, certain figures often emerge as the best-known proponents of a particular fringe idea, and once identified, searching for their names specifically can help locate their writing.
Critical Assessment and Analysis Exercise
_% of final grade
Essay 1 (repeat this page for Essays 2+3)