Juan Pablo Pardo-Guerra
UC San Diego
In this assignment, students are provided with an AI-generated text relevant to a course's topics and focus and then asked to comment on, review, and expand it using a feature such as "track changes." In engaging with the AI-generated text, students review their knowledge, offer critiques, modify theoretical and empirical claims, and provide concrete examples that illustrate or disprove the provided answer. This foregrounds their critical and evaluative engagement with the course's topics while also building some AI literacy through the evaluation of machine-generated text.
Materials Needed: Short texts (500-1,000 words) generated by an LLM on the basis of prompts; text-editing software that allows students to comment on the provided AI-generated texts
Original Assignment Context: upper division undergraduate class on economic sociology
Timeframe: ~1 course session
There is no getting around the fact that artificial intelligence has changed the landscape of higher education in critical ways. This is particularly clear in the case of Large Language Models that, trained on vast amounts of digital data, manage to produce intelligible texts from a user's prompt. Although Large Language Models do not "know anything at all" (Burrell 2023) and are merely "stochastic parrots" (Bender et al. 2021) echoing dead texts and archived digital interactions, their ability to create texts with a few keystrokes and a click is of concern.
A considerable part of the debate on the most recent generation of Large Language Models has centered on detecting AI-generated text. This simply mirrors earlier discussions about plagiarism, where the emphasis was on locating instances of copying and paraphrasing existing texts without adequate attribution and with the clear intent of passing stolen materials off as original contributions. Less time is spent understanding why students choose these shortcuts in the first place, shifting the focus away from surveillance and punishment and toward the incentives that lead students to cheat. More often than not, students cheat because assignments do not challenge their skills, lacking clear benefits and connections to learning outcomes against which they can measure their performance. Knowing how to summarize Georg Wilhelm Hegel, Karl Marx, Max Weber, and Michel Foucault is certainly a skill, but one with no clear connection, as it is exercised in practice, to the type of cognitive work that most students will engage in during their employment and civic lives.
Given that Large Language Models are particularly effective at summarizing documents and producing credible accounts of frequently discussed historical texts (an LLM would likely generate a quite reasonable analysis of the Communist Manifesto), students may experience being asked to do what is, in their view, the same kind of work a machine performs as an invitation to disengagement. Students need to learn these skills, of course, but building assessment strategies around them as if they were true proxies of "critical thinking" is a mistake.
The solution to this puzzling situation is not to avoid LLMs but to accept them as extensions of our analytical and pedagogic toolkits. LLMs can become instruments around which we develop distinct critical and analytical competencies with our students. What matters here is crafting assignments that both cultivate interest and align directly with learning outcomes, providing students with a sense of development and empowerment rather than repetition and emulation. In what follows, I outline what one such approach might look like.
Goals and Outcome
I have been teaching an upper division undergraduate class on economic sociology since 2016, titled Economy & Society. The main purpose of the course is to impress upon students the argument that economic outcomes like wages, financial stability, and entrepreneurial success aren’t simply products of individual efforts and merit but reflect the structural inequalities across race, gender, social class, and ability that shape most social situations.
In the past, I have assessed this class with short weekly write-ups on small empirical projects (for example, a discussion of the notorious case of Theranos, the biomedical company that was purely a sham, or a discussion of the consequences of redlining in San Diego using various maps and datasets of the region). These speak directly to one of the three learning outcomes for the course, namely, to provide students with the ability "to use sociological concepts and explanations to critically analyze how their own economic lives are shaped by broader social structures." In addition to these small write-ups, students also submit a final, 1,500-word essay based on one of five possible prompts.
This last assessment is particularly exposed to potential cheating with LLMs. It would be possible to remove this element of assessment from the course, but that would eliminate a final moment for integrative reflection on how social structures impinge on economic outcomes.
Participants in this exercise should be able to 1) contrast the AI-generated claims with those developed throughout the course; 2) evaluate the quality of AI-generated texts in relation to the formative and summative discussions, readings, and exercises undertaken during the course; 3) modify the provided text in order to align its claims with those of the literature; and 4) give examples and counterexamples of claims made in the AI-generated text.
For this exercise, or for those that seek to emulate its design, instructors will require the materials noted above: short AI-generated texts produced from course-relevant prompts and text-editing software that allows students to comment on and edit those texts.
Given that the main objective of the course is to foster critical thinking among students in relation to real-world cases, the new assessment involves transforming the output of an LLM (ChatGPT) into the empirical object that students will have to engage with.
Because the course guides students to think about how expectations and assumptions about categories of class, gender, race, and ability play a role in shaping economic outcomes, asking them to identify and query these expectations and assumptions is a skill they are expected to have developed. Thus, rather than asking them to produce text, the new assignment asks them to criticize an AI-generated text, making it the kind of empirical material they analyze in their weekly write-ups.
In this assignment, students are provided with a 500-word computer-generated essay that responds to a prompt (for example, "Is economic inequality inevitable?"). They are then asked to use word processing software, such as Word or Google Docs, to make comments and edits on the AI-generated document. Specifically, students are asked to: 1) contrast the claims in the text with those developed throughout the course; 2) evaluate and comment on the quality of its arguments; 3) modify the text so that its claims align with the course literature; and 4) provide examples and counterexamples for the claims it makes.
The result of this exercise is not an essay but corrections on an essay. This assessment exercises students' critical abilities to evaluate and contextualize claims in relation to the themes of the course. It is harder to simulate with LLMs, given that it demands a deeper grasp of contextual information (course readings and themes as well as personal experience) and novel empirical data (such as cases that illustrate the claims in the essay). Instead of reducing critical thinking to the production of (relatively predictable) texts, this exercise invites students to use their critical thinking skills to curate materials, connect topics, and propose changes, a set of skills that better represents the type of cognitive tasks they will confront in the future.