This assignment first tasks students with creating their own text generator using a premade module and then asks them to reflect on the experience of directing an LLM-generated composition. Students will choose a dataset to train their LLM, examine its output to identify patterns and new meanings that may emerge, and write a reflective essay that critically considers the affordances, challenges, and generative potential of LLMs. Originally taught in an upper-level writing and media class, this project is designed to accompany a theoretical exploration of disability studies and queer theory, but could be adapted for other contexts and disciplines. While a background in computer science is not necessary for students or teachers, this assignment will require enough time for trial and error as students troubleshoot their LLMs.
Original Assignment Context: final project in an upper-division short summer course called “Hypermedia and Digital Rhetorics”
Materials Needed: Dataset and neural net training tools, links in assignment section
Time Frame: ~2 weeks
On its face, AI seems diametrically opposed to both neurodivergence and queerness. Where the theoretical portmanteau "neuroqueer" seeks, as both theory and practice, to destabilize and subvert, AI at present can only reproduce versions of what it has already been taught, which doesn't seem to afford much latitude for disrupting the status quo. So, it is perhaps unsurprising that approaches to text generation tend to emphasize its ability to automate some of the more utilitarian forms of writing (emails, cover letters, papers for classes we're not interested in, anything "professional"), thereby freeing us real humans up for the forms of writing we consider to be uniquely human (critical, analytical, creative, etc.). There's nothing wrong with this approach. In fact, although I didn't end up using any of its suggestions, I consulted ChatGPT while writing this very article to help curb my (neurodivergent) tendency to overexplain. But relegating AI to the role of mere writing optimizer elides its capacity for failure–a word that here encompasses both its occasional-to-frequent inadequacy at the task it has been designed to perform and the generative potential afforded by that apparent "failure." This, as I will go on to demonstrate in terms of both theory and praxis, is where a neuroqueered, interdisciplinary approach to AI in the writing classroom can be implemented in a way that resists reproducing the normativity encoded into its DNA.
The assignment below served as the final project in an upper-division course called “Hypermedia and Digital Rhetorics”[ii] that I taught during the six-week summer session at the University of Florida in 2021. Students, who were mostly English majors without backgrounds in computer science, were asked to create and train their own text-generating neural network using the dataset of their choice, the output of which they would turn in alongside an essay reflecting on the process of composing with AI. Using a pre-made, open-source module called textgenrnn (https://github.com/minimaxir/textgenrnn), which eliminated the need to master Python in two weeks, my students collated large and varied datasets (I required them to include at least 500 data points) and set the neural net’s parameters. The assignment prompt contains a list of web resources I used in conjunction with each step of the project, with the caveat that the links provided (all archived versions of the original webpages) may already be outdated by the time this volume is published. To account for the inherent ephemerality of digital content, this guide should be viewed as one spatially- and temporally-located model that will require updating to account for new information and developments.
A background in coding is not necessary for students or teachers (I myself didn’t have one) but a modicum of digital resourcefulness will be required to troubleshoot error messages and other technical problems that will almost certainly emerge. I pilot-tested my own neural network in preparation for this final unit using country song titles, which I chose because of the large and varied amount of data available online as well as the genre’s distinctive song title conventions. The results, which I had my students analyze in class by way of an introduction to this project, were a mixture of the believable (“She Loves Lovin’” by George Strait), the nonsensical (“She's A Dinkin' My Baby Loves of the Morning Beautiful Cowtire to the Rain” by Jo Dee Messina) and the unexpectedly profound (“You’re the Georgia” by Patsy Cline, which seems to liken the object of her affection to a place that in country music is often synonymous with home). In their final essays, many students cited the class’s virtual study group (created and maintained by the students themselves without any involvement from me) as the most helpful resource for troubleshooting error messages. I encouraged them to lean on each other, both in written materials and class discussions, but if I were to teach this assignment again, I would actively require them to create and participate in such a forum. Those who didn’t participate in these extracurricular discussions were more likely to give up due to frustration, which seemed to be ameliorated for those who opted to work through their repeated “failures” together.
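Much of the technical friction in this project surfaces during dataset collation rather than training. As one illustration of the kind of cleanup involved, the sketch below checks a raw list of items against my 500-item minimum; the function name and its details are my own invention for this example, not part of any module students used.

```python
def prepare_dataset(raw_lines, min_items=500):
    """Strip whitespace, drop blank lines and duplicates, and verify
    the dataset meets a minimum size before training begins."""
    seen = set()
    cleaned = []
    for line in raw_lines:
        item = line.strip()
        if item and item.lower() not in seen:
            seen.add(item.lower())
            cleaned.append(item)
    if len(cleaned) < min_items:
        raise ValueError(
            f"Only {len(cleaned)} unique items; at least {min_items} are needed."
        )
    return cleaned

# Example: validate a file of country song titles, one per line.
# with open("song_titles.txt") as f:
#     titles = prepare_dataset(f.readlines())
```

Surfacing the "too few items" error early, before any training begins, spares students one common source of the frustration described above.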
I introduced the project within the loose context of disability studies by pairing Donna Haraway’s “Manifesto for Cyborgs” with a critical response by the theorist, poet, and disability rights activist Jillian Weise. In her essay “Common Cyborg,” Weise criticizes Haraway’s central claim that “[t]echnology would un-gender us,” and particularly her deployment of the term “cyborg” as metaphor without considering the implications for literal “cyborgs”—disabled people who, like Weise, rely on technology to survive. I paired these readings in hopes of challenging preconceived notions about humanity’s relationship to technology, particularly the aspects that directly involve our corporeal bodies. My goal was to turn students’ attention to the generative potential of AI while keeping such speculation grounded in material circumstances, and to recast difference and “failure” as potentialities rather than dead ends. Although student engagement with these readings was hampered by unforeseen and extenuating circumstances, many did come to similar conclusions—one student wrote that the project had led them to reconsider the value of “nonsense,” while others noted the emergent capacity afforded by AI’s lack of contextual (human) understanding.
Although I sensed disability studies to be a proliferative avenue for such a project, I didn’t anticipate that it would become the catalyst for a seismic shift in my own research. I had already been exploring the idea of “neuroqueerness”—a theoretical portmanteau merging queer theory with neurodiversity rhetoric in ways that challenge the social reproduction of normativity—for some time, in ways both philosophical and embodied, but I hadn’t yet considered the neuroqueer potential of artificial intelligence. Future iterations of this project would be explicitly framed as a neuroqueer exercise–what does it mean to “mean” something, and what forms of newness can emerge from (what might appear to be) a lack of meaning or intentionality? Whose “meaning” is it anyway, and how do we assign value to different modes of meaning and intending? To be clear, I am not suggesting an analogical relationship between neurodivergent people (of which I am one) and artificial intelligence. Autistic rhetorician and activist M. Remi Yergeau has evocatively catalogued the ways in which autistic minds and bodies are consistently denied rhetoricity, and in turn humanity,[iii] and I have no interest in adding to the vast and sordid canon of “autistic robot” metaphors and imagery. Instead, I am proposing we look to the emergent qualities of neuroqueer rhetoric—which isn’t necessarily legible under traditional (read: neurotypical) rhetorical conceptions of intention, purpose, and exigency—as a guiding principle for approaching AI writing, which is characterized by an abject lack of all of those things. Under such conceptions, both AI and the neuroqueer agent are by definition arhetorical. ChatGPT may be the most sophisticated AI writer yet, but even it is prone to incoherence and absurdity when presented with content outside of the dataset it has been trained on. Without context or situational awareness, AI can only filter information through the pattern recognition techniques it does understand.
The resulting output is often a failure of meaning: characters and symbols jut out in jagged spikes like final spasms charted on the EEG output of an algorithm as it overheats and shuts down, while any intelligible words and phrases are displaced from a recognizable symbolic order and set adrift in a sea of free-floating meanings and significations. Neuroqueer rhetoric similarly challenges traditional forms of representation by locating new and novel assemblages of meaning organized by alternative affinities and associations (for example, the sonic quality of words rather than semiotics).[iv]
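That "filtering through pattern recognition" can be made concrete with a toy model. The character-level Markov chain below is a deliberately simplified stand-in for the recurrent networks students actually trained (the corpus string and function names here are illustrative): it learns only which characters tend to follow which short character sequences, with no access to meaning, which is precisely why its output drifts between the plausible and the absurd.

```python
import random
from collections import defaultdict

def train(corpus, order=3):
    """Record which character follows each `order`-length sequence."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, seed, order=3, length=60, rng=None):
    """Grow `seed` one character at a time by sampling observed followers."""
    rng = rng or random.Random(42)
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:  # dead end: this context never appeared in training
            break
        out += rng.choice(followers)
    return out

titles = "she loves lovin' / you're the georgia / blame it on the rain / "
model = train(titles)
print(generate(model, "she"))  # plausible-looking fragments, no guaranteed sense
```

Every character the model emits is statistically justified by the training data, yet the whole can still be semantically adrift: a miniature of the "failure of meaning" described above.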
Approaching LLMs as potential emergent collaborators, rather than as mere editors, can afford us the opportunity to renegotiate our own digital writing practices and create spaces for differentiated emergence within our writing, while also acknowledging the rhetorical agency of neurodivergent writers. The writing classroom is thus placed within a network of transcorporeal entanglements with human and non-human actants, natural(ized) and built environments, and sociohistorical context.[v] By taking their neural net’s deviations to be more feature than bug, this assignment seeks to locate the neuroqueer potential for alternative knowledge production in collaboration with AI writing. Out of the places where writing breaks down, or “glitches,”[vii] “possibility spaces”[viii] can emerge—an approach I tried to emphasize in my implementation of this project by assuring my students that their grade did not hinge on how successful (read: “human”) their AI turned out to be and by encouraging them to focus instead on accounting for and interpreting the “failures” that were all but guaranteed to occur. Though the “compositions” students directed their neural nets to produce were necessarily slight–most of the datasets chosen were collections of phrases ranging from four to twenty words each–this assignment can serve as a starting point for considering AI’s potential for (queer) failure[vi] as a mode of resistance to neoliberal rhetorics of “optimization” that have characterized so much discourse around AI and writing. Ultimately, the purpose was to practice openness: to the unexpected, the illogical, and the weird, but also to failure as a necessary component of composition (and programming) and potential catalyst for discovery.
[i] For more reading on neuroqueerness and neuroqueer theory, please see M. Remi Yergeau’s introduction to Authoring Autism (2018), especially pages 34 and 38. Nick Walker’s short introduction to the term on the
NeuroQueer group blog is a useful resource for both teachers and students: https://neuroqueer.com/neuroqueer-an-introduction/. Because the term is a loose, heterogeneous assemblage of associated and interrelated ideas and thus resists fixed definitions, exploring the broader blogosphere out of which the term originated will likely give the most comprehensive understanding (Loud Hands: Autistics, Speaking; All the Weight of Our Dreams: On Living Racialized Autism; and Typed Words, Loud Voices are all great resources).
[ii] I am greatly indebted to the work of my friend and former colleague, Jason Crider, whose syllabus for the Hypermedia and Digital Rhetorics course was a critical resource for the design of my own version of the class.
[iii] They write, “I am bombarded by representations of autistic people as non-rhetors—as non-rhetors who cannot emote (goodbye pathos), as non-rhetors who cannot recognize the mental states nor visualize the needs of the people around them (goodbye ethos), as non-rhetors whose logics are so mechanistic and rigid that their only comparable non-rhetor analogues are robots and chimpanzees (goodbye, logos).” Yergeau, R.M. “Clinically Significant Disturbance: On Theorists Who Theorize Theory of Mind.” Disability Studies Quarterly, Vol. 33, No. 4 (2013). https://doi.org/10.18061/dsq.v33i4.3876.
[iv] Another autistic writer, Julia Miele Rodas, argues that the echolalic and perseverative forms of language characteristic of many autistic and otherwise neurodivergent people “challenge ordinary communicative expectations; repeat[ing] and ricochet[ing]… striking and forceful and beautifully, queerly concentrated” (1). Rodas, J.M. Autistic Disturbances: Theorizing Autism Poetics from the DSM to Robinson Crusoe. University of Michigan Press, 2018.
[v] This approach is heavily informed by the theoretical field of New Materialism, in particular the work of Karen Barad, Rosi Braidotti, Stacy Alaimo, Laurie Gries, Jane Bennett, and Manuel DeLanda.
[vi] I use the word “failure” here in the sense of J. Jack Halberstam’s 2011 book, The Queer Art of Failure, in which he argues that the American vision of “success” is defined by a heteronormative, patriarchal metric, thereby making “failure” a productive means of challenging these matrices of oppression. Another helpful resource that places a similar conception of “queer failure” within a pedagogical context is Nishant Shahani’s 2005 article, “Pedagogical Practice and the Reparative Performance of Failure, or, ‘What does [Queer] Knowledge do?’”
[vii] My use of the word “glitch” here comes from Legacy Russell’s 2020 book, Glitch Feminism: A Manifesto, in which “the glitch is celebrated as a vehicle of refusal, a strategy of nonperformance… we look at the notion of glitch-as-error with its genesis in the realm of the machinic and the digital and consider how it can be reapplied to inform the way we see the AFK [“Away From Keyboard”] world, shaping how we might participate in it toward greater agency for and by ourselves” (21).
[viii] For more on “possibility spaces,” see Manuel DeLanda’s 2011 book, Philosophy and Simulation: The Emergence of Synthetic Reason.
DeLanda, M. (2011). Philosophy and simulation: The emergence of synthetic reason. Continuum Publishing Corporation.
Halberstam, J. (2011). The queer art of failure. Duke University Press.
Haraway, D. (1987). A manifesto for cyborgs: Science, technology, and socialist feminism in the 1980s. Australian Feminist Studies, 2(4), 1–42. https://doi.org/10.1080/08164649.1987.9961538
Rodas, J. M. (2018). Autistic disturbances: Theorizing autism poetics from the DSM to Robinson Crusoe. University of Michigan Press.
Russell, L. (2020). Glitch feminism: A manifesto. Verso.
Shahani, N. G. (2005). Pedagogical practice and the reparative performance of failure, or, “What does [queer] knowledge do?” JAC, 25(1), 185–207.
Walker, N. (2021, August 1). Neuroqueer: An introduction. Neuroqueer: The Writings of Dr. Nick Walker. https://neuroqueer.com/neuroqueer-an-introduction/
Weise, J. (2018, September 24). Common cyborg. Granta. https://granta.com/common-cyborg/
Yergeau, M. R. (2018). Authoring autism: On rhetoric and neurological queerness. Duke University Press.
Yergeau, M. R. (2013). Clinically significant disturbance: On theorists who theorize theory of mind. Disability Studies Quarterly, 33(4). https://doi.org/10.18061/dsq.v33i4.3876
Notes for Instructors
In the class I taught, students created their own blogs and submitted all writing assignments in the form of blogposts, which is reflected in the instructions below but is not essential for implementation of the project.
The first assignment is an introductory/exploratory short critical response designed to get them thinking about the capabilities of AI and to help them choose a dataset to train their own neural network. The second is the prompt for the neural networks assignment, first explained in philosophical terms, followed by some practical steps and tips for approaching the technical aspect of the project.
This assignment originally served as the final project in a six-week accelerated summer course, but it would probably be more fitting in a semester-long course to allow more time for periods of trial and error, which are crucial to this project.
Intro to Neural Networks and Machine Learning
Submission Format: Blogpost, 375 words minimum
1. This critical response assignment is designed to give you a feel for the process and capabilities of machine learning. First, you’ll view the media listed below and reflect on what you notice. What information was new or surprising to you? Are there any unifying characteristics you notice among the examples presented? What other possibilities can you imagine for this technology? What might be some affordances and implications of this kind of technology? How might these examples be viewed through the lens of neuroqueerness?
2. Next, you’ll decide on a dataset for your own project and detail your reasons for choosing it. The video and article below offer relatively accessible introductions to DIY neural networks and should help you get a feel for how they function and what kinds of datasets are most conducive (tip: keep it simple). I definitely recommend taking the author’s advice and checking out Wikipedia’s list of lists for ideas.
Building a Neural Network
Submission format: a blog link containing (1) a series of screenshots documenting the various stages and components of your neural network, and (2) a 1,000-word reflective essay
This project will contain two parts: (1) evidence of the output of a neural network that you’ll build from a premade module and train to auto-generate items in the dataset of your choice; and (2) a written reflective essay about the process of creating this neural network.
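For reference, working with the textgenrnn module follows roughly the shape sketched below. This is an illustrative sketch based on the module’s documentation; the filename, epoch count, and temperature values are assumptions you’ll adjust through trial and error, not fixed requirements.

```python
from textgenrnn import textgenrnn

textgen = textgenrnn()

# Train on a plain-text file with one dataset item per line.
# More epochs means more passes over the data; expect this to take a while.
textgen.train_from_file('my_dataset.txt', num_epochs=10)

# Generate samples: lower temperatures are more conservative,
# higher ones more chaotic (and often more interesting).
textgen.generate(5, temperature=0.5)
```

Screenshots of each of these stages (training output, error messages included, and generated samples) are exactly the kind of documentation part (1) asks for.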
The purpose of this assignment is not necessarily to learn how to code, though you will be incorporating some basic coding principles. Instead, this project will serve as a sort of springboard for critically examining the process of directing a neural net-generated composition–and, by extension, the practice of writing in collaboration with artificial intelligence overall. You’ll be asked to consider the implications of machine learning and artificial intelligence for writing as both practice and process: How does our engagement with machines and automation bear on our contemporary understandings of communication and creativity as the exclusive domain of the human? The frameworks of artificial intelligence and machine learning model human bodily processes and take up much of the language we use to describe those processes (e.g., “neural” networks)—in what other ways are the apparently robotic functions of AI embodied and imbued with human characteristics? How might the anthropomorphism built into AI lead us to (re)consider our own corporeal bodies, and what it means to live in them in the age of the digital? What does it mean for an AI to “fail,” and how might those failures be instructive or productive?
Ultimately, this assignment is about learning how to learn—which, incidentally, is exactly what you’ll be teaching your computer to do. As such, this assignment will involve a lot of trial and error, and even as you aim for functionality, you should pay special attention to those moments of “failure”—when your AI breaks down or “glitches,” are any unexpected meanings or patterns revealed? Are any fault lines within the overall structure itself exposed? In keeping with the lens of neuroqueerness, are there any moments that could serve as a catalyst for discovery and newness? Ultimately, this final project will ask you to “fail” many times over, and critically reflect on those failures. And, as you’ll almost certainly experience firsthand, machines often fail, too.
The First Steps
Some Resources and Tips