cmpttnl cnstrnt: An Exercise in Constraint and Prompt Engineering

Douglas Luman
Allegheny College

As new context-aware generative models challenge the human relationship to language, students benefit from first-hand observation of these models’ successes and limitations. Using these models often requires using “prompts” (natural language-based directions) to guide their output. The method of developing these directives has quasi-formalized into a practice known as “prompt engineering.”  Serving as a gentle introduction to the intentionality, opportunities, and limits of the prompt engineering process, this work proposes and describes initial outcomes from an assignment that uses similarities between model prompting and the constraint-based literary work of the Ouvroir de littérature potentielle (“Oulipo”) to focus student attention on precision and specificity of prompts and their execution. Beyond familiarizing students with contemporary technologies (particularly OpenAI’s GPT platform) and the nascent practices developing around them, this assignment also aims to give students first-hand experience with the reflexivity of using language to describe language in preparation for larger conversations about language as a technology and the roles of large language models (LLM) in human expression.


Learning Goals: 

  • Learn to interact with LLMs through the practice of prompt engineering
  • Refine skills in prompt engineering to increase efficacy and quality of output
  • Discover exploitable boundaries in LLM generation and what these opportunities offer
  • Begin a discussion of the roles and meaning assigned to language as an expressive tool and technology

Original Assignment Context: Introductory lesson in elective Informatics course on “Computational Narrative”

Materials Needed: There are two options available for this assignment: a “no-code” or a “full-code” version. Materials for each follow and are detailed further in the assignment:

  • “No Code” version: an accessible AI text generation program (e.g. ChatGPT), selected readings, GitHub account (for both instructor and students)
  • Full version with code (presumes Python language knowledge on the part of both the instructor and students): selected readings, OpenAI API keys for GPT, GitHub accounts, an installation of Python on student machines or an instance of the code running on Google Colab

Time Frame: ~1-2 weeks


Introduction

“The hottest new programming language is English.”

— Andrej Karpathy (@karpathy; Twitter, 24 January, 2023)

I write this assignment introduction as the large language model (LLM) known as GPT-4 has passed, or very nearly passed, a number of expert-level tests, such as the United States Medical Licensing Examination (USMLE). This marks the first time—at least in my life—that an automated agent seems to be treading incredibly close to tasks normally considered the sole domain of the “human,” namely a wide array of tasks which are, ostensibly, all about humans understanding other humans at a more-than-surface level.

But, if LLMs rise to the level of being convincing simulacra of human knowledge and possess the ability to become harbors of emotional investment, we need to devote time to understanding the opportunities, limits, and incidental effects of inputs to the model: user prompts. As Michael Graziano reminds us, “with a good ventriloquist ... [a] puppet seems to come alive and seems to be aware of its world.” 

The social media-worthy, much-hyped products of LLMs like GPT-3 and image generation tech such as DALLE or Stable Diffusion begin and end at the role of this able puppeteer. The practice of piloting these models with highly tailored plain language requests to achieve predictable or highly relevant results—known as “prompt engineering” or “prompt programming”—places the controls in the querent’s hands. As the writer behind the site generative.ink elaborates, "[p]rogramming in natural language avails us of an inexhaustible number of functions we know intimately but don’t have names for.”

For folks outside of computational creativity, it may be surprising that this self-reflexivity makes me think about poetry: specifically, reading Charles O. Hartman’s Virtual Muse (1996), a book about computational tools testing the boundaries of that frontier of reflexive knowledge. Hidden in the “Unconclusion” of the book, Hartman surfaces the powerful idea that

one of poetry’s functions is to make us aware, with a fresh intensity, of our relation to the language which constitutes so much of our relation to the world.

Though Hartman was writing about poetry experiments, his conclusion about the function of the form struck me as a way to pitch language as a technology to my students: one that has an explorative and even introspective power. One of the appeals of this particular assignment is to offer an automated version of Douglas Kearney’s model of the “Danger Room” role of writing—one in which students see the discourse reflected back to them in real time, a kind of confrontational, full-contact sport of self-discovery. Though Kearney is thinking about a different context of writing, I find value in the attempt to push students to discover, as he closes the piece, that “finding a path can be a complex negotiation between possibilities.” This assignment revels in possibility.

Additionally, given the history of text generation as a motivating force in the technology sector, it comes as no surprise that this assignment arrives at a time when much is being made about how we make sense of LLMs, which seem to occupy a relationship to language—and thereby to the world—that appears so much like humans’ own.

Taught as an introductory lesson to the “Computational Narrative” course at Allegheny College in Spring 2023, the assignment which follows contemplates limitation, intentionality, and prompt engineering. The course is taught as part of the College’s Informatics major, a course of study that emphasizes the role and meaning of information and technology as constructors of lived experience across the many disciplines that computational culture influences. It has been taught once.

Building on this sense of poetic intentionality, the assignment adopts the strict framework of the Ouvroir de littérature potentielle (“Oulipo”) as a guide for understanding the relationship between prompt and text. The constraints Oulipeans apply (e.g. forbidding specific vowels, or substituting nouns according to determinate substitution rules), which, as Paul Fournel writes in The nOulipean Analects, “[stop] when the constraint has been elaborated,” bear a striking resemblance to the practice of “prompt engineering,” which likewise prefaces and generates but is not itself the result. As you may intuit, executing these kinds of constraints is a task that LLMs, at least as far as GPT is concerned, are canonically bad at. In addition, GPT-4 lacks a feature that many visual generation models offer: the ability to provide “negative prompts” to constrain generative output. Given this omission from prompt practices related to GPT-4, this work seeks to test the limits of what is possible by offering various levels of prompt development.

This assignment also addresses a gap in electronic applications of Oulipean practices—the opportunity to “explore the deeper structures of language that allow the symbolic to reach into the physical world.” Giving students who don’t normally think of or turn to poetry the opportunity to experiment with the relatively low-stakes environment that Oulipean practice supposes (i.e. the prompt is the writing; the outcome is an elaboration of the original idea) supports the general tenor of exploration I wanted for an early-semester assignment. (For context, this was students’ first real assignment for the course.)

The goals of this assignment are four-fold:

  • Learn to interact with LLMs through the practice of prompt engineering
  • Refine skills in prompt engineering to increase efficacy and quality of output
  • Discover exploitable boundaries in LLM generation and what these opportunities offer
  • Begin a discussion of the roles and meaning assigned to language as an expressive tool and technology

A fifth, unstated goal for this assignment is to expose students to LLM technology by degrees in order to subtly introduce the limitations of a technology which will become, if it has not already, a mainstay in information culture. I leave this unstated for two main reasons: the wording and intention of this goal—to offer exposure—is, as many curriculum writers might comment, “soft,” and it is hard to evaluate or measure what this means or how its outcome manifests.

However, as Ian Bogost’s most recent article on GPT-4 in The Atlantic suggests, “[i]nstead of one big AI chat app that delivers knowledge…the [GPT model] (and others like it) will become an AI confetti bomb that sticks to everything.” This claim reads an awful lot like William Carlos Williams’ comparison of Eliot’s “The Waste Land” to an “atom bomb,” except in this case we are apparently dealing with a rogue confetti cannon. The current and projected saturation of GPT and other technologies in our daily lives is a one-way street headed toward, essentially, universal integration. At present, there are few courses taught specifically around our interface with these models—“prompt engineering.” The Andrej Karpathy tweet that prefaces this introduction certainly supposes that this way of working with LLMs will become a mainstay practice. I am certain that working with these soon-to-be-ubiquitous models will become a skill which all levels of education will need to address and train. Prompt engineering will become a pervasive occupational and professional task.

Students engaged with this assignment by generating two (2) works: one which adopted a “traditional” Oulipean constraint and one that they figured could only be done by GPT or another LLM. Supported by readings that discuss both the Oulipo and LLMs, the assignment adopts a “learning by doing” approach during an introductory lab, alongside informal and formal discussions of student work. Certain students are selected as part of each week’s assignments to give formal written responses based on our collective discussions.

During this GPT-focused week, students engaged with Oulipean forms like the tautogram (the use of words focusing on a single vowel), creating works such as one student’s exploration of the use of the letter “e” in the English language (excerpted for brevity):

eee eee eee Eeee eee eee eee eee eee eee eee eee eee eee eee eee eee Eeee eee eee eee eee eee eee Eeee eee eee eee Eeee eee eee eee Eeee eee Eeee eee Eeee eee Eeee eee Eeee eee Eeee eee Eeee eee Eeee Eee eee Eeee Eee Eee Eee Eee Eee Eee

This exploration, the product of the prompt “write me a Tautogram with the constraint of all vowels,” produced an output atypical of the prompt’s intention, but illustrative of the letter’s use in common English.

Another GPT-specific request speaks directly to the systems which compose language materially and digitally. One student prompted the model: “[y]our keyboard has broken and the only keys that work are the following: 1 2 3 4 5 6 7 8 ! @ # $ % ^ & *. Please write a sonnet using only these keys,” which resulted in a self-referential, technical output keeping the patterned spirit, if not the “law,” of a traditional sonnet:

12#3 $% 456 78! $% 8#&

^&%$! @# ^&$! 8#& @# $%7

^&$! &%$ ^&4$! %7 %^$ 8#&

$% ^&$! 7@$ &%$ !$ 7#%$ %7

 

@#$ %7 &%$ 456 78! @# $%7

^&8#! $% ^&$! &%$ @# $%7

^&$! &%$ 78! @#$ %7 7#%$

$% ^&$! &%$ 456 8#& $%7

 

!$ %^$ 8#& 456 78! $%7

^&$! &%$ 7@$ &%$ 8#& @#$

$% !$ %^$ $% ^&$! 7#%$ %7

7@$ &%$ !$ 7#%$ 456 8#&


Students responded to the practice and outcomes of the work by commenting that it constituted a “rewire in strategies of how I had been [reading] work,” and that it drove a wedge into others’ interpretive practice by forcing thinking on “a deeper level as to what GPT thought it was doing…and whether or not what we take away was in GPT's ‘intentions.’” Several observed that the prompts which resulted in the best output were of the “few-shot” variety, in which examples or stricter rules apply. Others, using the ChatGPT platform, held extended conversations with the platform, gently correcting the model when it made incorrect or incomplete judgments. In most cases students could identify the gaps in their own prompts and hold themselves equally, if not more, accountable for “misses” from the model, which were frequent; most students discovered that, despite close attention to the engineering of the prompt, GPT-3’s execution was markedly imperfect and incomplete.
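To illustrate the distinction, a few-shot prompt for a classic Oulipean constraint (the lipogram, which forbids a letter) supplies worked examples of the constraint before the actual request. The sketch below is a hypothetical reconstruction of that pattern, not a student's actual submission:

```
Write texts that never use the letter "e".

Example 1: A gull drifts across a calm bay, dipping and bobbing without a sound.
Example 2: Night falls; a cold wind slips through our small camp.

Now write a four-line poem about winter. Do not use the letter "e" anywhere.
```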

Notes on the Assignment

This assignment was offered during a course whose structure follows a three-day-a-week model: three 50-minute class periods with a two-hour lab session. When assigned, the schedule was:

| Day | Purpose | Time allotted |
| --- | --- | --- |
| Day 1, class time | Discussion of texts and constraint | 50 minutes |
| Day 1, lab time | Discussion of prompt engineering and practice | 2 hours |
| Day 2, class time | Informal workshop of prompts and outputs | 50 minutes |
| Day 3, class time | Formal discussion of 3 students’ prompts and outputs | 50 minutes |

 

Materials and Preparation

This edition of the assignment uses the GPT-3 chat interface for early-semester content accessible to non-programmers. This version of the assignment also contains code to interact directly with GPT-3, as ChatGPT’s availability proved an issue when the assignment was actually conducted.

“No Code” version

  • chat.openai.com
  • Projector
  • Digital (or physical) copies of readings
  • Networked PC
  • GitHub account
    • For professors and students

Full version with code

This version presumes Python language knowledge on the part of both the instructor and students, though it may be possible to give this to students with minimal experience or instruction. Here, the instructor will need to distribute the necessary information for an environment file (a .env file) which contains values coded as OPEN_AI_KEY and OPEN_AI_ORG. The format for this file is provided at the end of the README at the included link; a minimal sketch also appears after the materials list below.

  • Projector
  • Digital (or physical) copies of readings
  • Networked PC
  • OpenAI API keys for GPT
    • Organization
    • API key
  • GitHub account
    • For professors and students
  • An installation of Python on student machines (or an instance of the code running on Google Colab)
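A minimal sketch of the .env format, assuming only the two variable names given above (the actual key and organization values come from the instructor's OpenAI account):

```
# .env (distributed by the instructor; do not commit this file to the repository)
OPEN_AI_KEY=sk-...
OPEN_AI_ORG=org-...
```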

Acknowledgements

This assignment was heavily influenced by a few readings, the content of which didn’t all make it into the introduction. The sources to which I am particularly indebted:

  • The nOulipean Analects. Los Angeles: Les Figues Press, 2007.
  • Frankfurt, Harry. On Bullshit. Princeton: Princeton University Press, 2005.
  • Agüera y Arcas, Blaise. “Do Large Language Models Understand Us?” Daedalus 151.2 (2022): 183–197.

There is an interesting thread to investigate here about giving students low-stakes content and practices to use in their pursuit of understanding what LLMs do (versus the question of how they work, though this assignment discusses some of both). Here, I found Agüera y Arcas’ and Frankfurt’s work particularly helpful and, in some respects, permissive, allowing students a kind of freedom from needing to be profound or meaningful. Instead, they could just “try out” some voices in a relatively isolated space.


The Assignment

The assignment is hosted using GitHub under a CC-BY-SA 4.0 license. It is available at the following link: https://github.com/AppliedPoetics/cmpttnl-cnstrnt

The prompt text is provided as a reference, but is also available on the above-linked site as the README.md.

Prompt: Computational Constraint

“With a good ventriloquist ... [the] puppet seems to come alive and seems to be aware of its world.”

— Michael Graziano, in Consciousness and the Social Brain

“Prompt engineering for large language models is just an excuse to make up more nonsensical sentences to feed these AI monsters.”

— GPT-3, in response to the prompt "What is prompt engineering for large language models? Answer in a very snarky way."

Readings

Theory

Practice

Documentation

Summary

Prompt engineering—the practice of learning to con/destructively "pilot" a generative model—is one of the surprising new skills to emerge from the development of context-aware large language models (LLM). Simply put: prompt engineering is the practice of instructing a model to produce an output consistent with the prompter's intent or desire. While we've given a new name to what essentially amounts to "asking the right questions," prompt engineering is much more than that.

To date, successful prompt engineering endeavors ask for what an artist _wants_ to happen. This assignment approaches generative writing with large language models (LLM) from the opposite perspective. Drawing on the practice of the Ouvroir de littérature potentielle (“Oulipo”), we challenge GPT-3 to a more difficult task—producing works of "constrained" writing to discover what LLMs can and, more importantly, cannot do.

For those of us familiar with visual image generators such as DALLE or Stable Diffusion, this idea is close to, but not quite, "negative prompting" (e.g. asking for a picture of a house without any people in it). The approach of computational constraint applied to language prompts thinks about the concept generatively. We aren't simply asking to "live without" a feature common to a parcel of language; we're interested in rethinking the possibilities that open up by restricting choice.

Mainly, what kinds of choices can we engineer the model to make and how can we account for those choices?

Goals

  • Learn to interact with LLMs through the practice of prompt engineering
  • Refine skills in prompt engineering to increase efficacy and quality of output
  • Discover exploitable boundaries in LLM generation and what these opportunities offer
  • Begin a discussion of the roles and meaning assigned to language as an expressive tool and technology

Outcomes

  • 2 texts incorporating prompt engineering (included in the `writing` folder as `md` files)
    • 1 enacting a "traditional" Oulipean constraint
    • 1 enacting a constraint only possible using GPT-3
  • A journal of various prompts attempted, with brief notes about relative success or failure (included in `writing/prompts.md`); a hypothetical example entry follows this list
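The exact format of the journal is up to you; a hypothetical entry in `writing/prompts.md` might look like this sketch:

```
## Attempt 3: lipogram without "e"
Prompt: "Write a short paragraph about the ocean without using the letter e."
Result: Mostly successful; the model slipped and used "the" twice in the last sentence.
Next step: Add an example of a correct lipogram to the prompt (few-shot) and re-run.
```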

Process

Using ChatGPT

ChatGPT is an interface that allows you to use the prompt you’ve engineered and, failing excellent results, to chat with the model and encourage it to make changes that conform to your expected constraint.

I advise you to be kind to the model, even if it is just an LLM. ChatGPT is available at chat.openai.com.

Using code provided 

ChatGPT is both undergoing rapid change to a subscription model and experiencing varying levels of actual availability (due to performance load). To make this assignment possible, the assignment repository offers code that interfaces with the GPT-3 back-end (not chat, per se). To use this, obtain a key from your instructor to place in a .env file in the main folder of your repository.

This repository contains three (3) files essential to making any code for this assignment "happen". They are all contained in the src folder.

data/prompt.txt

This contains the prompt which is prepended to the source text. For example: `remove all of the bad people from the following text`

data/source.txt

If operating on a "found" text (i.e. one you creatively pirated from elsewhere), paste the text you'd like to operate on in this file.

main.py

The program that communicates with the GPT-3 API. This file requires the creation of a .env file, the values and specifications of which will be provided in class during either the session or the lab.
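For orientation, here is a minimal sketch of what a script like `main.py` typically does under the setup described above: load the keys from `.env`, read the prompt and source files, and request a completion. It assumes the pre-1.0 `openai` Python package and `python-dotenv`; the model name is illustrative, and the code actually shipped in the repository may differ.

```python
# A minimal sketch of how a script like main.py might call the GPT-3 API.
# Assumes: `pip install openai==0.28 python-dotenv` and a .env file containing
# OPEN_AI_KEY and OPEN_AI_ORG (see the README for the actual format).
import os

import openai
from dotenv import load_dotenv

# Load OPEN_AI_KEY and OPEN_AI_ORG from the .env file in the project root
load_dotenv()
openai.api_key = os.getenv("OPEN_AI_KEY")
openai.organization = os.getenv("OPEN_AI_ORG")

# Read the engineered prompt and (optionally) a found source text
with open("data/prompt.txt", encoding="utf-8") as f:
    prompt = f.read().strip()
with open("data/source.txt", encoding="utf-8") as f:
    source = f.read().strip()

# Prepend the prompt to the source text, as described above
full_prompt = f"{prompt}\n\n{source}" if source else prompt

# Request a completion from the GPT-3 back-end (model name is illustrative)
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=full_prompt,
    max_tokens=512,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```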