Professor Bot: An Exercise in Algorithmic Accountability

Jentery Sayers
University of Victoria 

This low-tech, tool-agnostic, small-stakes assignment prompts students to attend to issues of power and governance in artificial intelligence (AI), with an emphasis on what students do not know and may thus want to learn about algorithmic decision-making. Students first consider a hypothetical scenario in which an AI assesses university entrance essays. They then consult publications on “algorithmic accountability” to articulate questions they would want to ask key decision-makers about the AI decision-making process. They conclude the exercise by reflecting on what they learned about algorithmic accountability, transparency, and social responsibility. 


Learning Goals: 

  • Engage and reflect upon the notion of “algorithmic accountability” by attending to the unknowns of algorithmic decision-making in post-secondary education.
  • Connect concerns about algorithmic decision-making with social action.
  • Consider how algorithms might be assessed or regulated in ways similar to how Canada already governs comparable processes (the exercise could be adapted for other countries, including the United States).

Original Assignment Context: multiple Digital Humanities and English courses of varying levels and sizes (40-108 students)

Materials Needed: Selected readings, tools for notetaking, and access to a camera as well as a whiteboard or chalkboard

Time Frame: One 80-minute class session, preceded by a pertinent lecture on algorithmic accountability and accompanied by assigned reading


Introduction

The following assignment is a prompt for an in-class workshop on “algorithmic accountability” conducted in small groups. I ran this workshop on four occasions at the University of Victoria in British Columbia, Canada. I taught it for the first time in February 2019 in a large, 100-level Digital Humanities course enrolling 108 first-year undergraduates from across the disciplines. The course, called “Unlearning the Internet,” was about how social norms and cultural histories shape habits of internet research and communication. (My slides, my notes, and additional context for the course are online at https://jntry.work/unlearning/.) I have not conducted this workshop since OpenAI released ChatGPT or GPT-4, but before then I ran it in three other undergraduate Digital Humanities and English courses. 

Before students participate in the workshop, they learn foundational concepts (such as remediation, instrumentalism, and determinism) in the fields of Media Studies and Science and Technology Studies. Prior to the workshop, I also give a brief lecture on algorithmic accountability and auditing algorithms. 

My primary source material and assigned reading for the lecture is Robyn Caplan et al.’s “Algorithmic Accountability: A Primer” (2018), which states that “[a]lgorithmic accountability ultimately refers to the assignment of responsibility for how an algorithm is created and its impact on society; if harm occurs, accountable systems include a mechanism for redress.” Frank Pasquale’s The Black Box Society (2015) is another key text; however, for the sake of both time and accessibility, I teach Pasquale’s research on algorithms via an interview Megan Rose Dickey conducted with him for TechCrunch in 2017. There, Pasquale links accountability to auditing and then describes three steps in auditing an algorithm: “transparency,” “qualified transparency,” and “ethical and social responsibility.” Transparency pertains to accessing not only algorithms but also data. Qualified transparency involves people not employed by a corporation inspecting its algorithms and data to identify notable biases and anomalies. And ethical and social responsibility means that a corporation accepts responsibility for forms of discrimination resulting from its algorithms and is consequently held accountable for them (Pasquale in Dickey 2017). Caplan et al. write: “Because of the ad hoc nature of self-governance by corporations, few protections are in place for those most affected by algorithmic decision-making. Much of the processes for obtaining data, aggregating it, making it into digital profiles, and applying it to individuals are corporate trade secrets. This means they are out of the control of citizens and regulators” (25). I have aligned the steps of the assignment with this observation about the lack of oversight in entities such as private corporations and even post-secondary institutions.
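
For readers who want a concrete anchor for these audit steps, the following minimal sketch in Python is hypothetical and illustrative only: it imagines an outside reviewer (the “qualified transparency” step) probing a stand-in essay scorer for pass-rate gaps across student groups. The scorer, the group labels, and the cutoff are all invented for the example; the workshop itself requires no code.

```python
# Hypothetical sketch of one "qualified transparency" check: an external
# reviewer compares the pass rates a black-boxed essay scorer produces
# for different student groups. Everything here is invented for
# illustration; a real audit would query the actual system.

def professor_bot_score(essay: str) -> float:
    """Stand-in for the proprietary scorer (opaque to auditors)."""
    # Toy heuristic: longer essays score higher, capped at 1.0.
    return min(1.0, len(essay.split()) / 500)

def audit_pass_rates(essays_by_group: dict[str, list[str]],
                     cutoff: float = 0.6) -> dict[str, float]:
    """Compute per-group pass rates under a given score cutoff."""
    return {
        group: sum(professor_bot_score(e) >= cutoff for e in essays) / len(essays)
        for group, essays in essays_by_group.items()
    }

if __name__ == "__main__":
    # Fabricated essays, grouped by an attribute the auditors care
    # about (e.g., first-language background).
    sample = {
        "group_a": ["word " * 450, "word " * 320],
        "group_b": ["word " * 280, "word " * 200],
    }
    for group, rate in audit_pass_rates(sample).items():
        print(f"{group}: pass rate {rate:.0%}")
    # A large gap between groups is the kind of anomaly qualified
    # transparency is meant to surface, so that redress can follow.
```

A disparity like the one this toy example prints would not settle questions of responsibility on its own; in Pasquale’s terms, it marks where transparency ends and ethical and social responsibility begins.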

This assignment is not intended to help students understand the technical particulars of algorithms or determine whether an AI’s output passes a particular test for exhibiting human intelligence. It also bypasses the impulse to use ChatGPT in the classroom in order to foreground issues of power and governance and, more specifically, what students do not know and may thus want to learn about algorithmic decision-making in our present moment. 

To narrow the scope and connect student learning with lived experience, I ground the prompt in an admittedly speculative scenario describing a near future in Canadian post-secondary education. I refer to the scenario as “Professor Bot.”

Learning Outcomes 

The “Professor Bot” scenario has one primary learning goal: students should engage and reflect upon the notion of “algorithmic accountability” by attending to what they want to know about algorithmic decision-making in post-secondary education. The scenario succeeds when 1) students are able to connect their concerns about algorithmic decision-making with social action and 2) they consider how algorithms might be assessed or regulated in ways similar to how we already regulate comparable sociotechnical processes in Canada. 

I point students to the Government of Canada’s “Directive on Automated Decision-Making” when they feel lost or prefer more concrete examples. If they request additional academic research, then I offer them a copy of Christian Sandvig et al.’s “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms” (2014). These readings are specific to my teaching context, which is shaped by Media Studies and Science and Technology Studies, but they serve as examples of how to gently introduce students to complex policy discussions.

In Spring 2019, I situated the “Professor Bot” workshop within a Digital Humanities course with the following learning outcomes:

“By the conclusion of this course, you should learn how to: 

  1. Purposefully read, analyze, and synthesize digital media using the appropriate research tools and techniques, 
  2. Concisely articulate issues common to digital culture and explain why and for whom those issues matter today, 
  3. Combine critical thinking in the humanities with basic technical competencies in media practice and communication, 
  4. Use digital media as a form of both evidence and argumentation, 
  5. Demonstrate an awareness of various strategies used by researchers to produce critical work for the web, and
  6. Create a simple ‘zine’ to teach a specific audience something important related to the course theme of ‘unlearning the internet.’”

I used these course outcomes to assess student work, including the outcomes of this workshop, as part of a “log” or journal students kept throughout the term. Several students made algorithmic accountability the topic of their zine (see Learning Outcome 6 above), which they created near the end of the term for an audience of their choice. 

Materials and Skills Required 

The assignment is intended to be low-tech and small-stakes, and the in-class workshop should take about an hour, plus time for writing and reflection. Students will need tools for notetaking as well as access to a camera and a whiteboard or chalkboard. Prior to the workshop, they should read “Algorithmic Accountability” by Megan Rose Dickey (2017) and “Algorithmic Accountability: A Primer” by Robyn Caplan, Joan Donovan, Lauren Hanson, and Jeanna Matthews (2018). Both publications are open-access. 

How Students Responded

The most common student response to this workshop was a palpable sense of curiosity upon learning that the Government of Canada had already issued a directive on automated decision-making. The existence of this directive, even if it is not perfect, meant students did not need to start from scratch when addressing power and governance in AI, and the vocabulary provided by Pasquale and Caplan et al. also helped them get started. 

More interestingly, students recommended a variety of social actions when they reflected on the process of an algorithm audit. Although I did not quantitatively track, let alone code, their responses, I found that, despite the assigned readings, many students still deemed governance a technical matter: they treated transparency in the social process as tantamount to transparency in data and the inner workings of AI, such that social responsibility meant being responsible for the recipe of an AI but not necessarily for its uses or effects. Students thus frequently found black-boxed AI unfair to them yet held post-secondary institutions or governments, rather than private corporations, accountable for the integration of AI with decision-making in education. Here, student perceptions of accountability hinged on neoliberal choice, namely the assumption that a university or college could always refuse to contract with corporations in the tech sector. 

Regardless of their position on accountability, students tended to be productively surprised by how much they did not know about AI decision-making, even beyond the technical particulars. A common suggestion was to improve AI literacy and to include within it more education about audits, transparency, regulation, and policy-making. Other suggestions for social action involved ways for governments, corporations, and post-secondary institutions to better foster student trust in otherwise opaque decision-making processes and, of course, for institutions to craft not only clear rubrics for AI decision-making but also accessible mechanisms for appealing those decisions and redressing related harms and grievances. 

Acknowledgments 

I would like to thank Stefan Higgins, Ria Karve, and Ian Michael Waddell for teaching the first iteration of this workshop with me. I would also like to thank Kari Kraus, who introduced me to the practice of speculative design, and Christian Sandvig, who introduced me to the notion of an algorithm audit.


The Assignment 

Here is the prompt for the workshop and reflection. It involves five steps. As students conduct the exercise, I manage their time by announcing how many minutes remain in each step, and TAs and I move around the room to address student questions and concerns as they unfold. I invite students to converse among themselves throughout the workshop. I also read the instructions aloud before we begin, project those instructions on a large screen for student reference during the exercise, and re-read each step aloud as we arrive at it. I do not lecture during this 80-minute class session, which is dedicated entirely to the workshop. 

Algorithmic Accountability 

The aim of this workshop is for you to engage and reflect upon the notion of “algorithmic accountability.” We will ground the exercise in a speculative scenario that might feel like science fiction.

“Professor Bot”

It’s the near future. As many business experts projected, BAs in English and Media Studies are in high demand across Canada. People, including you, are now returning to university to earn these important degrees; however, demand is off the charts. In fact, it’s so high that all universities now require you to . . . ack! . . . write an entrance essay in English and Media Studies. Despite the prevalence of bots as both assistants and peers in society, universities do not permit you to write the essay with an AI. 

The prompt for the entrance essay asks you to identify and analyze the relationship between science fiction and artificial intelligence. It provides you with two short fictions to interpret, and you are given four hours to write your answer in a text editor on a computer that’s not connected to the internet.

In a cruel twist, the essay is marked by . . . Professor Bot.

A subscription-based product of Big Four Tech Services, Inc., Professor Bot exists due to demands on academic labor. There just aren’t enough English profs available to assess all these essays. You’re rightfully concerned about how exactly Professor Bot determines whether your entrance essay will pass, and you want to know who is ultimately accountable for this “Prof Bot.” After all, it stands between you and your English and Media Studies BA. 

Next Steps

Your goal is to articulate what you mean by “algorithmic accountability” in the case of Professor Bot. Here are five steps toward that articulation:

  1. Please take at least twenty minutes on your own to describe (in writing) “transparency,” “qualified transparency,” and “ethical and social responsibility” with respect to Professor Bot. What, for instance, would you want to know about how Prof Bot processes data and makes decisions? How has Prof Bot learned to assess English and student writing, and based on what data (e.g., which corpora of science fiction and which collections of student essays)? Who made and maintains Prof Bot? Who reviews Prof Bot’s work and decision-making? And who should be held responsible for Prof Bot’s assessment of your entrance essay? As you respond to these issues, see “Algorithmic Accountability” by Megan Rose Dickey (2017) and “Algorithmic Accountability: A Primer” by Robyn Caplan, Joan Donovan, Lauren Hanson, and Jeanna Matthews (2018) for context and details, including definitions of the terms used above.
  2. After twenty minutes on your own, gather in groups of no more than five people and then take twenty more minutes to consolidate your descriptions of “transparency,” “qualified transparency,” and “ethical and social responsibility” with respect to Professor Bot.
  3. After twenty minutes of consolidating, please take about ten minutes to write on the whiteboard your group’s distilled descriptions of “transparency,” “qualified transparency,” and “ethical and social responsibility” with respect to Professor Bot. Be prepared to share these descriptions with the class, TAs, and me. Your descriptions should directly address these four questions of Prof Bot: 
    • Transparency of what, exactly?
    • Qualified transparency involving whom? 
    • Whose ethical and social responsibility? 
    • Ethical and social responsibility determined by whom?
  4. Now, beneath these answers on the whiteboard, please take ten more minutes to briefly describe how two key decision-makers in this scenario would likely respond to your answers. What obstacles to accountability might these decision-makers emphasize, and what concerns or objections might they have? Be sure to identify the decision-makers, and please be as specific as possible. One such decision-maker may be the CEO of Big Four Tech Services. Another may be a Dean of Humanities or a Chair of English and Media Studies at a Canadian university.
  5. Finally, please use 150-250 words to not only document your group’s descriptions of “transparency,” “qualified transparency,” and “ethical and social responsibility” with respect to Professor Bot but also reflect on what you learned about algorithmic accountability during this workshop. You might even define “algorithmic accountability” in your own words. Feel free to co-author the three descriptions with your group; however, the reflection should be written by you alone. Also be sure to include the first names of your group members (for the sake of attribution), together with a photograph of your group’s notes on the whiteboard. Thank you!

References 

Caplan, Robyn, Joan Donovan, Lauren Hanson, and Jeanna Matthews. “Algorithmic Accountability: A Primer.” Data & Society, 18 April 2018. https://datasociety.net/library/algorithmic-accountability-a-primer/.

Dickey, Megan Rose. “Algorithmic Accountability.” TechCrunch, 30 April 2017. https://techcrunch.com/2017/04/30/algorithmic-accountability/.

Government of Canada. “Directive on Automated Decision-Making.” Last modified 25 April 2023. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.

Kraus, Kari. “Finding Fault Lines: An Approach to Speculative Design.” In The Routledge Companion to Media Studies and Digital Humanities, edited by Jentery Sayers. Routledge, 2018. 

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015. 

Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms.” Data and Discrimination: Converting Critical Concerns into Productive Inquiry conference, 22 May 2014. http://www-personal.umich.edu/~csandvig/research/Auditing%20Algorithms%20--%20Sandvig%20--%20ICA%202014%20Data%20and%20Discrimination%20Preconference.pdf.