Teaching Spotlight: An Opt-In/Opt-Out Approach to AI Use

Benjamin Morgan

Benjamin Morgan (Department Chair and Associate Professor of English) studies literature, science, and aesthetics in the Victorian period and early twentieth century. He specializes in nineteenth-century sciences of mind and emotion; aestheticism and decadence; and speculative and science fiction. His interests also lie in the environmental humanities, including topics such as extinction, energy cultures, and the literary history of climate change. His recent courses include “Media Aesthetics: Image, Text, Sound I and II,” “Introduction to the Environmental Humanities,” “Climate Change in Media and Design,” and “Humanities Writing Seminars.”

Tell us a bit about your course context.

I teach in the Media Aesthetics Humanities Core sequence and typically have first-year students who are new to the University. We focus on learning how to interpret and write about different forms of media. We study film and visual art, as well as theoretical explorations of topics such as the male gaze or gender in cinema. It’s a wide range of texts, and “text” is broadly defined, but all of them center on how images circulate within society and the power of images from Plato to the present. Part of the course is also about how the emergence of new media, such as photography, film, or even the technology of writing, has historically created social panics. So it seemed odd to me not to talk about our current moment, and to take an unreflective position that this new technology is bad and should be banned in all cases.

Tell us about your approach to students’ use of artificial intelligence in your course.

Students have two options in the class regarding AI use. They can choose, at the beginning of the term, to take a “traditional” version of the course with no AI use allowed and essentially zero technology beyond word processors. There’s no policing of their AI use if they decide to opt out, so it’s more of a social contract between my students and me. Students also have the option to take a more “experimental” version of the course, where they opt in to AI use. I give them a set of guidelines about how they can and can’t use AI for the course. In all cases, using AI to write the assigned papers is banned. However, students can use it to interact with course materials, for example as a conversational partner or for brainstorming. Those who opt in are required to write a short weekly reflection on how they used the tools that week. About one-third of the students decided to opt in to AI usage this term, with the rest opting out.

What are the learning challenges or opportunities around AI use in your course that you wanted to address by using this strategy?

I wanted to give my students the opportunity to engage with these tools and be thoughtful about how they were using them. I wanted them to treat AI outputs as something they were analyzing or studying, not simply as a tool that spits out content for a paper. I wanted them to consider how the tool might be shaping their thought patterns, as well as the motivations of the developers and companies that have designed it to give answers that are uncontroversial and that affirm the user. I also didn’t want to require AI usage, as many of my students were seeking the “traditional” University of Chicago experience. Many wanted a fully analog class where they got to read Plato and Aristotle and come to class with the physical books, ready for a lively discussion. Others, however, were very appreciative of the opportunity to explore AI, particularly in a humanities class, and to think about how the tool would work in that context.

How did you decide to approach AI usage as an “opt-in/opt-out” choice? What was your thought process?

I started my thinking about AI in the same place as a lot of people: it felt like a technology that mainly gave students a way to avoid doing work or to shortcut their thinking. That kind of offloading of learning is fundamentally counter to what I wanted them to be doing in class, so in Autumn Quarter 2024, I banned all forms of AI use in my courses. However, over the summer of 2025, I began using ChatGPT to explore what the tool was and what it did, in both professional and personal contexts. I found it a lot more interesting than I had expected; the benefits were different from what I had anticipated, as were the limitations. My own experiences with the tool led me to take a different approach, one where students could opt in or opt out of AI use.

I also had many conversations with colleagues and noticed two powerful contingents in the AI space. One group hypes AI as a solution to all types of educational problems; a second is extremely suspicious of any use of AI in the classroom. It was very hard to find a space of open inquiry around AI tools, and my own use of AI didn’t match either set of preconceptions about its utility or its detriments. When I started using the tool, it at first seemed very powerful, but the more I interacted with it, the less I took its answers at face value. At the same time, as a scholar of language and writing, I found it interesting as a new, text-based communication medium. I wanted to bring AI into the classroom with a critical attitude, introducing it not as an amazing thing that’s going to revolutionize everything we do, but as something whose interaction with student learning we would explore.

What is working well about what you’re doing? Why do you think it’s working?

I think the main thing that is working well is that students trust that they can talk with me and be open about their interactions with the technology. They trust that I’m not out to catch them using AI, force them to use it, or promote its use if they don’t want it. This open approach has allowed us to have an interesting conversation about their responses to this technology. Some students are still adamantly opposed to AI use and don’t think anyone should be using it. Others are very curious about it. I can talk with my students about the technology and why educators are worried about it. I can also talk with them about the skills that I’m trying to teach them and how the technology undermines those skills.

I didn’t want conversations with students to come from a place of policing—calling them out, checking up on them, or requiring them to work in a Google document I could monitor at all times. I hated the oppositional dynamic that created, where I suddenly felt positioned against my own students. This policy has allowed me to avoid that dynamic and have a more open conversation around AI. The trade-off is that I can’t control as much of what students are doing with AI, but the main benefits are the increased openness and communication.

I also have found students doing particularly interesting things with the tools. One student asked an AI tool to act as Socrates, and he asked “Socrates” questions about Plato’s Republic, which we were reading at the time. My student noticed that the AI’s responses were not particularly precise or effective, so he kept arguing with it. He would cite arguments directly from the text that contradicted the AI’s responses, and the AI would come back in a very apologetic way and revise its initial response. The student was citing the text to make arguments back to this chatbot that he was interacting with and proving to himself that he could outsmart it, which I think is a productive way to approach AI.

For another assignment, students needed to watch a film on their own time. One student chose Battleship Potemkin and turned to a chatbot for assistance, asking it to identify some things to look out for as a first-time viewer without spoiling the film. The chatbot then created a guide for the student, which I think is a wonderful use of the tool. The guide was better than what the student would have gotten from other sources because it was tailored to the question of what to look for in the film.

I appreciate that my students are using the tools in these more productive ways, rather than simply feeding the tool a question, getting an answer, and copying and pasting that answer into their assignment. They’re using it to enhance, not outsource, their learning.

What challenges have you faced with this approach?

One challenge with this strategy is that even though I am offering students the opportunity to opt in or out of AI usage, I suspect some students are using it in unsanctioned ways. After they turned in their first batch of papers, I noticed some tell-tale signs of AI usage. I then had to sit down with specific students to discuss why I thought they’d used AI. The students denied using it, but this is the exact scenario I was hoping to avoid by having such an open policy.

Another interesting challenge is that the first batch of papers contained certain rhetorical and syntactic patterns that didn’t match anything I’d seen in my previous 15 years of teaching first-year writing students. I’m not sure whether this change is due to incorporating more scaffolding into the assignment, in that students must meet with a writing tutor before turning in the paper, or whether it’s because they are using ChatGPT to write their papers. I have done a lot of work in the course to build an open, trusting, and communicative environment, and this potential use of AI felt like a violation of that work. So I asked students, in an anonymous survey, to share with me how, if at all, they had used AI for these papers. What I found is that students were having conversations with ChatGPT about the content of their writing. They asked for help refining their paper’s structure or grammar, but not for writing the assignment itself. I’m still considering the implications of that and how I want to address it in future courses.

Overall, I see that my students are really engaged: they come to office hours to discuss their papers with me, and we talk about their papers after class. This isn’t a group of students checking out and not doing the work; they’re highly engaged, and I don’t want to lose that engagement even if AI is assisting them. I think we’re in a new educational landscape where student engagement includes these kinds of productive interactions with generative AI models, interactions that produce new or different perspectives on the readings that then get incorporated into students’ own writing. I really sympathize with the impulse to fully ban AI usage in classrooms, but in my experience, I don’t think that is a productive way to approach a situation in which students are interacting with these tools constantly, including in Google search results, whether they want to or not.

I’m also very sympathetic to students who are navigating multiple, sometimes contradictory, AI policies across their classes. Many of my students have said they worry about being accused of using AI when they didn’t, and I think this goes back to the adversarial relationship AI has introduced between students and instructors. While it’s a challenge to have a foolproof AI policy, I’m committed to fostering this open and respectful dialogue with my students to help determine effective ways to use AI to support learning.

What changes do you plan to make to this strategy in upcoming terms?

I will definitely keep this opt-in/opt-out approach in future quarters when I teach this course, but I need to consider how AI usage fits into the writing portion of the class. I am still not okay with students asking AI to write their papers for them; however, I do think that the papers I’ve gotten from students this quarter are better than papers students have written without dialogue with AI. This suggests to me that there is at least a possibility that AI is helping my students write better, and I want to explore that with them more.

I’m also thinking about how to assess students fairly when comparing those who didn’t use AI, whose work might be rougher and less polished, with those who did use AI and thus produced more polished writing. I want to support both kinds of students in becoming better writers, and not penalize those who chose not to use AI and are still developing their writing skills.