
Pedagogy of AI as Normal Technology

Nick Feamster

Nick Feamster is a Neubauer Professor in the Department of Computer Science. He directs the Network Operations and Internet Security research lab; co-leads netml.io, a research initiative focused on applying machine learning to networking problems; and co-directs the AI and Policy Pillar, which works on policy issues at the intersection of AI and technology. His research focuses on applications of AI and machine learning to improve the performance and security of networked systems. His recent courses include “Security, Privacy, and Consumer Protection,” “Internet Censorship and Online Speech,” “Machine Learning for Computer Systems,” and “Free Speech and Internet Censorship.” We talked with him about how he integrates and encourages AI use in his courses, prompted by a post he wrote on his Substack, available here.

Tell us a bit about the course context.

My approach applies to all the courses I teach. I typically teach undergraduate courses in the computer science major. One is called “Machine Learning for Computer Systems,” which explores how machine learning and AI apply to networking, security, and systems. This is the class with a major focus on AI – the learning objectives are centered on understanding machine learning techniques and applying those techniques to real systems problems. The other two main classes I teach are “Security, Privacy, and Consumer Protection” and “Internet Censorship and Online Speech.” These courses touch on AI, but also address internet policy, internet measurement, data collection, and applications of data to real-world questions. In “Internet Censorship and Online Speech,” we address how AI is changing censorship and online content moderation. Students in my courses are typically advanced undergrads majoring in computer science, but I get a lot of students outside the major – I’ve had students in economics; Law, Letters, and Society; computational social sciences; biology; and applied math, among others. Because of this wide range, I design the courses to be inclusive of these students’ varying backgrounds. And, given the content of these courses, I think the approach I take to AI is even more important.

Tell us about your approach to artificial intelligence in your courses. Why did you choose that approach?

My approach is rooted in a philosophy that AI is normal technology; it’s just another tool, like Stack Overflow or GitHub. This philosophy was informed by the work of Arvind Narayanan and Sayash Kapoor at Princeton University. People use various tools to learn how to code and take code snippets and expand upon them for their own projects. I see AI as similar to calculators or debuggers; they’re just technology. AI is potentially quite a bit more powerful than a lot of the tools I just mentioned, but fundamentally it’s not different. So, I have an explicit policy in my syllabus with an eye toward that. I don’t just permit my students to use AI; I actively encourage it. I view my classroom and classes as a simulator for the real world, and in the real world, these tools are everywhere. No one is writing code from scratch anymore, so ignoring AI and the existence of these tools doesn’t serve my pedagogical purposes or serve my students. It actually leaves them unprepared for what they will see when they leave UChicago and enter the world. We as teachers have to acknowledge the world and the environment we’re operating in and teach to that.

With that, there is one key principle I abide by which I think is aligned with the “simulation of the real world” philosophy. Students have to understand what they’re turning in. They’re welcome to use AI and any other tools as they see fit, but if they’re using them to not do the assignment, that defeats the learning objectives. It not only circumvents what I’m trying to teach my students, but they’re also not learning how to use the AI tools effectively. This aligns with my other course policies. I’ve always had an open and permissive collaboration policy and encouraged students to work with whomever they want, as long as they acknowledged their collaborators. Now, they just have collaborators who might not be human.

How do you use AI tools in your courses?

Outside of my permissive AI policy for student use, I use AI tools for course-related tasks, such as helping students who miss class or are preparing for exams. Prior to AI, after every class, I would list the topics that I covered and share the list with students. This year, I took transcripts from my classes, uploaded them to Claude, and asked it to give me a summary of the class. I then briefly review that and post it. It saves me a lot of time and is also more immediate and comprehensive for my students. For exams, students are always asking for practice questions. I want to help them, but I also don’t want to give out previous exams. With AI, I can upload the course agenda, syllabus, and past exams, and have the tool generate practice exams. It’s certainly not perfect, and it still takes a few hours for me to write the actual exam, but a lot of the grunt work is gone. I saved the prompt so I can use it to generate future exams, and I also share an edited version of it with students so they can generate as many practice questions as they want to help them learn.
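A minimal sketch of what scripting that summary workflow could look like, assuming the Anthropic Python SDK with an API key set in the environment; the model name, file name, and prompt below are illustrative placeholders, not the saved prompt described above:

    # A minimal sketch of scripting the class-summary workflow. Assumptions:
    # the Anthropic Python SDK is installed, ANTHROPIC_API_KEY is set in the
    # environment, and "transcript.txt" holds a lecture transcript (hypothetical
    # file name); the model name is a placeholder.
    import anthropic

    client = anthropic.Anthropic()

    # Read the transcript saved after class.
    with open("transcript.txt") as f:
        transcript = f.read()

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Summarize the topics covered in this class transcript "
                       "as a bulleted list I can post for students:\n\n" + transcript,
        }],
    )

    # Review the summary before posting it for students.
    print(response.content[0].text)

The same pattern would extend to practice questions: attach the course agenda, syllabus, and past exams and change the prompt accordingly.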

I also have reconsidered the purpose of some of my assignments. For example, in my censorship class I had reading responses that were meant as an accountability measure. However, about two to three years ago, I started to get responses that were clearly generated by a chatbot. Now, I ask students to share with me components of the readings they didn’t understand or things they want to discuss. Their responses can be very short, even just one sentence, but I tell them I will use their responses to adapt my lectures and make the discussions in class more relevant to them.  And it works! Some students turn in a single sentence and others write paragraphs, but either way it is helpful to me in preparing for class. I can put their responses into Claude and have a tailored discussion guide in no time.

What teaching challenge did you want to address by taking this approach? Or, what curiosity did you want to pursue?

The biggest challenge to address was the gap between what I teach and what students will do when they graduate. Despite over 25 years of coding and teaching experience, I wasn’t sure if my assignments were well-suited to a world where students would have AI-assisted coding. They wouldn’t be able to get jobs if they didn’t know how to use these AI tools. So, how do I achieve my learning objectives and also close this gap?

These questions are perhaps easier to answer for upper-level courses than for students who are just learning how to code. I typically teach the former and don’t presume to have any answers for the latter, because students have to be taught enough so that they understand the AI’s outputs and reasoning frameworks. However, for my classes of upper-level students, I see the advent of AI tools as an advantage because the tools remove a lot of the grunt work associated with working with computer systems. For example, in my “Security, Privacy, and Consumer Protection” class, I need to teach my students about web encryption. Previously, I would have them set up a web server without encryption, then add the encryption – that’s an extremely tedious process, and not exactly what I need them to learn, but it was necessary. Now, they can have AI set up the server for them, and I can ask them more detailed, in-depth application questions about the content, rather than having them spend time configuring details that will likely change with the next software upgrade.
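For a sense of the configuration work being offloaded, below is a minimal sketch of an encrypted web server using Python’s standard library; the certificate file names, port, and choice of Python rather than a production web server are illustrative assumptions, not the course’s actual assignment:

    # Illustrative only: a tiny HTTPS server built with Python's standard library,
    # the kind of hand configuration an AI tool can now generate. Assumes a
    # self-signed certificate and key already exist, e.g. created with:
    #   openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 30 -nodes
    import http.server
    import ssl

    server = http.server.HTTPServer(("localhost", 4443),
                                    http.server.SimpleHTTPRequestHandler)

    # Wrap the plain HTTP socket with TLS so the server speaks HTTPS.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
    server.socket = context.wrap_socket(server.socket, server_side=True)

    print("Serving HTTPS on https://localhost:4443")
    server.serve_forever()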

What did you do to prepare to put it into action in your class?

I learn by doing, so I started using these tools extensively in my own work – for research, writing, and developing my courses. These classes are adjacent to my own research, so I also tried to learn from my PhD students about things that work and don’t work. I spoke with a lot of people, in a variety of disciplines, about their thoughts on the tools, including my own students. My students come from a variety of disciplines, and the technical aspects, like whether you know how to code, are largely irrelevant. I think it’s more important for students to be flexible, creative thinkers. I want them to be able to make connections between concepts they’re learning in ethics and philosophy courses with AI and content moderation, for example. The broader the set of colleagues that I talk to and the broader the set of students that I bring into the class, the more both I and my students learn.

What is working well about what you’re doing? Why do you think it’s working?

I can teach more and do more in classes. Now that a lot of the grunt work is removed, we have more time for hands-on applications, such as how to design software and think like a programmer or computer scientist. They learn testing, debugging, and critical reasoning about systems, which are the key components of computer science that you still need to learn even if AI is creating the code.

I think the biggest reason this approach is working for me and my students is because I’m honest and transparent about it and tell my students how I’m using AI tools. I was worried students would be against this, but so far, students have responded positively to this approach. I think this is because students can see that their responses are being used to make the class more tailored toward their misunderstandings and interests. I also take responsibility for the outputs that I’m using. I offer students the opportunity to learn and the content to focus on, and it’s their responsibility to demonstrate their learning. We’re both going to use these tools, so I model for my students how I use them and expect my students to use them responsibly.

I also view my students as partners in learning. I’m their instructor and it’s my job to teach them, but it’s not my job to force-feed them information. They are ultimately responsible for whether or not they learn anything. If they use AI to generate their homework, they’re the ones who lose out on learning, and I make sure to make this clear to them. I want them to learn and I want to facilitate that learning, but it’s ultimately their responsibility to do the hard work of learning.

What changes do you hope to make to this approach in upcoming terms?

In future quarters, I want to ask my students to share the exact prompts they used and to reflect on how they used them. Since I see them as partners in learning and know each student has something unique to bring to the class, I’d like to use their prompts to potentially learn new ways to use the tools. I also want to prompt my students with more structured reflection questions about the role that AI had in their completion of the assignment.

More broadly, I’m also thinking about how to teach more concepts and cover more content now that a lot of the grunt work is removed from assignments. An assignment that used to take 3-4 hours can now be completed in 30 minutes with AI, so I need to account for that. I want to weave AI-assisted workflows into my pedagogy and assessments. Most of my current assignments were developed before AI, so I want to think about how to more explicitly incorporate AI use and what that would look like.

In a year, when I teach these courses again, I think the world is going to be pretty different. In the time between now and then, I need to pay attention to advancements and figure out how to continually evolve my courses to meet the shifting needs of education. I think CS degrees are even more important now – one, these tools continue to evolve because of computer scientists and two, regardless of students’ future jobs, they’ll likely be using AI. I want to be the one training the next generation of thinkers to engage with these tools in productive and critical ways.