Teaching Spotlight: Lisa Rosen on Integrating Reflective AI Use in Student Writing

Lisa Rosen is an Associate Senior Instructional Professor, Associate Director of the Committee on Education, and a CCTL Associate Pedagogy Fellow. She teaches graduate and undergraduate courses that focus on the changing role of schooling in society, including “Schooling and Social Inequality,” “Schooling and Identity,” and “Beyond the Culture Wars: Social Movements and the Politics of Education.”

What teaching challenge did you want to address by using AI in your course?

I teach writing-intensive classes for undergraduate and graduate students and have always worried about students using online tools to do the writing for them. The release of ChatGPT intensified that concern. I’ve done many things in my teaching to address this possibility. For example, I’m more mindful about supporting students as they do the writing I’m asking them to do, I’m clear with them about my learning goals for them, and I help them clarify their own goals. I also provide resources that help them meet those objectives, so that the temptation to take “the easy way out” is less intense.

How do you approach the use of AI in your classes? What did you do to prepare to implement this strategy?

On my syllabus, I included an explicit AI policy and produced additional guidance to help students determine appropriate use of AI. My syllabus policy states that students may use AI to assist their writing and identifies the uses that tend to be most helpful, such as generating ideas and critiquing their own work, as well as those that are less helpful, such as doing the writing for them. The policy also requires students to disclose and reflect on their use of AI. Using it is allowed, but they must share a full transcript of their interaction with the AI as part of the assignment, which means they can’t use tools that don’t let them do that. Finally, they are required to reflect on how the tool assisted or impeded their learning, a task that requires students to have identified their own learning goals before writing. I’ve found that most of my students aren’t using AI, despite having permission to do so.

I had to structure that reflection prompt so that anyone could use it, regardless of AI use, so all students reflect on their writing. They are learning about themselves as writers and learners through the assignment design, especially through the reflection requirements. Earlier in the year, a student with ADHD shared that she dislikes revising because it requires focusing on the same thing for a long time, which is hard for her. But she found the scaffolding of the assignment helpful: completing small chunks of the assignment and getting feedback on each one showed her real improvement, and that process helped her see the value of revising the same project over time. Without the reflection component, students might not have thought about things like this. Furthermore, I have an unverified hunch that reflecting on their learning goals primes students to look for progress toward those goals, and to avoid behaviors and practices that might short-circuit their own learning, since their goal is often to become a better writer.

What's working well with your approach, and what questions or issues still remain?

One challenge has been distinguishing between appropriate and inappropriate use. I’ve gradually defined that for myself over the course of the academic year. In Autumn 2023, I didn’t explicitly prohibit students from using the tool to rewrite portions of their papers. They put their ideas into ChatGPT, and it helped them with the revision. Now I ask students not to do that, and most don’t.

I was surprised that not all students are excited about AI. When I asked why so few of them used it, they said it’s not helpful for this kind of assignment, or that they don’t like its output. One person speculated that because our class is an elective for future educators, using AI comes with an “ick-factor” and goes against their intrinsic motivation for learning and their values as educators. A few of them said they were worried about getting in trouble for using it, despite having explicit permission to do so. But I explained that they needed to disclose their use precisely because AI-generated text is mostly undetectable.

I wish that I had instituted this policy earlier. Students love it, and it’s an important low-stakes way for them to disclose things about their learning and struggles that they might not otherwise talk about. It’s turning out to be a useful relationship-building tool for me: I can learn about individual students’ needs, tailor support for them, and look for common challenges across the group of students, too.

Despite most students not finding AI helpful, there was one notable exception. International students were the most frequent users of ChatGPT, and I suspect this was due to their language backgrounds: AI was helpful for detecting language errors. In Matthias Staisch’s presentation to the “Teaching with AI” Exploratory Teaching Group in Spring Quarter, he suggested something that might be useful for language learners: letting students write their paper in their first language and then using ChatGPT to generate a translation. That’s a different way of co-authoring that I may try in the future.

What advice would you give and/or what resources would you recommend for those interested in using GAI in their courses?

First, I’d recommend starting with the hands-on workshops from ATS to learn about AI and what it can do. Experiment with the range of tools available and become familiar with their different outputs, their strengths and weaknesses, and how those change over time.

This year, with support from the CCTL, I also co-organized an Exploratory Teaching Group on generative AI and teaching with Emily Coit (Assistant Instructional Professor of English), in which a group of instructors who were curious about AI met several times over the course of the year to learn from each other about how we use it in our classrooms. The CCTL is very helpful for thinking through the pedagogical uses of generative AI. Likewise, meeting with staff at ATS was very helpful—Michael Hernandez helped me to think through my AI policy.

As far as advice goes, I wouldn’t recommend banning it. That’s futile. Instead, determine your learning objectives for the course and for different assignments, and design a policy that centers student learning, using AI to serve learning rather than substitute for it. A one-size-fits-all policy might not work, so be open to adjusting the policy for different assignments when necessary.