Resources

Teaching Spotlight: Hoyt Long on Using AI as Historical Personae

Headshot of Hoyt Long, Professor of Japanese Literature and East Asian Languages and Civilizations

Hoyt Long is a Professor of Japanese Literature and East Asian Languages and Civilizations and Chair of the Department of East Asian Languages and Civilizations. His teaching and research focus on modern Japan, with specific interests in the history of media and communication, cultural analytics, platform studies, the sociology of literature, book history, and environmental history. He teaches courses such as “Introduction to Cultural Analytics,” “Readings in World Literature,” “The Modern Japanese Novel,” and “Platforming Culture in East Asia: From Newspapers to Web 2.0.”

How have you tried to incorporate generative AI tools into your instructional practices?

AI has impacted both my course and assignment design. In terms of course design, in my introduction to cultural analytics course, we spend a week or more discussing conceptual essays on the significance of AI for human creativity, particularly its impact on writing and other types of artistic media. We look at how researchers use large-scale analysis tools, such as AI, to gain insights into historical and literary materials. Finally, we take a critical angle on the biases that appear in AI models, and speculate on how they might affect cultural production and consumption going forward.  

Regarding assignment design, AI has played more of a role in my Humanities Core course “Readings in World Literature.” I’m skeptical about its use for essay writing, editing, and revising. But I was curious as to how it might supplement reading. For example, we read Augustine’s Confessions, a text that most students are unfamiliar with. I wanted to know whether generative AI would support their ability to read the text and understand its historical context and contemporary interpretations. First, students read sections of the text, and annotated and responded to it on their own, as they would in a normal class. In a subsequent assignment, they used library and internet resources to craft distinct personae for AI chatbots like ChatGPT. The goal was to have them guide the AI models toward more deliberate and focused interpretations of particular passages in Confessions. These in turn might help students step back from their own readings and reflect on how one of Augustine’s contemporaries, such as a philosopher or theologian, might have responded to his thoughts on God. As a teacher, I was curious to know if generative AI could impersonate a historical figure to help students gain deeper insight into a work’s social and historical context. What I found is that AI can facilitate some of the work of reading for context, introducing students to ideas that might not have occurred to them otherwise.

Do you think this approach was successful or not? Why?

Yes and no. My design was successful in that it taught students how deficient generative AI can be as a reader without good prompting, because its reactions were flat and somewhat shallow. At the same time, students learned how to better prompt AI using digital and print resources they found in the library on their chosen historical figure. So, they learned more about that historical figure than they might have otherwise, and fed that information to the AI to obtain further information about that figure’s possible interpretation of Confessions. Ultimately, the AI model returned a list of generic responses that didn’t encourage students to dig much deeper. In this respect, it wasn’t a significant improvement on learning about historical context the old-fashioned way – by reading the scholarship and piecing together different perspectives on one’s own. It jump-started that process, but was certainly no substitute for it.

Despite this, I am considering trying the experiment again, with a few changes. First, I would build in more time for students to reflect on the process and the AI’s output. Second, I would spend more time preparing students to design better prompts for the models, which could then potentially improve the model output. Finally, I’d ask students to experiment with a range of different AI tools, so that we could talk about how different models respond to different kinds of input, discuss the data they’re trained on, and examine their response styles. 

How are you addressing the use of AI technology with your students more generally? Are there tools or resources you have found to be useful as you navigate your students' uses of AI technology?

Students and I were open with each other from the beginning. I didn’t want to implement a draconian policy against AI or assume that they were using AI. They shared some of their experiences with me and the extent of their AI use, and then we discussed my AI policy, including acceptable and unacceptable uses [in my class]. I allowed some flexibility for students who wanted to use the tool for revision or editing, as long as they acknowledged their use of AI in their paper. Ultimately, I don’t suspect any student used AI much for writing, as I didn’t receive anything from students that looked AI-generated. But I think this sort of thing is easier to detect when so much emphasis is placed on the revision and writing process. 

What is the biggest challenge you've experienced when trying to integrate generative AI into your teaching?

The biggest challenge is determining how much time to devote to AI and deciding whether it's helping students meet course learning goals. In my Core course, we found that the AI requires a lot of scaffolding to generate useful and specific output. As students learn to write better prompts, it’s important to reflect with them on that process of using and prompting AI. That takes time, and it means I have less time to provide additional context for the reading, to discuss it with them in class, and to help them improve their writing. Despite the additional time, it's still useful, as students learn to write and conduct research in an environment where AI models are so close at hand, [and they need to] understand the limitations of these tools in those activities.

What are students learning about their own learning or about AI by using it in your course?

One of my objectives was to help students recognize their own limitations in responding to a text. AI allowed them to examine the ways that they respond as a reader at this moment in time, in this historical context. It made students think about what they took for granted in our current historical moment and made their own biases more visible. Students may have benefitted more from the AI’s responses if we had had more time to develop the historical personae, but it still helped them think about the differences between reading five scholarly articles about Cicero, as might happen in a more advanced course, and reading one or two articles and using these, together with AI, to get a better handle on past interpretations of a text.   

What advice would you give and what resources would you recommend for those interested in using AI tools in their teaching?

The materials from ATS and the CCTL are fantastic. Those materials are put together by people who teach at UChicago and who know our students. Additionally, it was helpful to read a working paper on AI and writing from the joint MLA-CCCC (Conference on College Composition and Communication) task force. AI’s quick evolution makes it challenging to stay up to date, but it’s nonetheless helpful to have the perspectives of disciplinary and pedagogical experts.