CSCI 1952A: Human‑AI Interaction

Professor Serena Booth • Fall 2025

Logistics

Class Time
TTh 1:00–2:20 PM
Location

UPDATED LOCATION: CIT 477 Lubrano

Office Hours
Prof. Booth: Tuesdays 2:30–3:30 PM in CIT 427
Devices
You should not use laptops or cell phones during class except for planned activities, or where accommodation is necessary.

Course Description

AI systems do not exist in a vacuum; they exist in a human world. This course examines human‑AI interaction, running the gamut from how humans design AI systems, to how humans work with and alongside AI systems, to how AI systems affect human behavior. Sometimes the AI systems under scrutiny will be embodied (robots!), sometimes not. The course is both discussion- and project-based. As a final project, students will conduct a pilot study on human‑AI interaction. This course is aimed at graduate students and others interested in conducting human‑AI interaction research. With perseverance and dedication, these course projects could be converted into conference papers at venues like AAAI, HRI, or NeurIPS.

Background Preparation

This is not an introductory course on AI. We expect that you have taken a technical course on AI, such as CSCI 0410 (Intro to AI), CSCI 1420 (Machine Learning), or CSCI 2470 (Deep Learning). You should have basic competency in building AI systems; for example, you should be able to train an MNIST classifier with only minimal support from the internet. The only exception to this background preparation is if you already have substantial research experience in another area of computer science — that is, you have led the work on a technical paper. If you would like to discuss your preparation, please get in touch with the instructor.

Grading

This course is a seminar, and, as such, it requires substantial participation. The grade will be composed as follows:

Late Submissions and Late Days

Since this is a seminar class, there are no late submissions or late days. We expect you to attend every class and to submit every assignment. If you cannot attend class or complete an assignment, please obtain a note from health services where appropriate. In other extenuating circumstances, reach out to Serena.

Presentation of Papers

For your assigned classes, you are expected to guide the discussion. The typical class has two assigned readings; in such cases, a typical discussion consists of a 10‑minute presentation for each paper, each followed by class discussion of that paper, with a 5‑minute break between the two papers. You are free to choose another format. In either case, you should send the instructor a copy of your slides or planned activities 24 hours before class, and you should incorporate any instructor feedback into your presentation before class. You may propose a modification to the papers for your class (while maintaining that class's theme); this proposal must be shared at least a week in advance of the class. And, the instructor reserves the right to overrule your proposed papers :)

Discussion Form

At least 24 hours before each class, you should fill in this form to submit your weekly comprehension and discussion questions.

Schedule

Week 0
Thursday, Sept 4: Introduction to Human-AI Interaction (no readings)

Week 1: Societal importance of interaction
Tuesday, Sept 9: On the consequences of our decisions
Thursday, Sept 11: Emergent interaction vs. designed interaction

Week 2: Designing AI
Tuesday, Sept 16: Specifying reward functions
Thursday, Sept 18: Learning reward functions from feedback

Week 3: Designing AI
Tuesday, Sept 23: Shared autonomy
Thursday, Sept 25: Guest lecture by Dr. Isaac Sheidlower, "Providing people with control in interaction" (no readings)

Week 4: Designing AI
Tuesday, Sept 30: Imitating human behavior
Thursday, Oct 2: Learning from observations and corrections

Week 5: Designing AI
Tuesday, Oct 7: Learning reward functions 2
  • "Deep reinforcement learning from human preferences" — Christiano et al.
  • "Models of human preference for learning reward functions" — Knox et al.
Thursday, Oct 9: Reward models for LLMs
  • "Training language models to follow instructions with human feedback"
  • "RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback"

Week 6: Using AI
Tuesday, Oct 14: Trust and automation bias
  • "Trust in Automation: Designing for Appropriate Reliance" — Lee and See
  • "Humans and Automation: Use, Misuse, Disuse, Abuse" — Parasuraman and Riley
Thursday, Oct 16: Final project planning (no readings)

Week 7: Guest lectures
Tuesday, Oct 21: Guest lecture by Prof. Brad Knox (no readings)
Thursday, Oct 23: Trust and automation in practice
  • "Hello AI: uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making" — Carrie Cai et al.
  • "Planning with trust for human-robot collaboration"

Week 8: Using AI
Tuesday, Oct 28: AI explanations
  • "Towards a rigorous science of interpretable machine learning" — Finale Doshi-Velez and Been Kim
  • "The Mythos of Model Interpretability" — Zachary Lipton
Thursday, Oct 30: Mental models, overreliance
  • "To trust or to think: cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making"
  • "Beyond accuracy: The role of mental models in human-AI team performance"

Week 9: Using AI
Tuesday, Nov 4: Explanations and policy summaries
  • "'Why should I trust you?': Explaining the predictions of any classifier"
  • "Highlights: Summarizing agent behavior to people"
Thursday, Nov 6: Failures of explanations
  • "Sanity checks for saliency maps"
  • "Do feature attribution methods correctly attribute features?"

Week 10: Using AI
Tuesday, Nov 11: Human-AI teams
  • "Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff" — Gagan Bansal et al.
  • "Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S"
Thursday, Nov 13: Fairness and emergent behaviors
  • "Inherent trade-offs in the fair determination of risk scores" — Kleinberg et al.
  • "Discrimination Exposed? On the Reliability of Explanations for Discrimination Detection"

Week 11: Guest lectures
Tuesday, Nov 18: Guest lecture by Prof. Matt Taylor, "Human-AI Interaction" (no readings)
Thursday, Nov 20: Guest lecture by Dr. Yilun Zhou, "Mechanistic Interpretability" (no readings)

Week 12: Societal impacts of AI
Tuesday, Nov 25: AI Safety & Societal Impact
  • "Concrete problems in AI Safety"
  • "Algorithmic monoculture and social welfare" — Jon Kleinberg and Manish Raghavan
Thursday, Nov 27: THANKSGIVING HOLIDAY (no class)

Week 13
Tuesday, Dec 2: No class; work on final projects
Thursday, Dec 4: AI Policy
  • "Blueprint for an AI Bill of Rights" — White House OSTP
  • "Position: Strong Consumer Protection is an Inalienable Defense for AI Safety in the United States" — Serena Booth

Week 14: Final project presentations
Tuesday, Dec 9: Final Project Presentations
Thursday, Dec 11: Final Project Presentations

Deadlines

Acknowledgements

This class draws on the syllabi of others: Interactive Machine Learning, taught by Prof. Matthew Taylor at the University of Alberta; Human‑Centric Machine Learning, taught by Prof. Scott Niekum at the University of Massachusetts Amherst; Algorithmic Human‑Robot Interaction, taught by Prof. Anca Dragan at the University of California, Berkeley; and Human‑AI Interaction, taught by Prof. Elena Glassman at Harvard University.