Aligning Robot and AI Behaviors with Human Intents

People must be able to easily understand and change robot and AI behaviors.
I design tools to enable these interactions.

Research

A project overview image. This shows an icon of a robot and a person. There is an arrow from the person to the robot, with the text 'specify behaviors' above the arrow.

The Perils of Trial-and-Error Reward Design

Trial-and-error reward design is unsanctioned yet widespread, and the implications of this practice have not been studied. We conduct empirical computational and user-study experiments, and we find that trial and error leads to reward functions that are overfit and otherwise misdesigned. Published at AAAI 2023.

Project Webpage
Video: Reward Design Perils

A project overview image. It shows a decision surface, with highlighted points corresponding to an adversarial example, a picture of a corgi, a picture of a corgi butt, and a picture of a loaf of bread. The level sets for 50 percent confidence examples (e.g., the corgi butt and the adversarial examples) are highlighted.

Bayes-TrEx: Model Transparency by Example

Looking at expressive examples can help us better understand neural network behaviors and design better models. Bayes-TrEx is a tool to find these expressive examples. Published at AAAI 2021.

Project Webpage

A project overview image. Above, it shows three controllers in a 2D navigation task: an RRT controller, an IL controller using smoothing and LiDAR, and a DS modulation controller. Below, we show an example 3D reaching task: a robot is positioned in front of a table, and a small red target is present.

Robot Controller Understanding via Sampling

In this work, we adapt a Bayes-TrEx-like framework for the task of sampling representative robot behaviors. Led by Yilun Zhou; published at CoRL (Conference on Robot Learning) 2021.

Project Webpage (with video), Paper, Code

A visual overview of variation theory of learning.

How to Understand Your Robot

We look at how cognitive theories of human concept learning should inform human-robot interaction interfaces, especially for teaching and learning tasks. In collaboration with Sana Sharma and Elena Glassman (Harvard). Published at HRI 2022.

Website, Paper

A project overview image. It shows an example of thematic analysis, where themes are grouped into clusters. The image is zoomed out, so you can't read specific details.

Resource Constraints and Responsible Development

We interviewed industry practitioners from startups, government, and non-tech companies about their use and integration of machine learning in developing products, and analyzed these interviews using thematic analysis. In collaboration with Aspen Hopkins. Published at AIES 2021.

Paper, Poster, Slides

Six example saliency maps for an image of a crow.

Do Feature Attribution Methods Work?

We design a principled evaluation mechanism for assessing feature attribution methods, and contribute to the growing body of literature suggesting these methods cannot be trusted in the wild. Led by Yilun Zhou, in collaboration with Marco Ribeiro (MSR). Published at AAAI 2022.

arXiv Paper

A scene showing how a user might view a logical summary and a system state. The image shows a car with cars to its left, right, and behind. The description says 'I speed up when one or both of: (1) Both of: - a vehicle is not in front of me - the next exit is not 42. (2) All of: - a vehicle is to my right. - a vehicle is not in front of me. - a vehicle is behind me.'

Logic Interpretability

How should we best present logical sentences to a human? Published at IJCAI 2019.

Project Webpage

A small TurtleBot robot, kitted out with a cookie delivery box.

Piggybacking Robots

My award-winning undergraduate senior thesis, which set out to answer whether we place too much trust in robotic systems, specifically in the physical security domain. Published at HRI 2017.

Project Webpage
Video: Piggybacking Robots

Media Coverage

A comic depicting a robot with a plate of cookies, trying to enter someone's house.

PhD Comics

A comic depicting a robot trying to get a human to take a cookie in exchange for placing themselves in danger.

Soonish by Kelly and Zach Weinersmith

Advocacy

Serena and colleague Willie Boag standing in front of a doorway in Congress.

Science Policy

In 2021-2022, I served as President, and in 2020-2021 as Vice President, of MIT's Science Policy Initiative. I advocate for using science to inform policy, and for using policy to make science just and equitable. Pictured above with colleague Willie Boag. Not an endorsement of Senator Alexander.

Serena's students posing for a photo on a staircase.

Equity and Inclusion

I'm a strong advocate for the inclusion of women and underrepresented minorities in science. In 2019, I served as co-president of MIT's GW6: Graduate Women of Course 6. Pictured above: students from an introductory CS class I taught in Puebla, Mexico.