Our People

Faculty

Finale Doshi-Velez, Herchel Smith Professor of Computer Science

I head the Data to Actionable Knowledge (DtAK) group at Harvard Computer Science. We use probabilistic methods to address a range of decision-making scenarios involving humans and AI. Our work spans specific application domains (e.g., health and wellness, humanitarian crisis negotiation) as well as broader socio-technical questions around human-AI interaction, AI accountability, and responsible and effective AI regulation.

Krzysztof Gajos, Gordon McKay Professor of Computer Science

I lead the Intelligent Interactive Systems Group at Harvard. We design, build, and evaluate interactive systems that have some machine intelligence under the hood. This work requires simultaneous innovation in design and computation, so we engage a wide range of methods, from qualitative research and design through quantitative controlled experiments to building new algorithms and implementing working systems.

Martin Wattenberg, Gordon McKay Professor of Computer Science

I co-lead the Insight + Interaction Lab at Harvard with Fernanda Viégas. My research focuses on highly capable AI systems: how they function, how people might best use them, and how to mitigate their risks. Systems that use my work in machine learning and data visualization are in daily use by millions of people and have been shown in museums worldwide.

Fernanda Viégas, Gordon McKay Professor of Computer Science

I co-lead the Insight + Interaction Lab at Harvard with Martin Wattenberg. My work in data visualization is known for its contributions to social and collaborative visualization. My passion for making complex data understandable to lay viewers has led me to visualize wind currents, study collaboration patterns in Wikipedia, and create dynamic maps of news around the world.

Elena Glassman, Assistant Professor of Computer Science

I lead The Variation Lab at Harvard. We define and build AI-resilient interfaces that help people use AI while being resilient to AI choices that are not right, or not right for them. This is critical during context- and preference-dominated open-ended tasks, like ideating, searching, sensemaking, and reading or writing text and code at scale. AI-resilient interfaces improve AI safety, usability, and utility by working with, not against, human perception, attention, and cognition. To achieve this, we derive design implications from cognitive science, even when they fly in the face of common usability guidelines.

Administration

Billy Howard-Malt, Administrative Coordinator

I am the administrator for CHARM, helping coordinate all events and collaboration in our center. I have a deep interest in Human-Driven AI, ranging from ethics and policy to AI companions and the use of AI in the mental health space.