People
The people of CHARM
CHARM brings together expertise in human‑computer interaction, machine learning, and data visualization — united by a focus on AI that augments human capabilities.
Browse our members
Finale Doshi‑Velez
Faculty · Herchel Smith Professor of Computer Science
I head the Data to Actionable Knowledge (DtAK) group at Harvard Computer Science. We use probabilistic methods to address many decision-making scenarios involving humans and AI. Our work spans specific application domains (e.g., health and wellness, humanitarian crisis negotiation) as well as broader socio-technical questions around human-AI interaction, AI accountability, and responsible and effective AI regulation.
Krzysztof Gajos
Faculty · Gordon McKay Professor of Computer Science
I lead the Intelligent Interactive Systems Group at Harvard. We design, build, and evaluate interactive systems that have some machine intelligence under the hood. This work requires simultaneous innovation in design and computation, so we engage a wide range of methods, from qualitative research through design and quantitative controlled experiments to building new algorithms and implementing working systems.
Elena Glassman
Faculty · Assistant Professor of Computer Science
I lead The Variation Lab at Harvard. We define and build AI-resilient interfaces that help people use AI while being resilient to AI choices that are not right, or not right for them. This is critical during context- and preference-dominated open-ended tasks, like ideating, searching, sensemaking, and reading or writing text and code at scale. AI-resilient interfaces improve AI safety, usability, and utility by working with, not against, human perception, attention, and cognition. To achieve this, we derive design implications from cognitive science, even when they fly in the face of common usability guidelines.
Hanspeter Pfister
Faculty · An Wang Professor of Computer Science
I lead the Visual Computing Group at Harvard. My research spans visualization, computer graphics, and computer vision, with a focus on developing visual analysis tools that help scientists understand large, complex datasets across domains such as neuroscience, genomics, and medicine. Increasingly, my work addresses how AI systems can be made more transparent and trustworthy through visual and interactive methods, including explainability of generative AI, reasoning verification, and the visual analysis of AI model behavior.
Billy Howard-Malt
Staff · Administrative Coordinator
Web designer; coordinates CHARM events, communications, and partnerships across SEAS and beyond.
Grace Guo
Postdoctoral Member
Building human-centered explainability tools for AI, particularly in the biomedical and healthcare domains.
Johannes Knittel
Postdoctoral Member
The intersection of data science, machine learning, and visualization.
Andrew Lee
Postdoctoral Member
Machine learning, interpretability, and understanding the representations learned by neural networks.
Hongjin Lin
PhD Student
AI and social impact, through a mixed-methods approach.
Lena Armstrong
PhD Student
Human-computer interaction and algorithmic justice.
Jonas Raedler
PhD Student
Reinforcement learning, interpretability, and explanation.
Chelse Swoopes
PhD Student
Catherine Yeh
PhD Student
Data visualization, interpretability, and human-AI interaction.
Ritesh Kanchi
PhD Student
Accessibility, intelligent user interfaces, and computer science education.
Leo Benac
PhD Student
Machine learning, decision-making, and negotiation.
Jianna So
PhD Student
Accessibility, AI in health, and human-computer interaction.
Ruishi Zou
PhD Student
Building intelligent tools that empower people to make sense of complex data and information.
Olivia Seow
PhD Student
Machine learning and experimental interfaces.
Leslie Gu
PhD Student
3D vision, geometric deep learning, and physics-aware visual generation.
Ella Hugie
PhD Student
Sejal Khatri
PhD Student
Social computing, informal learning, and generative AI applications.
Chau Vu
PhD Student
Designing intelligent systems grounded in human perception and cognition, leveraging advances in AI and machine learning to augment capabilities safely and efficiently.
Trevor DePodesta
PhD Student
Machine interpretability, human-AI interaction, and data visualization.
Ziwei Gu
PhD Student
Augmenting human cognition and efficiency with large language models (LLMs) and interactive techniques; AI resilience.
Hiwot Belay Tadesse
PhD Student
Explainable artificial intelligence (XAI) and informed decision-making.
Alexandra Irger
PhD Student
Scientific visualization and computer graphics.
Sohini Upadhyay
PhD Student
The intersection of human-computer interaction and technology policy.
Jose Roberto Tello Ayala
PhD Student
Developing interpretable model architectures to generate meaningful insights from data.
Sukanya Krishna
PhD Student
App development, finance, geospatial analysis, computer vision, renewable energy, and healthcare.
Esther Brown
PhD Student
Reinforcement learning, foundation models, and AI/machine learning models for decision-making under uncertainty.
Chunggi Lee
PhD Student
Human-computer interaction, visualization, and computer vision.
Yicong Li
PhD Student
Computer vision and deep learning, multimodal LLMs, AI for healthcare, AI for neuroscience, and AI for biology.
Helena Vasconcelos
PhD Student
Symbolic systems, computer science, and human-computer interaction.
Yida Chen
PhD Student
Machine learning interpretability, human-AI interaction, and deep learning.
Simon Warchol
PhD Student
Scalable visualization, interpretability, and computational methods for multiplexed tissue imaging data.
Michelle Si
PhD Student
Using tools from economics and computer science to study the societal role and impact of AI.
Shivam Raval
PhD Student
Explaining and visualizing clustering structures in high-dimensional data and interpreting latent activations in frontier AI models.
Kyran Romero
Post Baccalaureate Fellow
Personalizing AI assistance for decision-making under uncertainty.