People

The people of CHARM

CHARM brings together expertise in human‑computer interaction, machine learning, and data visualization — united by a focus on AI that augments human capabilities.

Browse our members

Finale Doshi‑Velez

Faculty · Herchel Smith Professor of Computer Science

I head the Data to Actionable Knowledge (DtAK) group at Harvard Computer Science. We use probabilistic methods to address decision-making scenarios involving humans and AI. Our work spans specific application domains (e.g., health and wellness, humanitarian crisis negotiation) as well as broader socio-technical questions around human-AI interaction, AI accountability, and responsible and effective AI regulation.

Krzysztof Gajos

Faculty · Gordon McKay Professor of Computer Science

I lead the Intelligent Interactive Systems Group at Harvard. We design, build, and evaluate interactive systems that have some machine intelligence under the hood. This work requires simultaneous innovation in design and computation, so we engage a wide range of methods, from qualitative research through design and quantitative controlled experiments to building new algorithms and implementing working systems.

Elena Glassman

Faculty · Assistant Professor of Computer Science

I lead The Variation Lab at Harvard. We define and build AI-resilient interfaces that help people use AI while being resilient to AI choices that are not right, or not right for them. This is critical during context- and preference-dominated open-ended tasks, like ideating, searching, sensemaking, and reading or writing text and code at scale. AI-resilient interfaces improve AI safety, usability, and utility by working with, not against, human perception, attention, and cognition. To achieve this, we derive design implications from cognitive science, even when they fly in the face of common usability guidelines.

Hanspeter Pfister

Faculty · An Wang Professor of Computer Science

I lead the Visual Computing Group at Harvard. My research spans visualization, computer graphics, and computer vision, with a focus on developing visual analysis tools that help scientists understand large, complex datasets across domains such as neuroscience, genomics, and medicine. Increasingly, my work addresses how AI systems can be made more transparent and trustworthy through visual and interactive methods, including explainability of generative AI, reasoning verification, and the visual analysis of AI model behavior.

Billy Howard-Malt

Staff · Administrative Coordinator

Designs the CHARM website and coordinates events, communications, and partnerships across SEAS and beyond.

Grace Guo

Postdoctoral Member

Building human-centered explainability tools for AI, particularly in the biomedical and healthcare domains.

Johannes Knittel

Postdoctoral Member

The intersection of data science, machine learning, and visualization.

Andrew Lee

Postdoctoral Member

Machine learning, interpretability, and understanding the representations learned by neural networks.

Jonas Raedler

PhD Student

Reinforcement learning, interpretability, and explanation.

Ritesh Kanchi

PhD Student

Accessibility, intelligent user interfaces, and computer science education.

Leo Benac

PhD Student

Machine learning, decision-making, and negotiation.

Ruishi Zou

PhD Student

Building intelligent tools that empower people to make sense of complex data and information.

Leslie Gu

PhD Student

3D vision, geometric deep learning, and physics-aware visual generation.

Ella Hugie

PhD Student

Sejal Khatri

PhD Student

Social computing, informal learning, and generative AI applications.

Chau Vu

PhD Student

Designing intelligent systems grounded in human perception and cognition, leveraging advances in AI and machine learning to augment capabilities safely and efficiently.

Trevor DePodesta

PhD Student

Machine learning interpretability, human-AI interaction, and data visualization.

Ziwei Gu

PhD Student

Augmenting human cognition and efficiency by leveraging large language models (LLMs) and interactive techniques, and AI resilience.

Hiwot Belay Tadesse

PhD Student

Explainable artificial intelligence (XAI) and informed decision-making.

Alexandra Irger

PhD Student

Scientific visualization and computer graphics.

Sohini Upadhyay

PhD Student

The intersection of human-computer interaction and technology policy.

Jose Roberto Tello Ayala

PhD Student

Developing interpretable model architectures to generate meaningful insights from data.

Sukanya Krishna

PhD Student

App development, finance, geospatial analysis, computer vision, renewable energy, and healthcare.

Esther Brown

PhD Student

Reinforcement learning, foundation models, and AI/machine learning models for decision-making under uncertainty.

Yicong Li

PhD Student

Computer vision and deep learning, multimodal LLMs, AI for healthcare, AI for neuroscience, and AI for biology.

Helena Vasconcelos

PhD Student

Symbolic systems, computer science, and human-computer interaction.

Yida Chen

PhD Student

Machine learning interpretability, human-AI interaction, and deep learning.

Simon Warchol

PhD Student

Scalable visualization, interpretability, and computational methods for multiplexed tissue imaging data.

Michelle Si

PhD Student

Using tools from economics and computer science to study the societal role and impact of AI.

Shivam Raval

PhD Student

Explaining and visualizing clustering structures in high-dimensional data and interpreting latent activations in frontier AI models.

Kyran Romero

Post Baccalaureate Fellow

Personalizing AI assistance for decision making under uncertainty.