Danniell Hu | Computer Science PhD Student @ UMich

Publications

More information about publications can be found on my Google Scholar profile.

Human at the Center: A Framework for Human-Driven AI Development

Authors: Danniell Hu, Diana Acosta Navas, Susanne Gaube, Hussein Mozannar, Matthew E. Taylor, Krishnamurthy Dvijotham, Elizabeth Bondi-Kelly

Published in: AAAI AI Magazine, 2025

Abstract:

Artificial Intelligence (AI) systems increasingly shape many aspects of daily life, influencing our jobs, finances, healthcare, and online content. This expansion has led to the rise of human-AI systems, where humans communicate, collaborate, or otherwise interact with AI, e.g., when humans use AI outputs to make decisions. While these systems have shown potential to enhance human capabilities and improve performance on benchmarks, evidence suggests that they often underperform compared to AI-only or human-only approaches in experiments and real-world applications. Here, we argue that human-AI systems should be developed with a greater emphasis on human-centered factors—such as usability, fairness, trust, and user autonomy—within the algorithmic design and evaluation process. We advocate for integrating human-centered principles into AI development through human-centered algorithmic design and contextual evaluation with real users. Drawing on interdisciplinary research and our tutorial at two major AI conferences, we highlight examples and strategies for AI researchers and practitioners to embed these principles effectively. This work offers a systematic synthesis that integrates technical, practical, and ethical insights into a unified framework. Additionally, we highlight critical ethical considerations—fairness, labor, privacy, and human agency—to ensure systems meet performance goals while serving broader societal interests. Through this work, we aim to inspire the field to embrace a truly human-centered approach to algorithmic design and deployment.


Towards a Cognitive Model of Dynamic Debugging: Does Identifier Construction Matter?

Authors: Danniell Hu, Priscila Santiesteban, Madeline Endres, Westley Weimer

Published in: IEEE Transactions on Software Engineering, 2024

Abstract:

Debugging is a vital and time-consuming process in software engineering. Recently, researchers have begun using neuroimaging to understand the cognitive bases of programming tasks by measuring patterns of neural activity. While exciting, prior studies have only examined small sub-steps in isolation, such as comprehending a method without writing any code or writing a method from scratch without reading any already-existing code. We propose a simple multi-stage debugging model in which programmers transition between Task Comprehension, Fault Localization, Code Editing, Compiling, and Output Comprehension activities. We conduct a human study of n = 28 participants using a combination of functional near-infrared spectroscopy and standard coding measurements (e.g., time taken, tests passed, etc.). Critically, we find that our proposed debugging stages are both neurally and behaviorally distinct. To the best of our knowledge, this is the first neurally justified cognitive model of debugging. At the same time, there is significant interest in understanding how programmers from different backgrounds, such as those grappling with challenges in English prose comprehension, are impacted by code features when debugging. We use our cognitive model of debugging to investigate the role of one such feature: identifier construction. Specifically, we investigate how features of identifier construction impact neural activity while debugging by participants with and without reading difficulties. While we find significant differences in cognitive load as a function of morphology and expertise, we do not find significant differences in end-to-end programming outcomes (e.g., time, correctness, etc.). This nuanced result suggests that prior findings on the cognitive importance of identifier naming in isolated sub-steps may not generalize to end-to-end debugging. Finally, in a result relevant to broadening participation in computing, we find no behavioral outcome differences for participants with reading difficulties.