CSD Faculty Earns Google Research Scholar Award

Wednesday, May 15, 2024 - by Adam Kohlhaas

SCS faculty members Andrea Bajcsy, Motahhare Eslami, Ken Holstein, Aditi Raghunathan, Andrej Risteski and Hong Shen have received 2024 Google Research Scholar Awards.

Six Carnegie Mellon University researchers in the School of Computer Science received 2024 Google Research Scholar Awards, which support early-career professors pursuing research in fields relevant to Google.

Aditi Raghunathan, an assistant professor in the Computer Science Department, received the award to support the project "Robust Fine-Tuning of Foundation Models." 

Even the largest-scale and most powerful foundation models today, such as GPT-4, do not reliably solve intended tasks. They hallucinate false information and can leak private or even dangerous information. A common way to prevent such errors is through a process called alignment. Raghunathan's project aims to ensure that this alignment process holds up in the real world under changing conditions, unexpected variations and adversaries.

Her research will develop principled methods that appropriately constrain the fine-tuning process to maximally preserve pretrained knowledge and improve downstream robustness.

Recipients in HCII, ML and RI

Andrea Bajcsy

Andrea Bajcsy, an assistant professor in the Robotics Institute, received the award to support her work, "In-the-Wild Robot Behavior Alignment From Human Preferences."

Generative models are revolutionizing how robots interact with their environment, learn from humans and make decisions. But robots often struggle to behave according to human preferences. Human-preference feedback in domains like ChatGPT allows generative language models to learn these preferences, but applying this approach to robotics is complicated by the risks of robots demonstrating physical actions in the real world.

Bajcsy's work aims to help robots generate preferred behaviors without real-world risks and quickly learn from human feedback, and to enable end users to safely tune any robot's behavior.

Motahhare Eslami and Ken Holstein

Motahhare Eslami and Ken Holstein, both assistant professors in the Human-Computer Interaction Institute (HCII), will use their Google Research Scholar Award for "Generative AI Meets Responsible AI: Supporting User-Driven Auditing of Generative AI Systems."

Generative AI has the potential to create diverse content, but traditional expert-led algorithm auditing often misses harmful biases due to cultural blind spots and unpredictable social behavior. User-driven algorithmic audits offer a unique opportunity for everyday users to assess these systems and provide a more comprehensive review.

Eslami and Holstein have joined forces with HCII Ph.D. student Wesley Deng and faculty member Jason Hong to develop the structured support, tools and processes for effective public participation in generative AI auditing that current user-driven audits lack. Their tools, WeAudit and TAIGA, will guide users through onboarding, prompting, validating and reporting potentially harmful AI behavior.

Andrej Risteski

Andrej Risteski, an assistant professor in the Machine Learning Department, received the award to support "Algorithmic Foundations for Generative AI: Inference, Distillation and Non-Autoregressive Generation."

Large language and image models have fundamentally transformed the landscape of machine learning and AI. The main driving force behind their impressive performance gains over the last several years has been scale — of both model size and data — rather than algorithmic innovation.

Risteski's research will focus on methodological improvements to several critical aspects of the generative AI pipeline, including inference techniques, distillation procedures and the pretraining of non-autoregressively parameterized models. The project identifies mathematical abstractions that facilitate theoretical analysis and can guide the design of algorithmic changes at scale.

Hong Shen

Hong Shen, an assistant research professor in the HCII, received the award to support "Understanding and Supporting Marginalized Communities in AI Red Teaming."

AI red teaming — the practice of intentionally seeking to break the safety barriers of AI to understand its capabilities and limitations — has become increasingly important in evaluating modern AI technologies. The human labor involved in AI red teaming must be empowered and supported with care, particularly for people from socially disadvantaged communities.

Through a human-centered approach, Shen's project complements existing technical work in red teaming by focusing on the critical human infrastructure that supports these practices.

For more information about this year's award recipients, visit the Google Research Scholar Program website.

Media Contact:

Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu