Powering the Next Generation of AI

Dr. Sewon Min receives the WAGS Innovation in Technology Award for her work on large language models.

Seven years ago, Dr. Sewon Min was struck by the limited ability of existing artificial intelligence (AI) systems to accurately comprehend and generate human language. “The lack of systems capable of understanding language intrigued me to dive deeper into research,” Min says.

Min was aware of the transformative potential of AI as a bridge between human communication and understanding, and she used this as a launching pad to explore how language models could be made more intelligent and reliable. 

During her time as a Ph.D. student at the University of Washington’s Paul G. Allen School of Computer Science & Engineering, she led work on nonparametric language models to rethink how AI systems manage and access information. Through this research, Min received the 2024 Western Association of Graduate Schools (WAGS) ProQuest Innovation in Technology Award for her groundbreaking thesis, “Rethinking Data Use in Large Language Models.” These models enable AI to retrieve information in real time rather than having to be “re-trained” to use new data, resulting in more accurate and resource-efficient systems.

 

Dr. Sewon Min (center) with her doctoral advisers, Dr. Luke Zettlemoyer (left) and Dr. Hannaneh Hajishirzi (right).

Training a model is like memorizing a lot of text—it stores information so it can recall it later without needing to reference the original data. This process is imperfect, as no model can remember everything. In contrast, a nonparametric model is like having the ability to look up information on the internet whenever needed. “Instead of memorizing the capital city of every country, you can simply search for it when required,” Min says. This approach is more accurate and less resource-intensive, since it reduces the need to store everything in memory and instead focuses on retrieving data as needed.
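The contrast Min describes can be sketched in a few lines of code. This is a toy illustration, not her actual method: the "parametric" model stands in for facts frozen into trained weights, while the "nonparametric" one consults an external datastore at query time. All names and data here are hypothetical.

```python
# Toy illustration of parametric vs. nonparametric knowledge access.

# "Parametric": facts are baked in at training time; here a frozen
# dict stands in for memorized model weights. Anything not stored
# during training is unrecoverable without retraining.
MEMORIZED = {"France": "Paris"}

def parametric_answer(country: str) -> str:
    # Can only recall what was memorized during training.
    return MEMORIZED.get(country, "unknown")

# "Nonparametric": knowledge lives in an external datastore that is
# searched at inference time, like looking something up on the web.
DATASTORE = [
    ("France", "Paris"),
    ("Japan", "Tokyo"),
]

def nonparametric_answer(country: str) -> str:
    # Retrieve the fact on demand instead of relying on memorization.
    for key, capital in DATASTORE:
        if key == country:
            return capital
    return "unknown"

# Updating knowledge requires no retraining: just extend the datastore.
DATASTORE.append(("Kenya", "Nairobi"))
```

Because the datastore is separate from the model, new facts can be added (or stale ones removed) without touching any trained parameters, which is the efficiency and accuracy gain the paragraph above describes.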

Nonparametric models are not only more efficient but also more transparent, as they can provide citations for their sources; they address some of the most pressing challenges in AI: building systems that deliver factual, traceable and verifiable information. Min’s innovative approach has the potential to transform how AI systems are designed and used in fields like education, healthcare and public policy. 

“I’m hoping my research can contribute to the next generation of large language models, like ChatGPT, and nonparametric models can help solve some of the issues in traditional parametric models,” Min says.  

Dr. Min and her friends enjoying the UW cherry blossoms.

The WAGS thesis award committee recognized the significance of Min’s research and its implications for the future of AI. This prestigious award is given for the development of innovative technology in a thesis or dissertation and its utilization for the creative solution of a major problem. Min is one of two University of Washington graduate students to receive the award this year. “I feel fortunate that I am able to work in the AI field that impacted the wider technology community,” Min says.

Since graduating from the University of Washington last summer, Min has joined the Allen Institute for AI (AI2) and is preparing to take on a new role as a full-time faculty member at UC Berkeley. There, she looks forward to mentoring the next generation of researchers and continuing to promote nonparametric language modeling.

“I hope to inspire more excitement in the research community about nonparametric language modeling and promote not just my work, but the broader direction as a whole,” Min says. 

 

Published on January 31, 2025

By: Tatiana Rodriguez, UW Graduate School