Paul G. Allen School of Computer Science & Engineering – UW News

Tiny cameras in earbuds let users talk with AI about what they see
/news/2026/04/14/cameras-in-wireless-earbuds-vuebuds/ | Tue, 14 Apr 2026 14:38:00 +0000

Two black earbuds: one with the casing removed, exposing a computer chip and tiny camera.
UW researchers developed a system called VueBuds that uses tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. Here, the altered earbuds are shown with the camera inserted. Photo: Kim et al./CHI ‘26

UW researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, “Hey Vue, translate this for me.” They’d then hear an AI voice say, “The visible text translates to ‘Cold Noodles’ in English.”

The prototype system, called VueBuds, captures low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within about a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team will present its findings April 14 at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) in Barcelona.

“We haven’t seen most people adopt smart glasses or VR headsets, in part because a lot of people don’t like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud,” said the study’s senior author, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process.”

Cameras draw far more power than the microphones already in earbuds, so the sort of high-resolution cameras found in smart glasses wouldn’t work. Bluetooth also can’t stream large amounts of data continuously, so the system can’t transmit continuous video.

The team found that using a low-power camera — roughly the size of a grain of rice — to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.

There was also the matter of placement.

“One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user’s view of the world reliably?” said lead author Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5-10 degrees outward provides a 98-108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them — making it a non-issue for typical interactions.
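The coverage figures are easy to sanity-check: if each camera sees a horizontal field of view of f degrees and each is angled theta degrees outward, the two views together span roughly f + 2·theta degrees, as long as the views still overlap. In this sketch, the 88-degree per-camera field of view is an assumed value chosen to reproduce the article's numbers, not a figure from the paper:

```python
def combined_fov(cam_fov: float, theta: float) -> float:
    """Combined horizontal coverage, in degrees, of two cameras that each
    see `cam_fov` degrees and are angled `theta` degrees outward.
    Valid only while the two views still overlap (2 * theta <= cam_fov)."""
    assert 2 * theta <= cam_fov, "views no longer overlap"
    return cam_fov + 2 * theta

# An assumed ~88-degree per-camera FOV reproduces the article's 98-108 range.
print(combined_fov(88, 5))   # 98
print(combined_fov(88, 10))  # 108
```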

Researchers also discovered that while the vision-language model was largely able to make sense of the images from each earbud, processing images from both earbuds slowed it down. So they had the system “stitch” the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second — quick enough to feel like real time to users — rather than the two seconds it takes with separate images.
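The article doesn't include the stitching code; a minimal sketch of the idea, finding the horizontal overlap between the two grayscale frames with normalized cross-correlation and then appending the non-overlapping remainder of the right frame, might look like the following. The correlation-based search and the array shapes are assumptions, not the authors' actual pipeline:

```python
import numpy as np

def stitch_pair(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two grayscale frames (H x W, uint8) into one panorama by
    locating their horizontal overlap with normalized cross-correlation.
    A simplified illustration, not VueBuds' actual stitching code."""
    h, w = left.shape
    best_shift, best_score = 0, -np.inf
    # Try overlap widths from 10% to 90% of the frame width.
    for overlap in range(w // 10, (9 * w) // 10):
        a = left[:, w - overlap:].astype(np.float64).ravel()
        b = right[:, :overlap].astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = (a @ b) / denom if denom else -np.inf
        if score > best_score:
            best_score, best_shift = score, overlap
    # Keep the left frame, then append the non-overlapping part of the right.
    stitched = np.zeros((h, 2 * w - best_shift), dtype=np.uint8)
    stitched[:, :w] = left
    stitched[:, w:] = right[:, best_shift:]
    return stitched
```

A real implementation would also need to handle vertical misalignment and exposure differences between the two cameras.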

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-resolution images processed in the cloud, the two systems performed equivalently. Participants preferred VueBuds’ translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system’s ability to translate and answer basic questions about objects. VueBuds achieved 83-84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can’t answer questions that involve color in the scene.

The team wants to add color to the system, though color cameras require more power, and to train specialized AI models for specific use cases, such as translation.

“This study lets us glimpse what’s possible just using a general purpose language model and our wireless earbuds with cameras,” Kim said. “But we’d like to study the system more rigorously for applications like reading a book — for people who have low vision or are blind, for instance — or translating text for travelers.”

Co-authors include a UW master’s student in the Allen School and four UW students in electrical and computer engineering.

For more information, contact vuebuds@cs.washington.edu.

UW’s graduate and professional programs highly ranked by US News & World Report
/news/2026/04/06/uws-graduate-and-professional-programs-highly-ranked-by-us-news-world-report/ | Tue, 07 Apr 2026 04:00:53 +0000

Flowering cherry trees line the UW quad, taken from above.
The UW’s graduate and professional degree programs again were recognized as among the best in the nation by U.S. News & World Report. Photo: UW

UPDATE April 7, 2026: The original version of this story omitted two UW programs that were included in the rankings: Occupational Therapy (Tied for 20th) and Physical Therapy (Tied for 31st).

The UW’s graduate and professional degree programs again were recognized as among the best in the nation, according to U.S. News & World Report.

Topping this year’s list are programs at the Evans School of Public Policy & Governance, the School of Public Health, the School of Nursing, the Paul G. Allen School of Computer Science & Engineering in the College of Engineering and the College of Education. The College of Arts & Sciences and the College of the Environment also had top-rated programs.

In total, 81 graduate and professional degree programs across the UW placed in the top 35 in this year’s U.S. News rankings.

“These rankings highlight the strength and impact of the UW’s graduate and professional programs,” said UW President Robert J. Jones. “These programs equip students with the skills and knowledge to meet critical workforce needs and serve society, while demonstrating the power of higher education to advance the public good. We are proud to foster an environment where students and faculty can thrive and have a real impact on the world around them.”

While the UW celebrates the success and impact of the programs recognized by U.S. News — and notes that many applicants use these rankings to help them select schools and discover potential areas of study — the University also recognizes shortcomings inherent in the ranking systems.

The UW School of Law and the UW School of Medicine withdrew from the U.S. News rankings in 2022 and 2023, respectively, citing concerns that some of the methodology in the rankings for those specific disciplines incentivizes actions and policies that run counter to the schools’ public service missions.

UW leaders continue to work with U.S. News and other ranking organizations to improve their methodologies, to the extent that the organizations are open to it. Schools, colleges and departments continually reevaluate the benefits and potential shortfalls of participating in specific rankings.

Excluding the School of Law and the School of Medicine, 29 UW programs placed in the top 10, and 81 are in the top 35.

The UW this year placed in the top 10 nationwide in public affairs, biostatistics, nursing, computer science, education, psychology, speech and language pathology, statistics and Earth sciences.

The UW’s Evans School of Public Policy & Governance has maintained its top-10 ranking for more than a decade and tied for fifth in the nation this year. The Evans School’s environmental policy program was ranked second, while public finance and budgeting as well as leadership both ranked No. 10.

The UW School of Nursing’s doctor of nursing practice program tied for No. 1 among public institutions. The School of Public Health has maintained its top-10 ranking for more than a decade, coming in this year at No. 9. The school also had three programs in the top 10: biostatistics, environmental health sciences and epidemiology.

The UW’s programs in speech and language pathology tied for No. 6. Two programs from the College of Education placed in the top 10. And the Paul G. Allen School of Computer Science & Engineering this year tied for seventh place overall with three programs ranked in the top 10, including artificial intelligence, programming language and systems.

U.S. News ranks biostatistics in two ways. The UW ranked No. 3 as a science discipline that applies statistical theory and mathematical principles to research in medicine, biology, environmental science, public health and related fields. UW’s School of Public Health ranked No. 7 in biostatistics as an area of study that trains students to apply statistical principles and methods to problems in health sciences, medicine and biology. At the UW, biostatistics is a division of the School of Public Health.

In some cases, such as the College of Arts & Sciences and the Foster School of Business, U.S. News ranks several professional disciplines housed within academic units. Programs in dentistry are not ranked.

The rankings below are based on preliminary data and may be updated. U.S. News relies on both expert opinions and statistical indicators.

TOP 10:

Library and Information Studies (overall): Two-way tie for 1st (ranked in 2025)

Public Affairs (environmental policy): 2nd

Library and Information Studies (digital librarianship): Two-way tie for 2nd (ranked in 2022)

Library and Information Studies (information systems): 2nd (ranked in 2022)

Biostatistics: 3rd

Physics (nuclear): Two-way tie for 3rd (ranked in 2024)

Nurse practitioner (doctor of nursing practice): Four-way tie for 4th

Evans School of Public Policy & Governance (overall): Four-way tie for 5th

Library and Information Studies (library services for children and youth): Two-way tie for 5th (ranked in 2022)

Computer science (systems): Tied for 6th

Education (elementary education): 6th

Psychology (clinical): Three-way tie for 6th

Speech-language pathology: Five-way tie for 6th

Statistics: Four-way tie for 6th

Public Health (biostatistics): 7th

Computer science (overall): Three-way tie for 7th

Computer science (programming language): Tied for 7th

Education (secondary education): 7th

Nursing (midwifery): Five-way tie for 7th

Public Health (environmental health sciences): 7th

School of Social Work (overall): 7th (ranked in 2025)

Public Health (epidemiology): 8th

Computer science (artificial intelligence): 9th

Earth sciences: Tied for 9th

Geophysics: Three-way tie for 9th (ranked in 2024)

Public Affairs (nonprofit management): 9th

School of Public Health (overall): Tied for 9th

Public Affairs (public finance and budgeting): 10th

Public Affairs (public management and leadership): 10th

TOP 25:

Biological sciences: Five-way tie for 16th

Business (accounting): 10-way tie for 16th

Business (entrepreneurship): Five-way tie for 17th

Business (information systems): Three-way tie for 15th

Business (part-time MBA): Three-way tie for 11th

Business (full-time MBA): 20th

Business (management): Five-way tie for 25th

Business (marketing): Eight-way tie for 25th

Chemistry (analytical): Four-way tie for 16th (ranked in 2024)

Chemistry: Seven-way tie for 22nd

Chemistry (inorganic): Three-way tie for 22nd (ranked in 2024)

Computer science (theory): Tied for 11th

College of Education (overall): Tied for 24th

Education (administration): Tied for 11th

Education (curriculum/instruction): Tied for 12th

Education (policy): Tied for 14th

Education (special education): Tied for 12th

College of Engineering (overall): Three-way tie for 22nd

Engineering (aerospace/aeronautical/astronautical): Tied for 17th

Engineering (biomedical/bioengineering): Five-way tie for 12th

Engineering (civil): Four-way tie for 13th

Engineering (computer): 12th

Engineering (electrical): Three-way tie for 22nd

Engineering (industrial/manufacturing/systems): Seven-way tie for 24th

Engineering (materials engineering): Five-way tie for 25th

Library and Information Studies (school library media): Two-way tie for 11th (ranked in 2022)

Mathematics (applied math): 21st (ranked in 2024)

Nursing master’s (overall): Tied for 12th

Nurse practitioner (adult gerontology acute care): Tied for 11th

Nurse practitioner (family): Tied for 15th

School of Pharmacy (overall): Tied for 14th

Physics (overall): Tied for 20th

Public Affairs (public policy analysis): 14th

Public Affairs (social policy): Tied for 13th

Public Affairs (urban policy): Three-way tie for 21st

Public Health (health care management): Three-way tie for 16th

Public Health (health policy and management): 11th

Public Health (social behavior): 13th

Sociology (overall): Two-way tie for 22nd (ranked in 2025)

Sociology (population): Two-way tie for 15th (ranked in 2022)

TOP 35:

Business (analytics): Seven-way tie for 32nd

Business (executive MBA): Three-way tie for 29th

Business (finance): Nine-way tie for 31st

Business (international MBA): Tied for 32nd

Business (production & operations): Five-way tie for 27th

Engineering (chemical): Tied for 28th

Engineering (mechanical): 34th

English: Two-way tie for 34th (ranked in 2025)

Fine arts: 15-way tie for 34th

History: Three-way tie for 31st (ranked in 2025)

Mathematics: Four-way tie for 26th

Occupational Therapy: Tied for 20th

Physical Therapy: Tied for 31st

Political science: Five-way tie for 33rd (ranked in 2025)

Q&A: Ryan Calo, law professor and interdisciplinary researcher, talks about his new book, “Law and Technology”
/news/2026/03/31/qa-ryan-calo-law-professor-and-interdisciplinary-researcher-talks-about-his-new-book-law-and-technology/ | Tue, 31 Mar 2026 22:34:24 +0000

A book cover
Ryan Calo, a UW professor of law, has written a new book, “Law and Technology.” Calo is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Oxford University Press

Since Ryan Calo joined the UW School of Law in 2012, he has become a leading expert on the law and emerging technology.

Calo believes that few interesting questions — especially around technology — can be resolved by reference to a single discipline.

Calo is a co-founder of the UW Tech Policy Lab and the Center for an Informed Public. He is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering.

Calo’s newest book, “Law and Technology,” published late last year, is a guide to legal analysis of regulation and technology. Nearly a decade ago, Calo realized that the most recent book on the topic was published in the 1970s. He decided it was time for an updated resource reflecting today’s rapidly evolving technology and regulatory environment.

UW News spoke with Calo about the book and the current legal and policy climate in the United States.

man wearing a plaid shirt standing outside
Ryan Calo is a professor in the UW School of Law and the Information School. He is an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Doug Parry/UW

Who is the intended audience for “Law and Technology”?

Ryan Calo: I wrote it primarily for new entrants to the field, be they junior scholars or students. I also hoped that the themes would resonate with more senior scholars and that it would be useful outside of academia for either analysis or instruction. Because ultimately, what the book does is propose a methodology for analyzing technology from a legal perspective.

I spent a lot of time interacting with policymakers, staffers on Capitol Hill, people who work for senators and members of Congress. A legislator might come to a staffer and say, “Hey, my constituents are really worried about augmented reality or AI. They’re really worried about deep fakes.” That staff member doesn’t really have a place to start, and they end up just calling up experts, reading New York Times articles, talking to industry, but not in any kind of methodical way. This book is designed to help them figure out what’s going on.

I also hope this book will be of use to people who are in practice and want to be more methodical about analyzing a given technology.

Technology evolves fast. How should the legal system and policymakers prepare to navigate the relationship between law and emerging technologies?

RC: Many of us have an expectation that technology is just going to change. It’s just going to evolve, and our job as lawyers, judges or policymakers is to kind of scramble and accommodate the resulting disruption, and perhaps try to restore the status quo. Part of what I hope to see is legal scholars and policymakers acknowledging that the disruption isn’t inevitable.

We need to empower independent researchers to figure out what’s going on with new technology. Right now researchers are disempowered because they don’t have access to the relevant data and platforms. And many times when they try to get that data, they get served with a cease and desist letter.

We need to protect whistleblowers and make sure there’s adequate, truly top-notch expertise within government. If you have those things, then you’re much more likely to be able to figure out what could go wrong with these technologies without having to observe the harm unfold over a long period of time, as we have with the internet and now with AI.

You mentioned the School of Law’s leadership in tech policy. How is the 91̽positioned nationally in this space?

RC: We are really among the leaders in this area.

The School of Law has a lot of tech policy offerings, including a clinic. Many faculty have contributed to this scholarship over the years and write about law and technology.

We also have been really a model for impactful interdisciplinary collaboration. Law students can work in the clinic or the Tech Policy Lab. I’m one of the founders of the Center for an Informed Public, which bridges human centered design and engineering as well as the Information School and dozens of other departments, including psychology, education and even geography.

A third important example is a tri-campus mapping initiative. We did a whole year of work mapping out who was doing work in the space — all the centers, all the labs, all the initiatives — all the people on the three campuses identified as working at this intersection.

We’re leaders across the country at the law school in terms of our student offerings and our research, but we are also part of that interstitial glue. People think of the iSchool, which they should. They think of computer science, which they should. But they also should think about who else is at the center of this, who else is at the heart of it, and the School of Law is a big part of that.

There’s been a lot of news lately about states trying to regulate AI and the federal government pushing back. What’s your perspective?

RC: If I were trying to sabotage the innovation edge of the United States, I would do at least two things, maybe three.

First, I would divest from basic research. The United States has had an innovation edge over the rest of the world in large part because of decisions made in the 1950s and beyond to invest in basic research. I would dismantle that, and I would try to make it really hard for universities to do research, either by spending less, disrupting the relationships or messing with overhead in ways that make research impossible.

The second thing I would do is make it really hostile for outside innovators to come in and participate in knowledge production here. I would, whether xenophobically or not, try to make it really hard for people with ideas and talent and knowledge to come here to the United States to work on teams with other Americans, to stay here and teach in our schools, to found companies. The second enormous advantage the United States has had is that the country has become attractive because of its commitment to the rule of law and its robust higher ed system, and that’s built on its innovation and investment in research. People from all over the world come here to try and make the next Google and Amazon, or are teaching in our schools and contributing to our ecosystem.

The third thing I would do in this hypothetical situation is remove non-existent hurdles to transformative technologies like AI. What do I mean? Federal leaders are currently talking about getting out of the way of AI, but there aren’t really any regulations about AI. There are some state laws that have a kind of European flavor of risk management. There are specific things that states are worried about, including deep fakes and labeling automated social media accounts. There’s almost nothing standing in the way of AI innovation in terms of regulation.

The way our system is structured, the individual states, under our concept of federalism, are supposed to be laboratories of ideas, experimenting with legislation and showing that it works or it doesn’t. Pretending that you’re pro-innovation because you’re trying to stamp out the very few regulatory hurdles that companies have to abide by, all in the name of competing with China, which has AI laws, is just senseless. We’re much better off following the wisdom of the founders, who said, “Hey, if you have something new in society, let the states serve as laboratories for different laws, and we can all learn from each other about how that’s going.” That’s classic federalism, and it used to be a pillar of conservative thinking.

The President doesn’t have the power to boss the states around in terms of their legislative capacities. And Congress has taken up the question of whether to try to preempt state AI laws and resoundingly declined. I just want to note that the overall strategy of the administration has been deeply anti-innovation in its impact, even though it is vociferously pro-innovation in its rhetoric.

Any final thoughts?

RC: We have an environment in the U.S. that promotes innovation, sometimes through laws, such as laws that protect intellectual property and laws that make people feel safe enough to use the products and services that companies sell to us. There’s not, and never has been, a simple inverse relationship between regulation and innovation. It’s really important that we acknowledge, as a society and community, that sometimes laws are written in the service of innovation. What you want is a favorable regulatory environment, not a complete absence of the rule of law.

For more information, contact Calo at rcalo@uw.edu.

DopFone app can accurately track fetal heart rate using only a smartphone
/news/2026/02/26/dopfone-fetal-heart-rate-app/ | Thu, 26 Feb 2026 16:58:23 +0000
DopFone uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heart beats. A machine learning model then estimates the heart rate. Photo: Garg et al./Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Heart rate is an important sign of fetal health, yet few technologies exist to easily and inexpensively track fetal heart rates outside of doctors’ offices. This can create risks for pregnancies in low-resource regions where doctors are far away or inaccessible.

A team led by UW researchers has created DopFone, a system that uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heart beats. A machine learning model then estimates the heart rate. In a clinical test with 23 pregnant women, DopFone estimated heart rate with an average error of 2 beats per minute, or bpm. The accepted clinical range is within 8 bpm.

The team published its findings Dec. 2 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Eventually DopFone could let people test fetal heart rate regularly, rather than relying on the intermittent tests at a doctor’s office, or not getting tested at all,” said lead author Garg, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Patients might then send this data to doctors so that they can better judge patients’ health when they’re not in a clinic.”

Traditional Doppler ultrasounds, the clinical standard for fetal heart rate monitoring, work by sending high-frequency sound into a person’s body and tracking how the echo changes in frequency. They’re very accurate at measuring fetal heart rate but require costly equipment and a skilled technician to operate it.

To use DopFone, a user places the phone’s microphone against their abdomen for one minute. The phone emits a subaudible 18 kilohertz tone. The team chose this low frequency because — unlike a Doppler’s high frequencies, above 2,000 kilohertz — it sits within the range smartphone microphones can record while still traveling well through tissue. As the tone is reflected through the user’s abdomen, the fetus’s heartbeat creates small shifts in the sound.

A machine learning model then estimates the heart rate from the audio and the patient’s demographic information.
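DopFone's actual model isn't described in detail here; as an illustration only, a generic Doppler-style signal chain, demodulating the 18 kHz echo to baseband, low-pass filtering by block averaging, and picking the dominant periodicity by autocorrelation, could be sketched as follows. The sample rate, filter choices and heart-rate search range are all assumptions:

```python
import numpy as np

FS = 48_000        # assumed phone audio sample rate
TONE_HZ = 18_000   # the sub-audible probe tone described in the article

def estimate_bpm(recording: np.ndarray, fs: int = FS) -> float:
    """Illustrative pipeline, not DopFone's actual model: demodulate the
    18 kHz echo to baseband, smooth it to a ~100 Hz envelope, then find
    the dominant periodicity with autocorrelation and convert to bpm."""
    t = np.arange(len(recording)) / fs
    # Quadrature demodulation shifts the 18 kHz component to 0 Hz.
    baseband = recording * np.exp(-2j * np.pi * TONE_HZ * t)
    # Crude low-pass filter: average over ~10 ms blocks.
    hop = fs // 100
    env_rate = fs / hop                      # envelope sample rate, ~100 Hz
    n = len(baseband) // hop
    envelope = np.abs(baseband[: n * hop].reshape(n, hop).mean(axis=1))
    envelope -= envelope.mean()
    # Autocorrelation; search lags corresponding to 100-180 bpm.
    ac = np.correlate(envelope, envelope, mode="full")[n - 1:]
    lo = int(env_rate * 60 / 180)
    hi = int(env_rate * 60 / 100)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * env_rate / lag
```

On a synthetic recording whose 18 kHz tone is amplitude-modulated at 2.4 Hz (144 bpm), this sketch recovers the rate to within a few beats per minute; the real system feeds such features, plus demographics, to a trained model rather than a fixed autocorrelation rule.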

The team tested DopFone in 91̽Medicine’s maternal-fetal medicine division on 23 pregnant patients between 19 and 39 weeks of pregnancy. On average its readings were within 2.1 bpm of the medical Doppler ultrasound. Its accuracy was slightly diminished for patients with high body mass indexes, though those readings were still within normal limits. Because an irregular fetal heartbeat is often an emergency, DopFone was not tested on patients with irregularities.

Next, the team plans to gather more data outside a lab to better train the model. Eventually they want to deploy it as a publicly available app.

“This women’s health space is often overlooked,” Garg said. “So I want to focus on accessible alternatives that can be available to people in low resource areas, whether that’s here in the U.S. or in other countries. Because health belongs to everyone.”

Co-authors include a UW graduate student in electrical and computer engineering; two OB/GYNs in UW Medicine’s maternal-fetal medicine division; and a UW assistant professor in the Allen School. A UW professor in the Allen School and in electrical and computer engineering and a researcher at the Georgia Institute of Technology were senior authors. This research was funded by the UW Gift Fund.

For more information, contact Garg at pgarg70@uw.edu.

UW and Microsoft expand relationship to enhance AI learning and research with aim to prepare Washington’s workforce for the future
/news/2026/02/24/uw-and-microsoft-expand-relationship-to-enhance-ai-learning-and-research-with-aim-to-prepare-washingtons-workforce-for-the-future/ | Tue, 24 Feb 2026 23:33:11 +0000

woman demonstrating to two men
The UW and Microsoft announced the expansion of their long-standing partnership uniting world-class academic research with world-leading technology. Amelia Keyser-Gibson (right), a graduate student in the School of Environmental and Forest Sciences, demonstrates her research to UW President Robert J. Jones (center) and Microsoft Vice Chair and President Brad Smith (left). Photo: Mark Stone/UW

The UW and Microsoft have announced the expansion of their long-standing partnership uniting world-class academic research with world-leading technology. Together, they aim to accelerate AI discovery, prepare students and workers for an AI-driven economy, and help communities understand and use AI responsibly.

The announcement, made today by UW President Robert J. Jones and Microsoft Vice Chair and President Brad Smith during an event at the UW’s Paul G. Allen School of Computer Science & Engineering, will increase the University’s access to the most advanced AI computing power, expand internship and applied research opportunities for its students, and develop community AI literacy programs, including a foundational AI course for working Washingtonians.

“Our long-standing partnership with Microsoft demonstrates what’s possible when universities and industry come together to support students and our society, and we are grateful for their continued support,” Jones said. “Together, we’re expanding students’ access to hands-on learning, advancing AI research and strengthening our workforce.”

 


This announcement builds on Microsoft’s decades-long support of the University, including $165 million in investments in student scholarships and enhancements to the UW’s world-leading computer science and engineering programs. In tandem with ongoing state and federal support, these investments have helped increase access to education and contributed to the state’s highly skilled workforce.

“President Jones has outlined a bold vision for the 91̽, one that expands access and affordability in higher ed, forges radical partnerships and strengthens civic health,” Smith said. “It’s essential that this vision includes broad access to AI technology and the skills to use it, so students, workers and communities across Washington are prepared for this new era of computing and can share fully in its benefits.”

The announcement comes as forecasts predict a need to fill 1.5 million job vacancies in Washington by 2032 — about 640,000 new jobs and 910,000 openings due to retirements, according to Partnership for Learning. Up to 75% of those vacancies will require post-secondary credentials, with four-year and advanced degrees in highest demand. If current trends hold, experts predict a shortfall of nearly 600,000 credentialed workers in Washington over the decade.

“It’s critical that industry, colleges and universities, and policy makers continue to work together to maintain the region’s economy and climate of innovation and discovery,” Smith said. “That includes avoiding going backward by making cuts to core state funding that would make a college degree less accessible to our state’s students.”

The budgets proposed by the Washington State Legislature’s majorities would keep funding for the UW largely stable. Historically, the Legislature has created a fertile environment for workforce growth and training through the Washington Workforce Education Investment Act (WEIA) and the Washington State Opportunity Scholarship (WSOS).

Since passage in 2019, with support from Microsoft and other business leaders, the WEIA has generated more than $2 billion in dedicated funding to expand higher education access in Washington. WSOS — a first-of-its-kind public-private partnership in which private employers contribute philanthropic dollars that are matched by the State of Washington to expand access to higher education in high-demand fields — has delivered nearly $150 million in total scholarships statewide, combining private donations and state matching funds. One-third of WSOS scholars attend the UW.

“These new elements of our partnership with Microsoft continue to position the UW and our state as leaders in access to higher education and at the forefront of the emerging technologies that can drive broad-based prosperity,” Jones said.

Microsoft and the UW’s expanded partnership will:

  • Provide faculty, researchers and students with access to advanced computing capabilities that enable modern AI training, experimentation, research and instruction. Microsoft is supplementing this effort by donating Microsoft Azure cloud computing credits to help accelerate the development of a research cloud computing platform.
  • Launch a new initiative to connect 91̽faculty, visiting professors and students with real-world research opportunities at Microsoft. This is based on a new “research marketplace” that will be established and supported by Microsoft’s AI for Good Lab. It will be complemented by 10 additional graduate student-researcher slots per year — eight through the Microsoft Research organization and two in the AI for Good Lab.
  • Support undergraduate students as they become civic leaders, helping them build ethical judgment, digital citizenship and agency to co-design how emerging technologies, including AI, will serve communities and democracy.
  • Join forces with UW’s Continuum College, an institution serving more than 50,000 learners annually through 400 programs for young people, working adults and senior citizens. The 91̽and Microsoft will develop programming that helps Washingtonians navigate AI-related workforce transitions with confidence and purpose. This collaboration will result in new courses and other learning pathways focused on career resilience, evolving job demands and navigating the challenges that accompany shifting career identities.
  • Beginning this fall, the 91̽and Microsoft will launch a new collaboration on Microsoft’s Redmond campus that reimagines how universities and industry work together. This part of the work will deepen workforce-connected education and applied learning. The collaboration will support the co-development of select courses and learning experiences for Microsoft employees navigating rapid AI-driven change, while enabling 91̽students to learn alongside industry professionals and gain real-world insight as part of their academic experience. Additional details will be announced later this year.

Since becoming the UW’s 34th president in August 2025, President Jones has set out three key priorities for the University: increasing access to education, including through the goal of making a 91̽degree debt-free for Washington undergraduates; spurring radical collaborations with businesses and communities to advance positive change; and eliminating any artificial barriers between the University and the communities it serves.

Along with strategic planning underway at the UW, Jones is engaging with corporate and civic leaders, as well as organizations throughout the region, to expand existing partnerships with the UW. Through these relationships, he aims to support access and affordability for students and the economic vitality and social fabric of Washington state and beyond.

For more information, contact Victor Balta at balta@uw.edu.

]]>
Five 91̽scientists named Sloan Fellows /news/2026/02/17/five-uw-scientists-named-sloan-fellows/ Tue, 17 Feb 2026 17:10:04 +0000 /news/?p=90641 Portrait of five researchers
Five 91̽faculty members have been awarded early-career fellowships from the Alfred P. Sloan Foundation. They are, from left to right, Maria “Masha” Baryakhtar, Matthew R. Golder, Vikram Iyer, Willem Laursen and Frankie Pavia. Photo: 91̽

Five 91̽ faculty members have been awarded early-career fellowships from the Alfred P. Sloan Foundation. The new Sloan Fellows, announced Feb. 17, are Maria “Masha” Baryakhtar, an assistant professor of physics; Matthew R. Golder, an assistant professor of chemistry; and Willem Laursen, an assistant professor of biology, all in the College of Arts & Sciences; Vikram Iyer, an assistant professor of computer science in the College of Engineering; and Frankie Pavia, an assistant professor of oceanography in the College of the Environment.

Since the first Sloan Research Fellowships were awarded in 1955, and including this year’s fellows, 136 faculty from the 91̽ have received a Sloan Research Fellowship, according to the Sloan Foundation.

Sloan Fellowships are open to scholars in seven scientific and technical fields — chemistry, computer science, Earth system science, economics, mathematics, neuroscience and physics — and honor early-career researchers whose achievements mark them among the next generation of scientific leaders.

The 126 Sloan Fellows for 2026 were selected by researchers and faculty in the scientific community. Candidates are nominated by their peers, and fellows are selected by independent panels of senior scholars based on each candidate’s research accomplishments, creativity and potential to become a leader in their field. Each fellow will receive $75,000 to apply toward research endeavors.

This year’s fellows come from 44 institutions across the United States and Canada.

Maria “Masha” Baryakhtar

Baryakhtar’s research in the Department of Physics focuses on theories beyond the established Standard Model of particle physics and on creating new ideas and directions for testing these theories. Such theories address outstanding puzzles in our existing understanding and often predict new, ultralight, feebly interacting particles beyond those we have discovered so far. The existence of these particles can be tested through exquisitely precise experiments in the lab or by observing extreme objects in the sky like black holes and neutron stars.

“My research program aims to search high and low for new, as yet hidden particles and forces. Because of their nature, these particles require a range of creative search strategies. The directions I am establishing use new technologies and data from the sky to the lab and may be the only way to shed light on the truly dark elements of our universe.”

Matthew R. Golder

Golder’s research in the Department of Chemistry addresses the omnipresent “plastics problems” from two different vantage points. First, the team thinks about new ways to prolong the useful lifetime of commodity materials. The researchers use molecular engineering to keep plastics in use longer before discarding. The Golder Research Group also develops new methods to make and repurpose plastics, with an emphasis on green chemistry and making plastics more recyclable.

“Plastics are paramount to daily life, so there are numerous opportunities to improve performance and mitigate waste. We operate at the interface of fundamental organic chemistry and applied materials science to enhance plastic integrity and sustainability. By doing so, my students really take this mission to heart and constantly dream up new ways to creatively (re)design commodity plastic materials.”

Vikram Iyer

Iyer’s research in the Paul G. Allen School of Computer Science & Engineering seeks to address sustainability challenges across the full computing stack, from creating recyclable polymers to reimagining the way we build computing hardware with the help of AI systems. In particular, the group’s work goes beyond simply reducing energy consumption to quantify and tackle the environmental impacts of materials and manufacturing.

“My group both leverages innovations from outside of computing, like chemistry and materials science, to drive sustainability and applies computing techniques, from AI to programming languages, to fundamentally advance environmental sciences. This work is highly interdisciplinary and takes some extra effort at the beginning for each of us to understand the technologies and methods developed by our collaborators. By doing this, we can come up with completely new ideas that have real-world impact, like enabling carbon reduction at major companies like Amazon, and creating systems like battery-free robots that push the boundaries of technology.”

Willem Laursen

Laursen’s research in the Department of Biology is focused on understanding how animals detect and respond to sensory cues in their environment. Using genetic manipulation, neurophysiology and behavioral analyses, the lab’s current focus is to understand how disease vector mosquitoes use sensory cues to locate hosts, mates and egg-laying sites.

“It is an honor to be selected as a Sloan Fellow. This award will support our lab’s research on the role of the mosquito gustatory, or taste, system in critical behaviors, such as blood feeding. While mosquitoes use all of their senses to efficiently locate hosts, their taste system is surprisingly understudied. By examining the gustatory systems of blood-feeding insects, we hope to better understand how taste cues on the skin and in the blood are detected and used to guide their specialized behaviors, lines of inquiry that could ultimately identify new targets for controlling the spread of disease.”

Frankie Pavia

Pavia’s research in the School of Oceanography develops and applies new isotopic techniques to study feedbacks in the Earth system. His work spans the oceanic, atmospheric, lithospheric and human domains, on timescales ranging from minutes to millennia.

“The oceans are a repository and reactor for materials originating on land, in the atmosphere, in Earth’s interior and from outer space. Chemical fingerprints of oceanic interactions with these reservoirs can be unlocked using unique analytical chemistry techniques, especially those involving the precise measurement of isotope ratios. My current research aims to discover new interactions between the oceans and the Earth system in the past, present and future, by pioneering interdisciplinary studies that use measurements of stable and radioactive isotopes to determine how much and how fast the Earth system changes. Current projects involve using cosmic dust to reconstruct sea-ice coverage, sensitively detecting human-derived carbon in the oceans, and understanding the past and future impacts of oceanic calcium carbonate dissolution on storage of atmospheric carbon dioxide.”

Contact Baryakhtar at mbaryakh@uw.edu, Golder at goldermr@uw.edu, Iyer at vsiyer@cs.washington.edu, Laursen at wlaursen@uw.edu, and Pavia at fjpavia@uw.edu.

]]>
In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts /news/2026/02/04/in-a-study-ai-model-openscholar-synthesizes-scientific-research-and-cites-sources-as-accurately-as-human-experts/ Wed, 04 Feb 2026 16:02:30 +0000 /news/?p=90533 A screenshot of the OpenScholar demo.
A 91̽and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time. Above is the user interface for a free online demo of the model.

Keeping up with the latest research is vital for scientists, but given how many papers are published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or “hallucinate.”

For instance, when a team led by researchers at the 91̽ and the Allen Institute for AI, or Ai2, studied a recent OpenAI model, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can’t access papers that were published after their training data was collected.

So the 91̽and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project’s code and demo are publicly available and free to use.

“After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we’d expected,” said senior author Hannaneh Hajishirzi, a 91̽associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. “When we started looking through the responses we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research.”

Try the free online demo.

Researchers trained the model and then assembled a collection of 45 million scientific papers for OpenScholar to pull from to ground its answers in established research. They coupled this with a retrieval technique that lets the model search for new sources, incorporate them and cite them after it’s been trained.
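
The grounding idea can be sketched in miniature. The snippet below is a hypothetical illustration, not OpenScholar’s actual pipeline: a toy word-overlap score stands in for the real retriever over 45 million papers, and the “answer” simply echoes the retrieved evidence with citation markers rather than generating text.

```python
# Hypothetical sketch of retrieval-grounded answering. The paper texts,
# IDs and scoring are invented for illustration.

papers = {
    "P1": "transformers improve machine translation quality",
    "P2": "ocean acidification affects coral reefs",
    "P3": "retrieval augmented generation reduces hallucination in language models",
}

def retrieve(query, k=2):
    # Rank papers by how many query words they share (a stand-in for a
    # real dense or sparse retriever).
    q = set(query.lower().split())
    scored = sorted(papers, key=lambda pid: -len(q & set(papers[pid].split())))
    return scored[:k]

def answer(query):
    # A real system would pass the retrieved text to a language model;
    # here we just echo the evidence with citation markers.
    hits = retrieve(query)
    return "; ".join(f"{papers[pid]} [{pid}]" for pid in hits)

print(answer("does retrieval reduce hallucination in language models"))
# the most relevant paper, P3, is cited first
```

The key property, mirrored from the article, is that every claim in the output carries a citation to a retrieved document, and new documents can be added to the collection without retraining the model.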

“Early on we experimented with using an AI model with Google’s search data, but we found it wasn’t very good on its own,” said lead author Akari Asai, a research scientist at Ai2 who completed this research as a 91̽doctoral student in the Allen School. “It might cite some research papers that weren’t the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results.”

To test their system, the team created ScholarQABench, a benchmark for evaluating systems on scientific search. They gathered 3,000 queries and 250 long-form answers written by experts in computer science, physics, biomedicine and neuroscience.

“AI is getting better and better at real-world tasks,” Hajishirzi said. “But the big question ultimately is whether we can trust that its answers are correct.”

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI’s GPT-4o and two models from Meta. ScholarQABench automatically evaluated AI models’ answers on metrics such as their accuracy, writing quality and relevance.

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar answers to human answers 51% of the time, but when they combined OpenScholar’s citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

“Scientists see so many papers coming out every day that it’s impossible to keep up,” Asai said. “But the existing AI systems weren’t designed for scientists’ specific needs. We’ve already seen a lot of scientists using OpenScholar and because it’s open-source, others are building on this research and already improving on our results. We’re working on a follow-up model that builds on OpenScholar’s findings and performs multi-step search and information gathering to produce more comprehensive responses.”

Other co-authors include , , , all 91̽doctoral students in the Allen School; , a 91̽professor emeritus in the Allen School and general manager and chief scientist at Ai2; , a 91̽postdoc in the Allen School and postdoc at Ai2; , a 91̽professor in the Allen School; , a 91̽assistant professor in the Allen School; Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D’Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of the University of Illinois Urbana-Champaign; Pan Ji of the University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

]]>
Video: Drivers struggle to multitask when using dashboard touch screens, study finds /news/2025/12/16/video-drivers-struggle-to-multitask-when-using-dashboard-touch-screens-study-finds/ Tue, 16 Dec 2025 17:00:09 +0000 /news/?p=90099

Once the domain of buttons and knobs, car dashboards are increasingly home to large touch screens. While that makes following a mapping app easier, it also means drivers can’t feel their way to a control; they have to look. But how does that visual component affect driving?

New research from the 91̽ and Toyota Research Institute, or TRI, explores how drivers balance driving and using touch screens while distracted. In the study, participants drove in a vehicle simulator, interacted with a touch screen and completed memory tests that mimic the mental effort demanded by traffic conditions and other distractions. The team found that when people multitasked, their driving and touch screen use both suffered. The car drifted more in the lane while people used touch screens, and their speed and accuracy with the screen declined when driving. The effects increased further when they added the memory task.

These results could help auto manufacturers design safer, more responsive touch screens and in-car interfaces.

The team presented its findings Sept. 30 at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

“We all know ,” said co-senior author James Fogarty, a 91̽professor in the Paul G. Allen School of Computer Science & Engineering. “But what about the car’s touch screen? We wanted to understand that interaction so we can design interfaces specifically for drivers.”

As the study’s 16 participants drove the simulator, sensors tracked their gaze, finger movements, pupil diameter and electrodermal activity. The last two are common ways to measure mental effort, or “cognitive load.” For instance, pupils tend to grow when people are concentrating.


While driving, participants had to touch specific targets on a 12-inch touch screen, similar to how they would interact with apps and widgets. They did this while completing three levels of an “N-back task,” a memory test in which the participants hear a series of numbers, 2.5 seconds apart, and have to repeat specific digits.
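
As a concrete illustration of the memory test, here is a minimal sketch of the digit-repetition variant described above. The helper names and example digits are hypothetical; apart from the 2.5-second pacing, the study’s exact task parameters aren’t specified here.

```python
# Toy N-back scoring sketch (illustrative, not the study's code). The
# participant hears a stream of digits and, once N digits have gone by,
# must report the digit heard N items earlier.

def expected_responses(stream, n):
    # No response is expected until n digits have been heard.
    return [stream[i - n] for i in range(n, len(stream))]

def score(stream, responses, n):
    # Fraction of responses that match the digit from n steps back.
    expected = expected_responses(stream, n)
    correct = sum(r == e for r, e in zip(responses, expected))
    return correct / len(expected)

stream = [3, 8, 1, 5, 9]
print(expected_responses(stream, 2))  # → [3, 8, 1]
```

Raising N increases the number of digits the participant must hold in memory at once, which is how the study varied cognitive load across its three levels.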

The participants’ performance changed significantly under different conditions:

  • When interacting with the touch screen, participants drifted side to side in their lane 42% more often. Increasing cognitive load had no additional effect on this measure.
  • Touch screen accuracy and speed decreased 58% when driving, then another 17% under high cognitive load.
  • Each glance at the touch screen was 26.3% shorter under high cognitive load.
  • A “hand-before-eye” phenomenon, in which drivers reached for a control before looking at it, increased from 63% to 71% as memory tasks were introduced.

The team also found that increasing the size of the target areas participants were trying to touch did not improve their performance.

“If people struggle with accuracy on a screen, usually you want to make bigger buttons,” said , a 91̽doctoral student in the Allen School. “But in this case, since people move their hand to the screen before touching, the thing that takes time is the visual search.”

Based on these findings, the researchers suggest future in-car touch screen systems might use simple sensors in the car — eye tracking, or touch sensors on the steering wheel — to monitor drivers’ attention and cognitive load. Based on these readings, the car’s system might adjust the touch screen’s interface to make important controls more prominent and safer to access.
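
As a rough sketch of that idea, the snippet below shows one way a system might gate which controls are surfaced based on a pupil-diameter reading. The control names, priorities and thresholds are entirely hypothetical, not from the study.

```python
# Hypothetical adaptive-interface sketch: under high cognitive load,
# show only high-priority controls. All names and numbers are invented.

CONTROLS = [
    ("hazard_lights", 3),   # (control name, priority)
    ("defrost", 2),
    ("volume", 1),
    ("ambient_color", 0),
]

def visible_controls(pupil_diameter_mm, baseline_mm=3.0, threshold_mm=0.5):
    # Pupil dilation relative to a resting baseline is a common proxy
    # for cognitive load (the study also used electrodermal activity).
    high_load = pupil_diameter_mm - baseline_mm > threshold_mm
    if high_load:
        return [name for name, priority in CONTROLS if priority >= 2]
    return [name for name, _ in CONTROLS]

print(visible_controls(3.8))  # → ['hazard_lights', 'defrost']
```

In practice the same gating could draw on steering-wheel touch sensors or eye tracking, as the researchers suggest, rather than pupil measurements alone.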

“Touch screens are widespread today in automobile dashboards, so it is vital to understand how interacting with touch screens affects drivers and driving,” said co-senior author Jacob O. Wobbrock, a 91̽professor in the Information School. “Our research is some of the first that scientifically examines this issue, suggesting ways for making these interfaces safer and more effective.”

, a 91̽doctoral student in the Information School, is co-lead author. Other co-authors include , , and of TRI. This research was funded in part by TRI.

For more information, contact Wobbrock at wobbrock@uw.edu and Fogarty at jfogarty@cs.washington.edu.

]]>
AI can pick up cultural values by mimicking how kids learn /news/2025/12/11/ai-training-cultural-values/ Thu, 11 Dec 2025 17:04:44 +0000 /news/?p=90064 A video game shows two kitchens of different sizes.
In the Overcooked video game, players work to cook and deliver as much onion soup as possible. In the study’s version of the game, one player can give onions to help the other, who has farther to travel to make the soup. The research team wanted to find out if AI systems could learn altruism by watching different cultural groups play the game. Photo:

Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won’t work equally well for people from different cultures.

But a new 91̽ study suggests that AI could learn cultural values by observing human behavior. Researchers had AI systems observe people from two cultural groups playing a video game. On average, participants in one group behaved more altruistically. The AI assigned to each group learned that group’s degree of altruism, and was able to apply that value to a novel scenario beyond the one it was trained on.

The team published its findings Dec. 9 in PLOS One.

“We shouldn’t hard code a universal set of values into AI systems, because many cultures have their own values,” said senior author Rajesh Rao, a 91̽professor in the Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology. “So we wanted to find out if an AI system can learn values the way children do, by observing people in their culture and absorbing their values.”

As inspiration, the team looked to research showing that 19-month-old children raised in Latino and Asian households were more altruistic than those from other cultures.

In the AI study, the team recruited 190 adults who identified as white and 110 who identified as Latino. Each group was assigned an AI agent, a system that can function autonomously.

These agents were trained with a method called inverse reinforcement learning, or IRL. In the more common AI training method, reinforcement learning, or RL, a system is given a goal and gets rewarded based on how well it works toward that goal. In IRL, the AI system observes the behavior of a human or another AI agent, and infers the goal and underlying rewards. So a robot trained to play tennis with RL would be rewarded when it scores points, while a robot trained with IRL would watch professionals playing tennis and learn to emulate them by inferring goals such as scoring points.
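
A toy version of that inference can be sketched in a few lines. This is a hypothetical illustration, not the study’s code: observed players either keep an onion (earning their own payoff) or give it away (earning the partner a payoff), and a single “altruism weight” is fit by maximizing the likelihood of the observed choices under a softmax choice rule. The payoffs and candidate weights are invented for illustration.

```python
import math

# Toy inverse-reinforcement-learning sketch. Each observation is
# (payoff_for_keeping, partner_payoff_if_given, gave_onion). Utility is
# the own payoff when keeping and w * partner payoff when giving.

def log_likelihood(w, observations):
    total = 0.0
    for own_keep, other_give, gave in observations:
        u_keep = own_keep
        u_give = w * other_give
        p_give = math.exp(u_give) / (math.exp(u_give) + math.exp(u_keep))
        total += math.log(p_give if gave else 1.0 - p_give)
    return total

def infer_altruism(observations):
    # Simple grid search over candidate weights; a real IRL system would
    # optimize a much richer reward model over sequential behavior.
    candidates = [i / 100 for i in range(-100, 201)]
    return max(candidates, key=lambda w: log_likelihood(w, observations))

# A group that usually shares ends up with a higher inferred altruism
# weight than a group that rarely does.
generous = [(1.0, 2.0, True)] * 8 + [(1.0, 2.0, False)] * 2
selfish = [(1.0, 2.0, True)] * 2 + [(1.0, 2.0, False)] * 8
w_generous = infer_altruism(generous)
w_selfish = infer_altruism(selfish)
print(w_generous, w_selfish)  # the generous group's weight is higher
```

The inferred weight plays the role of the “underlying reward” in the article’s description: once learned, it can be applied to score choices in a scenario the agent never observed, such as the donation task in the second experiment.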

This IRL approach more closely aligns with how humans develop.

“Parents don’t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others,” said co-author Andrew Meltzoff, a 91̽professor of psychology and co-director of the Institute for Learning & Brain Sciences (I-LABS). “Kids learn almost by osmosis how people act in a community or culture. The human values they learn are more ‘caught’ than ‘taught.’”

In the study, the AI agents were given the data of the participants playing a modified version of the video game Overcooked, in which players work to cook and deliver as much onion soup as possible. Players could see into another kitchen where a second player had to walk further to accomplish the same tasks, putting them at an obvious disadvantage. Participants didn’t know that the second player was a bot programmed to ask the human players for help. Participants could choose to give away onions to help the bot but at the personal cost of delivering less soup.

Researchers found that overall the people in the Latino group chose to help more than those in the white group, and the AI agents learned the altruistic values of the group they were trained on. When playing the game, the agent trained on Latino data gave away more onions than the other agent.

To see if the AI agents had learned a general set of values for altruism, the team conducted a second experiment. In a separate scenario, the agents had to decide whether to donate a portion of their money to someone in need. Again, the agents trained on Latino data from Overcooked were more altruistic.

“We think that our proof-of-concept demonstrations would scale as you increase the amount and variety of culture-specific data you feed to the AI agent. Using such an approach, an AI company could potentially fine-tune their model to learn a specific culture’s values before deploying their AI system in that culture,” Rao said.

Additional research is needed to know how this type of IRL training would perform in real-world scenarios, with more cultural groups, competing sets of values, and more complicated problems.

“Creating culturally attuned AI is an essential question for society,” Meltzoff said. “How do we create systems that can take the perspectives of others into account and become civic minded?”

, a 91̽research engineer in the Allen School, and , a software engineer at Microsoft who completed this research as a 91̽student, were co-lead authors. Other co-authors include , a scientist at the Allen Institute who completed this research as a 91̽doctoral student; , an assistant professor at San Diego State University, who completed this research as a postdoctoral scholar at the UW; and , a professor in the Allen School and director of the at UW.

For more information, contact Rao at rao@cs.washington.edu.

]]>
AI headphones automatically learn who you’re talking to — and let you hear them better /news/2025/12/09/ai-headphones-smart-noise-cancellation-proactive-listening/ Tue, 09 Dec 2025 17:30:37 +0000 /news/?p=89888

UPDATE (Dec. 12, 2025): This story has been updated to correct Malek Itani’s department.

Holding a conversation in a crowded room often leads to the frustrating “cocktail party problem,” or the challenge of separating the voices of conversation partners from a hubbub. It’s a mentally taxing situation that can be exacerbated by hearing impairment.

As a solution to this common conundrum, researchers at the 91̽ have developed headphones that proactively isolate all the wearer’s conversation partners in a noisy soundscape. The headphones are powered by an AI model that detects the cadence of a conversation and another model that mutes any voices which don’t follow that pattern, along with other unwanted background noises. The prototype uses off-the-shelf hardware and can identify conversation partners using just two to four seconds of audio.

The system’s developers think the technology could one day help users of hearing aids, earbuds and smart glasses to filter their soundscapes without the need to manually direct the AI’s “attention.”

The team presented its findings Nov. 7 in Suzhou, China, at the Conference on Empirical Methods in Natural Language Processing. The underlying code is open-source and publicly available.

“Existing approaches to identifying who the wearer is listening to predominantly involve electrodes implanted in the brain to track attention,” said senior author Shyam Gollakota, a 91̽professor in the Paul G. Allen School of Computer Science & Engineering. “Our insight is that when we’re conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes.”


The prototype system, dubbed “proactive hearing assistants,” activates when the person wearing the headphones begins speaking. From there, one AI model begins tracking conversation participants by performing a “who spoke when” analysis and looking for low overlap in exchanges. The system then forwards the result to a second model which isolates the participants and plays the cleaned up audio for the wearer. The system is fast enough to avoid confusing audio lag for the user, and can currently juggle one to four conversation partners in addition to the wearer’s audio.
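
The “who spoke when” idea can be illustrated with a small sketch. This is a hypothetical simplification, not the paper’s models: given diarized segments of (speaker, start, end) times, speakers whose talk rarely overlaps the wearer’s are treated as turn-taking conversation partners, while heavily overlapping voices are treated as background. The speaker names and the 20% overlap threshold are invented for illustration.

```python
# Hypothetical turn-taking sketch: low overlap with the wearer's speech
# suggests a conversation partner; constant overlap suggests background.

def overlap_seconds(seg_a, seg_b):
    # Length of the intersection of two (start, end) intervals.
    return max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))

def likely_partners(segments, wearer="wearer", max_overlap_ratio=0.2):
    wearer_segs = [(s, e) for spk, s, e in segments if spk == wearer]
    partners = []
    for spk in {spk for spk, _, _ in segments if spk != wearer}:
        segs = [(s, e) for sp, s, e in segments if sp == spk]
        talk = sum(e - s for s, e in segs)
        ovl = sum(overlap_seconds(a, b) for a in segs for b in wearer_segs)
        if talk > 0 and ovl / talk <= max_overlap_ratio:
            partners.append(spk)
    return sorted(partners)

segments = [
    ("wearer", 0.0, 2.0), ("alice", 2.1, 4.0),  # clean turn-taking
    ("wearer", 4.2, 6.0), ("alice", 6.1, 8.0),
    ("radio", 0.0, 8.0),                        # constant background voice
]
print(likely_partners(segments))  # → ['alice']
```

The real system learns these rhythms from audio with a neural model rather than thresholding intervals, and hands the identified partners to a second model that isolates their voices.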

The team tested the headphones with 11 participants, who rated qualities like noise suppression and comprehension with and without the AI filtration. Overall, the group rated the filtered audio more than twice as favorably as the baseline.

A pair of headphones with a curly black microphone taped to one ear cup.
The team combined off-the-shelf noise-canceling headphones with binaural microphones to create the prototype, pictured here. Photo: Hu et al./EMNLP

Gollakota’s team has been experimenting with AI-powered hearing assistants for the past few years. They developed one smart headphone prototype that can pick a person’s audio out of a crowd when the wearer looks at them, and another that creates a “sound bubble” by muting all sounds within a set distance of the wearer.

“Everything we’ve done previously requires the user to manually select a specific speaker or a distance within which to listen, which is not great for user experience,” said lead author Guilin Hu, a doctoral student in the Allen School. “What we’ve demonstrated is a technology that’s proactive — something that infers human intent noninvasively and automatically.”

Plenty of work remains to refine the experience. The more dynamic a conversation gets, the more the system is likely to struggle, as participants talk over one another or speak in longer monologues. Participants entering and leaving a conversation present another hurdle, though Gollakota was surprised by how well the current prototype performed in these more complicated scenarios. The authors also note that the models were tested on English, Mandarin and Japanese dialog, and that the rhythms of other languages might require further fine-tuning.

The current prototype uses commercial over-the-ear headphones, microphones and circuitry. Eventually, Gollakota expects to make the system small enough to run on a tiny chip within an earbud or a hearing aid. In earlier research, the authors demonstrated that it is possible to run AI models on tiny hearing aid devices.

Co-authors include , a 91̽doctoral student in the Allen School, and , a 91̽doctoral student in the electrical and computer engineering department.

This research was funded by the Moore Inventor Fellows program.

For more information, contact proactivehearing@cs.washington.edu.

]]>