Technology – UW News /news

Tiny cameras in earbuds let users talk with AI about what they see
/news/2026/04/14/cameras-in-wireless-earbuds-vuebuds/ (Tue, 14 Apr 2026)

Two black earbuds: one with the casing removed exposing a computer chip and tiny camera.
UW researchers developed a system called VueBuds that uses tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. Here, the altered earbuds are shown with the camera inserted. Photo: Kim et al./CHI ‘26

University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, “Hey Vue, translate this for me.” They’d then hear an AI voice say, “The visible text translates to ‘Cold Noodles’ in English.”

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team will present the research April 14 at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) in Barcelona.

“We haven’t seen most people adopt smart glasses or VR headsets, in part because a lot of people don’t like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud,” said the senior author, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process.”

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-resolution cameras found in smart glasses wouldn’t work. Bluetooth also can’t continuously stream large amounts of data, so the system can’t transmit continuous video.

The team found that using a low-power camera — roughly the size of a grain of rice — to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.

There was also the matter of placement.

“One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user’s view of the world reliably?” said lead author Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5-10 degrees outward provides a 98-108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them — making it a non-issue for typical interactions.

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system “stitch” the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second — quick enough to feel like real-time for users — rather than the two seconds it takes with separate images.
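
For readers curious what that stitching step might look like in practice, here is a minimal sketch. It is not the authors' implementation: it assumes two already-aligned grayscale frames as NumPy arrays, uses OpenCV's phase correlation to estimate the horizontal overlap, and simply averages the overlapping columns.

```python
import cv2
import numpy as np

def stitch_earbud_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Merge two grayscale frames by estimating their horizontal overlap."""
    # Phase correlation estimates the (dx, dy) translation between the frames.
    (dx, dy), _ = cv2.phaseCorrelate(left.astype(np.float32),
                                     right.astype(np.float32))
    shift = int(round(abs(dx)))        # assume the offset is mostly horizontal
    h, w = left.shape
    canvas = np.zeros((h, w + shift), dtype=np.float32)
    weight = np.zeros_like(canvas)
    canvas[:, :w] += left
    weight[:, :w] += 1
    canvas[:, shift:shift + w] += right
    weight[:, shift:shift + w] += 1
    # Average wherever the two frames overlap instead of summing them.
    return (canvas / np.maximum(weight, 1)).astype(np.uint8)
```

A production system would also need to handle vertical offset, exposure differences and frames with little or no overlap.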

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-resolution images processed in the cloud, the two systems performed equivalently. Participants preferred VueBuds’ translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system’s ability to translate and answer basic questions about objects. VueBuds achieved 83-84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can’t answer questions that involve color in the scene.

The team wants to add color to the system — color cameras require more power — and to train specialized AI models for specific use cases, such as translation. 

“This study lets us glimpse what’s possible just using a general purpose language model and our wireless earbuds with cameras,” Kim said. “But we’d like to study the system more rigorously for applications like reading a book — for people who have low vision or are blind, for instance — or translating text for travelers.” 

Co-authors include a UW master’s student in the Allen School and four UW students in electrical and computer engineering.

For more information, contact vuebuds@cs.washington.edu.

At quantum testbed lab, researchers across the UW probe ‘spooky’ mysteries of quantum phenomena
/news/2026/04/13/qt3-quantum-computing-testbed-lab-dilution-fridge/ (Mon, 13 Apr 2026)

Three people stand next to a complex metal tube-shaped machine
Max Parsons (left), assistant professor of electrical and computer engineering, works with undergraduate staff members Reynel Cariaga (center) and Jesus Garcia (right) at the QT3 lab. The device in the foreground is a scanning tunneling microscope that can image individual atoms within a material by scanning an extremely fine needle — just one atom thick at the tip — across the sample. Photo: Erhong Gao/UW

Even on a campus like the University of Washington’s — home to particle accelerators, wave tanks and countless other bespoke pieces of equipment — the machinery in the QT3 lab stands out. Take the dilution fridge, a large, white, cylindrical device that can cool a small chamber to one hundredth of a kelvin above absolute zero — the coldest possible temperature in the universe.

“This is the coldest fridge money can buy,” said Max Parsons, a UW assistant professor of electrical and computer engineering and the former director of the lab, which goes by the nickname QT3. “When it’s running, the chamber inside this device is about 100 times colder than outer space. At that temperature, it’s much easier to study and manipulate a material’s quantum properties.”

The lab also houses a photon qubit tabletop lab: a nondescript set of boxes, lasers and lenses that can demonstrate the “spooky” — a term scientists actually use — phenomenon known as quantum entanglement, where two particles appear to communicate instantaneously with each other despite being physically apart.

Or there’s the lab’s latest acquisition, the scanning tunneling microscope, which can image individual atoms within a solid material, allowing researchers to study the structure of materials at the smallest scales.

For three years, an interdisciplinary group of researchers has been marshaling resources and expertise to create QT3, and now the lab is opening its doors as a one-stop resource for quantum researchers and educators at the UW.

“The idea of this lab is to improve access to quantum hardware,” Parsons said. “It’s rather hard to acquire equipment like this. And there are a lot of researchers that may have good ideas that they want to test, but don’t have the resources yet for their own equipment. So we’re inviting researchers, initially from across campus, but also from other universities and from industry, to come in and test their ideas. This can be a hub for quantum experts to share their ideas and collaborate.”

The lab also boasts hardware that can demonstrate known quantum principles and techniques, making it useful for students in quantum fields. In addition to the entanglement device, Parsons’ students developed a machine that can suspend charged particles — in this case, tiny grains of pollen — in midair using electric fields. Researchers use the same technique to trap single atoms and manipulate their quantum properties, making the lab’s ion-trapping machine good practice for more complex work.

Two tiny dots hover back and forth in a tube
The QT3 facility’s ion trapping lab gives students a chance to practice techniques used in quantum computing research. Here, students have suspended two tiny grains of pollen — the red dots hovering back and forth — in midair using electric fields. Photo: Robert Thomas

Some students even work at the lab through an undergraduate staffing program, and have helped install instrumentation, write code to power equipment and build parts for custom microscopes. The program provides yet another avenue for students to get hands-on experience with unusual machinery and techniques.

“Quantum mechanics is inherently counterintuitive, and that makes it a powerful teaching tool,” Parsons said. “In the QT3 lab, students will encounter systems where their everyday intuition breaks down, and they must rely on careful reasoning and experimentation instead. They learn how to debug when results don’t match expectations, how to test simple cases and how to build understanding about hardware step by step.”

The cosmically cold dilution fridge remains something of a centerpiece, even as the lab fills up with specialized equipment. The extreme environment within the device strips heat, light and other stray energy away from materials, allowing researchers to observe the peculiar quantum properties that remain. One such property is superposition, or the ability of a particle like an electron to occupy multiple mutually exclusive states at the same time. Scientists use superposition to create a powerful, tiny piece of technology: a quantum bit, or qubit.

“Traditional computers use bits, which can only be one or zero. A qubit, on the other hand, we can make one plus zero,” Parsons said. “It’s both at the same time, and only when we measure it do we find out which one it is. We can use this unusual property to build a new class of computers that excel at tasks like communications and encryption.”
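
In standard textbook notation (not specific to QT3's hardware), the superposition Parsons describes can be written as a weighted combination of the two classical bit values:

```latex
% A single qubit state: a weighted sum of the classical values 0 and 1.
% Measuring it yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
\]
% The "one plus zero" state quoted above is the balanced case:
\[
  \lvert + \rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\lvert 0 \rangle + \lvert 1 \rangle\bigr)
\]
```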

QT3 is part of a collaborative effort to solidify 91̽as a leader in quantum research and applications. Most of the lab hardware was funded by a congressional earmark championed by Senator Maria Cantwell’s office. Departmental funding from across the College of Engineering and the College of Arts and Sciences helped rehab the lab space. The National Science Foundation provided seed funding for the instructional lab equipment.

a repeating hexagonal pattern of small golden blobs
An image captured by the QT3 lab’s scanning tunneling microscope reveals a lattice of individual atoms in a sample of silicon. Photo: Rajiv Giridharagopal

The UW has also spent the past decade investing heavily in faculty with quantum expertise.

“Very few places have expertise across the full quantum stack, from materials up to algorithms,” said the lab’s founder, a UW professor of physics. “The UW has quantum faculty in electrical and mechanical engineering, physics, computer science, materials science and chemistry. Our faculty work on superconducting qubits, spin defects, photons, trapped ions, neutral atoms and topological qubits. Our advantage is the breadth of our investment.”

The lab is now available to researchers and students across the UW, and private companies are encouraged to reach out about partnering. Parsons has already used the lab to teach a graduate-level class in electrical and computer engineering for students who included employees from Boeing, Microsoft and the quantum computing company IonQ. The lab is hiring a full-time manager to maintain the equipment and help users make the most of the facility.

“Here in academia, we can improve the building blocks for applied technologies like quantum computing, and then transfer those learnings to industry for further scaling,” Parsons said.

For more information, contact Parsons at mfpars@uw.edu.

New marine energy tech is put to the test at Harris Hydraulics Lab
/news/2026/03/06/marine-energy-turbines-harris-hydraulics-uw-pnnl/ (Fri, 06 Mar 2026)

At the University of Washington’s Harris Hydraulics Lab, an odd scene plays out. Over and over again, researchers from the UW and the Pacific Northwest National Laboratory (PNNL) pass a small rubber model of a marine animal through a large tank filled with flowing water and fitted with a spinning turbine. On some runs, the model bonks against the turbine blades; on others, it receives a glancing blow or sails past undisturbed. When bonks or nicks occur, a small collision sensor on one of the turbine’s blades detects the impacts and plots the interactions in a computer program.
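
As a rough illustration of how such a collision sensor could flag impacts — this is a toy sketch, not the actual sensor firmware, and the sampling rate and threshold are assumptions — one simple approach is to look for vibration samples that spike far above the recent baseline:

```python
import numpy as np

def detect_impacts(signal: np.ndarray, fs: float,
                   window_s: float = 0.5, threshold_sigma: float = 6.0):
    """Return timestamps (seconds) where the signal spikes above its recent baseline."""
    win = max(1, int(window_s * fs))
    impacts = []
    for i in range(win, len(signal)):
        baseline = signal[i - win:i]
        # Flag samples that deviate strongly from the rolling mean.
        if abs(signal[i] - baseline.mean()) > threshold_sigma * (baseline.std() + 1e-9):
            impacts.append(i / fs)
    return impacts
```

A real blade sensor would also need to reject the background vibration of the spinning turbine itself.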

The researchers are repeatedly simulating something that they hope will rarely happen in the wild: a collision between marine wildlife like a seabird, seal, fish or whale — or submerged debris like logs — and an underwater turbine.

“We want to make sure we’re minimizing the chances of a collision in the first place,” said Aidan Hunt, a senior research engineer in mechanical engineering at the UW and a member of the Pacific Marine Energy Center (PMEC). “But if a collision were to occur, we want to be able to detect it, and potentially avoid it, in real time. The available evidence suggests that collisions are rare, but we’re taking a ‘trust-but-verify’ approach.”

Marine energy — power harvested from tides, waves and currents — has enormous potential as a clean, renewable resource. But more information is needed about how large, commercial installations of underwater turbines or power-generating buoys could affect marine wildlife, whether through increased noise in the environment, habitat change or direct interactions with equipment.

The marine collision experiments are part of the Triton Initiative, a collection of projects led by PNNL to study the environmental impact of marine energy.

The work at Harris Hydraulics follows a study by PNNL and the UW Applied Physics Lab using a four-foot-tall prototype turbine installed at the entrance to Sequim Bay. In that study, researchers trained an underwater camera on the turbine for 109 days and then catalogued every instance of an animal approaching or interacting with the turbine. The camera captured more than 1,000 instances of fish, birds and seals approaching the turbine blades. There were only four collisions, all involving small fish.

“This study was a first step, but a promising one,” said a co-author of that study, a research scientist at the UW Applied Physics Lab. “We didn’t see any endangered species in our study, and the risk of collision for seals and sea birds seemed to be quite low. We’re excited to get back out there with the camera and learn even more.”

The Sequim Bay experiment generated hours of valuable data, but that degree of intense monitoring may not be practical in large commercial installations in the future. Cheaper impact sensors, like the ones logging bath toy impacts at Harris Hydraulics, could be a solution, researchers say. 

The project is funded by the U.S. Department of Energy’s Hydropower & Hydrokinetics Office, through the Pacific Northwest National Laboratory’s Triton Initiative and the TEAMER program.

For more information, contact Hunt at ahunt94@uw.edu or Emma Cotter at emma.cotter@pnnl.gov.

DopFone app can accurately track fetal heart rate using only a smartphone
/news/2026/02/26/dopfone-fetal-heart-rate-app/ (Thu, 26 Feb 2026)
DopFone uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. Photo: Garg et al./Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Heart rate is an important sign of fetal health, yet few technologies exist to easily and inexpensively track fetal heart rates outside of doctors’ offices. This can create risks for pregnancies in low-resource regions where doctors are far away or inaccessible.

A team led by University of Washington researchers has created DopFone, a system that uses an off-the-shelf smartphone’s existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. In a clinical test with 23 pregnant women, DopFone estimated heart rate with an average error of about 2 beats per minute, or bpm. The accepted clinical range is within 8 bpm.

The team published its findings Dec. 2 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Eventually DopFone could let people test fetal heart rate regularly, rather than relying on the intermittent tests at a doctor’s office, or not getting tested at all,” said lead author Garg, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Patients might then send this data to doctors so that they can better judge patients’ health when they’re not in a clinic.”

Traditional Doppler ultrasounds, the clinical standard for fetal heart rate monitoring, work by sending high-frequency sound into a person’s body and tracking how the echo changes in frequency. They’re very accurate at measuring fetal heart rate but require costly equipment and a skilled technician to operate it.

To use DopFone, a user places the phone’s microphone against their abdomen for one minute. The phone emits a nearly inaudible 18 kilohertz tone. The team chose this comparatively low frequency because — unlike a Doppler’s high frequencies, above 2,000 kilohertz — it sits within the range smartphone microphones can record while still traveling well through tissue. As the tone reflects off tissue in the user’s abdomen, the fetus’s heartbeat creates small shifts in the sound.
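
To make the idea concrete, here is a rough signal-processing sketch of how a heart rate might be pulled out of such a recording. It is not DopFone's pipeline — the paper relies on a trained machine learning model — and the sample rate, filter settings and heart-rate search range below are all assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 48_000        # assumed microphone sample rate (Hz)
CARRIER = 18_000   # frequency of the emitted tone (Hz)

def estimate_fetal_bpm(audio: np.ndarray) -> float:
    """Estimate a heart rate (bpm) from about a minute of recorded audio."""
    t = np.arange(len(audio)) / FS
    # Shift the 18 kHz band down to baseband (complex demodulation).
    baseband = audio * np.exp(-2j * np.pi * CARRIER * t)
    b, a = butter(4, 50 / (FS / 2), btype="low")   # keep only slow echo changes
    i = filtfilt(b, a, baseband.real)
    q = filtfilt(b, a, baseband.imag)
    envelope = np.sqrt(i**2 + q**2)
    # Downsample the slow envelope before autocorrelation to keep it cheap.
    step = FS // 200                               # ~200 Hz is plenty for a heartbeat
    env = envelope[::step]
    env = env - env.mean()
    # Look for periodicity in a plausible fetal range (110-180 bpm).
    ac = np.correlate(env, env, mode="full")[len(env):]
    lags = np.arange(1, len(ac) + 1) * step / FS   # lag in seconds
    mask = (lags > 60 / 180) & (lags < 60 / 110)
    best_lag = lags[mask][np.argmax(ac[mask])]
    return 60.0 / best_lag
```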

A machine learning model then estimates the heart rate using the audio and the patient’s demographic information.

The team tested DopFone in UW Medicine’s maternal-fetal medicine division on 23 pregnant patients between 19 and 39 weeks of pregnancy. On average its readings were within 2.1 bpm of the medical Doppler ultrasound. Its accuracy was slightly diminished for patients with high body mass indexes, though those readings were still within normal limits. Because an irregular fetal heartbeat is often an emergency, DopFone was not tested on patients with irregularities.

Next, the team plans to gather more data outside a lab to better train the model. Eventually they want to deploy it as a publicly available app.

“This women’s health space is often overlooked,” Garg said. “So I want to focus on accessible alternatives that can be available to people in low resource areas, whether that’s here in the U.S. or in other countries. Because health belongs to everyone.”

Co-authors include a UW graduate student in electrical and computer engineering; two OB/GYNs in UW Medicine’s maternal-fetal medicine division; and a UW assistant professor in the Allen School. A UW professor in the Allen School and in electrical and computer engineering, along with a collaborator at the Georgia Institute of Technology, were the senior authors. This research was funded by the UW Gift Fund.

For more information, contact Garg at pgarg70@uw.edu.

Rubin Observatory launches real-time monitoring of the sky with thousands of alerts
/news/2026/02/25/rubin-observatory-real-time-alerts-dirac/ (Wed, 25 Feb 2026)

A large telescope sits on a mountain top beneath a starry night sky.
The Vera C. Rubin Observatory sits on its mountain peak in Chile during observation activities in April 2025. The observatory will soon begin real-time nightly monitoring of the entire Southern Hemisphere sky. Photo: RubinObs/NOIRLab/SLAC/NSF/DOE/AURA/P. Horálek (Institute of Physics in Opava)

On Feb. 24, astronomers’ computers around the world lit up with a deluge of cosmic notifications — 800,000 alerts about new asteroids in our solar system, exploding stars across the galaxy and other noteworthy changes in the night sky. The discoveries were made by the Simonyi Survey Telescope at the Vera C. Rubin Observatory in Chile and distributed globally within about two minutes.

That flurry of notifications marked the commencement of the observatory’s Alert Production Pipeline, a sophisticated software system developed at the University of Washington that is eventually expected to produce up to seven million alerts per night.

“Rubin’s alert system was designed to allow anyone to identify interesting astronomical events with enough notice to rapidly obtain time-critical follow-up observations,” said Bellm, a research associate professor of astronomy at the UW who leads the Alert Production Pipeline Group for the Rubin Observatory. “Rubin will survey the sky at an unprecedented scale and allow us to find the most rare and unusual objects in the universe. We can’t wait to see the exciting science that comes from these data.”

The beginning of scientific alerts is one of the last major milestones before Rubin Observatory launches its Legacy Survey of Space and Time (LSST) later this year. During the LSST, Rubin will scan the Southern Hemisphere sky nightly for 10 years to precisely capture every visible change. These alerts will chronicle the treasure trove of scientific discoveries that Rubin will make through its time-lapse record of the universe. In the first year of the LSST, Rubin is expected to capture images of more objects than all other optical observatories combined in human history.

The UW played a central role in the software that enabled this month’s milestone. The alert pipeline was developed by a team of about two dozen researchers and software developers in the astronomy department’s DiRAC Institute. The team has spent the past decade working with other data management teams around the country to figure out how to process the staggering 10 terabytes of images that Rubin produces every night, and will continue to develop and operate the alert system throughout the 10-year LSST survey.

A grid of 12 images of blurry grayscale celestial images.
As new images are taken, Rubin Observatory’s software automatically compares each one with a template image. The template image, built by combining images Rubin has previously taken of the same area in the same filter, is subtracted from the new image, leaving only the changes. Each change triggers an alert within minutes of image capture. Photo: NSF–DOE Vera C. Rubin Observatory/NOIRLab/SLAC/AURA. Alert images with classifications provided by ALeRce and Lasair.

“Enabling real-time discovery on such a massive data stream has required years of technical innovation in image processing algorithms, databases and data orchestration. We’re thrilled to continue the UW’s legacy of excellence in data-driven science,” Bellm said.

While the night sky seems calm and unchanging to the casual viewer, it’s actually alive with motion and transformation. Each alert signals something that has changed in the sky since Rubin last looked — a new source of light, a star that brightened or dimmed, or an object that moved. With Rubin’s alerts, scientists will have a greater ability to catch supernovae in their earliest moments, discover and track asteroids to assess potential threats to Earth and spot rare interstellar objects as they race through the solar system.

Scientists can use these data to better understand the nature of dark matter, dark energy and other unknown aspects of the universe.

“The discoveries reported in these alerts reflect the power of NSF-DOE Rubin Observatory as a tool for astrophysics and the importance of sustained federal support,” said Kathy Turner, program manager for the High Energy Physics program in the U.S. Department of Energy’s Office of Science. “Rubin Observatory’s groundbreaking capabilities are revealing untold astrophysical treasures and expanding scientists’ access to the ever-changing cosmos.”

Every 40 seconds during nighttime observations, Rubin captures a new region of the sky. It then sends the data on a seconds-long journey from Chile to the U.S. Data Facility (USDF) at SLAC National Accelerator Laboratory in California for initial processing. Rubin’s data management system automatically compares each new image to a template made from previous images of the same region. This comparison allows it to detect the slightest variations. With every change, such as the appearance of a new point of light, an object’s movement or a change in brightness, the system generates a public alert within two minutes.
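
The core "subtract the template, keep what changed" step can be sketched in a few lines. This toy version assumes the new image and template are already aligned, calibrated NumPy arrays; the real Alert Production Pipeline also handles PSF matching, artifact rejection and source association:

```python
import numpy as np

def find_alerts(new_image: np.ndarray, template: np.ndarray,
                threshold_sigma: float = 5.0):
    """Flag pixels whose brightness changed significantly since the template."""
    diff = new_image.astype(float) - template.astype(float)
    noise = diff.std()                  # crude estimate of background noise
    ys, xs = np.nonzero(np.abs(diff) > threshold_sigma * noise)
    # A real pipeline would group neighboring pixels into sources and attach
    # calibrated fluxes, positions and cutout images to each alert packet.
    return [{"x": int(x), "y": int(y), "delta": float(diff[y, x])}
            for y, x in zip(ys, xs)]
```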

“The scale and speed of the alerts are unprecedented,” said Hsin-Fang Chiang, a SLAC software developer leading operations for data processing at the USDF. “After generating hundreds of thousands of test alerts in the last few months, we are now able to say, within minutes, with each image, ‘Here is everything. Go.’”

Rubin’s alerts are public, meaning anyone — from professional researchers to students and citizen scientists — can access and explore them. The speed of the alerts allows scientists using other ground- and space-based telescopes around the world to coordinate follow-up observations. This collaboration will enable fast and detailed studies of unfolding phenomena.

Additionally, through collaborations with community science platforms, Rubin will empower the global community to help classify cosmic events and contribute directly to discovery.

Rubin Observatory is jointly operated by NSF and SLAC.

For more information, contact Bellm at ecbellm@uw.edu.

This story was adapted from a press release.

Operations of the Vera C. Rubin Observatory are funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science.

In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts
/news/2026/02/04/in-a-study-ai-model-openscholar-synthesizes-scientific-research-and-cites-sources-as-accurately-as-human-experts/ (Wed, 04 Feb 2026)

A screenshot of the OpenScholar demo.
A UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time. Above is the user interface for a free online demo of the model.

Keeping up with the latest research is vital for scientists, but with the sheer volume of papers published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or “hallucinate.”

For instance, when a team led by researchers at the University of Washington and the Allen Institute for AI, or Ai2, studied a recent OpenAI model, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can’t access papers that were published after their training data was collected.

So the UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project’s code and models are publicly available and free to use.

“After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we’d expected,” said senior author Hajishirzi, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. “When we started looking through the responses we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research.”

Try the free online demo of OpenScholar.

Researchers trained the model and then assembled a datastore of 45 million scientific papers for OpenScholar to pull from, grounding its answers in established research. They coupled this with a retrieval technique that lets the model search for new sources, incorporate them and cite them even after it has been trained.
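
In spirit, that retrieval step works like the toy retrieval-augmented sketch below. OpenScholar's actual system uses a trained neural retriever over the 45-million-paper datastore plus iterative self-feedback, so the tiny TF-IDF corpus, the retrieve() helper and the prompt format here are stand-ins for illustration only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus; OpenScholar's datastore holds ~45 million full papers.
papers = [
    ("Paper A", "Large language models often fabricate citations when asked for sources."),
    ("Paper B", "Retrieval-augmented generation grounds model answers in retrieved documents."),
]

def retrieve(query: str, k: int = 2):
    """Return the k most similar (title, abstract) pairs for a query."""
    docs = [f"{title}. {abstract}" for title, abstract in papers]
    vec = TfidfVectorizer().fit(docs + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [papers[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Assemble a prompt that asks the model to answer from numbered sources."""
    hits = retrieve(query)
    context = "\n".join(f"[{n + 1}] {title}: {abstract}"
                        for n, (title, abstract) in enumerate(hits))
    return ("Answer using ONLY the sources below and cite them as [n].\n\n"
            f"{context}\n\nQuestion: {query}\nAnswer:")

print(build_prompt("Do language models fabricate citations?"))
```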

“Early on we experimented with using an AI model with Google’s search data, but we found it wasn’t very good on its own,” said lead author Asai, a research scientist at Ai2 who completed this research as a UW doctoral student in the Allen School. “It might cite some research papers that weren’t the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results.”

To test their system, the team created ScholarQABench, a benchmark against which to test systems on scientific search. They gathered 3,000 queries and 250 longform answers written by experts in computer science, physics, biomedicine and neuroscience.

“AI is getting better and better at real world tasks,” Hajishirzi said. “But the big question ultimately is whether we can trust that its answers are correct.”

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI’s GPT-4o and two models from Meta. ScholarQABench automatically evaluated AI models’ answers on metrics such as their accuracy, writing quality and relevance.

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar answers to human answers 51% of the time, but when they combined OpenScholar citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

“Scientists see so many papers coming out every day that it’s impossible to keep up,” Asai said. “But the existing AI systems weren’t designed for scientists’ specific needs. We’ve already seen a lot of scientists using OpenScholar and because it’s open-source, others are building on this research and already improving on our results. We’re working on a follow-up model, which builds on OpenScholar’s findings and performs multi-step search and information gathering to produce more comprehensive responses.”

Other co-authors include three UW doctoral students in the Allen School; a UW professor emeritus in the Allen School who is general manager and chief scientist at Ai2; a UW postdoc in the Allen School and at Ai2; a UW professor in the Allen School; a UW assistant professor in the Allen School; Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D’Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of the University of Illinois Urbana-Champaign; Pan Ji of the University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

Q&A: UW researchers create a smart glove with its own sense of touch
/news/2026/01/27/smart-glove-electronic-touch-pressure-sensor-engineeering-soft-robotics/ (Tue, 27 Jan 2026)

Two pieces of an electronic glove lie on a table.
Inside the OpenTouch Glove (right) is a grid of wires (left) that allows the glove to sense the location and degree of any pressure applied to it. Photo: University of Washington

Yiyue Luo’s lab at the University of Washington is full of machinery that’s oddly cozy. Here, soft and pliable sensors are sewn, knit and glued directly into clothing to give everyday garments new capabilities.

One of the lab’s newest curiosities is a nondescript gray work glove embedded with sensors that enable it to “feel” on its own. An array of small wires hidden inside the glove reports the location and degree of pressure anywhere along its surface. When in use, the signals from the glove inform a real-time “heat map” of pressure that could one day help physical therapy patients track their progress, teach robots to grasp objects, and more.

The OpenTouch Glove, as the project is officially known, is led by UW electrical and computer engineering doctoral student Devin Murphy as part of a collaboration with researchers at MIT. UW News caught up with Murphy to learn more about the glove and its potential uses.

What inspired you to create this glove?

Devin Murphy: Our hands are arguably our greatest tools as humans. We interact with the world through our hands in so many different ways. But the nature of how we grasp and manipulate things in our environment is super nuanced and complex, and it’s hard to capture. We have very mature electronics that record sight and sound — think of the cameras and microphones in your smartphone. But there aren’t many electronic devices that record our other senses — like touch. That’s what I’ve been working to remedy with the OpenTouch Glove.

How does the glove work? What are its capabilities?

DM: There are two flexible circuit boards inside each glove that form a grid of wires across the gripping surface of the glove. We can measure pressure at any point in that mesh where two wires meet. The circuit boards connect to a little box of electronics at the user’s wrist, which processes the signals and sends them wirelessly to a laptop.

We can then generate a “heat map” image showing where force is being applied on the hand, where the hand is applying force to different objects and how much force the hand is applying.
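
As a purely illustrative sketch of that visualization step — the grid size and the read_frame() placeholder below are assumptions, not OpenTouch's real API — one frame of row-by-column pressure readings can be rendered as a heat map like this:

```python
import numpy as np
import matplotlib.pyplot as plt

ROWS, COLS = 16, 16   # assumed grid size, not the glove's actual dimensions

def read_frame() -> np.ndarray:
    """Placeholder for one scan of raw readings from the wrist unit."""
    return np.random.randint(0, 1024, size=(ROWS, COLS)).astype(float)

frame = read_frame()
frame = (frame - frame.min()) / (frame.max() - frame.min() + 1e-9)  # normalize to 0-1

plt.imshow(frame, cmap="hot", interpolation="nearest")
plt.colorbar(label="relative pressure")
plt.title("Glove pressure map (simulated frame)")
plt.show()
```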

This kind of data gives us extra nuance that a camera can’t capture. For example, if your hand is in a bag or behind an object while it’s grasping things, a camera wouldn’t be able to tell what your hand is doing, whereas this glove can follow along.

What are some potential applications for the glove?

DM: I’m particularly excited about how this technology might help patients recovering from an injury. Physical therapists have patients perform a variety of tasks to regain mobility in their hands — if we can measure how much force people apply during this process, we can provide them with concrete feedback. The patient and therapist can both track progress by monitoring grip strength of the patient over time.

We’re also seeing lots of new companies invest in physical intelligence for robotics — basically recording how robots interact with the physical world. If we can record human hand grip signals, we might be able to teach robotic hands how to mimic human behavior.

One other interesting application is in augmented reality or virtual reality. If we replaced traditional controllers with these gloves, it could give users a more natural way to interact with virtual objects and scenery — though we’d need some additional technology for users to feel pressure when gripping virtual things.

How can other researchers access this technology?

DM: It’s really important to us that the glove is accessible to other researchers and anyone else who might want to use it for their own applications. You can order all of the components of the glove directly from commercial manufacturers, and we have released all of the manufacturing files and instructions for putting the glove together yourself.

We’ve also shown some demos of the glove “in the wild” to showcase the different kinds of data it can collect, and we’re planning to release an open source data set collected with the glove in partnership with researchers at MIT.

I’m really excited about developing new wearable technologies that allow people to record less popular sensing modalities like touch. I want to figure out how we can capture the nuances of touch-based interactions, so that ultimately we can get better insights into our daily lives.

For more information, contact Murphy at devinmur@uw.edu.

UW researchers analyzed which anthologized writers and books get checked out the most from Seattle Public Library
/news/2026/01/08/seattle-public-library-data-anthologized-writers/ (Thu, 08 Jan 2026)
UW researchers analyzed the checkout data from the last 20 years for the 93 authors included in the post-1945 volume of “The Norton Anthology of American Literature,” which is assigned in U.S. English classes more than nearly any other anthology.

Seattle Public Library, or SPL, is the only U.S. library system that makes its anonymized, granular checkout data public. Want to find out how many times people borrowed the e-book version of Toni Morrison’s “Beloved” in May 2018? That data is available.

The hitch is that the library’s data set contains nearly 50 million rows, and a single title can appear in many variant forms. Morrison’s “Beloved,” for instance, is listed as “Beloved,” “Beloved (unabridged),” “Beloved : a novel / by Toni Morrison” and so on.

To track trends in the catalogue over the last 20 years, University of Washington researchers analyzed the checkout data of the 93 authors included in the post-1945 volume of “The Norton Anthology of American Literature.” It’s assigned in U.S. English classes more than virtually any other anthology, so it shapes what’s thought of as the contemporary American canon — the books and writers we’ve deemed culturally important.

The team found that among these vaunted writers — including Morrison, Viet Thanh Nguyen, David Foster Wallace and Joan Didion — science fiction was particularly popular. Ursula K. Le Guin and Octavia E. Butler topped the list.

The team published its findings Nov. 21 in Computational Humanities Research 2025.

Related:

  • A related analysis looks at how checkouts correspond with book sales and other library circulation

“It’s kind of mind-boggling and ironic that in this age of abundant data, we have so little data about what people are reading,” said senior author Walsh, a UW assistant professor in the Information School. Data like this is particularly valuable to researchers, Walsh added. “I’ve been obsessed with SPL’s data for years now. But extracting insights from it is actually a really hard computational and bibliographic modeling problem.”

To organize the data, the team used computational methods, such as stripping away subtitles and standardizing punctuation. They also manually identified things like translations of a work.
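
The flavor of that normalization can be sketched as below. These particular rules are illustrative guesses rather than the team's published pipeline, but they show how variant catalogue records can collapse onto a single key:

```python
import re

def normalize_title(raw: str) -> str:
    """Collapse catalogue variants of a title onto a single key."""
    title = raw.lower()
    title = re.sub(r"\(.*?\)", " ", title)   # drop parentheticals like "(unabridged)"
    title = title.split("/")[0]              # drop "/ by Toni Morrison"
    title = title.split(":")[0]              # drop subtitles
    title = re.sub(r"[^\w\s]", " ", title)   # standardize punctuation
    return re.sub(r"\s+", " ", title).strip()

variants = ["Beloved", "Beloved (unabridged)", "Beloved : a novel / by Toni Morrison"]
assert len({normalize_title(v) for v in variants}) == 1
```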

“We worked with the Norton anthology in part because it’s a small enough scale for us to handle,” said lead author Gupta, a UW doctoral student in the Information School. “It allows us to have a ground truth to work off of. We can still put a human eye on things.”

In all, the team looked at 1,603 works by the 93 authors, which have been checked out a total of 980,620 times since 2005.

A line graph shows checkouts of Ursula K. Le Guin increasing over two decades.
This graph follows how many times Ursula K. Le Guin’s books were borrowed since 2005. Photo: Gupta et al./Computational Humanities Research 2025

The 10 top authors were:

  1. Ursula K. Le Guin
  2. Octavia E. Butler
  3. Louise Erdrich
  4. N.K. Jemisin
  5. Toni Morrison
  6. Kurt Vonnegut
  7. George Saunders
  8. Philip K. Dick
  9. Sherman Alexie
  10. James Baldwin

The 10 top books were: 

  1. “Parable of the Sower” by Octavia E. Butler
  2. “Lincoln in the Bardo” by George Saunders
  3. “The Fifth Season” by N.K. Jemisin
  4. “The Sympathizer” by Viet Thanh Nguyen
  5. “Kindred” by Octavia E. Butler
  6. “Beloved” by Toni Morrison
  7. “The Left Hand of Darkness” by Ursula K. Le Guin
  8. “The Absolutely True Diary of a Part-Time Indian” by Sherman Alexie
  9. “The Year of Magical Thinking” by Joan Didion
  10. “The Sentence” by Louise Erdrich

Researchers noted several trends that may have driven checkouts. In general, books with genre and sci-fi elements were some of the most popular.

“I found the prevalence of sci-fi books and writers really interesting,” Gupta said. “These are recent additions to the anthology, since sci-fi and genre fiction haven’t always been seen as important literature. So while it’s a bit unsurprising, it’s also striking to see that despite comprising a small portion of the anthology, these are the authors people are actually reading the most.”

News events also drove spikes in readership, such as film adaptations of James Baldwin’s “If Beale Street Could Talk” and Don DeLillo’s “White Noise,” or the deaths of authors such as Didion, Wallace, Morrison and Philip Roth.

The top book, “Parable of the Sower,” saw a huge spike in readership in 2024 — the year the futuristic novel is set, and the year SPL selected the novel for one of its reading programs.

“We’ve deemed these canonical authors important enough to continue reading, to continue teaching, to continue studying and talking about, so it’s fascinating to see who we’re actually reading and when,” Walsh said. “I find it very beautiful that after years of these big debates about diversifying the canon, the works that people are turning to the most are by women and Black and Native writers, who previously were not even included in these anthologies.”

Co-authors include Daniella Maor, Karalee Harris, Emily Backstrom and Hongyuan Dong, all students at the UW. This research was supported in part by external funding.

For more information, contact Walsh at melwalsh@uw.edu and Gupta at ngupta1@uw.edu.

Video: Drivers struggle to multitask when using dashboard touch screens, study finds
/news/2025/12/16/video-drivers-struggle-to-multitask-when-using-dashboard-touch-screens-study-finds/ (Tue, 16 Dec 2025)

Once the domain of buttons and knobs, car dashboards are increasingly home to large touch screens. While that makes following a mapping app easier, it also means drivers can’t feel their way to a control; they have to look. But how does that visual component affect driving?

New research from the University of Washington and Toyota Research Institute, or TRI, explores how drivers balance driving and using touch screens while distracted. In the study, participants drove in a vehicle simulator, interacted with a touch screen and completed memory tests that mimic the mental effort demanded by traffic conditions and other distractions. The team found that when people multitasked, their driving and touch screen use both suffered. The car drifted more in the lane while people used touch screens, and their speed and accuracy with the screen declined when driving. The effects increased further when they added the memory task.

These results could help auto manufacturers design safer, more responsive touch screens and in-car interfaces.

The team presented the research Sept. 30 at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

The dangers of distracted driving are well known, said co-senior author Fogarty, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But what about the car’s touch screen? We wanted to understand that interaction so we can design interfaces specifically for drivers.”

As the study’s 16 participants drove the simulator, sensors tracked their gaze, finger movements, pupil diameter and electrodermal activity. The last two are common ways to measure mental effort, or “cognitive load.” For instance, pupils tend to dilate when people are concentrating.


While driving, participants had to touch specific targets on a 12-inch touch screen, similar to how they would interact with apps and widgets. They did this while completing three levels of an “N-back task,” a memory test in which the participants hear a series of numbers, 2.5 seconds apart, and have to repeat specific digits.
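
For readers unfamiliar with the task, a toy version of an N-back check is sketched below. The study's exact stimuli, pacing and scoring aren't reproduced here; the usual rule is that the listener responds whenever the current digit matches the one heard N items earlier:

```python
import random

def run_nback(n: int = 2, trials: int = 10, seed: int = 0):
    """Generate a digit sequence and mark which items are N-back targets."""
    random.seed(seed)
    digits, targets = [], []
    for i in range(trials):
        digits.append(random.randint(0, 9))
        targets.append(i >= n and digits[i - n] == digits[i])
    return digits, targets

digits, targets = run_nback()
for d, is_target in zip(digits, targets):
    print(d, "<-- respond" if is_target else "")
```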

The participants’ performance changed significantly under different conditions:

  • When interacting with the touch screen, participants drifted side to side in their lane 42% more often. Increasing cognitive load had no additional effect on drifting.
  • Touch screen accuracy and speed decreased 58% when driving, then another 17% under high cognitive load.
  • Each glance at the touchscreen was 26.3% shorter under high cognitive load.
  • A “hand-before-eye” phenomenon, in which drivers reached for a control before looking at it, increased from 63% to 71% as memory tasks were introduced.

The team also found that increasing the size of the target areas participants were trying to touch did not improve their performance.

“If people struggle with accuracy on a screen, usually you want to make bigger buttons,” said a UW doctoral student in the Allen School who worked on the study. “But in this case, since people move their hand to the screen before touching, the thing that takes time is the visual search.”

Based on these findings, the researchers suggest future in-car touch screen systems might use simple sensors in the car — eye tracking, or touch sensors on the steering wheel — to monitor drivers’ attention and cognitive load. Using those readings, the car’s system might adjust the touch screen’s interface to make important controls more prominent and safer to access.

“Touch screens are widespread today in automobile dashboards, so it is vital to understand how interacting with touch screens affects drivers and driving,” said co-senior author Wobbrock, a UW professor in the Information School. “Our research is some of the first that scientifically examines this issue, suggesting ways for making these interfaces safer and more effective.”

A UW doctoral student in the Information School is co-lead author. Other co-authors include four researchers at TRI. This research was funded in part by TRI.

For more information, contact Wobbrock at wobbrock@uw.edu and Fogarty at jfogarty@cs.washington.edu.

AI can pick up cultural values by mimicking how kids learn
/news/2025/12/11/ai-training-cultural-values/ (Thu, 11 Dec 2025)

A video game shows two kitchens of different sizes.
In the Overcooked video game, players work to cook and deliver as much onion soup as possible. In the study’s version of the game, one player can give onions to help the other player, who has farther to travel to make the soup. The research team wanted to find out if AI systems could learn altruism by watching different cultural groups play the game.

Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won’t work equally well for people from different cultures.

But a new University of Washington study suggests that AI could learn cultural values by observing human behavior. Researchers had AI systems observe people from two cultural groups playing a video game. On average, participants in one group behaved more altruistically. The AI assigned to each group learned that group’s degree of altruism and was able to apply that value to a novel scenario beyond the one it was trained on.

The team published its findings Dec. 9 in PLOS One.

“We shouldn’t hard-code a universal set of values into AI systems, because many cultures have their own values,” said senior author Rao, a UW professor in the Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology. “So we wanted to find out if an AI system can learn values the way children do, by observing people in their culture and absorbing their values.”

As inspiration, the team looked to research showing that 19-month-old children raised in Latino and Asian households were more altruistic than those from other cultures.

In the AI study, the team recruited 190 adults who identified as white and 110 who identified as Latino. Each group was assigned an AI agent, a system that can function autonomously.

These agents were trained with a method called inverse reinforcement learning, or IRL. In the more common AI training method, reinforcement learning, or RL, a system is given a goal and gets rewarded based on how well it works toward that goal. In IRL, the AI system observes the behavior of a human or another AI agent, and infers the goal and underlying rewards. So a robot trained to play tennis with RL would be rewarded when it scores points, while a robot trained with IRL would watch professionals playing tennis and learn to emulate them by inferring goals such as scoring points.
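
A deliberately tiny illustration of the inverse-RL idea is sketched below. It is not the paper's training procedure: the demonstrations are summarized as hand-picked feature counts (soups delivered, onions given away), and the candidate rewards and the behavior each would induce are hard-coded rather than learned with an RL step:

```python
import numpy as np

# Each demonstration is summarized by feature counts: [soups_delivered, onions_given_away].
demos = np.array([[3.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
observed = demos.mean(axis=0)

# Candidate reward weights over those same features.
candidate_rewards = {
    "self-interested": np.array([1.0, 0.0]),
    "altruistic":      np.array([0.5, 0.5]),
}

# Feature counts an agent optimizing each reward would produce. In a full IRL
# loop these come from running RL with the candidate reward; hard-coded here.
induced_behavior = {
    "self-interested": np.array([4.0, 0.0]),
    "altruistic":      np.array([2.5, 2.5]),
}

# Pick the reward whose induced behavior best matches the observed demonstrations.
best = min(candidate_rewards,
           key=lambda name: np.linalg.norm(induced_behavior[name] - observed))
print("Inferred reward:", best, candidate_rewards[best])
```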

This IRL approach more closely aligns with how humans develop.

“Parents don’t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others,” said co-author Meltzoff, a UW professor of psychology and co-director of the Institute for Learning & Brain Sciences (I-LABS). “Kids learn almost by osmosis how people act in a community or culture. The human values they learn are more ‘caught’ than ‘taught.’”

In the study, the AI agents were given the data of the participants playing a modified version of the video game Overcooked, in which players work to cook and deliver as much onion soup as possible. Players could see into another kitchen where a second player had to walk further to accomplish the same tasks, putting them at an obvious disadvantage. Participants didn’t know that the second player was a bot programmed to ask the human players for help. Participants could choose to give away onions to help the bot but at the personal cost of delivering less soup.

Researchers found that overall the people in the Latino group chose to help more than those in the white group, and the AI agents learned the altruistic values of the group they were trained on. When playing the game, the agent trained on Latino data gave away more onions than the other agent.

To see if the AI agents had learned a general set of values for altruism, the team conducted a second experiment. In a separate scenario, the agents had to decide whether to donate a portion of their money to someone in need. Again, the agents trained on Latino data from Overcooked were more altruistic.

“We think that our proof-of-concept demonstrations would scale as you increase the amount and variety of culture-specific data you feed to the AI agent. Using such an approach, an AI company could potentially fine-tune their model to learn a specific culture’s values before deploying their AI system in that culture,” Rao said.

Additional research is needed to know how this type of IRL training would perform in real-world scenarios, with more cultural groups, competing sets of values, and more complicated problems.

“Creating culturally attuned AI is an essential question for society,” Meltzoff said. “How do we create systems that can take the perspectives of others into account and become civic minded?”

A UW research engineer in the Allen School and a software engineer at Microsoft who completed this research as a UW student were co-lead authors. Other co-authors include a scientist at the Allen Institute who completed this research as a UW doctoral student; an assistant professor at San Diego State University who completed this research as a postdoctoral scholar at the UW; and a professor in the Allen School who directs a research center at the UW.

For more information, contact Rao at rao@cs.washington.edu.
