William Poor – 91̽News /news

At quantum testbed lab, researchers across the 91̽probe ‘spooky’ mysteries of quantum phenomena /news/2026/04/13/qt3-quantum-computing-testbed-lab-dilution-fridge/ Mon, 13 Apr 2026 23:09:13 +0000

Three people stand next to a complex metal tube-shaped machine
Max Parsons (left), assistant professor of electrical and computer engineering, works with undergraduate staff members Reynel Cariaga (center) and Jesus Garcia (right) at the QT3 lab. The device in the foreground is a scanning tunneling microscope that can image individual atoms within a material by scanning an extremely fine needle — just one atom thick at the tip — across the sample. Photo: Erhong Gao/University of Washington

Even on a campus like the University of Washington’s — home to particle accelerators, wave tanks and countless other bespoke pieces of equipment — the machinery in the quantum testbed lab stands out. Take the dilution fridge, a large, white, cylindrical device that can cool a small chamber to one hundredth of a kelvin above absolute zero — the coldest possible temperature in the universe.

“This is the coldest fridge money can buy,” said Max Parsons, a University of Washington assistant professor of electrical and computer engineering and the former director of the lab, which goes by the nickname QT3. “When it’s running, the chamber inside this device is about 100 times colder than outer space. At that temperature, it’s much easier to study and manipulate a material’s quantum properties.”

The lab also houses a photon qubit tabletop lab: a nondescript set of boxes, lasers and lenses that can demonstrate the “spooky” — a term scientists actually use — phenomenon known as quantum entanglement, where two particles appear to communicate instantaneously with each other despite being physically apart.

Or there’s the lab’s latest acquisition, the scanning tunneling microscope, which can image individual atoms within a solid material, allowing researchers to study the structure of materials at the smallest scales.

An interdisciplinary group of researchers has spent three years marshalling resources and expertise to create QT3, and now the lab is opening its doors as a one-stop shop for quantum researchers and educators at the UW.

“The idea of this lab is to improve access to quantum hardware,” Parsons said. “It’s rather hard to acquire equipment like this. And there are a lot of researchers that may have good ideas that they want to test, but don’t have the resources yet for their own equipment. So we’re inviting researchers, initially from across campus, but also from other universities and from industry, to come in and test their ideas. This can be a hub for quantum experts to share their ideas and collaborate.”

The lab also boasts hardware that can demonstrate known quantum principles and techniques, making it useful for students in quantum fields. In addition to the entanglement device, Parsons’ students developed a machine that can suspend charged particles — in this case, tiny grains of pollen — in midair using electric fields. Researchers use the same technique to trap single atoms and manipulate their quantum properties, making the lab’s ion-trapping machine good practice for more complex work.

Two tiny dots hover back and forth in a tube
The QT3 facility’s ion trapping lab gives students a chance to practice techniques used in quantum computing research. Here, students have suspended two tiny grains of pollen — the red dots hovering back and forth — in midair using electric fields. Photo: Robert Thomas

Some students even work at the lab through an undergraduate staffing program, and have helped install instrumentation, write code to power equipment and build parts for custom microscopes. The program provides yet another avenue for students to get hands-on experience with unusual machinery and techniques.

“Quantum mechanics is inherently counterintuitive, and that makes it a powerful teaching tool,” Parsons said. “In the QT3 lab, students will encounter systems where their everyday intuition breaks down, and they must rely on careful reasoning and experimentation instead. They learn how to debug when results don’t match expectations, how to test simple cases and how to build understanding about hardware step by step.”

The cosmically cold dilution fridge remains something of a centerpiece, even as the lab fills up with specialized equipment. The extreme environment within the device strips heat, light and other stray energy away from materials, allowing researchers to observe the peculiar quantum properties that remain. One such property is superposition, or the ability of a particle like an electron to maintain multiple mutually exclusive properties at the same time. Scientists use superposition to create a powerful, tiny piece of technology: a quantum bit, or qubit.

“Traditional computers use bits, which can only be one or zero. A qubit, on the other hand, we can make one plus zero,” Parsons said. “It’s both at the same time, and only when we measure it do we find out which one it is. We can use this unusual property to build a new class of computers that excel at tasks like communications and encryption.”
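Parsons’ description of a qubit as “one plus zero” can be made concrete with a toy simulation of the measurement statistics. This is an illustrative classical sketch, not real quantum programming; the `measure` helper and its parameters are invented for this example. A qubit in an equal superposition collapses to 0 or 1 at random, and only repeated measurements reveal the 50/50 split:

```python
import random

def measure(prob_zero, trials=100_000, seed=42):
    """Simulate repeated readouts of a qubit that collapses to 0
    with probability prob_zero (|alpha|^2 in the usual notation)
    and to 1 otherwise."""
    rng = random.Random(seed)
    zeros = sum(rng.random() < prob_zero for _ in range(trials))
    return zeros / trials  # observed frequency of reading 0

# An equal superposition ("one plus zero") gives prob_zero = 0.5:
# each individual readout is definite, but the underlying state
# held both possibilities until the measurement.
frequency = measure(0.5)  # close to 0.5
```

A classical bit corresponds to a `prob_zero` of exactly 0 or 1; the behavior quantum computers exploit lives in between.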

QT3 is part of a collaborative effort to solidify the 91̽as a leader in quantum research and applications. Most of the lab hardware was funded by a congressional earmark championed by Senator Maria Cantwell’s office. Departmental funding from across the College of Engineering and the College of Arts and Sciences helped rehab the lab space. The National Science Foundation provided seed funding for the instructional lab equipment.

a repeating hexagonal pattern of small golden blobs
An image captured by the QT3 lab’s scanning tunneling microscope reveals a lattice of individual atoms in a sample of silicon. Photo: Rajiv Giridharagopal

The 91̽has also spent the past decade investing heavily in faculty with quantum expertise.

“Very few places have expertise across the full quantum stack, from materials up to algorithms,” said , a 91̽professor of physics and founder of QT3. “The 91̽has quantum faculty in electrical and mechanical engineering, physics, computer science, materials science and chemistry. Our faculty work on superconducting qubits, spin defects, photons, trapped ions, neutral atoms and topological qubits. Our advantage is the breadth of our investment.”

The lab is now available to researchers and students across the UW, and private companies are encouraged to reach out about partnering. Parsons has already used the lab to teach a graduate-level class in electrical and computer engineering for students who included employees from Boeing, Microsoft and quantum computing company IonQ. The lab is hiring for a full-time manager to maintain the equipment and help users make the most of the facility.

“Here in academia, we can improve the building blocks for applied technologies like quantum computing, and then transfer those learnings to industry for further scaling,” Parsons said.

For more information, contact Parsons at mfpars@uw.edu.

Climate change may complicate avalanche risk across the Pacific Northwest /news/2026/03/23/climate-change-avalanche-risk/ Mon, 23 Mar 2026 17:07:56 +0000

Snowy mountains with two signs in foreground. A yellow sign reads “AVALANCHE AREA”; a red and white sign reads “NO STOPPING OR STANDING NEXT ¾ MILE”.
Warming temperatures throughout the Pacific Northwest are likely to complicate avalanche forecasting in the coming years, according to a new 91̽study. Cooler inland regions such as Idaho and Western Montana may see increased risk from avalanches caused by layers of icy crusts that form when rain falls on snow and freezes. Photo: iStock

This winter was unusually warm; as a result, many snowy, alpine areas have seen bouts of winter rainfall where there would ordinarily only be snow. These unusual weather patterns have contributed to an abysmal ski season, but they can also set the stage for dangerous avalanches. At temperatures close to freezing, precipitation can fall as rain but freeze when it hits the snow, forming an icy crust. Snow that accumulates on top of that crust is unstable and prone to abrupt slides, causing an avalanche that can close down a major highway in moments, endanger backcountry skiers and more.

Avalanche experts in Western Washington know how to manage the risks associated with rain-on-snow events, but many of their counterparts in colder regions like Eastern Washington, Idaho and Montana are less familiar with these dynamics. New research from the University of Washington shows that as winters in these regions warm, their snowpacks may come to resemble those of maritime areas, with more rain-on-snow events, icy crusts and complex avalanche forecasting.

The findings were published in ARC Geophysical Research.

“This winter’s warmth is a harbinger,” said lead author Clinton Alden, a 91̽graduate student of civil and environmental engineering. “We know that temperatures will keep rising, and our work is a red flag for cooler regions of the greater Pacific Northwest, such as Idaho and Western Montana, that aren’t used to dealing with ice crusts and their resulting avalanche problems.”

A cross-section of a snow drift with a shovel in the foreground. A horizontal line is visible running through the drift about halfway up.
A cross-section of snowpack reveals a thin, darker ice layer running horizontally through the snow. Ice layers like this one form when rain falls onto snow and freezes, forming a crust. This creates a boundary within the snowpack that can cause snow to slip and trigger an avalanche. Photo: Clinton Alden

The study is part of a larger effort to understand the structure of snow as it accumulates, which has implications for weather and avalanche forecasting, wildlife dynamics and more.

“Snow scientists are pretty good at measuring snow depth and volume,” said senior author , a 91̽professor of civil and environmental engineering. “We’re also pretty good at figuring out how much water you get if all that snow melts. But our models aren’t as good at representing snow structure, such as layers of different densities and crystal types that increase avalanche risks. And we really want to know how the structure of snow changes as the climate changes. That’s a tricky question that no one has tackled, particularly for rain-on-snow conditions.”

To dig into that question, the researchers studied how warming influences ice layer formation in seasonal snowpacks. First, they collected temperature and precipitation data captured by 53 monitoring stations across the Pacific Northwest for the past 25 years. They used a computer model to identify days when ice layers likely formed at each location. They then checked the model against real-world measurements at one of the locations — a station at Snoqualmie Pass — and found that the model matched the measurements with 74% accuracy.

Finally, they used the same model to simulate those same 25 winters at 2 C and 4 C warmer than they were, and looked for changes to the number of ice crusts across the region. According to climate projections, the Pacific Northwest is expected to warm by 2 C to 5 C by 2050 as compared to pre-2000 temperatures.
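The simulation step can be sketched with a toy rule (the thresholds and function below are invented for illustration and are not the study’s actual model): flag a day as a likely crust day when precipitation falls at just-above-freezing temperatures and the following day dips below freezing, then re-run the same weather record with a uniform warming offset.

```python
def ice_crust_days(daily_temp_c, daily_precip_mm, warming_c=0.0):
    """Toy count of likely rain-on-snow crust days: precipitation on
    a day just above freezing, followed by a below-freezing day that
    can refreeze the wet surface. Thresholds are illustrative only."""
    count = 0
    for today in range(len(daily_temp_c) - 1):
        temp = daily_temp_c[today] + warming_c
        temp_next = daily_temp_c[today + 1] + warming_c
        if daily_precip_mm[today] > 0 and 0.0 < temp < 2.0 and temp_next < 0.0:
            count += 1
    return count

# A cold inland site: today the storm falls as snow, but with 2 C of
# warming the same storm falls as rain and then freezes into a crust.
temps = [-4.0, -1.0, -3.0]
precip = [0.0, 8.0, 0.0]
baseline = ice_crust_days(temps, precip)               # 0 crust days
warmed = ice_crust_days(temps, precip, warming_c=2.0)  # 1 crust day
```

In this toy example, a cold inland site gains a crust day under 2 C of warming because a storm that fell as snow now falls as rain and refreezes, the same directional shift the study found east of the Cascades.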

A map of the Pacific Northwest with red and blue triangles scattered across it. The red triangles point down and the blue triangles point up.
This map shows the change in number of “ice crust days” across the 53 monitoring sites during the simulated winter with 2 C warming. The Cascade sites overwhelmingly saw fewer theoretical ice crust days, whereas cooler inland regions overwhelmingly saw more. Photo: Alden et al./ARC Geophysical Research

The results were split regionally by the Cascade mountains. In colder, inland parts of the Pacific Northwest — places like Eastern Washington, Idaho and Montana — higher temperatures created more rain-on-snow days and more avalanche-prone ice layers. Locations in the warmer, maritime Cascades saw the opposite effect: Higher temperatures created slush instead of ice, potentially reducing the avalanche risk associated with ice crusts.

The predicted snowpack changes may also impact wildlife behavior. Some foraging mammals, such as reindeer, dig down into the snow in search of food and may have a hard time breaking through an icy crust. Conversely, firm ice might provide a better running surface for animals fleeing predators. Specific regional effects will require additional study.

What’s clear now is that those who work or play in avalanche terrain in broad swaths of the Pacific Northwest — and even beyond — may need to adjust to a new set of risk factors.

“I get calls from avalanche forecasters in places like Colorado, Wyoming and Montana. They tell me they’re getting rain at 10,000 feet, which they’ve never seen before,” said co-author , the avalanche forecaster supervisor at Washington State Department of Transportation at Snoqualmie Pass, who earned his master’s in transportation and highway engineering at the UW. “They want to know when to expect the onset of avalanches and when to expect the return to stability.”

Alden hopes that this research will encourage further collaboration within the avalanche forecasting community.

“I’d love to see this shared with avalanche forecasters widely, both as a call to action and as a way to help them understand what their snowpack might look like in the future,” Alden said.

, the director of geospatial science at Audubon Alaska and former doctoral student of environmental and forest sciences at the UW, is a co-author.

This research was funded by the NASA Interdisciplinary Research in Earth Science program and the 91̽Program on Climate Change’s Graubard Fellowship.

For more information, contact Alden at cdalden@uw.edu.

91̽astronomers collect rare evidence of two planets colliding /news/2026/03/11/uw-astronomers-spot-planet-collision-evidence/ Wed, 11 Mar 2026 16:24:35 +0000

Two planets collide, creating a cloud of dust that partly obscures a nearby star.
Lead author Andy Tzanidakis’ rendering of the planetary collision he suspects occurred around star Gaia20ehk in 2021. Photo: Andy Tzanidakis

Andy Tzanidakis was combing through old telescope data from 2020 when he found an otherwise boring star acting very strangely. The star, named Gaia20ehk, was about 11,000 light-years from Earth near the constellation Puppis. It was a stable “main sequence” star, much like our sun, which meant that it should emit steady, predictable light. Yet this star began to flicker wildly.

“The star’s light output was nice and flat, but starting in 2016 it had these three dips in brightness. And then, right around 2021, it went completely bonkers,” said Tzanidakis, a doctoral candidate in astronomy at the University of Washington. “I can’t emphasize enough that stars like our sun don’t do that. So when we saw this one, we were like ‘Hello, what’s going on here?’”

The cause of the flickering had nothing to do with the star itself: Huge quantities of rocks and dust — seemingly from out of nowhere — were passing in front of the distant star as the material orbited the system, patchily dimming the light that reached Earth. The likely source of all that debris was even more remarkable: a catastrophic collision between two planets.

“It’s incredible that various telescopes caught this impact in real time,” Tzanidakis said. “There are only a few other planetary collisions of any kind on record, and none that bear so many similarities to the impact that created the Earth and moon. If we can observe more moments like this elsewhere in the galaxy, it will teach us lots about the formation of our world.”

The findings were published in The Astrophysical Journal Letters.

A starfield with an inset box zooming into a particular area. One star within the inset box is highlighted.
Star Gaia20ehk — seen here in the center of the orange crosshairs in the inset image — is roughly 11,000 light-years from Earth, near the constellation Puppis. Photo: NASA/NSF NOIRLab

Planets form when gravity forces together matter — dust, gas, ice or rocky debris, for example — orbiting a new star. Early solar systems are chaotic — planets routinely collide and explode or go flying off into outer space. Through this process, and over perhaps 100 million years, solar systems like ours winnow their planets down and settle into an equilibrium.

As common as these collisions probably are, observing one in a distant solar system requires patience and luck. The orbits of the planets must take them directly between us and their star, so that the resulting debris obscures some of the star’s light. The telltale flicker then takes years to play out.

“Andy’s unique work leverages decades of data to find things that are happening slowly — astronomy stories that play out over the course of a decade,” said senior author James Davenport, a 91̽assistant research professor of astronomy. “Not many researchers are looking for phenomena in this way, which means that all kinds of discoveries are potentially up for grabs.”

Tzanidakis, the study’s lead author, studies extreme variability in stars over time. His previous work at the 91̽identified a system with a binary star and a large dust cloud that caused a seven-year eclipse.

The behavior of Gaia20ehk, however, posed a new mystery. The star’s particular fluctuation — short dips in brightness and then chaos — had never before been observed. The team was stumped, until Davenport suggested that they use data from a different telescope to look for infrared light rather than visible light.

“The infrared light curve was the complete opposite of the visible light,” Tzanidakis said. “As the visible light began to flicker and dim, the infrared light spiked. Which could mean that the material blocking the star is hot — so hot that it’s glowing in the infrared.”
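The “complete opposite” behavior Tzanidakis describes is, in statistical terms, a strong anticorrelation between the two light curves. A minimal sketch with made-up brightness values (not the published data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Illustrative light curves: as visible brightness dims, infrared spikes.
visible = [1.0, 0.9, 0.7, 0.5, 0.4]
infrared = [0.1, 0.2, 0.5, 0.8, 0.9]
r = pearson(visible, infrared)  # strongly negative, near -1
```

A correlation near -1 is the quantitative signature of hot, glowing debris blocking the star’s visible light.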

A cataclysmic collision between planets would certainly produce enough heat to explain the infrared energy. What’s more, the right kind of collision could also explain those initial dips in light.

Two graphs show a series of readings of both visible and infrared light from 2020 to 2025.
The top graph shows measurements (green and yellow dots) of Gaia20ehk’s brightness in the visible light spectrum. Three small dips in brightness are apparent, followed by a more chaotic overall decline. The bottom graph shows measurements (pink, black and blue dots) of the star’s brightness in the infrared spectrum. The measurements show a sharp increase in infrared as the star’s visible brightness declines. Photo: Tzanidakis et al./The Astrophysical Journal Letters

“That could be caused by the two planets spiraling closer and closer to each other,” Tzanidakis said. “At first, they had a series of grazing impacts, which wouldn’t produce a lot of infrared energy. Then, they had their big catastrophic collision, and the infrared really ramped up.”

There are also clues that the collision resembles the one that created the Earth and moon. The dust cloud is orbiting Gaia20ehk at roughly one astronomical unit, the distance from the Earth to the sun. At that distance, the material could eventually cool down enough to solidify into something similar to our Earth-moon system. Scientists like Tzanidakis and Davenport can’t know for sure until the dust settles — literally — in the system. That could take a few years, or a few million.

In the meantime, their discovery is a call to action to find more collisions. The powerful Simonyi Survey Telescope at the NSF–DOE Vera C. Rubin Observatory will be well suited to the task when it begins its decade-long survey of the night sky later this year; some back-of-the-napkin math by Davenport suggests that Rubin could find 100 new impacts over the next 10 years. That could ultimately help narrow the search for habitable worlds outside our solar system.

“How rare is the event that created the Earth and moon? That question is fundamental to astrobiology,” Davenport said. “It seems like the moon is one of the magical ingredients that makes the Earth a good place for life. It can help shield Earth from some asteroids, it produces ocean tides and weather that allow chemistry and biology to mix globally, and it may even play a role in driving tectonic plate activity. Right now, we don’t know how common these dynamics are. But if we catch more of these collisions, we’ll start to figure it out.”

For more information, contact Tzanidakis at atzanida@uw.edu and Davenport at jrad@uw.edu.

This research was funded by Breakthrough Initiatives.

New marine energy tech is put to the test at Harris Hydraulics Lab /news/2026/03/06/marine-energy-turbines-harris-hydraulics-uw-pnnl/ Fri, 06 Mar 2026 17:29:14 +0000

At the University of Washington’s Harris Hydraulics Lab, an odd scene plays out. Over and over again, researchers from the 91̽and the Pacific Northwest National Laboratory (PNNL) pass a small rubber model of a marine animal through a large tank filled with flowing water and fitted with a spinning turbine. On some runs, the model bonks against the turbine blades; on others, it receives a glancing blow or sails past undisturbed. When bonks or nicks occur, a small collision sensor on one of the turbine’s blades detects the impacts and plots the interactions in a computer program.

The researchers are repeatedly simulating something that they hope will rarely happen in the wild: a collision between marine wildlife like a seabird, seal, fish or whale — or submerged debris like logs — and an underwater turbine.

“We want to make sure we’re minimizing the chances of a collision in the first place,” said Aidan Hunt, a senior research engineer in mechanical engineering at the 91̽and member of the Pacific Marine Energy Center (PMEC). “But if a collision were to occur, we want to be able to detect it, and potentially avoid it, in real time. The available evidence suggests that collisions are rare, but we’re taking a ‘trust-but-verify’ approach.”

Marine energy — power harvested from tides, waves and currents — has enormous potential as a clean, renewable resource. But more information is needed about how large, commercial installations of underwater turbines or power-generating buoys could affect marine wildlife, whether through increased noise in the environment, habitat change or direct interactions with equipment.

The marine collision experiments are part of the Triton Initiative, a collection of projects led by PNNL to study the environmental impact of marine energy.

The work at Harris Hydraulics follows a study by PNNL and the 91̽Applied Physics Lab using a four-foot-tall prototype turbine installed at the entrance to Sequim Bay. In that study, researchers trained an underwater camera on the turbine for 109 days and then catalogued every instance of an animal approaching or interacting with the turbine. The camera captured more than 1,000 instances of fish, birds and seals approaching the turbine blades. There were only four collisions, and all involved small fish.

“This study was a first step, but a promising one,” said co-author , a research scientist at the 91̽Applied Physics Lab. “We didn’t see any endangered species in our study, and the risk of collision for seals and sea birds seemed to be quite low. We’re excited to get back out there with the camera and learn even more.”

The Sequim Bay experiment generated hours of valuable data, but that degree of intense monitoring may not be practical in large commercial installations in the future. Cheaper impact sensors, like the ones logging bath toy impacts at Harris Hydraulics, could be a solution, researchers say.

The project is funded by the U.S. Department of Energy’s Hydropower & Hydrokinetics Office, through the Pacific Northwest National Laboratory’s Triton Initiative and the TEAMER program.

For more information, contact Hunt at ahunt94@uw.edu or Emma Cotter at emma.cotter@pnnl.gov.

Selective forest thinning in the eastern Cascades supports both snowpack and wildfire resilience /news/2026/03/03/forest-thinning-snowpack-snow-drought-wildfire-resilience/ Tue, 03 Mar 2026 13:24:55 +0000

An aerial photo of a snowy forest with a mountain range in the background. In the foreground, several small figures stand next to a pickup truck.
91̽researchers, including members of the RAPID facility, fly a drone along Cle Elum Ridge in the eastern Cascades. The drone was equipped with a lidar sensor that helped the team build a detailed 3D map of the study area and changes to the snowpack there. Photo: Mark Stone/University of Washington

As climate change nudges weather in the eastern Cascades in extreme and volatile directions, forest managers in the region have a lot to juggle. Hotter, drier summers are contributing to bigger and more frequent wildfires. Meanwhile, warmer winters may cause the Cascades to lose 50% of their annual snowpack over the next 70 years. Mountain snow provides 75% of the Yakima River Basin’s water supply, making it a crucial reservoir for both nature and agriculture. Less winter snow also leads to drier and more fire-prone forests in the summer.

To encourage fire resilience, forest managers use tried-and-true tools like controlled burning and the selective felling of trees to thin out the forest. Both methods remove fuel and help return forests to historical conditions — but less is known about their impact on snowpack.

To address this knowledge gap, a team of researchers at the University of Washington and The Nature Conservancy (TNC) embarked on an ambitious, multiyear study of snowpack along Cle Elum Ridge, an area of the eastern Cascades in the headwaters of the Yakima River Basin. The group experimentally thinned the forest to varying degrees in a roughly 150-acre area. Then, they measured the amount and duration of snowpack during the winter of 2023 and compared it to a previous winter before the forest treatment.

The results were encouraging: Forest thinning efforts increased snowpack by 30% on north-facing slopes and by 16% on south-facing slopes. Thinning aided snowpack the most where it created a patchwork of gaps in the forest rather than a more even density; gaps of 4-16 meters in diameter seemed to be the “sweet spot” for snow.

The research points toward more refined forest management practices that can optimize for both wildfire resilience and snowpack.

The findings were published in Frontiers in Forests and Global Change.

“At its core, this research shows that reducing wildfire risk and protecting water resources don’t have to be competing goals,” said lead author Cassie Lumbrazo, a postdoctoral researcher at the University of Alaska who completed this work as a 91̽doctoral student of civil and environmental engineering. “That’s genuinely good news for a place facing both growing wildfire threats and increasing water vulnerability. So much of the climate conversation focuses on loss, which makes findings like this especially meaningful.”

A figure adjusts a drone sitting on a launchpad in a snowy field.
A figure straps a camera onto a tree in a forest.
A figure in an orange vest attaches a gadget to a tripod in a snowy field.
A figure in an orange vest operates a drone that is hovering 10 feet in the air.
A figure inspects an instrument covered with snow.
Two figures measure the depth of a hole in the snow with a pole.

Predicting snowpack in forested areas, especially those at higher altitudes, hinges on understanding how much snow reaches the ground and how much lands in the forest canopy. Snow on the ground is more likely to stick around through the season, whereas snow in the trees may either melt or sublimate back into water vapor. In either case, it wouldn’t add to the reservoir of water that melts in the spring and summer.

“Trees intercept snow and so can reduce snowpack, but trees also shade snow and so can retain snowpack,” said senior author Jessica Lundquist, a 91̽professor of civil and environmental engineering. “The dominant effect depends on winter temperatures, and the Cascade crest near Cle Elum is right on the border where the effect flips from trees decreasing snow to trees saving snow.”

Earlier research found that natural gaps in the forests of the eastern Cascades accumulated more snow. This, combined with other research, gave the team reason to hope for a positive connection between forest thinning and snowpack, though it wasn’t a sure thing. Other studies have found that open areas elsewhere in the Western U.S. saw reduced snowpack.

Thus, it was time for a direct — and complex — study of managed forests.

Researchers picked Cle Elum Ridge for the work, where TNC’s forest managers were planning thinning treatments to improve forest health and wildfire resiliency. The orientation of the ridge allowed them to compare north- and south-facing slopes — southern slopes in the region see more sunshine and less snow retention on average. From October 2021 to September 2022, the researchers worked with TNC’s forest managers and local contract loggers to remove trees on both slopes in a gradient, from no thinning to extensive. The team also set up time-lapse cameras at several strategic points to measure snow depth over time.

Then, they waited for snow to fall.

By March 2023, the area was close to its peak snowpack, and the team returned with staff and equipment from the 91̽RAPID facility. The RAPID crew flew a specialized drone that generated a detailed 3D map of the study area using a laser-mapping technology called lidar.

By comparing the new 3D map and timelapse imagery to lidar data captured before the forest treatment, the team was finally ready to calculate two things: the change to the forest structure, and its effect on the snowpack.

Three photorealistic 3D renderings of trees in a snowy forest.
Lidar renderings of three different areas of the forest studied by the team. Left: a dense, untreated forest stand. Center: a medium-density thinned stand with tree clumps and gaps. Right: a dense stand with a canopy gap. Photo: Cassie Lumbrazo and Karen Dedinsky

Across the whole study area, the team found that thinning helped the forest recover 12.3 acre-feet (or about four million gallons) of water in the form of snow per 100 acres on north-facing slopes, and 5.1 acre-feet (or about 1.5 million gallons) per 100 acres on south-facing slopes.
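The gallon figures follow from the definition of an acre-foot, about 325,851 US gallons; a quick sanity check of the arithmetic:

```python
GALLONS_PER_ACRE_FOOT = 325_851  # US gallons in one acre-foot

def acre_feet_to_gallons(acre_feet):
    """Convert a water volume from acre-feet to US gallons."""
    return acre_feet * GALLONS_PER_ACRE_FOOT

north_facing = acre_feet_to_gallons(12.3)  # per 100 acres, north slopes
south_facing = acre_feet_to_gallons(5.1)   # per 100 acres, south slopes
```

That puts the north-facing recovery at roughly four million gallons per 100 acres, matching the figure above.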

As expected, areas where the thinning opened gaps in the canopy were most effective at restoring snow storage that had been previously lost to environmental degradation and climate change. Gaps of 4-16 meters in diameter seemed to retain the most snow, though there were few gaps larger than 16 meters to evaluate.

One surprising result: The way forest managers thin forests doesn’t reliably create gaps, because managers map out their reductions using the density of trunks in an area, not canopies, as their primary measurement.

“Imagine a group of 100 people all holding umbrellas in the rain,” said co-author , director of the 91̽Climate Impacts Group. “They’re standing close enough together that their umbrellas overlap, so none of the rain hits the ground. If you remove 10 of the umbrellas randomly, you’d still have plenty of coverage overall. But, if you remove 10 umbrellas that are right next to one another, you create a gap in the umbrella ‘canopy,’ and you get a 10% increase in the amount of rain that hits the ground.”
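The umbrella analogy can be run as a toy simulation (everything below is invented for illustration). Each ground cell sits under two overlapping umbrellas, so a cell only gets wet when both are removed; scattered removals barely leak, while a contiguous run opens a real gap:

```python
import random

def exposed_cells(removed, n=100):
    """Cell i is covered by umbrellas i and i+1 (mod n); it gets
    wet only when both of its umbrellas have been removed."""
    return sum(1 for i in range(n)
               if i in removed and (i + 1) % n in removed)

# Removing umbrellas 0-9 (all adjacent) opens a contiguous gap.
clustered = exposed_cells(set(range(10)))

# Removing 10 umbrellas at random rarely uncovers any cell.
rng = random.Random(0)
scattered = exposed_cells(set(rng.sample(range(100), 10)))
```

Removing ten adjacent umbrellas exposes nine of the 100 cells, close to the 10% figure in the quote, while ten scattered removals typically expose none or one.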

That realization adds a nuance to the findings. It’s likely that forest thinning can benefit both wildfire and snowpack resilience at the same time, but only if managers keep canopy gaps in mind.

“One thing we all learned was that snow people and tree people speak different languages,” Lumbrazo said. “Different experts look at totally different variables to help them decide whether or not to cut down a single tree. So an important goal is to get everyone speaking the same language. And I think this paper is one step towards better communication.”

A short documentary from 2023 highlights the team’s fieldwork.

Overall, the results suggest practical changes to forest management practices in the eastern Cascades. For example, managers might consider more tree-thinning on north-facing slopes, since snowpack gains may be greater there. With further research, these learnings may also extend to other regions in the Pacific Northwest.

The work could also aid collaboration between forest managers and hydrologists at a time when the region needs all the water it can get.

“As we lose snowpack, everything becomes really squeezed,” said co-author , a senior aquatic ecologist at TNC who earned her doctorate in aquatic and fishery sciences at the UW. “We are currently in our third consecutive year of water restrictions in the Yakima River Basin, and are staring down one of the lowest snow years on record. However, our research shows that the treatments currently used for restoring fire resilient forests are compatible with the forest structure needed for supporting water security. And in a world where climate change is reducing water supplies and increasing wildfire severity, we are pleased to report that the same forest treatments can support both goals.”

Co-authors include , a former 91̽graduate student of civil and environmental engineering; , a former 91̽undergraduate student of atmospheric and climate science; , a data processing specialist at the 91̽RAPID facility; and , director of Forest Conservation and Management at The Nature Conservancy.

This research was funded by The Washington Department of Natural Resources, The Nature Conservancy and the National Science Foundation.

For more information, contact Lundquist at jdlund@uw.edu, Dickerson-Lange at dickers@uw.edu or Howe at emily.howe@tnc.org.

]]>
Q&A: 91̽researchers create a smart glove with its own sense of touch /news/2026/01/27/smart-glove-electronic-touch-pressure-sensor-engineeering-soft-robotics/ Tue, 27 Jan 2026 21:19:51 +0000 /news/?p=90498 Two pieces of an electronic glove lie on a table.
Inside the OpenTouch Glove (right) is a grid of wires (left) that allows the glove to sense the location and degree of any pressure applied to it. Photo: 91̽

Yiyue Luo’s lab at the 91̽ is full of machinery that’s oddly cozy. Here, soft and pliable sensors are sewn, knit and glued directly into clothing to give everyday garments new capabilities.

One of the lab’s newest curiosities is a nondescript gray work glove embedded with sensors that enable it to “feel” on its own. An array of small wires hidden inside the glove reports the location and degree of pressure anywhere along its surface. When in use, the signals from the glove inform a real-time “heat map” of pressure that could one day help physical therapy patients track their progress, teach robots to grasp objects, and more.

The OpenTouch Glove project, as it’s officially known, is led by 91̽electrical and computer engineering doctoral student Devin Murphy as part of a collaboration with the and at MIT. 91̽News caught up with Murphy to learn more about the glove and its potential uses.

What inspired you to create this glove?

Devin Murphy: Our hands are arguably our greatest tools as humans. We interact with the world through our hands in so many different ways. But the nature of how we grasp and manipulate things in our environment is super nuanced and complex, and it’s hard to capture. We have very mature electronics that record sight and sound — think of the cameras and microphones in your smartphone. But there aren’t many electronic devices that record our other senses — like touch. That’s what I’ve been working to remedy with the OpenTouch Glove.

How does the glove work? What are its capabilities?

DM: There are two flexible circuit boards inside each glove that form a grid of wires across the gripping surface of the glove. We can measure pressure at any point in that mesh where two wires meet. The circuit boards connect to a little box of electronics at the user’s wrist, which processes the signals and sends them wirelessly to a laptop.

We can then generate a “heat map” image showing where force is being applied on the hand, where the hand is applying force to different objects and how much force the hand is applying.
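As a rough sketch in Python, the readout Murphy describes (a grid where pressure is measured wherever two wires cross) could be assembled into a normalized heat map like this. The grid size and the `read_crossing` driver callback are hypothetical stand-ins, not the project’s actual code:

```python
def read_pressure_map(read_crossing, n_rows=16, n_cols=16):
    # Scan every row/column wire crossing in turn. read_crossing(r, c) is a
    # hypothetical driver callback returning the raw force reading at the
    # point where row wire r and column wire c meet.
    return [[read_crossing(r, c) for c in range(n_cols)] for r in range(n_rows)]

def normalize(grid):
    # Scale the raw readings to [0, 1] so they can be drawn as a heat map.
    flat = [v for row in grid for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0 for _ in row] for row in grid]
    return [[(v - lo) / (hi - lo) for v in row] for row in grid]
```

In practice the readings would arrive wirelessly from the wrist unit; here a plain function stands in for that hardware path.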

This kind of data gives us extra nuance that a camera can’t capture. For example, if your hand is in a bag or behind an object while it’s grasping things, a camera wouldn’t be able to tell what your hand is doing, whereas this glove can follow along.

What are some potential applications for the glove?

DM: I’m particularly excited about how this technology might help patients recovering from an injury. Physical therapists have patients perform a variety of tasks to regain mobility in their hands — if we can measure how much force people apply during this process, we can provide them with concrete feedback. The patient and therapist can both track progress by monitoring grip strength of the patient over time.

We’re also seeing lots of new companies invest in physical intelligence for robotics — basically recording how robots interact with the physical world. If we can record human hand grip signals, we might be able to teach robotic hands how to mimic human behavior.

One other interesting application is in augmented reality or virtual reality. If we replaced traditional controllers with these gloves, it could give users a more natural way to interact with virtual objects and scenery — though we’d need some additional technology for users to feel pressure when gripping virtual things.

How can other researchers access this technology?

DM: It’s really important to us that the glove is accessible to other researchers and anyone else who might want to use it for their own applications. You can order all of the components of the glove directly from commercial manufacturers, and we have released all of the manufacturing files and instructions for putting the glove together yourself.

We’ve also shown some demos of the glove “in the wild” to showcase the different kinds of data it can collect, and we’re planning to release an open source data set collected with the glove in partnership with researchers at MIT.

I’m really excited about developing new wearable technologies that allow people to record less popular sensing modalities like touch. I want to figure out how we can capture the nuances of touch-based interactions, so that ultimately we can get better insights into our daily lives.

For more information, contact Murphy at devinmur@uw.edu.

]]>
Q&A: A 91̽materials lab probes the mysteries of toughness at the nano scale /news/2026/01/21/lucas-meza-nanoscale-architecture-nanomaterials-mechanical-engineering/ Wed, 21 Jan 2026 17:13:20 +0000 /news/?p=90387
A splitscreen image showing a black and white webbed material on the left and a bubbled, foamy black and white material on the right.
Researchers in the Meza Research Group at the 91̽ draw inspiration from natural structures to develop new materials. On the left is a scanning electron microscope (SEM) image of naturally occurring spider silk. On the right is an SEM image of an engineered plastic material with a similar structure. The plastic is foamed using tiny carbon dioxide bubbles to make it lighter and tougher. Photo: Haynl et al./Nature Scientific Reports (left) and Dwivedi et al./Journal of the Mechanics and Physics of Solids (right).

UPDATE (Feb. 17, 2026): This story has been updated to note Meza’s work with the NSF I-Corps program and CoMotion Innovation Gap Fund.

Biology is full of architecture. Materials like wood, crab shells and bone all contain microscopic structures such as layers, lattices, cells and interwoven fibers. Those structures give natural materials an ideal combination of lightness and toughness, and they’ve inspired engineers to build artificial materials with similar properties. But how those tiny architectures lead to such tough materials is something of a mystery.

In 2019, Lucas Meza, assistant professor of mechanical engineering, set up the Meza Research Group at the 91̽ to tease out the mechanical secrets of structures that are as small as 100 nanometers, which is about the size of a virus. He arrived with an ambitious plan to build a new generation of nanomaterials, but soon discovered that the field was missing a fundamental understanding of toughness at tiny scales.

“We had to go back to basics,” Meza said.

In the years since, Meza and his team have flipped the script on nanomaterial toughness. They’re applying what they’ve learned to new kinds of bespoke materials, though along the way they’re still surprised by tiny structures behaving in ways they theoretically shouldn’t.

Meza spoke with 91̽News about his strange and surprising journey into the nano realm.

What questions did you establish your lab to tackle?

Lucas Meza: Very broadly, we’re trying to design better materials, but not by introducing new material chemistries. Instead, we use architecture. This is something humans have done throughout history — think of woven textiles and fabrics, or straw-reinforced mud bricks. These are “architected materials,” where the structure of materials allows us to control useful properties like strength, toughness and flexibility.

The thing that I was particularly interested in was introducing architecture at the nanoscale. What if, instead of building a wall with bricks, we could use nanoplatelets? Or instead of making fabrics with yarn, we could use nanofibers? How would those properties change?

Engineers have found that nanomaterials are stronger, more flaw resistant and more deformable. The challenge is: How do you actually do something with them? We need to build them into large-scale materials in a way that preserves their unique nanoscale properties.

What material properties are you most interested in?

LM: We’re using architecture to tinker with a few interrelated properties. The first is a material’s strength, which is how much stress it can take before it permanently deforms. The second is ductility, which is how much a material can stretch before it breaks. Those two features sort of combine to determine a material’s toughness, which is the total amount of energy you have to put into a material to break it.

To give a couple of opposing examples: A ceramic plate is strong, meaning it can take a lot of stress, but it has very low ductility, meaning it barely deforms before breaking. So overall, it’s not a very tough material. Conversely, a rubber band is not strong at all — you can bend and stretch it with very little stress. But, it’s extremely ductile — it can stretch to many times its original dimensions without snapping. So as a result, rubber is very tough.

Credit: 91̽ (left) and Envato (right).
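Meza’s toughness comparison can be made concrete: toughness is essentially the area under a material’s stress-strain curve. A minimal sketch in Python, using made-up illustrative numbers rather than measured data:

```python
def toughness(strain, stress):
    # Toughness is the energy absorbed per unit volume before fracture:
    # the area under the stress-strain curve, here via the trapezoid rule.
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return area

# Illustrative, invented curves (stress in MPa, strain dimensionless):
# a ceramic-like material is strong but breaks at tiny strain, while a
# rubber-like material is weak but stretches enormously before failing.
ceramic_like = toughness([0.0, 0.005, 0.01], [0.0, 200.0, 400.0])
rubber_like = toughness([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0])
```

Despite bearing 100 times less stress, the rubber-like curve encloses far more area, which is exactly the ceramic-plate-versus-rubber-band contrast above.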

Toughness is a particularly interesting property to study because there’s no limit on how tough a material can be. There are very hard limits on how strong and how stiff a material can be, and you can use architecture to optimize them, but you can’t exceed the properties of the base material. On the other hand, you can use architecture to improve the overall toughness of a material.

Nature has already created a lot of really interesting micro- and nano-structures. Every natural material has to be porous to transport nutrients, and on top of that we see things like lattices in some bone and in sea sponges; shells all have layered architectures; wood and bone are fiber composites; and all of this happens at the micro- and nanoscale.

There had to be a reason that nature was making these architectural motifs at the micro and nanoscale, and I had a strong hunch that it had to do with toughness.

What has your lab learned about toughness at the small scale?

LM: Initially, we learned a surprising amount about what we didn’t know. My thought in getting into this work was that people know enough about fracture mechanics — how things break and why — so we can just dive into making really complicated architectures and studying their toughness, like ones made by my former doctoral student, . But we realized the scientific community has some big gaps in its understanding of fracture toughness. So instead, we had to go simple — basically we pulled and pushed and broke a lot of small things to understand what gives a material ductility and toughness.

We learned that all material behavior centers around something called a “plastic zone size.” Basically, when you pull on a part that has a crack, a little ball of energy builds up right at the tip of that crack. That energy ball grows as you add more stress, and at a certain point it shoots through the sample and causes a break. The size of the ball at its breaking point is the material’s plastic zone size, and it’s different for every material.

We realized that what makes a material ductile or not comes down to its size relative to its plastic zone. If a material is smaller than its plastic zone size, that ball of energy can’t grow big enough to cause the crack to grow, so instead it spreads outward and the material bends.

The four material samples in this video are all the same size, but structural differences at the nanoscale produce different levels of ductility. In each example, the cyan color represents the sample’s plastic zone size. In less ductile samples, the cyan-colored area remains small and the material snaps, whereas in more ductile samples, the cyan area spreads out and the material stretches. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids
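For readers who want numbers, fracture mechanics has a classic estimate of the plastic zone radius, Irwin’s approximation. The formula and the material values below are textbook additions of ours, not figures from Meza’s work:

```python
import math

def plastic_zone_size(k_c, sigma_y):
    # Irwin's plane-stress estimate of the plastic zone radius at a crack
    # tip: r_p = (1 / (2*pi)) * (K_c / sigma_y)**2, where K_c is the
    # fracture toughness (MPa*sqrt(m)) and sigma_y the yield strength (MPa).
    return (k_c / sigma_y) ** 2 / (2.0 * math.pi)

# Hypothetical polymer-like values: K_c = 1.0 MPa*sqrt(m) and
# sigma_y = 50 MPa give a plastic zone radius of roughly 60 micrometers,
# so features much smaller than that tend to bend rather than crack.
r_p = plastic_zone_size(1.0, 50.0)
```

This is the sense in which shrinking a structure below its plastic zone size trades brittle fracture for ductile bending.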

This is the key for how to use architecture to cheat and get more ductility out of a material. If you take a brittle material and make a nanoscale lattice or foam out of it, the result can be dramatically tougher. The new tougher “architected material” can also have a larger plastic zone size, sometimes as much as 100 times larger, meaning it is likely to be ductile as well. This is why things like fabrics and meshes can be really hard to tear.

How are you applying what you’re learning to real-world materials?

LM: We’re building lots of our material architectures painstakingly at the small scale using resources like the and the 91̽. That “bottom-up” approach — building things one nanofeature at a time — gives us lots of control over the building blocks we’re playing with, but it’s a real challenge to scale.

The “top-down” approach, where you let physics and kinetics just self-assemble things for you, is much easier. One example is “solid state foaming,” a technique my colleague has been working on for decades. Basically, you take a thermoplastic material — something that melts when you heat it up — throw it in a chamber with high-pressure carbon dioxide so it saturates the sample, then heat it up so that the dissolved gas forms tiny bubbles in the material. With this process we have less control over the precise architecture — it’s a random foam — but by controlling the amount of dissolved gas we can easily control the size of the bubbles. Those materials turned out to be super tough! My doctoral student has a recent paper, where we show they could even be tougher than the material they were made from. This goes against everything we knew about normal foam fracture processes.

A black and white image showing a dense, webbed material.

A plastic nanofoam material created by Kush Dwivedi, a doctoral student in Meza’s lab, seen at 2,500x, 12,000x and 35,000x magnifications. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids.

I’m currently pursuing an earlier-stage commercialization effort to use tiny foams as a filtration material for biomedical applications. We can make nanoporous filter materials — think of the reverse osmosis system that might be under your sink — but we can do it without using any of the harsh chemical processes that are currently used. We’ve been able to explore this avenue thanks to our participation in the NSF I-Corps program, which then enabled us to get a CoMotion Innovation Gap Fund award.

I also recently got an NSF CAREER grant to study fracture in architected materials, and we’re exploring ways to make tougher sustainable and biodegradable materials. Think of the last time you used a biodegradable fork that broke off in your food. Materials like wood are actually great alternatives for this, but we’re trying to figure out how to do it without cutting down a tree or harvesting bamboo.

For more information, contact Meza at lmeza@uw.edu.

]]>
AI headphones automatically learn who you’re talking to — and let you hear them better /news/2025/12/09/ai-headphones-smart-noise-cancellation-proactive-listening/ Tue, 09 Dec 2025 17:30:37 +0000 /news/?p=89888

UPDATE (Dec. 12, 2025): This story has been updated to correct Malek Itani’s department.

Holding a conversation in a crowded room often leads to the frustrating “cocktail party problem,” or the challenge of separating the voices of conversation partners from a hubbub. It’s a mentally taxing situation that can be exacerbated by hearing impairment.

As a solution to this common conundrum, researchers at the 91̽ have developed headphones that proactively isolate all the wearer’s conversation partners in a noisy soundscape. The headphones are powered by an AI model that detects the cadence of a conversation and another model that mutes any voices that don’t follow that pattern, along with other unwanted background noises. The prototype uses off-the-shelf hardware and can identify conversation partners using just two to four seconds of audio.

The system’s developers think the technology could one day help users of hearing aids, earbuds and smart glasses to filter their soundscapes without the need to manually direct the AI’s “attention.”

The team presented its research Nov. 7 in Suzhou, China, at the Conference on Empirical Methods in Natural Language Processing. The underlying code is open source and publicly available.

“Existing approaches to identifying who the wearer is listening to predominantly involve electrodes implanted in the brain to track attention,” said senior author , a 91̽professor in the Paul G. Allen School of Computer Science & Engineering. “Our insight is that when we’re conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes.”

The prototype system, dubbed “proactive hearing assistants,” activates when the person wearing the headphones begins speaking. From there, one AI model begins tracking conversation participants by performing a “who spoke when” analysis and looking for low overlap in exchanges. The system then forwards the result to a second model, which isolates the participants and plays the cleaned-up audio for the wearer. The system is fast enough to avoid confusing audio lag for the user, and can currently juggle one to four conversation partners in addition to the wearer’s audio.
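A toy sketch of the “who spoke when, with low overlap” idea, assuming speech has already been split into per-speaker (start, end) segments. The threshold and data structures here are illustrative guesses of ours, not the paper’s actual models:

```python
def overlap_seconds(segs_a, segs_b):
    # Total time (in seconds) where two speakers' (start, end) speech
    # segments overlap.
    total = 0.0
    for a0, a1 in segs_a:
        for b0, b1 in segs_b:
            total += max(0.0, min(a1, b1) - max(a0, b0))
    return total

def likely_partners(wearer_segs, others, max_overlap_frac=0.2):
    # Conversation partners take turns with the wearer, so their speech
    # overlaps little with the wearer's; background talkers don't
    # coordinate and overlap freely. Flag speakers whose overlap with the
    # wearer is only a small fraction of their own talk time.
    partners = []
    for name, segs in others.items():
        talk_time = sum(end - start for start, end in segs)
        if talk_time > 0 and overlap_seconds(wearer_segs, segs) / talk_time < max_overlap_frac:
            partners.append(name)
    return partners
```

A speaker who talks only while the wearer is silent gets flagged as a partner; one who talks straight through the wearer’s turns does not.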

The team tested the headphones with 11 participants, who rated qualities like noise suppression and comprehension with and without the AI filtration. Overall, the group rated the filtered audio more than twice as favorably as the baseline.

A pair of headphones with a curly black microphone taped to one ear cup.
The team combined off-the-shelf noise-canceling headphones with binaural microphones to create the prototype, pictured here. Photo: Hu et al./EMNLP

Gollakota’s team has been experimenting with AI-powered hearing assistants for the past few years. They developed one smart headphone prototype that can pick a person’s audio out of a crowd when the wearer looks at them, and another that creates a “sound bubble” by muting all sounds within a set distance of the wearer.

“Everything we’ve done previously requires the user to manually select a specific speaker or a distance within which to listen, which is not great for user experience,” said lead author Guilin Hu, a doctoral student in the Allen School. “What we’ve demonstrated is a technology that’s proactive — something that infers human intent noninvasively and automatically.”

Plenty of work remains to refine the experience. The more dynamic a conversation gets, the more the system is likely to struggle, as participants talk over one another or speak in longer monologues. Participants entering and leaving a conversation present another hurdle, though Gollakota was surprised by how well the current prototype performed in these more complicated scenarios. The authors also note that the models were tested on English, Mandarin and Japanese dialog, and that the rhythms of other languages might require further fine-tuning.

The current prototype uses commercial over-the-ear headphones, microphones and circuitry. Eventually, Gollakota expects to make the system small enough to run on a tiny chip within an earbud or a hearing aid. In earlier work that appeared at , the authors demonstrated that it is possible to run AI models on tiny hearing aid devices.

Co-authors include , a 91̽doctoral student in the Allen School; and Malek Itani, a 91̽doctoral student in the electrical and computer engineering department.

This research was funded by the Moore Inventor Fellows program.

For more information, contact proactivehearing@cs.washington.edu.

]]>
New ‘liquid metal’ composite material enables recyclable, flexible and reconfigurable electronics /news/2025/10/22/liquid-metal-composite-recyclable-flexible-electronics-ewaste/ Wed, 22 Oct 2025 21:08:24 +0000 /news/?p=89685 Gray blobs of liquid metal are scattered within a black background.
Researchers at the 91̽ created a recyclable composite material made of tiny droplets of liquid metal infused into a stretchy polymer. The droplets, pictured in this microscope image, can be connected easily together to form an electrical circuit. Photo: Y. Han/Advanced Functional Materials

Electronic waste is piling up around the world, partly because it’s difficult to recover useful materials from discarded gadgets. When processed improperly, spent electronics can expose people and the environment to lead, mercury and other toxic chemicals. Without systemic changes, our global appetite for electronics could produce an annual .

This conundrum inspired a team at the 91̽ to create an easily recyclable material that could one day replace many traditional circuit boards, the foundation of most electronics. The new material is flexible, self-healing and can be made conductive without additional components.

This research was supported by a National Science Foundation grant to fund a 91̽graduate student internship at Oak Ridge National Laboratory.

This suite of features could help produce a more sustainable generation of wearable electronics, soft robotics and more.

“We created a lot of functionality within one material,” said senior author , a 91̽assistant professor of mechanical engineering. “Our goal is to build a widely useful platform for flexible, reusable devices.”

The team published its findings in Advanced Functional Materials.

Conventional circuit boards pass electrical signals through conductive metal traces, which are bonded to a rigid board commonly made of fiberglass and resin. In contrast, the new material is a soft and stretchable composite made from a recyclable polymer infused with microscopic droplets of a liquid metal alloy based on gallium. A circuit can be created on this composite by lightly scoring a pattern into its surface, which connects adjacent embedded droplets and allows electricity to flow. The rest of the material remains electrically insulating.
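One way to picture the scoring mechanism: treat the droplets as cells on a grid, where a scored cell becomes conductive and current can flow between adjacent scored cells. This toy model (our abstraction, not the team’s) checks whether a scored trace connects two points:

```python
from collections import deque

def conducts(scored, start, end):
    # The composite insulates except where scoring links neighboring
    # liquid-metal droplets. Model droplets as (row, col) cells: a scored
    # cell is conductive, and current flows between 4-connected scored
    # cells. Breadth-first search tests whether start and end are linked.
    if start not in scored or end not in scored:
        return False
    seen, frontier = {start}, deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == end:
            return True
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in scored and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

Scoring a continuous line of cells forms a working trace, while any unscored cell off the line stays insulating, mirroring how a circuit pattern is drawn into the composite’s surface.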

Malakooti’s lab has been experimenting with liquid metal-infused polymers since 2019 — the team uses a gallium-based alloy. It’s proven to be a promising class of materials, but the rising cost of the liquid metal motivated the team to focus on reusability.

The new composite has a few tricks up its sleeve. The polymer holding the liquid metal droplets is still stretchy and strong, but it can be broken down through a simple chemical process, freeing the metal for reuse. In experiments, researchers recovered 94% of the metal from their samples.

Four boxes in a row show: four red lights lit up within a gray material; the material submerged in a glass beaker with a clear liquid; the beaker with a blob of liquid metal within it; and four green lights lit up in a different design within a gray material.
Researchers demonstrated easy reclamation and recycling of 94% of the liquid metal in the newly created composite material. In their demonstration, a composite sample with a functioning circuit (box 1) was dissolved in a series of chemical solutions (box 2), allowing most of the liquid metal within it to be isolated (box 3). The metal was then used to create a fresh composite sample complete with a new functioning circuit (box 4). Photo: Y. Han/Advanced Functional Materials

The composite also has self-healing properties. Users can cut the material into pieces, rearrange them, and bond them back together using only heat and pressure. An electrical circuit chopped up in this manner will still function when reconnected in a new configuration.

Malakooti envisions a new wave of electronics built with composites like this one, but also a new paradigm for use and reuse. Instead of mass producing gadgets and then tossing them out, he argues, we could design devices and their components to be used, repaired, reconfigured and ultimately recycled.

“We’re trying to make a difference now to shape the future of flexible and wearable electronics,” Malakooti said. “We can’t make all these devices and then go back and try to figure out how to recycle them. That’s how we ended up with the electronic waste problem we face today. I want to tackle this problem from the very start.”

Co-authors include , a 91̽doctoral student of mechanical engineering; , a 91̽undergraduate student of mechanical engineering; and , and at the Oak Ridge National Laboratory.

This research was funded by the National Science Foundation and the Department of Energy.

For more information, contact Malakooti at malakoot@uw.edu.

]]>
Programmable proteins use logic to improve targeted drug delivery /news/2025/10/09/programmable-proteins-targeted-drug-delivery-synthetic-biology/ Thu, 09 Oct 2025 16:17:28 +0000 /news/?p=89515 A diagram shows four outlines of a human body, each with different areas highlighted in a different color.
Therapies that are sensitive to multiple biomarkers could allow medicines to reach only the areas of the body where they are needed. The diagram above shows three theoretical biomarkers that are present in specific, sometimes overlapping areas of the body. A therapy designed to find the unique area of overlap between the three will act on only that area. Photo: DeForest et al./Nature Chemical Biology

Targeted drug delivery is a powerful and promising area of medicine. Therapies that pinpoint the exact areas of the body where they’re needed — and nowhere they’re not — can reduce the medicine dosage and avoid potentially harmful “off target” effects elsewhere in the body. A targeted immunotherapy, for example, might seek out cancerous tissues and activate immune cells to fight the disease only in those tissues.

The tricky part is making a therapy truly “smart,” where the medicine can move freely through the body and decide which areas to target.

Researchers at the 91̽ took a significant step toward that goal by designing proteins with autonomous decision-making capabilities. In a proof-of-principle study in Nature Chemical Biology, researchers demonstrated that by adding smart tail structures to therapeutic proteins, they could control the proteins’ localization based on the presence of specific environmental cues. These protein tails fold themselves into preprogrammed shapes that define how they react to different combinations of cues. In addition, the experiment showed that the smart protein tails could be attached to a carrier material for delivery to living cells.

Advances in synthetic biology also allowed the researchers to manufacture these proteins cheaply and in a matter of days instead of months.

“We’ve been thinking about these concepts for some time but have struggled with ways to increase and automate production,” said senior author , a 91̽professor of chemical engineering and bioengineering. “We’ve now finally figured out how to produce these systems faster, at scale and with dramatically enhanced logical complexity. We are excited about how these will lead to more sophisticated and scalable disease-honing therapies.”

The concept of programmable biomaterials isn’t new. Scientists have developed numerous strategies to make systems responsive to individual cues — such as pH levels or the presence of specific enzymes — that are associated with a particular disease or area of the body. But it’s rare to find one cue, or “biomarker,” that’s unique to one spot, so a material that homes in on just one biomarker might act on a few unintended places in addition to the target.

One solution to this problem is to seek out a combination of biomarkers. There might be many areas of the body with particular enzyme or pH levels, but there are likely fewer areas with both of those factors. In theory, the more biomarkers a material can identify, the more finely targeted drug delivery can be.

In 2018, DeForest’s lab created a new class of materials that responded to multiple biomarkers using Boolean logic, a concept traditionally used in computer programming.

A diagram represents proteins as different colored shapes; some are linear, while others are ring-shaped.
The diagrams above show linker structures that can perform different logical operations. In box 1, the protein therapeutic (star) is released from a material (pink wedge) in the presence of either biomarker X or Y; in box 2, the protein will release only if both biomarkers X and Y are present. Photo: DeForest et al./Nature Chemical Biology

“We realized that we could program how therapeutics were released based simply on how they were connected to a carrier material,” DeForest said. “For example, if we linked a therapeutic cargo to a material via two degradable groups connected in series — that is, each after the other — it would be released if either group was degraded, acting as an OR gate. When the degradable groups were instead connected in parallel — that is, each on a different half of a cycle — both groups had to be degraded for cargo release, functioning as an AND gate. Excitingly, by combining these basic gates we could readily create advanced logical circuits.”
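The series-OR and parallel-AND behavior DeForest describes maps naturally onto nested Boolean gates. A small sketch, where the tuple encoding and the gate evaluator are our own illustration rather than the study’s software:

```python
def released(circuit, biomarkers):
    # Evaluate a nested gate description against the set of biomarkers
    # present. A leaf string names the cue that degrades one linker group;
    # groups connected in series behave as OR gates and groups connected
    # in parallel (around a cycle) behave as AND gates, per the article.
    if isinstance(circuit, str):
        return circuit in biomarkers
    op, *branches = circuit
    results = [released(branch, biomarkers) for branch in branches]
    return all(results) if op == "AND" else any(results)

# Hypothetical circuit: cargo releases only where biomarker X is present
# along with either Y or Z.
gate = ("AND", "X", ("OR", "Y", "Z"))
```

Chaining a few such gates reproduces the kind of multi-biomarker targeting the team describes, with release happening only where the full logical condition is satisfied.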

It was a big step forward, but it wasn’t scalable — the team built these large and complex logic-responsive materials manually through traditional organic chemistry.

But over the next several years, the related field of synthetic biology advanced by leaps and bounds.

“The field has developed exciting new protein-based tools that can allow researchers to form permanent bonds between proteins,” said co-first author , a 91̽doctoral student of bioengineering. “It opened doors for new protein structures that were previously unachievable, which made more complex logical operations possible.”

Additionally, it became practical to use living cells as factories to produce these complex proteins, allowing scientists to design custom DNA blueprints for new proteins, insert the DNA into bacteria or other host cells, and then collect the proteins with the desired structure directly from the cells.

With these new tools, DeForest and his team streamlined and improved many steps of the process at once. They designed and produced proteins with tails that spontaneously fold into more bespoke shapes, creating complex “circuits” that can respond to up to five different biomarkers. These new proteins can attach to various carriers — hydrogels, tiny beads or living cells — for delivery to a cell, or theoretically a disease site. The team even loaded up one carrier with three different proteins, each programmed to deliver their unique cargo based on different sets of environmental cues.

A diagram represents a complex protein in a two-ringed shape; a box next to it shows a series of and/or statements connected together.
The research team designed protein tails that fold into custom shapes to create sophisticated logical circuits. Box 1 shows a protein designed to be responsive to five different biomarkers; box 2 shows the logical conditions that must be met to fully break apart the tail and release the protein. Photo: DeForest et al./Nature Chemical Biology

“We were so excited about the results,” DeForest said. “Using the old process, it would take months to synthesize just a few milligrams of each of these materials. Now it takes us a couple of weeks to go from construct design to product. It’s been a complete game changer for us.”

“The sky’s the limit. You can create delayed and independent delivery of many different components in one treatment,” Ross said. “And I think we could create much, much larger logical circuits that a protein can be responsive to. We’re at the point now that the technology is outpacing what we’ve seriously considered in terms of applications, which is a great place to be.”

The researchers will now continue searching for more biomarkers that proteins could target. They also hope to start collaborating with other labs at the 91̽and beyond to build and deploy real-world therapies.

The team outlined other uses for the technology as well. The same tools could manufacture therapies within a single cell and direct them to specific regions, a sort of microcosm of how the process works in the body. DeForest also envisions diagnostic tools like blood tests that could, say, turn a certain color when a complex set of cues within the blood sample are present.

DeForest thinks the first practical applications are likely to be cancer treatments, but with more research, the possibilities feel endless.

“The dream is to be able to pick any arbitrary location inside of the body — down to individual cells — and program a material to go and act there,” he said. “That’s a tall order, but with these technologies we’re getting closer. With the right combination of biomarkers, these materials will just get more and more precise.”

Co-authors include , a former 91̽undergraduate student of chemical engineering; , a 91̽undergraduate student of bioengineering; and , a 91̽doctoral student of chemical engineering.

This research was funded by the National Science Foundation and the National Institutes of Health.

For more information, contact DeForest at profcole@uw.edu.
