Eli Shlizerman – UW News

New NSF-funded institute to harness AI for accelerated discoveries in physics, astronomy and neuroscience
Tue, 28 Sep 2021
Aerial view of the University of Washington campus in Seattle. Photo: Alex Alspaugh/University of Washington

Science is in the midst of a data deluge: Experiments are churning out more information than researchers can process. But a new endeavor, centered on artificial intelligence, will help scientists navigate this data-rich reality.

On Sept. 28, the National Science Foundation announced a $15 million, five-year grant to integrate AI tools into the scientific research and discovery process. The award will fund the Accelerated AI Algorithms for Data-Driven Discovery Institute — or A3D3 Institute — a partnership of nine universities, led by the University of Washington.

The A3D3 Institute aims to accelerate the discovery pipeline by providing scientists with new, paradigm-shifting AI tools for analyzing the types of large and complex datasets that are an increasingly common feature of research — from medical laboratories to particle colliders.

Shih-Chieh Hsu. Photo: University of Washington

“I have been fortunate to work with an exceptional group of talented researchers, and am thrilled to continue to be a part of solving some of the most fundamental issues in science and engineering. The ultimate goal of A3D3 is to construct the institutional knowledge essential for real-time applications of AI in any scientific field,” said Shih-Chieh Hsu, a UW associate professor of physics and director of the A3D3 Institute. “A3D3 will empower scientists with new tools to deal with the coming data deluge through dedicated outreach efforts.”

The A3D3 Institute — part of the NSF’s Harnessing the Data Revolution program — is a collaboration among researchers at the University of Washington; the University of Illinois at Urbana-Champaign; Duke University; the Massachusetts Institute of Technology; the University of Minnesota, Twin Cities; the California Institute of Technology; Purdue University; the University of California, San Diego; and the University of Wisconsin–Madison.

In addition to Hsu, other UW faculty involved with the A3D3 Institute are Scott Hauck, professor of electrical and computer engineering; Amy Orsborn, assistant professor of bioengineering and of electrical and computer engineering; and Eli Shlizerman, associate professor of applied mathematics and of electrical and computer engineering.

A3D3 will combine innovations in AI algorithms and computing platforms with research applications in physics, astronomy and neuroscience. Photo: Philip Harris/Massachusetts Institute of Technology

From detectors searching for gravitational waves to electrical sensors monitoring the activity of the brain, research is handing scientists ever-larger datasets to analyze. Experiments are generating more data in part because researchers are developing better tools, from sharper medical imaging techniques to more precise sensors for particle physics experiments. A single experiment at CERN’s Large Hadron Collider, for example, can generate 1 petabyte of data — that’s 1 million gigabytes — per second from tens of millions of collisions. But as datasets increase in size and complexity, the algorithms needed to analyze data and put the most relevant bits — or bytes — before the eyes of scientists run the risk of outstripping current computing capacity.

A3D3 research will focus on developing AI-based algorithms that can perform real-time analyses of large datasets in three data-rich fields: multi-messenger astrophysics, high-energy particle physics and neuroscience.

Scott Hauck. Photo: University of Washington

“The advancement of computing power from machine learning techniques on high-performance computing platforms is providing exciting new avenues for scientific discovery, while the unique challenges in high-speed and high-throughput data collection for science applications drive new demands for researchers,” said Hauck.

Multi-messenger astrophysics integrates observations of the cosmos from diverse sources — including gravitational wave detectors, neutrino detectors and telescopes — to identify and study sudden and often violent events in the cosmos like supernovae, stellar collisions and black hole mergers. A3D3 researchers will work to develop AI algorithms that can quickly identify these events and help astronomers to cross-correlate observations of the same event from different sources, building a more complete picture of the types of transient events in our sky.
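That cross-correlation step can be illustrated with a toy coincidence search. The detector streams, timestamps and time window below are hypothetical stand-ins, not A3D3 software; the sketch only shows the basic idea of pairing events from two instruments that arrive within a short window of each other.

```python
def coincident_events(times_a, times_b, window):
    """Pair events from two time-sorted detector streams (times in seconds)
    whose arrival times differ by at most `window` seconds."""
    pairs = []
    j = 0
    for ta in times_a:
        # Skip stream-B events too early to ever match ta.
        while j < len(times_b) and times_b[j] < ta - window:
            j += 1
        # Collect every B event inside the coincidence window around ta.
        k = j
        while k < len(times_b) and times_b[k] <= ta + window:
            pairs.append((ta, times_b[k]))
            k += 1
    return pairs

# Hypothetical triggers: gravitational-wave candidates at t = 10.0 s and
# t = 50.0 s versus neutrino events at 10.2 s, 30.0 s and 49.9 s, with a
# half-second coincidence window.
print(coincident_events([10.0, 50.0], [10.2, 30.0, 49.9], 0.5))
```

In practice this matching runs across many instruments under hard latency budgets, which is where the real-time AI algorithms and hardware that A3D3 studies come in.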

High-energy physics experiments, such as those studied by Hsu at the Large Hadron Collider, have the potential to upend our understanding of the universe by discovering new types of particles — like candidate dark matter particles — as well as new fundamental forces. A3D3 efforts will focus on AI-fueled approaches to detect unexpected anomalies in collision data and “reconstruct” the particles underlying 40 million collisions per second that occur in high-energy experiments. These tools will streamline the downstream analysis processes, accelerating and simplifying the pipeline of discovery.

Amy Orsborn. Photo: University of Washington

In neuroscience, A3D3 efforts will center on understanding the complex neural networks within the human brain that govern motor functions and process sensory information.

“We can now measure more of the brain for longer periods of time. We need new tools to analyze these massive datasets,” said Orsborn, who is also a core staff scientist at the Washington National Primate Research Center. “Analyzing data quickly will also enable new experiments and therapies where we can intervene based on ongoing brain activity.”

Researchers need AI-based algorithms to analyze neural datasets — such as electrical recordings from implanted electrodes — in real time for a wide range of basic science studies. A3D3 researchers will focus on developing these types of tools, which can help decipher the neural underpinnings of behaviors like basic motor functions and responses to stimuli.

Eli Shlizerman. Photo: University of Washington

“Critically, A3D3 researchers will focus on developing scalable analysis tools, which can adapt not just to the datasets of today, but also to the massive and intricate datasets expected in the coming decades,” said Shlizerman.

With the rapid growth in the amount of data generated by scientific research, the A3D3 Institute also has its eyes on the future. The institute will pursue training and research opportunities for both graduate and undergraduate students, including students from backgrounds that are underrepresented in STEM communities. These endeavors will ensure that A3D3’s impact spreads and endures beyond its immediate goals, said Hsu.

For more information, contact Hsu at schsu@uw.edu.

‘Audeo’ teaches artificial intelligence to play the piano
Thu, 04 Feb 2021
A University of Washington team created Audeo, a system that can generate music using only visual cues of someone playing the piano.

Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.


A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to create a system, called Audeo, that creates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.

The researchers presented their findings Dec. 8 at the NeurIPS 2020 conference.

“To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”

Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.

“If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”
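As a rough illustration of that two-step structure — the data layout, function names and fixed velocity here are illustrative assumptions, not the Audeo codebase — step one produces a piano-roll diagram of which keys are down in each frame, and step two cleans that roll into synthesizer-ready note events with onsets, durations and loudness:

```python
from dataclasses import dataclass

N_KEYS = 88  # keys on a standard piano

@dataclass
class Note:
    key: int       # piano key index, 0-87
    onset: int     # frame where the key first goes down
    duration: int  # number of consecutive frames the key is held
    velocity: int  # loudness estimate, MIDI-style 0-127

def frames_to_roll(pressed_per_frame):
    """Step 1 stand-in: turn per-frame key detections (a real system would
    get these from a vision model) into a piano-roll diagram over time."""
    roll = [[False] * N_KEYS for _ in pressed_per_frame]
    for t, keys in enumerate(pressed_per_frame):
        for k in keys:
            roll[t][k] = True
    return roll

def roll_to_notes(roll, velocity=80):
    """Step 2 stand-in: clean the roll into note events a synthesizer can
    play, adding duration and a (here constant) velocity for each press."""
    notes = []
    for k in range(N_KEYS):
        t = 0
        while t < len(roll):
            if roll[t][k]:
                onset = t
                while t < len(roll) and roll[t][k]:
                    t += 1
                notes.append(Note(k, onset, t - onset, velocity))
            else:
                t += 1
    return notes

# Four frames: middle C (key 60) held for two frames, a rest, then E (64).
frames = [{60}, {60}, set(), {64}]
print(roll_to_notes(frames_to_roll(frames)))
```

The real second step also infers how strongly each key is pressed; a constant velocity is used above only to keep the sketch small.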

The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.

A video from Paul Barton’s YouTube channel with the sound removed.

The video from above with sound generated by Audeo.

Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.

“Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”

Paul Barton playing Scott Joplin’s “The Entertainer.” The sound was generated using Audeo with FluidSynth as the synthesizer.

Paul Barton playing Scott Joplin’s “The Entertainer.” The sound was generated using Audeo with PerfNet as the synthesizer.

Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.

More details about this project are available online.

“The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”

Kun Su and Xiulong Liu, both doctoral students in electrical and computer engineering, are co-authors on this paper. This research was funded by the Washington Research Foundation Innovation Fund as well as the applied mathematics and electrical and computer engineering departments.

For more information, contact Shlizerman at shlizee@uw.edu.

Scientists crack secrets of the monarch butterfly’s internal compass
Thu, 14 Apr 2016
“Now where was I going?” Photo: Flickr/Wikimedia Commons

Each fall, monarch butterflies across Canada and the United States turn their orange, black and white-mottled wings toward the Rio Grande and migrate over 2,000 miles to the relative warmth of central Mexico.

Photo: Monarch Watch

This journey, repeated instinctively by generations of monarchs, continues even as monarch numbers have declined due to loss of their sole larval food source — milkweed. But amid this sad news, a research team believes they have cracked the secrets of the internal, genetically encoded compass that the monarchs use to determine the direction — southwest — they should fly each fall.

“Their compass integrates two pieces of information — the time of day and the sun’s position on the horizon — to find the southerly direction,” said Eli Shlizerman, a University of Washington assistant professor.

While the nature of the monarch butterfly’s ability to integrate the time of day and the sun’s location in the sky is known from previous research, scientists have never understood how the monarch’s brain receives and processes this information. Shlizerman, who has joint appointments in the Department of Applied Mathematics and the Department of Electrical Engineering, partnered with colleagues at the University of Michigan and the University of Massachusetts to model how the monarch’s compass is organized within its brain.

“We wanted to understand how the monarch is processing these different types of information to yield this constant behavior — flying southwest each fall,” said Shlizerman, who is lead author on the team’s paper.

Monarchs use their large, complex eyes to monitor the sun’s position in the sky. But the sun’s position is not sufficient to determine direction. Each butterfly must also combine that information with the time of day to know where to go. Fortunately, like most animals, including humans, monarchs possess an internal clock based on the rhythmic expression of key genes. This clock maintains a daily pattern of physiology and behavior. In the monarch butterfly, the clock is centered in the antennae, and its information travels via neurons to the brain.

Biologists have previously studied the rhythmic patterns in monarch antennae that control the internal clock, as well as how their compound eyes decipher the sun’s position in the sky. Shlizerman’s collaborators, including Steven Reppert at the University of Massachusetts, recorded signals from antennae nerves in monarchs as they transmitted clock information to the brain, as well as light information from the eyes.

Shlizerman and colleagues modeled how the monarch brain integrates the time of day with the sun’s position in the sky. Photo: Eli Shlizerman

“We created a model that incorporated this information — how the antennae and eyes send this information to the brain,” said Shlizerman. “Our goal was to model what type of control mechanism would be at work within the brain, and then asked whether our model could guarantee sustained navigation in the southwest direction.”

In their model, two neural mechanisms — one inhibitory and one excitatory — controlled signals from clock genes in the antennae. Their model had a similar system in place to discern the sun’s position based on signals from the eyes. The balance between these control mechanisms would help the monarch brain decipher which direction was southwest.

Based on their model, it also appears that, when making course corrections, monarchs do not simply take the shortest turn to get back on route. Their model includes a unique feature — a separation point that would control whether the monarch turned right or left to head in the southwest direction.

“The location of this point in the monarch butterfly’s visual field changes throughout the day,” said Shlizerman. “And our model predicts that the monarch will not cross this point when it makes a course correction to head back southwest.”

Based on their simulations, if a monarch gets off course due to a gust of wind or an object in its path, it will turn in whichever direction won’t require it to cross the separation point.
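A toy version of that turning rule can make the idea concrete. The angle convention, the headings and the separation azimuth below are illustrative assumptions, not the published model: the sketch simply picks the signed turn that reaches the target heading without sweeping across the separation point, going the long way around if the shortest arc would cross it.

```python
import math

def ang_diff(a, b):
    """Signed smallest angle a - b in degrees, in [-180, 180).
    Convention: 0 = north, angles increase clockwise."""
    return (a - b + 180) % 360 - 180

def turn_direction(heading, target, separation):
    """Return the signed turn (degrees, clockwise positive) from `heading`
    to `target` that avoids sweeping across the `separation` azimuth."""
    short = ang_diff(target, heading)      # the shortest signed turn
    d_sep = ang_diff(separation, heading)  # where the separation point sits
    crosses = (0 < d_sep < short) if short > 0 else (short < d_sep < 0)
    if crosses:
        # The short arc would cross the separation point: go the long way.
        return short - math.copysign(360, short)
    return short

# Hypothetical numbers: a monarch heading north (0°) wants southwest (225°).
# With the separation point at east (90°), the short counterclockwise turn
# through west is fine; with it at west (270°), the model turns clockwise
# the long way instead.
print(turn_direction(0, 225, 90))
print(turn_direction(0, 225, 270))
```

The published model places the separation point in the butterfly’s visual field and moves it through the day; this sketch only captures the "never cross it" rule in fixed coordinates.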

Additional studies would need to confirm whether the researchers’ model is consistent with monarch butterfly brain anatomy, physiology and behavior. So far, aspects of their model, such as the separation point, seem consistent with observed behaviors.

“In experiments with monarchs at different times of the day, you do see occasions where their turns in course corrections are unusually long, slow or meandering,” said Shlizerman. “These could be cases where they can’t do a shorter turn because it would require crossing the separation point.”

Their model also suggests a simple explanation for why monarch butterflies are able to reverse course in the spring and head northeast back to the United States and Canada. The four neural mechanisms that transmit information about the clock and the sun’s position would simply need to reverse direction.

“And when that happens, their compass points northeast instead of southwest,” said Shlizerman. “It’s a simple, robust system to explain how these butterflies — generation after generation — make this remarkable migration.”

In addition to Reppert, other co-authors on the paper were James Phillips-Portillo at the University of Massachusetts and Daniel Forger at the University of Michigan. Shlizerman’s work was funded by the National Science Foundation and the Washington Research Fund.


###

For more information, contact Shlizerman at 206-543-6658 or shlizee@uw.edu.

Grant number: DMS-1361145
