Faculty from Allen School, Evans School tapped for NSF institutes on artificial intelligence
Aug. 26, 2020

University of Washington faculty are part of two new National Science Foundation institutes devoted to artificial intelligence research.

Ann Bostrom, a professor in the Evans School of Public Policy and Governance, will be part of the AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, led by the University of Oklahoma. Faculty in the Paul G. Allen School of Computer Science & Engineering and an associate professor of statistics will be part of the AI Institute for Foundations of Machine Learning, led by the University of Texas at Austin.

The NSF on Wednesday announced five institutes in all, based at research universities around the country and part of a collaboration among the U.S. departments of Agriculture, Homeland Security and Transportation. The institutes aim to accelerate research, expand America’s workforce and transform society in the coming decades. Each institute receives $20 million in NSF funding over five years.

The National Science Foundation has announced new AI institutes at universities around the country. UW faculty are affiliated with institutes based at the University of Texas and the University of Oklahoma. Photo: National Science Foundation

The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography assembles researchers in machine learning, atmospheric and ocean science and risk communication to develop user-driven, trustworthy AI that addresses pressing concerns in weather, climate and coastal hazards prediction.

“In collaboration with our colleagues in this new institute, the risk communication research team will examine how AI information influences trust and use of AI over time by decision makers in ecological and water resource management, weather forecasting and emergency management,” Bostrom said. “It’s an exciting opportunity to advance fundamental research on mental models and perceptions of AI in environmental science contexts that have critical consequences for all of us.”

In addition to the UW and the University of Oklahoma, other participating institutions are Colorado State University; the University at Albany, State University of New York; North Carolina State University; Texas A&M University-Corpus Christi; Del Mar College; the National Center for Atmospheric Research; and private industry partners including Google, IBM, NVIDIA and Disaster Tech.

Amy McGovern, a professor of computer science and meteorology at the University of Oklahoma and lead researcher for this NSF institute, said the long-term goal is to apply AI to a broad array of environmental challenges.

“This institute is a convergent center that will create trustworthy AI for environmental science, revolutionize prediction and understanding of high-impact weather and ocean hazards, and benefit society by protecting lives and property,” McGovern said. “Leading experts from AI, atmospheric and ocean science, risk communication, and education will work synergistically to develop and test trustworthy AI methods that will transform our understanding and prediction of the environment.”

The NSF Institute for Foundations of Machine Learning will focus on major theoretical challenges in AI, including next-generation algorithms for deep learning, neural architecture optimization, and efficient robust statistics.

At their core, tools from machine learning still rely on models and algorithms that are often ill-equipped to process dynamic, complex datasets. For example, algorithms designed to help machines recognize, categorize and label images can’t keep up with the massive amount of video data people upload to the internet every day.

“This institute tackles the foundational challenges that need to be solved to keep AI on its current trajectory and maximize its impact on science and technology,” said Oh, an associate professor in the Allen School. “We plan to develop a toolkit of advanced algorithms for deep learning, create new methods for coping with the dynamic and noisy nature of training datasets, learn how to exploit structure in real-world data, and target more complex and real-world objectives. These four goals will help solve research challenges in multiple areas, including medical imaging and robot navigation.”

Wichita State University and Microsoft Research are also participating in this institute.

NSF’s history of investment in AI research and workforce development “paved the way for many of the breakthrough commercial technologies permeating and driving society today,” said NSF Director Sethuraman Panchanathan. “NSF invests more than $500 million in AI research annually. We are supporting five NSF AI Institutes this year, with more to follow, creating hubs for academia, industry, and government to collaborate on profound discoveries and develop new capabilities to advance American competitiveness for decades to come.”

The other NSF institutes announced Wednesday are the AI Institute for Student-AI Teaming, led by the University of Colorado Boulder; the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing, led by the University of Illinois Urbana-Champaign; and the AI Institute for Artificial Intelligence and Fundamental Interactions, led by the Massachusetts Institute of Technology.

For more information on the NSF AI institutes, visit www.nsf.gov.


Adapted from press releases from the National Science Foundation, the University of Oklahoma and the University of Texas at Austin.

What makes Bach sound like Bach? New dataset teaches algorithms classical music
Nov. 30, 2016
MusicNet is a new publicly available dataset from University of Washington researchers that labels each note of 330 classical compositions in ways that can teach machine learning algorithms about the basic structure of music. Photo: flickr

The composer Johann Sebastian Bach left behind an unfinished fugue upon his death, either as an unfinished work or perhaps as a puzzle for future composers to solve.

A classical music dataset released Wednesday by UW researchers, which enables machine learning algorithms to learn the features of classical music from scratch, raises the likelihood that a computer could expertly finish the job.

MusicNet is the first publicly available large-scale classical music dataset with curated fine-level annotations. It’s designed to allow machine learning researchers and algorithms to tackle a wide range of open challenges, from note prediction to automated music transcription to offering listening recommendations based on the structure of a song a person likes, instead of relying on generic tags or what other customers have purchased.

“At a high level, we’re interested in what makes music appealing to the ears, how we can better understand composition, or the essence of what makes Bach sound like Bach. It can also help enable practical applications that remain challenging, like automatic transcription of a live performance into a written score,” said Sham Kakade, a UW associate professor of computer science and engineering and of statistics.

“We hope MusicNet can spur creativity and practical advances in the fields of machine learning and music composition in many ways,” he said.

Described in a paper published Nov. 30 in the arXiv pre-print repository, MusicNet is a collection of 330 freely licensed classical music recordings with annotations marking the precise timing of each individual note, the instrument that plays the note and its position in the composition’s metrical structure. It includes more than 1 million individual labels from 34 hours of chamber music performances that can train computer algorithms to deconstruct, understand, predict and reassemble components of classical music.
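The labeling scheme can be pictured with a small sketch. The field names, types and query helper below are illustrative assumptions for this article, not MusicNet's actual on-disk format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NoteLabel:
    """Hypothetical per-note annotation: timing, instrument, pitch, meter."""
    start: float      # note onset, seconds into the recording
    end: float        # note release, seconds into the recording
    instrument: str   # e.g. "violin"
    midi_note: int    # e.g. 69 == A4
    measure: int      # position in the composition's metrical structure
    beat: float

def notes_at(labels, t):
    """Return all labels whose interval contains time t (linear scan for clarity)."""
    return [label for label in labels if label.start <= t < label.end]

labels = [
    NoteLabel(0.00, 0.50, "cello", 48, 1, 1.0),
    NoteLabel(0.25, 0.75, "violin", 69, 1, 1.5),
]
print(notes_at(labels, 0.30))  # both notes sound at t = 0.30
```

A query like `notes_at` is what "more than 1 million individual labels" makes possible at scale: for any instant in any of the 330 recordings, the dataset can say which instruments are sounding which pitches.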

“The music research community has been working for decades on hand-crafting sophisticated audio features for music analysis. We built MusicNet to give researchers a large labelled dataset to automatically learn more expressive audio features, which show potential to radically change the state of the art for a wide range of music analysis tasks,” said Zaid Harchaoui, a UW assistant professor of statistics.

It’s similar in design to ImageNet, a public dataset that revolutionized the field of computer vision by labeling basic objects, from penguins to parked cars to people, in millions of photographs. This vast repository of visual data that computer algorithms can learn from has enabled huge strides in everything from image searching to self-driving cars to algorithms that recognize your face in a photo album.

“An enormous amount of the excitement around artificial intelligence in the last five years has been driven by supervised learning with really big datasets, but it hasn’t been obvious how to label music,” said lead author John Thickstun, a UW computer science and engineering doctoral student.

“You need to be able to say that from 3 seconds and 50 milliseconds to 3 seconds and 78 milliseconds, this instrument is playing an A. But that’s impractical or impossible for even an expert musician to track with that degree of accuracy.”

The UW research team overcame that challenge by applying a technique called dynamic time warping, which aligns similar content happening at different speeds, to classical music performances. This allowed them to sync a real performance, such as Beethoven’s “Serioso” string quartet, to a synthesized version of the same piece that already contained the desired musical notations and scoring in digital form.

Time warping and mapping that digital scoring back onto the original performance yields the precise timing and details of individual notes that make it easier for machine learning algorithms to learn from musical data.
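As a rough illustration of the alignment idea (a textbook dynamic-time-warping routine, not the team's actual pipeline), the algorithm computes a minimum-cost correspondence between two feature sequences that unfold at different speeds:

```python
import numpy as np

def dtw_align(a, b):
    """Classic dynamic time warping between two feature sequences.

    a, b: 2-D arrays of shape (length, n_features). Returns the list of
    (i, j) index pairs on the optimal alignment path and its total cost.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the far corner to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1], cost[n, m]
```

Running this on a sequence and a time-stretched copy of itself recovers their correspondence at zero cost; in a setting like MusicNet's, the two sequences would be audio-derived features of the real performance and of the synthesized score.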

In their arXiv paper, the UW research team tested the ability of some common end-to-end deep learning algorithms used in speech recognition and other applications to predict missing notes from compositions. They are releasing the dataset so machine learning researchers and music hobbyists can adapt or develop their own algorithms to advance music transcription, composition, research or recommendations.
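The note-prediction task can be framed as multi-label classification over audio frames: given features for one slice of audio, predict which pitches are sounding. The toy sketch below uses synthetic data and a plain logistic-regression model standing in for the paper's deep networks; the dimensions and data-generation step are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real task: X holds spectrogram-like feature frames,
# Y holds multi-hot targets over a small range of pitches (1 = that pitch
# sounds in the frame). The synthetic targets come from a random linear rule.
n_frames, n_features, n_pitches = 200, 32, 12
W_true = rng.normal(size=(n_features, n_pitches))
X = rng.normal(size=(n_frames, n_features))
Y = (X @ W_true > 0).astype(float)

# Fit one logistic regression per pitch with full-batch gradient descent.
W = np.zeros((n_features, n_pitches))
for _ in range(500):
    P = 1.0 / (1.0 + np.exp(-(X @ W)))    # per-frame, per-pitch probabilities
    W -= 0.1 * X.T @ (P - Y) / n_frames   # gradient of the mean logistic loss

accuracy = ((P > 0.5) == Y).mean()
print(f"frame-level label accuracy: {accuracy:.2f}")
```

The deep models evaluated in the paper replace the linear map with learned feature extractors, but the supervision signal, a pitch-presence label per moment of audio, is exactly what MusicNet's fine-grained annotations provide.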

“No one’s really been able to extract the properties of music in this way, which opens so many opportunities for creative play,” said Kakade.

For instance, one could imagine asking your computer to make up a performance that’s similar to songs you’ve listened to, or humming a melody and telling it to make a fugue on command.

“I’m really interested in the artistic opportunities. Any composer who crafts their art with the assistance of a computer, which includes many modern musicians, could use these tools,” said Thickstun. “If the machine has a higher understanding of what they’re trying to do, that just gives the artist more power.”

This research was funded by the Washington Research Foundation and the Canadian Institute for Advanced Research (CIFAR), where Harchaoui is an associate fellow.

For more information, contact the research team at musicnet@cs.washington.edu.
