Ali Farhadi – UW News

Three UW scientists awarded Sloan Fellowships for early-career research
Feb. 21, 2017

Three faculty members at the University of Washington have been awarded early-career fellowships from the Alfred P. Sloan Foundation. The new Sloan Fellows, announced Feb. 21, are Ali Farhadi, assistant professor of computer science and engineering; Emily Levesque, assistant professor of astronomy; and John Tuthill, assistant professor of physiology and biophysics.

Open to scholars in eight scientific and technical fields (chemistry, computer science, economics, mathematics, molecular biology, neuroscience, ocean sciences and physics), the fellowships honor those early-career scholars whose achievements mark them as the next generation of scientific leaders.

The 126 fellowships were awarded in close coordination with the scientific community. Candidates are nominated by their fellow scientists, and winning fellows are selected by independent panels of senior scholars based on each candidate's research accomplishments, creativity and potential to become a leader in his or her field. Each fellow will receive $60,000 to apply toward research endeavors.

This year's fellows come from 60 institutions across the United States and Canada, spanning fields from evolutionary biology to data science. The new Sloan Fellows at the UW reflect this diversity, probing complex questions in machine learning, stellar astrophysics and neuroscience.

Ali Farhadi

In the Department of Computer Science & Engineering, Farhadi focuses on computer vision, machine learning, the intersection of natural language and vision, analysis of the role of semantics in visual understanding, and visual reasoning. His work seeks to enable computers to perform visual tasks that human brains perform seamlessly, from intuiting why an "abnormal" image looks strange to predicting how objects will move if acted upon and understanding actions and behaviors in a scene.

As the senior research manager for the computer vision group at the Seattle-based Allen Institute for Artificial Intelligence, Farhadi also leads a research project there. The project focuses on the intersection of artificial intelligence and computer vision and involves extracting knowledge from images, diagrams and videos; designing visual reasoning and planning algorithms; and parsing visual data.

Emily Levesque

In the Department of Astronomy, Levesque studies the behavior, composition and life cycles of "massive" stars. Some of her targets are stellar behemoths in our own neighborhood, like Betelgeuse, while others straddle the edge of the visible universe.

Massive stars, which are at least eight times more massive than our own sun, harbor clues about our universe. Thanks to the light they put out and the gases they ionize, massive stars account for the vast majority of light astronomers observe in other young, star-forming galaxies. Levesque's data can help astronomers understand how these stars form, evolve and interact with the galaxies where they are born. She also studies how massive stars die, usually as explosive supernovae. Since some supernovae also belch out bursts of gamma rays powerful enough to be observed even from the deaths of some of the first stars, her data from these events can help scientists envision the infancy of the universe.

Levesque makes her observations on telescopes in Chile, Hawaii and the American Southwest, including the Apache Point Observatory 3.5-meter telescope in New Mexico, in which the UW is a founding partner. She also helps improve the methods astronomers use to analyze data gathered on these shared platforms. Astronomy has no shortage of cosmological questions, and Levesque wants to ensure that our telescopic divining rods give us clear answers.

John Tuthill

Tuthill, a UW Medicine scientist, explores how the nervous system detects and decodes mechanical signals to guide movement and behavior. From the rat whose whiskers let it slip through building eaves to an insect landing on a leaf, animals use mechanosensory cues to navigate.

The Tuthill Lab studies these questions in the tiny nervous system of the fruit fly, Drosophila. The lab records neural activity from the fruit fly brain with electrophysiology and two-photon imaging while manipulating neural circuit function with advanced genetic tools. By combining these techniques with fine-scale analysis of fly behavior, the lab seeks to understand how activity in neural circuits senses and coordinates body movements.

The Tuthill Lab hopes to identify fundamental principles of sensory and motor function that could illuminate the mechanisms underlying human movement disorders and pathological sensory conditions, such as chronic pain. Despite the apparent differences between flies and humans, the basic building blocks of the nervous system are the same.

While he was a doctoral student at the Howard Hughes Medical Institute's Janelia Research Campus, Tuthill studied how the fly brain detects visual motion. Later, as a postdoctoral fellow at Harvard Medical School, he pioneered studies of touch processing in the fly. He joined the UW medical school faculty in 2016.

###

For more information, contact James Urton at the UW Office of News & Information at 206-543-2580 or jurton@uw.edu.

AI system solves SAT geometry questions as well as average human test taker
Sept. 21, 2015

Photo: Aaron Escobar, flickr

Researchers from the Allen Institute for Artificial Intelligence (AI2) and the University of Washington have created an artificial intelligence (AI) system that can solve SAT geometry questions as well as the average American 11th-grade student, a breakthrough in AI research.

This system, called GeoS, uses a combination of computer vision to interpret diagrams, natural language processing to read and understand text, and a geometric solver to achieve 49 percent accuracy on official SAT test questions. If these results were extrapolated to the entire Math SAT test, the computer would have achieved an SAT score of roughly 500 (out of 800), the average test score for 2015.

A paper outlining the research was a joint effort between the UW Computer Science & Engineering department and AI2.

These results, presented at the Conference on Empirical Methods in Natural Language Processing (EMNLP) in Lisbon, Portugal, were achieved by GeoS solving unaltered SAT questions that it had never seen before and that required an understanding of:

* Implicit relationships

* Ambiguous references

* The relationships between diagrams and natural-language text

"Unlike the Turing Test, standardized tests such as the SAT provide us today with a way to measure a machine's ability to reason and to compare its abilities with that of a human," said Oren Etzioni, CEO of AI2. "Much of what we understand from text and graphics is not explicitly stated, and requires far more knowledge than we appreciate. Creating a system to be able to successfully take these tests is challenging, and we are proud to achieve these unprecedented results."

Said Ali Farhadi, senior research manager for Vision at AI2 and UW assistant professor of computer science and engineering, "We are excited about GeoS's performance on real-world tasks. Our biggest challenge was converting the question to a computer-understandable language. One needs to go beyond standard pattern-matching approaches for problems like solving geometry questions that require in-depth understanding of text, diagram and reasoning."

How GeoS Works

GeoS is the first end-to-end system that solves SAT plane geometry problems. It first interprets a geometry question, using the diagram and text in concert to generate the best possible logical expressions of the problem, then sends those expressions to a geometric solver. Finally, it compares the solver's answer to the question's multiple-choice options.
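
To make the description above concrete, here is a minimal, runnable Python sketch of the same flow on a toy circle problem: pull a fact out of the question text, hand it to a small solver, and pick the closest multiple-choice option or abstain. The regex-based "parser" and the function names are illustrative stand-ins, not GeoS's actual code, and the diagram-interpretation step is omitted entirely.

```python
import math
import re

def parse_text(text: str) -> dict:
    """Toy text interpreter: pull "circle O has a radius of 5"-style facts with a regex.
    (GeoS derives much richer logical expressions from both the text and the diagram.)"""
    facts = {}
    match = re.search(r"radius of (\d+(?:\.\d+)?)", text)
    if match:
        facts["radius"] = float(match.group(1))
    return facts

def solve(facts: dict, asked: str):
    """Toy geometric solver for two circle quantities."""
    if "radius" in facts and asked == "area":
        return math.pi * facts["radius"] ** 2
    if "radius" in facts and asked == "circumference":
        return 2 * math.pi * facts["radius"]
    return None  # not enough information to answer

def answer(text: str, asked: str, choices: list):
    """Interpret, solve, then pick the closest multiple-choice option; abstain otherwise."""
    value = solve(parse_text(text), asked)
    if value is None:
        return None
    return min(choices, key=lambda c: abs(c - value))

print(answer("In the figure, circle O has a radius of 5. What is the area of circle O?",
             "area", [25.0, 31.4, 78.5, 100.0]))  # -> 78.5
```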

A demonstration of the system's problem-solving process is available online.

This process is complicated by the fact that SAT questions contain many unstated assumptions.

Photo: AI2/UW

For example, in the SAT problem at right, there are several unstated assumptions, such as the fact that lines BD and AC intersect at E, that "circle O has a radius of 5" is the same as "circle O radius equals 5" and that the drawing may or may not be to scale.
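
As a small illustration of what making those assumptions explicit can look like, the snippet below writes the hidden facts as normalized logical literals. The predicate names are hypothetical and only meant to echo the idea; they are not GeoS's actual logical language.

```python
# Hypothetical normalized literals for the example above (illustrative only).
# Both surface forms of the radius statement map to one canonical fact, and the
# intersection at E, never stated in the text, has to be read off the diagram.
literals = [
    ("Equals", ("RadiusOf", "O"), 5),      # "circle O has a radius of 5" / "circle O radius equals 5"
    ("IntersectAt", ("Line", "B", "D"), ("Line", "A", "C"), "E"),  # implicit, diagram-only fact
]
print(literals)
```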

GeoS had a 96 percent accuracy rate on the questions it was confident enough to answer; knowing when it has enough information to answer is itself an important dimension of learning. Today, GeoS can solve plane geometry questions; AI2 aims to solve the full set of SAT math questions within the next three years.
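
To see why answering selectively matters, here is a short sketch, using made-up scores rather than GeoS data, of how raising a confidence threshold trades coverage for accuracy.

```python
def selective_accuracy(predictions, threshold):
    """predictions: list of (is_correct, confidence) pairs, one per question."""
    attempted = [(correct, conf) for correct, conf in predictions if conf >= threshold]
    if not attempted:
        return 0.0, 0.0
    accuracy = sum(correct for correct, _ in attempted) / len(attempted)
    coverage = len(attempted) / len(predictions)
    return accuracy, coverage

# Hypothetical results: 1 = answered correctly, second value = solver confidence.
results = [(1, 0.95), (1, 0.90), (1, 0.80), (0, 0.40), (1, 0.70), (0, 0.30)]
print(selective_accuracy(results, threshold=0.6))  # answer fewer questions, nearly all correct
print(selective_accuracy(results, threshold=0.0))  # answer everything, accuracy drops
```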

As part of AI2's commitment to sharing its research for the common good, all data sets and software are publicly available.

AI2 is also building systems that can tackle science tests, which require a knowledge base that includes elements of the unstated, common-sense knowledge that humans accumulate over their lives. This effort is known as the Aristo project.

Co-authors include lead author Minjoon Seo, a UW computer science and engineering doctoral student; UW electrical engineering assistant research professor Hannaneh Hajishirzi; and former UW undergraduate student Clint Malcolm.

About AI2

The Allen Institute for Artificial Intelligence (AI2) was founded in 2014 with the singular focus of conducting high-impact research and engineering in the field of artificial intelligence, all for the common good. AI2 is the creation of Paul Allen, Microsoft cofounder, and is led by Dr. Oren Etzioni, a renowned researcher in the field of AI. AI2 employs more than 35 top-notch researchers and engineers, attracting individuals of varied interests and backgrounds from across the globe. AI2 prides itself on the diversity and collaboration of this team, and takes a results-oriented approach to complex challenges in AI.

About University of Washington Computer Science & Engineering (UW CSE):

UW CSE educates tomorrow's innovators, conducts high-impact research, transfers new discoveries to society and creates opportunities for faculty and students to push the boundaries of a rapidly expanding field while developing solutions to humanity's greatest challenges.

For more information, contact Hamilton McCulloh at 206-957-4260 or hamiltonm@greenrubino.com or Jennifer Langston at 206-543-2580 or jlangst@uw.edu.

New computer program aims to teach itself everything about anything
June 12, 2014

In today's digitally driven world, access to information appears limitless.

But when you have something specific in mind that you don't know, like the name of that niche kitchen tool you saw at a friend's house, it can be surprisingly hard to sift through the volume of information online and know how to search for it. Or the opposite problem can occur: we can look up anything on the Internet, but how can we be sure we are finding everything about the topic without spending hours in front of the computer?

Some of the many variations the new program has learned for three different concepts: "horse," "dog" and "walking."

Computer scientists from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program that teaches itself everything there is to know about any visual concept. The program, called LEVAN, searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly in great detail.

“It is all about discovering associations between textual and visual data,” said Ali Farhadi, a UW assistant professor of computer science and engineering. “The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them.”

The research team will present the project and a related paper this month at the Computer Vision and Pattern Recognition annual conference in Columbus, Ohio.

The program learns which terms are relevant by looking at the content of the images found on the Web and identifying characteristic patterns across them using object recognition algorithms. It’s different from online image libraries because it draws upon a rich set of phrases to understand and tag photos by their content and pixel arrangements, not simply by words displayed in captions.

Users can browse the existing library of roughly 175 concepts. Existing concepts range from “airline” to “window,” and include “beautiful,” “breakfast,” “shiny,” “cancer,” “innovation,” “skateboarding,” “robot,” and the researchers’ first-ever input, “horse.”

If the concept you're looking for doesn't exist, you can submit any search term and the program will automatically begin generating an exhaustive list of subcategory images that relate to that concept. For example, a search for "dog" brings up the obvious collection of subcategories: Photos of "Chihuahua dog," "black dog," "swimming dog," "scruffy dog," "greyhound dog." But also "dog nose," "dog bowl," "sad dog," "ugliest dog," "hot dog" and even "down dog," as in the yoga pose.

The technique works by searching the text from millions of books written in English and available on Google Books, scouring for every occurrence of the concept in the entire digital library. Then, an algorithm filters out words that aren't visual. For example, with the concept "horse," the algorithm would keep phrases such as "jumping horse," "eating horse" and "barrel horse," but would exclude non-visual phrases such as "my horse" and "last horse."
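
In code, that filtering step might look roughly like the sketch below: collect "modifier + concept" bigrams from book text and drop the ones that are clearly not visual. In the real system the visualness judgment is learned from image data; the hand-written stop list here is only a stand-in to show the shape of the computation.

```python
import re
from collections import Counter

# Stand-in for a learned "visualness" test: drop function-word modifiers.
NON_VISUAL_MODIFIERS = {"my", "your", "his", "her", "their", "our", "the", "a",
                        "this", "that", "last", "first", "same", "other", "any"}

def candidate_phrases(concept: str, book_text: str) -> list:
    """Collect "modifier + concept" bigrams and keep the plausibly visual ones."""
    modifiers = re.findall(rf"(\w+) {concept}\b", book_text.lower())
    counts = Counter(m for m in modifiers if m not in NON_VISUAL_MODIFIERS)
    return [f"{m} {concept}" for m, _ in counts.most_common()]

text = ("A jumping horse cleared the fence while my horse grazed; "
        "an eating horse ignored the barrel horse, and the last horse slept.")
print(candidate_phrases("horse", text))  # -> ['jumping horse', 'eating horse', 'barrel horse']
```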

Once it has learned which phrases are relevant, the program does an image search on the Web, looking for uniformity in appearance among the photos retrieved. When the program is trained to find relevant images of, say, “jumping horse,” it then recognizes all images associated with this phrase.
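
The recognition step can be sketched, under heavy simplification, as training a simple per-phrase detector on the images retrieved for that phrase. Systems of this era typically used HOG or other engineered image features; the random vectors below merely stand in for a phrase whose retrieved images look alike versus a diffuse set of background images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def train_phrase_detector(phrase_feats, background_feats):
    """Fit a linear detector: images retrieved for the phrase vs. random background images."""
    X = np.vstack([phrase_feats, background_feats])
    y = np.concatenate([np.ones(len(phrase_feats)), np.zeros(len(background_feats))])
    return LogisticRegression(max_iter=1000).fit(X, y)

# Stand-in features: images found for "jumping horse" cluster together; background is diffuse.
jumping_horse_images = rng.normal(loc=1.0, scale=0.3, size=(50, 16))
background_images = rng.normal(loc=0.0, scale=1.0, size=(200, 16))

detector = train_phrase_detector(jumping_horse_images, background_images)
new_image = rng.normal(loc=1.0, scale=0.3, size=(1, 16))
print(detector.predict_proba(new_image)[0, 1])  # high score -> tag the image "jumping horse"
```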

“Major information resources such as dictionaries and encyclopedias are moving toward the direction of showing users visual information because it is easier to comprehend and much faster to browse through concepts. However, they have limited coverage as they are often manually curated. The new program needs no human supervision, and thus can automatically learn the visual knowledge for any concept,” said Santosh Divvala, a research scientist at the Allen Institute for Artificial Intelligence and an affiliate scientist in computer science and engineering at the UW.

The research team also includes Carlos Guestrin, a UW professor of computer science and engineering. The researchers launched the program in March with only a handful of concepts and have watched it grow since then to tag more than 13 million images with 65,000 different phrases.

Right now, the program is limited in how fast it can learn about a concept because of the computational power it takes to process each query, up to 12 hours for some broad concepts. The researchers are working on increasing the processing speed and capabilities.

The team wants the open-source program to be both an educational tool and an information bank for researchers in the computer vision community. The team also hopes to offer a smartphone app that can run the program to automatically parse out and categorize photos.

This research was funded by the U.S. Office of Naval Research, the National Science Foundation and the UW.

 

###

For more information, contact Farhadi at ali@cs.washington.edu or 206-221-8976 and Divvala at santosh@cs.washington.edu.
