Maya Cakmak – UW News /news Tue, 27 Oct 2020 18:42:44 +0000

Creating curious robots: UW researchers get Honda grant to build a mathematical model of curiosity /news/2018/10/25/curious-robots/ Thu, 25 Oct 2018 15:27:33 +0000

Curious Minded Machine Logo
Curious Minded Machine is a new initiative by Honda Research Institute USA, Inc. to design a system that learns continuously in a humanlike, curiosity-driven way. Photo: Honda Research Institute USA, Inc.

Children are curious because it helps them better understand their world. Now researchers are curious if the same is true for robots.

Honda Research Institute USA, Inc., today announced a new initiative, called Curious Minded Machine, which will design a robot or system that learns continuously in a humanlike, curiosity-driven way. Curious robots would be lifelong learners that could expand their list of skills without any additional training.

The University of Washington will lead one of three teams that will partner with the institute to explore the mechanisms behind curiosity and seek advances in artificial cognition. The UW-led team will receive $2.7 million over the next three years to generate a mathematical model of curiosity.

Siddhartha Srinivasa with his robot HERB (or “Home Exploring Robot Butler”). Photo: Dennis Wise/University of Washington

“We wish to explore several questions in our work,” said team leader Siddhartha Srinivasa, the Boeing Endowed Professor in the UW’s Paul G. Allen School of Computer Science & Engineering. “What is curiosity? Can we build a rich mathematical model that makes a robot curious? Will a curious robot be accepted more? Will we be more tolerant of its mistakes?”

Two other Allen School researchers, one of them assistant professor Maya Cakmak, bring their experience studying human–robot interactions, from designing programmable robots that users can personalize for specific tasks to developing ways for robots to perceive objects in an environment. An acting associate professor of psychology at the University of California, Santa Cruz, rounds out the team with expertise in the social science of how humans interact with robots.

“Our first step is to better understand curiosity in humans, starting from infants’ constant experimentation with their surroundings, to 4-year-olds asking why everything is the way it is, to adults’ interest in topics completely outside their professions,” Cakmak said. “Humans are intrinsically rewarded by new information even when that information is not necessarily applicable, but curiosity has long-term benefits. We would like to give robots similar benefits for being curious.”

After the researchers develop a model of curiosity, they hope to use it to create two separate robot prototypes: a social robot that interacts with people in an office building or a home, and an arm robot that could manipulate objects placed in front of it, like in an assembly line. The team argues that the same model could be used for both prototypes.

Maya Cakmak with a robot
Maya Cakmak with a PR2 robot. Cakmak’s research aims to make robots that can be used by a wide variety of users, each with unique needs. Photo: University of Washington

“Let’s say our model of curiosity rewards the robot for obtaining novel information that is irrelevant to its current task,” Cakmak said. “For the building robot this could manifest as taking a different path on the way back from a delivery. For the arm robot, this could result in the robot ‘playing’ with objects that are not part of its current task.”
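The article does not give the team's actual mathematical model, but the idea Cakmak describes, rewarding the robot for novel information, resembles the count-based exploration bonuses used in reinforcement learning. The following is only a rough sketch of that general technique; the class and state names are hypothetical, not from the Honda project:

```python
import math
from collections import defaultdict

class CuriosityBonus:
    """Toy count-based novelty bonus: states the robot has visited less
    often earn a larger intrinsic reward, nudging it to explore."""

    def __init__(self, scale=1.0):
        self.scale = scale
        self.visits = defaultdict(int)  # how many times each state was seen

    def reward(self, state):
        self.visits[state] += 1
        # Classic 1/sqrt(N) bonus: decays as a state grows familiar.
        return self.scale / math.sqrt(self.visits[state])

bonus = CuriosityBonus()
novel = bonus.reward("side-corridor")     # first visit: full bonus of 1.0
familiar = bonus.reward("side-corridor")  # repeat visit: bonus shrinks to ~0.71
```

Under a scheme like this, the delivery robot in Cakmak's example would collect a higher intrinsic reward for an unfamiliar return path than for the route it always takes.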

Curious robots, the team said, are a step toward making robots that can perform constantly changing tasks. For example, future caregiver robots will need to be able to adjust their jobs to meet patients’ fluctuating needs.

“We think that curious robots will not only be better at their jobs, but they will appeal more to people,” Cakmak said. “Prior work also shows that robots can spark curiosity in people, which would be a wonderful side effect of curious robots.”

The UW-led team’s efforts will dovetail with work by two other teams, led by the Massachusetts Institute of Technology and the University of Pennsylvania. These teams will address other areas, such as how robots perceive and interact with the world and how robots can predict upcoming actions. After the three-year program, Honda Research Institute will combine the work from all the teams to form the foundation for a future curious-minded robot.

“Honda is a pioneer in robotics research,” Srinivasa said. “Their ASIMO humanoid robot was a game-changer in locomotion, motion planning, control and AI. To date, it is one of the most impressive and intelligent robots I have ever seen. With this collaboration, we hope to bring about a similar wave of new excitement to human–robot interaction and the field of robotics.”

Five UW scientists awarded Sloan Fellowships for early-career research /news/2018/02/15/five-uw-scientists-awarded-sloan-fellowships-for-early-career-research/ Thu, 15 Feb 2018 15:06:09 +0000

Five faculty members at the University of Washington have been awarded early-career fellowships from the Alfred P. Sloan Foundation. The new Sloan Fellows, announced Feb. 15, include Maya Cakmak, assistant professor of computer science and engineering; Jiun-Haw Chu, assistant professor of clean energy and physics; Arka Majumdar, assistant professor of electrical engineering and physics; Jessica Werk, assistant professor of astronomy; and Chelsea Wood, assistant professor of aquatic and fishery sciences.

Open to scholars in eight scientific and technical fields — chemistry, computer science, economics, mathematics, molecular biology, neuroscience, ocean sciences and physics — the fellowships honor those early-career researchers whose achievements mark them as the next generation of scientific leaders.

The 126 fellows were selected in close coordination with the research community. Candidates are nominated by their peers, and fellows are selected by independent panels of senior scholars based on each candidate’s research accomplishments, creativity and potential to become a leader in his or her field. Each fellow will receive $65,000 to apply toward research endeavors.

This year’s fellows come from 53 institutions across the United States and Canada, spanning fields from evolutionary biology to data science. The new Sloan Fellows at the UW reflect this diversity, probing complex questions in robotics, quantum physics and the formation of galaxies.

Maya Cakmak. Photo: University of Washington

Cakmak, for example, directs a robotics lab where she studies human-robot interactions, end-user programming and assistive robotics. She aims to develop robots that can be programmed and controlled by diverse users.

“It’s about packaging robot capabilities at the right level and creating the right interface for different users,” said Cakmak.

Rather than aiming for a one-size-fits-all robot, Cakmak argues for customizing each robot to the unique needs, preferences and environments of users. Today, only expert roboticists can do that sort of customization. Cakmak aims to make robot programming accessible to a much wider audience. She believes this could be the key to mass adoption of robots and could democratize “robot programming” jobs of the future.

Jiun-Haw Chu. Photo: University of Washington

Chu focuses on the synthesis and characterization of materials with unconventional electronic and magnetic ground states, such as high-temperature superconductors and topological insulators. Simply put, Chu manufactures materials and measures their properties.

“My goal is to find more materials of this kind and study their properties to find why they come out this way, or if there are additional hidden properties that people don’t know about,” said Chu.

The goal is to understand and control these emergent quantum behaviors and apply them to energy and information technology.

Arka Majumdar. Photo: University of Washington

Majumdar is at the forefront of interdisciplinary research that combines quantum materials and nanophotonics. His research attempts to store light in an optical resonator to study its tiniest components. Majumdar is setting out to build quantum systems using light that can mimic the interactions between electrons in many of today’s technologies. That would pave the way for new materials and optical nanostructures that could revolutionize computing. Developing these technologies, however, can be very difficult.

“Our plan is to engineer new materials and new optical nanostructures to make photons interact with each other, which is a key element for performing computation with light, be it quantum or classical computing,” said Majumdar.

Jessica Werk. Photo: University of Washington

Werk is a kind of galaxy historian, studying matter on atomic scales to help understand how galaxies — and the universe as a whole — evolve. By aiming giant telescopes at the night sky, she uses spectrographs to study atoms billions of light-years away. Werk looks at the distribution of matter that exists both outside and inside galaxies. The outcome, she hopes, will help build a better understanding of our own cosmic origins.

“When I look at the sky I see lots of different atomic transitions that I’m trying to piece together into a coherent picture,” said Werk.

Chelsea Wood. Photo: University of Washington

Wood’s research explores the ecology of parasites and pathogens in a changing world. She is interested in how human impacts on ecosystems affect the transmission of parasites. Wood’s work has shown that disruption can alter what kinds of parasites are common and rare — increasing the abundance of some kinds of parasites and decreasing the abundance of others. The Sloan Fellowship will allow Wood and her team to look back in time at how parasite transmission changed as industrialization intensified human impacts on the oceans. She’ll accomplish this by examining parasites preserved in museum specimens — mainly fish floating perennially in ethanol — including many that are more than a century old.

“These fish are basically parasite time capsules,” said Wood.

By developing time profiles of parasite abundance, Wood will provide the world’s first glimpse of what parasite communities might have been like in a more “pristine” ocean.

###

For more information, contact Jackson Holtz at the UW News Office at 206-543-2580 or jjholtz@uw.edu.

Kids, parents alike worried about privacy with internet-connected toys /news/2017/05/10/kids-parents-alike-worried-about-privacy-with-internet-connected-toys/ Wed, 10 May 2017 15:53:02 +0000

Hello Barbie, CogniToys Dino and other toys connected to the internet can joke around with children and respond in surprising detail to questions posed by their young users. The toys record the voices of children who interact with them and store those recordings in the cloud, helping the toys become “smarter.”

As Wi-Fi-enabled toys like these compete for attention in the home, a new analysis finds that kids are unaware of their toys’ capabilities, and parents have numerous privacy concerns.

University of Washington researchers have conducted a study that explores the attitudes and concerns of both parents and children who play with internet-connected toys. Through a series of in-depth interviews and observations, the researchers found that kids didn’t know their toys were recording their conversations, and parents generally worried about their children’s privacy when they played with the toys.

CogniToys Dino, left, and Hello Barbie. Photo: University of Washington/Barbie

“These toys that can record and transmit are coming into a place that’s historically legally very well-protected — the home,” said co-lead author Emily McReynolds, associate director of the UW’s Tech Policy Lab. “People have different perspectives about their own privacy, but it’s crystallized when you give a toy to a child.”

The researchers presented their paper May 10.

Though internet-connected toys have taken off commercially, their growth in the market has not been without security breaches and public scrutiny. VTech, a company that produces tablets for children, was storing personal data of more than 200,000 children when its servers were breached in 2015. Earlier this year, German regulators banned an internet-connected doll over fears that personal data could be stolen.

It’s within this landscape that the 91探花team sought to understand the privacy concerns and expectations kids and parents have for these types of toys.

The researchers conducted interviews with nine parent-child pairs, asking each of them questions — ranging from whether a child liked the toy and would tell it a secret to whether a parent would buy the toy or share what their child said to it on social media.

They also observed the children, all aged 6 to 10, playing with Hello Barbie and CogniToys Dino. These toys were chosen for the study because they are among the industry leaders for their stated privacy measures. Hello Barbie, for example, has an extensive permissions process for parents when setting up the toy, and it has been complimented for its strong encryption practices.

The resulting paper highlights a wide selection of comments from kids and parents, then makes recommendations for toy designers and policymakers.

A screenshot of the Hello Barbie parent panel that allows parents to listen to their child’s responses to various questions that Barbie asks, as well as share them on social networks.

Most of the children participating in the study did not know the toys were recording their conversations. Additionally, the toys’ lifelike exteriors probably fueled the perception that they are trustworthy, the researchers said, whereas kids might not have the tendency to share secrets and personal information when communicating with similar tools not intended as toys, such as Siri and Alexa.

“The toys are a social agent where you might feel compelled to disclose things that you wouldn’t otherwise to a computer or cell phone. A toy has that social exterior which might fool you into being less secure on what you tell it,” said co-lead author Maya Cakmak, an assistant professor at the Allen School. “We have this concern for adults, and with children, they’re even more vulnerable.”

Some kids were troubled by the idea of their conversations being recorded. When one parent explained how the child’s conversation with the doll could end up being shared widely on the computer, the child responded: “That’s pretty scary.”

At minimum, toy designers should create a way for the devices to notify children when they are recording, the researchers said. Designers could consider recording notifications that are more humanlike, such as having Hello Barbie say, “I’ll remember everything you say to me” instead of a red recording light that might not make sense to a child in that context.

The study found that most parents were concerned about their child’s privacy when playing with the toys. They universally wanted parental controls such as the ability to disconnect Barbie from the internet or control the types of questions to which the toys will respond. The researchers recommend toy designers delete recordings after a week’s time, or give parents the ability to delete conversations permanently.
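The researchers' recommendation to delete recordings after a week amounts to a simple retention policy. Here is a minimal sketch of that idea; the function and data layout are illustrative assumptions, not any toy vendor's actual API:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)

def prune_recordings(recordings, now=None):
    """Keep only (timestamp, clip) pairs recorded within the last week;
    anything older falls outside the retention window and is dropped."""
    now = now or datetime.now(timezone.utc)
    return [(ts, clip) for ts, clip in recordings if now - ts <= RETENTION]

now = datetime(2017, 5, 10, tzinfo=timezone.utc)
recordings = [
    (now - timedelta(days=2), "what's your favorite color?"),  # recent: kept
    (now - timedelta(days=9), "i'll tell you a secret"),       # stale: deleted
]
kept = prune_recordings(recordings, now=now)
```

A real toy backend would run a job like this on a schedule and also honor the parental "delete now" control the researchers recommend.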

A related UW study demonstrated that video recordings that are filtered to preserve privacy can still allow a tele-operated robot to perform useful tasks, such as organizing objects on a table. That study also revealed that people are much less concerned about privacy — even for sensitive items that could reveal financial or medical information — when such filters are in place. Speech recordings on connected toys could similarly be filtered to remove identity information and encode the content of speech in less human-interpretable formats, preserving privacy while still allowing the toy to respond intelligibly.
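One crude analogue of removing identity information from speech, sketched here purely for illustration (the helper and its fixed identifier list are assumptions, not the study's method), is redacting known identifying strings from the stored transcript instead of keeping raw audio:

```python
import re

def redact_transcript(text, identifiers):
    """Toy privacy filter: replace known identifying strings in a speech
    transcript with a placeholder, keeping the rest of the content usable."""
    for ident in identifiers:
        text = re.sub(re.escape(ident), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

redact_transcript("My name is Ada and I live on Elm Street",
                  identifiers=["Ada", "Elm Street"])
# → "My name is [REDACTED] and I live on [REDACTED]"
```

A production system would need proper named-entity recognition rather than a fixed string list, but the principle is the same: strip who said it while keeping what was said.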

The researchers hope this initial look into the privacy concerns of parents and kids will continue to inform both privacy laws and toy designers, given that such devices will only continue to fill the market and home.

“It’s inevitable that kids’ toys, as with everything else in society, will have computers in them, so it’s important to design them with security measures in mind,” said co-lead author Franziska Roesner, a UW assistant professor at the Allen School. “I hope the security research community continues to study these specific user groups, like children, that we don’t necessarily study in-depth.”

Other co-authors are Sarah Hubbard and Timothy Lau of the Information School and Aditya Saraf of the Allen School of Computer Science & Engineering.

The study was funded by the Consumer Privacy Rights Fund at the Rose Foundation for Communities and the Environment and by the UW’s Tech Policy Lab.

###

For more information, contact Emily McReynolds at emcr@uw.edu or 206-685-4533.

Ask the crowd: Robots learn faster, better with online helpers /news/2014/06/26/ask-the-crowd-robots-learn-faster-better-with-online-helpers/ Thu, 26 Jun 2014 15:58:38 +0000

Sometimes it takes a village to teach a robot.

University of Washington computer scientists have shown that crowdsourcing can be a quick and effective way to teach a robot how to complete tasks. Instead of learning from just one human, robots could one day query the larger online community, asking for instructions or input on the best way to set the table or water the garden.

The UW’s robot builds a turtle model. Photo: University of Washington

The research team presented its findings at the 2014 Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation in Hong Kong in early June.

“We’re trying to create a method for a robot to seek help from the whole world when it’s puzzled by something,” said Rajesh Rao, an associate professor of computer science and engineering and director of the Center for Sensorimotor Neural Engineering at the UW. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”

Learning by imitating a human is a proven approach to teach a robot to perform tasks, but it can take a lot of time. Imagine having to teach a robot how to load the dishwasher — it might take many repetitious lessons for the robot to learn how to hold different types of cookware and cutlery and how to most efficiently fill the machine.

But if the robot could learn a task’s basic steps, then ask the online community for additional input, it could collect more data on how to complete this task efficiently and correctly.

“Because our robots use machine-learning techniques, they require a lot of data to build accurate models of the task. The more data they have, the better model they can build. Our solution is to get that data from crowdsourcing,” said Maya Cakmak, a UW assistant professor of computer science and engineering.

The research team, led by professors Rao and Cakmak, also includes UW computer science and engineering graduate student Michael Jae-Yoon Chung and undergraduate Maxwell Forbes. The team designed a study that taps into the online crowdsourcing community to teach a robot a model-building task. To begin, study participants built a simple model — a car, tree, turtle and snake, among others — out of colored Lego blocks. Then, they asked the robot to build a similar object. But based on the few examples provided by the participants, the robot was unable to build complete models.

This image shows some of the crowdsourced designs for the word ‘turtle.’ Photo: University of Washington

To gather more input about building the objects, the robot turned to the crowd. The researchers hired people on Amazon Mechanical Turk, a crowdsourcing site, to build similar models of a car, tree, turtle, snake and other objects. From more than 100 crowd-generated models of each shape, the robot searched for the best models to build based on difficulty of construction, similarity to the original and the online community’s ratings of the models.
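The article names three selection criteria (construction difficulty, similarity to the original and crowd ratings) without giving the actual scoring rule, so the sketch below is only an illustrative guess at what a selection step of that shape could look like; the field names and weights are invented:

```python
def score_model(model, weights=(0.4, 0.4, 0.2)):
    """Combine three normalized (0..1) features into one score:
    similarity to the participant's original, ease of construction
    (1 - difficulty), and the crowd's rating of the model."""
    w_sim, w_ease, w_rating = weights
    return (w_sim * model["similarity"]
            + w_ease * (1.0 - model["difficulty"])
            + w_rating * model["rating"])

candidates = [
    {"name": "turtle-17", "similarity": 0.9, "difficulty": 0.8, "rating": 0.7},
    {"name": "turtle-42", "similarity": 0.8, "difficulty": 0.3, "rating": 0.8},
]
# The slightly less faithful but much easier-to-build model wins.
best = max(candidates, key=score_model)
```

Weighting ease of construction against similarity is exactly the trade-off the goal-based imitation idea below describes: a simpler model that still reads as a turtle beats a faithful one the robot cannot build.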

The robot then built the best models of each participant’s shape.

This type of learning is called “goal-based imitation,” and it leverages the growing ability of robots to infer what their human operators want, relying on the robot to come up with the best possible way of achieving the goal when considering factors such as time and difficulty. For example, a robot might “watch” a human building a turtle model, infer the important qualities to carry over, then build a model that resembles the original, but is perhaps simpler so it’s easier for the robot to construct.

“The end result is still a turtle, but it’s something that is manageable for the robot and similar enough to the original model, so it achieves the same goal,” Cakmak explained.

This image shows some of the crowdsourced designs for the word ‘person.’ Photo: University of Washington

Study participants generally preferred crowdsourced versions that looked the most like their original designs. In general, the robot’s final models were simpler than the starting designs — and it was able to successfully build these models, which wasn’t always the case when starting with the study participants’ initial designs.

The team applied the same idea to learning manipulation actions on a two-armed robot. This time, users physically demonstrated new actions to the robot. Then, the robot imagined new scenarios in which it did not know how to perform those actions. Using abstract, interactive visualizations of the action, it asked the crowd to provide new ways of performing actions in those new scenarios. This work will be presented in November.

Other research teams at Brown University, Worcester Polytechnic Institute and Cornell University are working on related projects for developing robots that have the ability to learn new capabilities through crowdsourcing.

The 91探花team is now looking at using crowdsourcing and community-sourcing to teach robots more complex tasks such as finding and fetching items in a multi-floor building. The researchers envision a future in which our personal robots will engage increasingly with humans online, learning new skills and tasks to better assist us in everyday life.

This research was funded by the U.S. Office of Naval Research and the National Science Foundation.

###

For more information, contact Cakmak at mcakmak@cs.washington.edu or 206-685-5643 and Rao at rao@cs.washington.edu or 206-914-4719.
