Q&A: Ryan Calo, law professor and interdisciplinary researcher, talks about his new book, “Law and Technology”
UW News, March 31, 2026

A book cover
Ryan Calo, a UW professor of law, has written a new book, “Law & Technology.” Calo is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Oxford University Press

Since Ryan Calo joined the University of Washington School of Law in 2012, he has become a leading expert on the law and emerging technology.

Calo believes that few interesting questions — especially around technology — can be resolved by reference to a single discipline.

Calo is a co-founder of the Tech Policy Lab and the Center for an Informed Public. He is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering.

Calo’s newest book, “Law and Technology,” published late last year, is a guide to the legal analysis of technology and its regulation. Nearly a decade ago, Calo realized that the most recent book on the topic was published in the 1970s. He decided it was time for an updated resource reflecting current, rapidly evolving technology and the present regulatory environment.

UW News spoke with Calo about the book and the current legal and policy climate in the United States.

man wearing a plaid shirt standing outside
Ryan Calo is a professor in the UW School of Law and the Information School. He is an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Doug Parry/UW

Who is the intended audience for “Law and Technology”?

Ryan Calo: I wrote it primarily for new entrants to the field, be they junior scholars or students. I also hoped that the themes would resonate with more senior scholars and that it would be useful outside of academia for either analysis or instruction. Because ultimately, what the book does is propose a methodology for analyzing technology from a legal perspective.

I spent a lot of time interacting with policymakers, staffers on Capitol Hill, people who work for senators and members of Congress. A legislator might come to a staffer and say,  “Hey, my constituents are really worried about augmented reality or AI. They’re really worried about deep fakes.” That staff member doesn’t really have a place to start, and they end up just calling up experts, reading New York Times articles, talking to industry, but not in any kind of methodical way. This book is designed to help them figure out what’s going on.

I also hope this book will be of use to people who are in practice and want to be more methodical about analyzing a given technology.

Technology evolves fast. How should the legal system and policymakers prepare to navigate the relationship between law and emerging technologies?

RC: Many of us have an expectation that technology is just going to change. It’s just going to evolve, and our job as lawyers or judges or policymakers is to kind of scramble and accommodate the resulting disruption, and perhaps try to restore the status quo. Part of what I hope to see is legal scholars and policymakers acknowledging that the disruption isn’t inevitable.

We need to empower independent researchers to figure out what’s going on with new technology. Right now researchers are disempowered because they don’t have access to the relevant data and platforms. And many times when they try to get that data, they get served with a cease and desist letter.

We need to protect whistleblowers and make sure there’s adequate, truly top-notch expertise within government. If you have those things, then you’re much more likely to be able to figure out what could go wrong with these technologies without having to observe the harm unfold over a long period of time, as we have with the internet and now with AI.

You mentioned the School of Law’s leadership in tech policy. How is the UW positioned nationally in this space?

RC: We are really among the leaders in this area.

The School of Law has a lot of tech policy offerings, including a clinic. Many faculty have written about law and technology, contributing scholarship over the years.

We have also been a model for impactful interdisciplinary collaboration. Law students can work in the clinic or the Tech Policy Lab. I’m one of the founders of the Center for an Informed Public, which bridges Human Centered Design & Engineering as well as the Information School and dozens of other departments, including psychology, education and even geography.

A third important example is the . We did a whole year of work mapping out who was doing work in the space — all the centers, all the labs, all the initiatives — all the people on the three campuses identified as working at this intersection.

We’re leaders across the country at the law school in terms of our student offerings and our research, but we are also part of that interstitial glue. People think of the iSchool, which they should. They think of computer science, which they should. But they should also think about who else is at the center of this, who else is at the heart of it, and the School of Law is a big part of that.

There’s been a lot of news lately about states trying to regulate AI and the federal government pushing back. What’s your perspective?

RC: If I were trying to sabotage the innovation edge of the United States, I would do at least two things, maybe three.

First, I would divest from basic research. The United States has had an innovation edge over the rest of the world in large part because of decisions made in the 1950s and beyond to invest in basic research. I would dismantle that, and I would try to make it really hard for universities to do research, either by spending less, disrupting the relationships, or messing with overhead in ways that make research impossible.

The second thing I would do is make it really hostile for outside innovators to come in and participate in knowledge production here. I would, whether xenophobically or not, try to make it really hard for people with ideas and talent and knowledge to come here to the United States to work on teams with other Americans, to stay here and teach in our schools, to found companies. The second enormous advantage the United States has had is that the country has become attractive because of its commitment to the rule of law and its robust higher ed system, and that’s built on its innovation and investment in research. People from all over the world come here to try and make the next Google and Amazon, or are teaching in our schools and contributing to our ecosystem.

The third thing I would do in this hypothetical situation is remove non-existent hurdles to transformative technologies like AI. What do I mean? Federal leaders are currently talking about getting out of the way of AI, but there aren’t really any regulations about AI. There are some state laws that have a kind of European flavor of risk management. There are specific things that states are worried about, including deep fakes and labeling automated social media accounts. There’s almost nothing standing in the way of AI innovation in terms of regulation.

The way that our system is structured is that the individual states, under our concept of federalism, are supposed to be laboratories of ideas, experimenting with legislation and showing that it works or it doesn’t. Pretending that you’re pro-innovation because you’re trying to stamp out the very few regulatory hurdles that companies have to abide by, all in the name of competing with China, which has AI laws, is just senseless. We’re much better off following the wisdom of the founders, who said, “Hey, if you have something new in society, let the states serve as laboratories for different laws, and we can all learn from each other about how that’s going.” That’s classic federalism, and it used to be a pillar of conservative thinking.

The President doesn’t have the power to boss the states around in terms of their legislative capacities. And Congress has taken up the question of whether to try to preempt state AI laws, and it resoundingly declined. I just want to note that the overall strategy of the administration has been deeply anti-innovation in its impact, even though it is vociferously pro-innovation in its rhetoric.

Any final thoughts?

RC: We have an environment in the U.S. that promotes innovation, sometimes through laws, such as laws that protect intellectual property, and laws that make people feel safe enough to use the products and services that companies sell us. There’s not, and never has been, a one-to-one correlation between regulation and promoting innovation. It’s really important that we acknowledge, as a society and community, that sometimes laws are written in the service of innovation. What you want is a favorable regulatory environment, not a complete absence of the rule of law.

For more information, contact Calo at rcalo@uw.edu.

Kids, parents alike worried about privacy with internet-connected toys
May 10, 2017

Hello Barbie, CogniToys Dino and other toys connected to the internet can joke around with children and respond in surprising detail to questions posed by their young users. The toys record the voices of children who interact with them and store those recordings in the cloud, helping the toys become “smarter.”

As Wi-Fi-enabled toys like these compete for attention in the home, a new analysis finds that kids are unaware of their toys’ capabilities, and parents have numerous privacy concerns.

University of Washington researchers have conducted a study that explores the attitudes and concerns of both parents and children who play with internet-connected toys. Through a series of in-depth interviews and observations, the researchers found that kids didn’t know their toys were recording their conversations, and parents generally worried about their children’s privacy when they played with the toys.

CogniToys Dino, left, and Hello Barbie. Photo: University of Washington/Barbie

“These toys that can record and transmit are coming into a place that’s historically legally very well-protected ― the home,” said co-lead author Emily McReynolds, associate director of the UW’s Tech Policy Lab. “People have different perspectives about their own privacy, but it’s crystallized when you give a toy to a child.”

The researchers presented their paper May 10.

Though internet-connected toys have taken off commercially, their growth in the market has not been without security breaches and public scrutiny. VTech, a company that produces tablets for children, was storing personal data of more than 200,000 children when its servers were breached in 2015. Earlier this year, another internet-connected toy drew scrutiny over fears that personal data could be stolen.

It’s within this landscape that the UW team sought to understand the privacy concerns and expectations kids and parents have for these types of toys.

The researchers conducted interviews with nine parent-child pairs, asking each of them questions ― ranging from whether a child liked the toy and would tell it a secret to whether a parent would buy the toy or share what their child said to it on social media.

They also observed the children, all aged 6 to 10, playing with Hello Barbie and CogniToys Dino. These toys were chosen for the study because they are among the industry leaders for their stated privacy measures. Hello Barbie, for example, has an extensive permissions process for parents when setting up the toy, and it has been complimented for its strong encryption practices.

The resulting paper highlights a wide selection of comments from kids and parents, then makes recommendations for toy designers and policymakers.

A screenshot of the Hello Barbie parent panel that allows parents to listen to their child’s responses to various questions that Barbie asks, as well as share them on social networks.

Most of the children participating in the study did not know the toys were recording their conversations. Additionally, the toys’ lifelike exteriors probably fueled the perception that they are trustworthy, the researchers said, whereas kids might not have the tendency to share secrets and personal information when communicating with similar tools not intended as toys, such as Siri and Alexa.

“The toys are a social agent where you might feel compelled to disclose things that you wouldn’t otherwise to a computer or cell phone. A toy has that social exterior which might fool you into being less secure about what you tell it,” said co-lead author Maya Cakmak, an assistant professor at the Allen School. “We have this concern for adults, and with children, they’re even more vulnerable.”

Some kids were troubled by the idea of their conversations being recorded. When one parent explained how the child’s conversation with the doll could end up being shared widely on the computer, the child responded: “That’s pretty scary.”

At minimum, toy designers should create a way for the devices to notify children when they are recording, the researchers said. Designers could consider recording notifications that are more humanlike, such as having Hello Barbie say, “I’ll remember everything you say to me” instead of a red recording light that might not make sense to a child in that context.

The study found that most parents were concerned about their child’s privacy when playing with the toys. They universally wanted parental controls such as the ability to disconnect Barbie from the internet or control the types of questions to which the toys will respond. The researchers recommend toy designers delete recordings after a week’s time, or give parents the ability to delete conversations permanently.

A related study demonstrated that video recordings that are filtered to preserve privacy can still allow a tele-operated robot to perform useful tasks, such as organizing objects on a table. That study also revealed that people are much less concerned about privacy ― even for sensitive items that could reveal financial or medical information ― when such filters are in place. Speech recordings on connected toys could similarly be filtered to remove identity information and encode the content of speech in less human-interpretable formats, preserving privacy while still allowing the toy to respond intelligibly.
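As a toy illustration of that filtering idea, a transcript could be scrubbed of likely identifying phrases before it is stored. This sketch is an invented example, not the method from either study; the regular expressions and placeholder tags are assumptions chosen for illustration:

```python
import re

# Crude, illustrative patterns for phrases a child might say that could
# identify them. A production system would need real entity detection.
PATTERNS = {
    "NAME": re.compile(r"\bmy name is [A-Z][a-z]+\b", re.IGNORECASE),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ADDRESS": re.compile(
        r"\b\d+\s+[A-Z][a-z]+\s+(?:Street|St|Avenue|Ave|Road|Rd)\b"
    ),
}


def redact_transcript(text):
    """Replace likely identifying phrases with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Even filtering this crude shifts the stored data toward the less human-interpretable form the researchers describe, while leaving enough content for the toy to respond.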

The researchers hope this initial look into the privacy concerns of parents and kids will continue to inform both privacy laws and toy designers, given that such devices will only continue to fill the market and home.

“It’s inevitable that kids’ toys, as with everything else in society, will have computers in them, so it’s important to design them with security measures in mind,” said co-lead author Franziska Roesner, a UW assistant professor at the Allen School. “I hope the security research community continues to study these specific user groups, like children, that we don’t necessarily study in-depth.”

Other co-authors are Sarah Hubbard and Timothy Lau of the Information School and Aditya Saraf of the Allen School of Computer Science & Engineering.

The study was funded by the Consumer Privacy Rights Fund at the Rose Foundation for Communities and the Environment and by the UW’s Tech Policy Lab.

###

For more information, contact Emily McReynolds at emcr@uw.edu or 206-685-4533.

 

 

UW to host first of four White House public workshops on artificial intelligence
May 19, 2016

From self-driving vehicles to social robots, artificial intelligence is evolving at a rapid pace, creating vast opportunities as well as complex challenges.

Recognizing that, the White House Office of Science and Technology Policy is co-hosting four public workshops on artificial intelligence — the first of them May 24 at the University of Washington. Subsequent events will take place in Washington, D.C.; in Pittsburgh; and in New York City.

Put on by the White House Office of Science and Technology Policy and the University of Washington, the workshop will focus on legal and policy issues around artificial intelligence, or AI.

Speakers include:

  • The law school dean and president of the Association of American Law Schools
  • A special assistant to the president for economic and technology policy
  • The White House deputy U.S. chief technology officer
  • Ryan Calo, a UW assistant professor of law and co-director of the Tech Policy Lab
  • A UW professor of computer science and engineering
  • Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence and a UW professor of computer science and engineering
  • An associate professor in the School of Information at UC Berkeley and co-director of the Berkeley Center for Law & Technology
  • A principal researcher at Microsoft Research New York City and senior researcher at the NYU Information Law Institute
  • A law professor at Yale Law School
  • Camille Fischer, policy advisor, National Economic Council
  • Terah Lyons, policy advisor, White House Office of Science and Technology Policy

Etzioni will provide an overview of the current state of artificial intelligence, followed by two panel discussions. The first will examine issues around making decisions in the private or public sector using artificial intelligence.

The second panel will focus on logistical aspects of AI applications, such as when the government might reasonably feel comfortable turning mail delivery over to robots or how safe autonomous flight must be to be used for deliveries.

The aim of the workshops is to look at the advantages and drawbacks of artificial intelligence. As a White House blog post points out, President Obama’s Precision Medicine Initiative and the Cancer Moonshot will both rely on AI to identify patterns in medical data and help doctors diagnose diseases and determine treatment plans. But others worry the technology will displace human workers, or go so far as to warn that it could pose a threat to the human race.

The UW workshop, free and open to the public, will be held from 1:30 to 5 p.m. May 24 in the Magnuson Jackson Courtroom 138 at the UW School of Law. A reception follows from 5 to 7 p.m. Registration is available online, and the conference will be live-streamed.

The next in the series, about artificial intelligence for social good, is June 7 in Washington, D.C., followed by a June 28 workshop on safety and control for AI at Carnegie Mellon University in Pittsburgh and a July 7 workshop in New York City on the social and economic implications of AI.

For more information, contact Ryan Calo at rcalo@uw.edu or 206-543-1580.

Life, enhanced: UW professors study legal, social complexities of an augmented reality future
November 3, 2015

A mockup of an augmented reality mobile phone using a curved LED screen that renders data for the wearer/user gathered by cameras mounted on one or both sides. Photo: Leonard Low / Wikimedia Commons

Augmented reality is the enhancement of human perception through overlaying technologies that can expand, annotate and even record the user’s moment-to-moment experience.

Those designing coming augmented reality systems should make them adaptable to change, resistant to hacking and responsive to the needs of diverse users, according to a white paper by an interdisciplinary group of researchers at the University of Washington’s Tech Policy Lab.

Though still in its relative infancy, augmented reality promises systems that can aid people with mobility or other limitations, providing real-time information about their immediate environment as well as hands-free obstacle avoidance, language translation, instruction and much more. From enhanced eyewear like Google Glass to Microsoft’s wearable HoloLens system, tech, gaming and advertisement industries are already investing in and deploying augmented reality devices and systems.

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, the Information School, computer science and engineering, and other campus units to think through issues of technology policy. The augmented reality paper is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp.

Calo, assistant professor of law and Tech Policy Lab co-director, is lead author together with Batya Friedman of the Information School and Tadayoshi Kohno and Franziska Roesner of computer science and engineering. Other co-authors are Emily McReynolds, UW Tech Policy Lab associate director; Tamara Denning, who graduated from the UW in computer science and engineering and is now an assistant professor at the University of Utah; Bryce Newell, who graduated from the UW Information School and is now a postdoctoral researcher at the University of Tilburg; Information School doctoral student Lassana Magassa and School of Law alumnus Jesse Woo.

The researchers used a method designed by the Tech Policy Lab for evaluating new technologies, first conferring with those in the computer science field to define augmented reality as precisely as possible. They then looked to the humanities and social sciences — information science, in this case — and convened groups of diverse end users to consider the impact of the technology in question. They called these groups “diversity panels.”

Magassa, who organized the diversity panels, said they help to ensure that underrepresented groups are highlighted in a way that makes sense to those who develop technology and its governing policies.

“They also are important in that they increase the likelihood that the people who develop such policies get to hear and consider alternate points of view, concerns and visions as they design and develop technology policies,” he said.

The researchers sorted issues raised by augmented reality into basic categories: those relating to the collection of information, and those relating to its display.

  • The collection of information raises issues that include a reasonable expectation of privacy, the First Amendment right to free speech, intellectual property and the relaying of information to third parties.
  • The display of information in augmented reality systems prompted questions about harm caused by errors or negligence, product liability and potential discrimination or even digital assault.

The group arrived at a set of recommendations for policymakers that “do not purport to advance any particular vision, but rather provide guidance that can be used to inform the policymaking process.”

Their recommendations, briefly put, were:

Build dynamic systems: Augmented reality systems should be flexible and capable of being updated to reflect changes both technological and cultural, to remain relevant.

Conduct “threat modeling”: Hackers beat systems by finding behaviors that designers didn’t anticipate. Systems should be reviewed with an eye toward who might wish to compromise the system and how. This is particularly important because breaches of augmented reality systems could lead to physical harm.

Coordinate with designers: No technology policy should be made in isolation. Designers may not fully appreciate the legal import of a project and policymakers need to understand the technology in order to make wise decisions.

Consult with diverse potential users: People will use augmented reality in different ways depending on their own experiences and skills. Those planning such systems should consult with diverse populations, and solicit and use their feedback.

Acknowledge trade-offs: Systems open to third-party analysis or additions might promote greater freedom and innovation, but at the cost of harm through malicious applications or coding. Long-term storage, cloud processing or other advanced data processes might give faster performance at the cost of privacy.

Calo called the interdisciplinary analysis of augmented reality law and policy concerns difficult but crucial work.

“We had to come up with a process to blend the technical, legal, design and other elements into a single policy document,” he said. “I hope the finished document proves useful to policymakers of all kinds.”

###

For more information, contact Calo at rcalo@uw.edu or 206-543-1580. Follow Calo on Twitter.
