Q&A: Ryan Calo, law professor and interdisciplinary researcher, talks about his new book, “Law and Technology” (UW News, Tue, 31 Mar 2026)
Ryan Calo, a UW professor of law, has written a new book, “Law and Technology.” Calo is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Oxford University Press

Since Ryan Calo joined the University of Washington School of Law in 2012, he has become a leading expert on law and emerging technology.

Calo believes that few interesting questions — especially around technology — can be resolved by reference to a single discipline.

Calo is a co-founder of the Tech Policy Lab and the Center for an Informed Public. He is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering.

Calo’s newest book, “Law and Technology,” published late last year, is a guide to the legal analysis of technology and its regulation. Nearly a decade ago, Calo realized that the most recent book on the topic had been published in the 1970s. He decided it was time for an updated resource reflecting today’s rapidly evolving technology and regulatory environment.

UW News spoke with Calo about the book and the current legal and policy climate in the United States.

Ryan Calo is a professor in the UW School of Law and the Information School. He is an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Doug Parry/UW

Who is the intended audience for “Law and Technology”?

Ryan Calo: I wrote it primarily for new entrants to the field, be they junior scholars or students. I also hoped that the themes would resonate with more senior scholars and that the book would be useful outside of academia for analysis or instruction. Because ultimately, what the book does is propose a methodology for analyzing technology from a legal perspective.

I spend a lot of time interacting with policymakers and staffers on Capitol Hill, people who work for senators and members of Congress. A legislator might come to a staffer and say, “Hey, my constituents are really worried about augmented reality or AI. They’re really worried about deep fakes.” That staff member doesn’t really have a place to start, and they end up calling up experts, reading New York Times articles and talking to industry, but not in any methodical way. This book is designed to help them figure out what’s going on.

I also hope this book will be of use to people in practice who want to be more methodical about analyzing a given technology.

Technology evolves fast. How should the legal system and policymakers prepare to navigate the relationship between law and emerging technologies?

RC: Many of us have an expectation that technology is just going to change. It’s just going to evolve, and our job as lawyers, judges or policymakers is to scramble to accommodate the resulting disruption, and perhaps try to restore the status quo. Part of what I hope to see is legal scholars and policymakers acknowledging that the disruption isn’t inevitable.

We need to empower independent researchers to figure out what’s going on with new technology. Right now, researchers are disempowered because they don’t have access to the relevant data and platforms. And many times when they try to get that data, they get served with a cease-and-desist letter.

We need to protect whistleblowers and make sure there’s adequate, truly top-notch expertise within government. If you have those things, then you’re much more likely to be able to figure out what could go wrong with these technologies without having to observe the harm unfold over a long period of time, as we have with the internet and now with AI.

You mentioned the School of Law’s leadership in tech policy. How is the UW positioned nationally in this space?

RC: We are really among the leaders in this area.

The School of Law has a lot of tech policy offerings, including a . Many faculty have contributed scholarship on law and technology over the years.

We have also been a real model for impactful interdisciplinary collaboration. Law students can work in the clinic or the Tech Policy Lab. I’m one of the founders of the Center for an Informed Public, which bridges Human Centered Design & Engineering as well as the Information School and dozens of other departments, including psychology, education and even geography.

A third important example is the . We did a whole year of work mapping out who was doing work in the space — all the centers, all the labs, all the initiatives — all the people on the three campuses identified as working at this intersection.

At the law school, we’re leaders across the country in terms of our student offerings and our research, but we are also part of that interstitial glue. People think of the iSchool, which they should. They think of computer science, which they should. But they should also think about who else is at the center of this, who else is at the heart of it, and the School of Law is a big part of that.

There’s been a lot of news lately about states trying to regulate AI and the federal government pushing back. What’s your perspective?

RC: If I were trying to sabotage the innovation edge of the United States, I would do at least two things, maybe three.

First, I would divest from basic research. The United States has had an innovation edge over the rest of the world in large part because of decisions made in the 1950s and beyond to invest in basic research. I would dismantle that, and I would try to make it really hard for universities to do research, either by spending less, disrupting the relationships or messing with overhead in ways that make research impossible.

The second thing I would do is make it really hostile for outside innovators to come in and participate in knowledge production here. I would, whether xenophobically or not, try to make it really hard for people with ideas and talent and knowledge to come here to the United States to work on teams with other Americans, to stay here and teach in our schools, to found companies. The second enormous advantage the United States has had is that the country has become attractive because of its commitment to the rule of law and its robust higher ed system, and that’s built on its innovation and investment in research. People from all over the world come here to try and make the next Google and Amazon, or are teaching in our schools and contributing to our ecosystem.

The third thing I would do in this hypothetical situation is remove non-existent hurdles to transformative technologies like AI. What do I mean? Federal leaders are currently talking about getting out of the way of AI, but there aren’t any regulations about AI, really. There are some state laws that have a kind of European flavor of risk management, like and . There are specific things that states are worried about, including deep fakes and labeling online social media accounts that are automated. There’s almost nothing standing in the way of AI innovation in terms of regulation.

The way that our system is structured is that the individual states, under our concept of federalism, are supposed to be laboratories of ideas, experimenting with legislation and showing that it works or it doesn’t. Pretending that you’re pro-innovation because you’re trying to stamp out the very few regulatory hurdles that companies have to abide by, all in the name of competing with China, which has AI laws, is just senseless. We’re much better off following the wisdom of the founders, who said, “Hey, if you have something new in society, let the states serve as laboratories for different laws, and we can all learn from each other about how that’s going.” That’s classic federalism, and it used to be a pillar of conservative thinking.

The president doesn’t have the power to boss the states around in terms of their legislative capacities. And Congress has taken up the question of whether to try to preempt state AI laws, and it resoundingly declined. I just want to note that the overall strategy of the administration has been deeply anti-innovation in its impact, even though it is vociferously pro-innovation in its rhetoric.

Any final thoughts?

RC: We have an environment in the U.S. that promotes innovation, sometimes through laws, such as laws that protect intellectual property and laws that make people feel safe enough to use the products and services that companies sell to us. There’s not, and never has been, a simple one-to-one relationship between regulation and innovation. It’s really important that we acknowledge, as a society and community, that sometimes laws are written in the service of innovation. What you want is a favorable regulatory environment, not a complete absence of the rule of law.

For more information, contact Calo at rcalo@uw.edu.

Faculty/staff honors: Legal education innovation award, stellar astronomical writing and more (Wed, 22 Mar 2023)

Recent recognition of the University of Washington community includes the Bloomberg Law 2022 Law School Innovation Program “Top Legal Education Program” award for the UW Tech Policy Lab, the 2023 Seattle Aquarium Conservation Research Award for Vera Trainer and the 2023 Chambliss Astronomical Writing Award for Emily Levesque.

Bloomberg recognizes UW Tech Policy Lab as ‘Top Legal Education Program’

The UW Tech Policy Lab was recently recognized by Bloomberg Law as a “Top Legal Education Program” for 2022 for its unique cross-disciplinary approach. The award is given to pioneering schools making an impact in the legal field.

Ryan Calo

Founded in 2013 by faculty from the UW School of Law, the Paul G. Allen School of Computer Science & Engineering and the Information School, the lab bridges the gap between technologists and policymakers to help generate wiser, more inclusive tech policy.

“The students and community members who interact with the lab come away with the understanding that collaborating and bringing a variety of perspectives together is the key to working through contemporary challenges,” said Ryan Calo, professor of law at the 91̽and co-director of the Tech Policy Lab.

Bloomberg’s Law School Innovation Program seeks to recognize and connect law school faculty, staff and administrators who are education innovators. Submissions to the program were scored based on impact on students, ability to advance the legal industry and replicability. In its submission, the UW Tech Policy Lab demonstrated its model’s unique blend of immersive experiences, opportunities for relationship-building and interdisciplinary approaches.

“Rather than try to work with every student, we offer programming open to all and work closely with a small handful of law students whom we place on interdisciplinary teams to work on consequential issues of tech policy,” Calo said. “They often go on to work in the field and get a unique perspective and experience working across disciplines.”

This was the inaugural year of the awards.

91̽professor wins 2023 Seattle Aquarium Conservation Research Award

Vera Trainer, affiliate professor of aquatic and fishery sciences at the UW, was selected for her work on harmful algal blooms, or HABs, proliferations of algae that cause environmental and economic damage.

Vera Trainer

The Conservation Research Award has honored leaders and innovators in marine conservation research since 2004, focusing on climate change, plastic pollution, sustainable fisheries and tourism, marine protected areas and socioeconomics.

“This award is not only for what has been accomplished, but what will be accomplished in the future,” said Trainer, a former NOAA oceanographer and current research lead for the .

Trainer’s HABs research has provided a foundation for understanding the effects climate change has had on coastal ecosystems and highlights the need for inclusion of impacted communities in decision-making.

Trainer is also co-founder of the program and founder of , a partnership that monitors HABs in the Puget Sound. These unique community collaborations provide advance warning of HABs that threaten seafood safety as well as ecosystem and human health, ultimately ensuring safe, sustainable shellfish harvests.

Astronomy professor awarded for stellar physics textbook

The American Astronomical Society awarded Emily Levesque, associate professor of astronomy at the UW, and her co-author Henny J.G.L.M. Lamers the 2023 Chambliss Astronomical Writing Award for their graduate textbook “Understanding Stellar Evolution.”

The Chambliss Award recognizes astronomy writing geared towards the upper-division undergraduate or graduate level, a rarely recognized category.

Emily Levesque

“It’s great to see the importance of stellar physics recognized,” said Levesque. “Henny Lamers spent more than a decade developing amazing lecture notes for our course on stellar structure and evolution, and it was great to work with him on turning these into a textbook.”

Split into three parts, the book first delves into the physics of how stars work. It then describes the evolution of stars from formation to death and explores some complicating factors of stellar evolution. The book was produced using years of lecture notes for an astronomy class at the UW.

“We spent a lot of time expanding and fleshing out roughly outlined ideas from lecture notes so that they could stand alone as complete explanations in the textbook,” said Levesque. “It was interesting to be teaching the course and writing the book at the same time in the spring of 2016. It helped alert us to a topic or detail that would spark discussion or follow-up questions in class and encouraged us to expand on the topic in the text.”

‘Telling Stories’: Imagined tales of artificial intelligence presented by the 91̽Tech Policy Lab (Tue, 16 Mar 2021)

A young man exiled to a reeducation camp for the “digitally unsafe” learns to keep his face blank, as cameras everywhere read expressions, and signs of anger and resistance are quickly punished.

The elderly victim of an attack feels empty after winning justice from a “panel of metal judges” in a future courtroom beyond human biases.

An online karate class is taught by artificial intelligence and robots, but over the decades, even as the sport thrives, much of its crucial human element is forgotten.

These tales of AI and its effects on future life — and many more, from points around the world — are gathered in “Telling Stories: On Culturally Responsive Artificial Intelligence,” presented by the 91̽Tech Policy Lab. The lab is an interdisciplinary collaboration of the UW’s Paul G. Allen School of Computer Science & Engineering, Information School and School of Law, created to “enhance technology policy through research, education and thoughtful leadership.”

Together, the 19 stories are meant to ask: “What world — what worlds? — will we build with artificial intelligence?” The stories were written by authors from all over the world and edited by a five-member 91̽team led by Information School professor Batya Friedman, law professor Ryan Calo and computer science and engineering professor Tadayoshi Kohno, who are the directors of the Tech Policy Lab. Joining them in editing were , an iSchool doctoral student, and , an alumna of the lab.

Authors of “Telling Stories: On Culturally Responsive Artificial Intelligence” are (not in order shown) Dennys Antonialli, InternetLab, Brazil; Chinmayi Arun, National Law University, Delhi, India; Joanna Bryson, University of Bath, England; Darren Byler, UW; Ryan Calo, UW; Jeff Cao, Tencent Research Institute, China; Jack Clark, OpenAI, United States; Batya Friedman, UW; Sue Glueck, Microsoft; Sabine Hauert, University of Bristol, England; Alejandro Hevia, University of Chile; Ian Kerr, University of Ottawa, Canada; Tadayoshi Kohno, UW; Lisa Nathan, University of British Columbia, Canada; Joseph Nkurunziza, Never Again Rwanda, Rwanda; Nnenna Nwakanma, World Wide Web Foundation, Côte d’Ivoire; Amir Rashidi, Center for Human Rights in Iran; Rohan Samarajiva, LIRNEasia, Sri Lanka; Jeroen van den Hoven, Delft University of Technology, Netherlands.

The volume, which is available free in , is intended for policymakers, educators and technologists as well as general readers. Authors in Canada, Chile, China, India, Rwanda, Sri Lanka and more submitted work.

Calo, Friedman and Kohno all penned stories for the book, as did Darren Byler, a postdoctoral researcher in the Department of Anthropology. The tales are fiction — though at least one stems from a real-world scenario. Friedman’s entry about a futuristic court, called “What Justice,” evolved from her years of work on a project investigating the grim realities of the genocide in Rwanda.

The “Telling Stories” project grew from the Tech Policy Lab’s Global Summit on Culturally Responsive AI, underway since 2016, convening 20 to 30 scholars representing design, ethics, governance, policy and technology. The lab’s 2018 Global Summit focused on “grand challenges for developing and disseminating artificial intelligence technologies that maintain respect for and enhance culture and diversity of worldview.”

Participants in that summit brought pieces of fabric meaningful to their cultures, which then sparked enactments, storytelling and conversations. “Telling Stories” draws on the wisdom of those talks, and the resulting tales are meant for retelling.

In a story by law professor Ian Kerr of the University of Ottawa (who died in 2019 and to whom the volume is dedicated), one character borrows words from late astronomer Carl Sagan that seem to speak to the technological dilemmas at hand:

“We are creating world-altering contrivances and we have choices to make. We can relinquish control and roll the bones in a strange game of digital Russian roulette. Or we can rely on the bright light of human wisdom to place limits on what may and must not be done, and safely pass through times of peril.”

A hardcover edition of “Telling Stories” will be available in May through the University Bookstore.

For more information, contact Calo at rcalo@uw.edu, Friedman at batya@uw.edu or Kohno at yoshi@cs.washington.edu.

91̽Space Policy and Research Center brings researchers, policymakers together for online symposium Nov. 6 (Thu, 29 Oct 2020)

Even as residents of Earth grapple with a global pandemic, our work in space continues. At the 91̽, the Space Policy and Research Center — SPARC for short — brings together researchers, policymakers and industry professionals each year to discuss the challenges of human presence and endeavors in space.

The SPARC 2020 symposium is free for those in the 91̽community to attend.

The daylong symposium will be held online on Nov. 6 and will feature introductory remarks by 91̽President Ana Mari Cauce and U.S. Sen. Maria Cantwell, as well as of the U.S. Space Command. The symposium’s many speakers come from academia, government and the aerospace industry in the Pacific Northwest and beyond.

The symposium’s theme will be Autonomous Operations in Space: Tech & Policy. In the concluding session, 91̽law professor and physicist will talk with “The Martian” author Andy Weir and others in a panel on “Building our Future in Deep Space.”

The co-directors of SPARC are Kristi Morgansen, 91̽professor and chair of aeronautics and astronautics, and Saadia Pekkanen, professor of international studies. 91̽Notebook connected with Pekkanen over email with a few questions about this year’s symposium.

First, as a general overview, what is the mission of SPARC and its annual symposium?  

Saadia Pekkanen, co-director of SPARC
Saadia Pekkanen

Saadia Pekkanen: SPARC’s mission is to bring together science, technology, and policy in a way that speaks across many disciplines. We seek to advance collaborative research as well as the education, training and networks of the next generation of space professionals.

Space entrepreneurship will be a key topic, as in years past. How is the Pacific Northwest faring as a growing hub for the space industry?  

S.P.: One of the key trends we are now seeing is that more established and well-known companies are also in the space startup business, so to speak. Many of our large local players are now tailoring some part of their operations to get into the space business, particularly focused on the hardware and data from operational satellites.

Amazon, for example, says it will invest $10 billion in a satellite constellation. Known as Project Kuiper, it will launch over 3,200 satellites to provide broadband internet access worldwide. Microsoft has recently announced a partnership with SpaceX to go after the cloud computing business focused on commercial, government and military space customers.

UW law professor , director of the , will moderate a panel on protecting Earth from orbital debris and near-Earth objects. We hear of low-Earth orbit being cluttered and of “near misses” in the news. What is the current danger level from space debris?

About SPARC:
The Space Policy and Research Center (SPARC) is organized by the William E. Boeing Department of Aeronautics & Astronautics and the Jackson School of International Studies.

The center includes research and initiatives from the UW Astrobiology Program, the Buerk Center for Entrepreneurship, the Information School, 91̽Medicine, the Joint Center for Aerospace Technology Innovation and the School of Law, as well as several departments, including astronomy, Earth and space sciences, mechanical engineering, materials science, human-centered design, electrical engineering, computer science, math and environmental sciences.

Kristi Morgansen, co-director of SPARC
Kristi Morgansen

S.P.: I would say the levels for both accidental and deliberate threats are high. In both cases, the conditions enabling a runaway chain reaction of collisions and more debris, called the Kessler syndrome, are concerning. There are about 2,700 known operational satellites in orbit, more than half of which belong to U.S. civilian, commercial and military stakeholders. If the number of small satellites surpasses the 100,000 mark, as it is projected to, the chances of accidental collisions increase.

Deliberate threats, such as those posed by debris-creating anti-satellite (ASAT) tests carried out by many countries, are even more concerning. All this comes at a time when the U.S. has named both Russia and China as great-power competitors, and these national rivalries have extended openly to outer space. We should be working on restoring diplomacy to strengthen norms and rules, which is the only way to deal with a problem at the nexus of technology and politics.

COVID remains a global challenge and menace. How has the coronavirus affected the space industry? Have projects or plans been delayed? 

S.P.: I think we will probably be assessing the impact with real data sometime next year. Right now, I imagine that most companies, especially smaller ones or new startups, are scrambling to adjust and stay afloat. Once again, the entry of established companies may have a positive impact on the stability of supply chains and smaller startups as the competition moves forward.

What goals do you have for the Space Policy and Research Center in the next few years?

S.P.: We want to position SPARC as a premier university-centered think tank, one seen as a trusted resource by audiences in government, business, education, media and the nonprofit sector worldwide.

We also want to build out a truly interdisciplinary space studies curriculum for our students, speaking to technology, law and regional policies. We believe that such activities will bring together STEM, social sciences and humanities in the common enterprise of preserving peaceful prospects in outer space.

91̽ to create 91̽Center for an Informed Public with $5 million investment from Knight Foundation (Mon, 22 Jul 2019)
Jevin West teaches a class in “Calling BS.” Photo: Quinn Russell Brown/UW

The 91̽ today announced a $5 million investment from the John S. and James L. Knight Foundation to create the 91̽Center for an Informed Public, led by an interdisciplinary group whose mission is to resist strategic misinformation, promote an informed society and strengthen democratic discourse. The Center is also funded by a $600,000 award from the William and Flora Hewlett Foundation.

The Center brings together existing areas of excellence at the 91̽and builds upon the university’s ability to better understand how and why fake news, misinformation and disinformation are created. The Center will combat what researchers call the “misinformation epidemic.”

“We really see the Center as a university-wide effort,” said Jevin West, principal investigator and inaugural director of the Center. “Misinformation touches everything.”

Kate Starbird teaches a class in misinformation. Photo: Mark Stone/UW

The 91̽is one of five institutions receiving major investments from the Knight Foundation nationally and is the only recipient in the Western United States.

“A functioning democracy is an informed democracy,” said Sam Gill, Knight Foundation vice president for communities and impact. “The 91̽is bringing together leading scholars in computer science, sociology and law to equip our democracy with the right tools to navigate the digital age.”

The Knight Foundation’s support of the 91̽is part of a $10 million effort announced in 2018 to examine and combat digital disinformation’s impact on U.S. democracy and elections.

Recent decades have seen a profound shift in the ways people, groups, and organizations produce and consume information and participate in public discourse. While many positive advancements have emerged from new technologies and platforms, the new information environments also have opened the door to misinformation, disinformation and fake news.

“It’s one of the most important problems of our time that we as a society need to solve,” West said. “This is not a left or right issue. This is an issue that transcends political boundaries. Everyone wants to get this right.”

The principal investigators at the Center are a who’s who in this field of research, widely recognized for their respective expertise. In addition to West, co-director of DataLab, who is known for his Information School class “Calling B.S.: Data Reasoning in a Digital World,” there are four researchers who will lead various initiatives for the Center:

  •     Emma Spiro, co-director of the Social Media Lab, Information School;
  •     Chris Coward, director of the Technology & Social Change Group (TASCHA), Information School;
  •     Ryan Calo, co-director of the Tech Policy Lab, School of Law; and
  •     Kate Starbird, director of the Emerging Capacities of Mass Participation Lab (emCOMP), Human Centered Design & Engineering.

The Center will be devoted to educational efforts, research, policy and community outreach around misinformation and disinformation campaigns. Additionally, researchers will establish a network of Community Labs in public libraries and other institutions to co-create and assess research-based interventions. 

Housed within the Information School, the Center for an Informed Public is scheduled to officially open in fall 2019.

###

About the John S. and James L. Knight Foundation

Knight Foundation is a national foundation with strong local roots. We invest in journalism, in the arts, and in the success of cities where brothers John S. and James L. Knight once published newspapers. Our goal is to foster informed and engaged communities, which we believe are essential for a healthy democracy. For more, visit.

About the UW

The 91̽ was founded in 1861 and is one of the pre-eminent public higher education and research institutions in the world. The 91̽has more than 100 members of the National Academies, elite programs in many fields, and has ranked since 1974 among the top five universities in receipt of federal research funding. Learn more at uw.edu.


91̽law professor Ryan Calo to join experts at White House conference (Wed, 12 Oct 2016)

Ryan Calo, an assistant professor in the 91̽ School of Law, will be among national experts at a White House event tomorrow on innovations in science and technology.

Ryan Calo

A nationally known expert on robotics and privacy law, Calo will speak at the White House Frontiers Conference, held at the University of Pittsburgh and Carnegie Mellon University on Oct. 13. He will be part of a panel moderated by President Obama’s chief of staff, Denis McDonough, and will discuss the role of regulation in helping ensure the safety of artificial intelligence.

“I’m delighted to be a resource for the White House and others as they think through the benefits and societal impacts of artificial intelligence and robotics,” Calo said. “It’s wonderful that our government is engaged around this issue at the highest levels.”

The conference will focus on five “frontiers” of innovation:

  • Personal frontiers in health care innovation and precision medicine
  • Local frontiers in building smart, inclusive communities, including investments in open data and the Internet of Things
  • National frontiers in harnessing the potential of artificial intelligence, data science, machine learning, automation and robotics
  • Global frontiers in clean energy and advanced climate information, tools, services and collaborations
  • Interplanetary frontiers in space exploration, including the NASA mission to Mars

Obama will host the event and participate in a discussion about future health breakthroughs with innovators in medicine and health care. The conference will include other topics inspired by the November issue of Wired magazine, which Obama is guest-editing — a first for a U.S. magazine, according to Wired.

The day’s plenary session will include a talk by Tim O’Reilly, a web pioneer and the founder of O’Reilly Media; a live podcast with Roman Mars, host of the radio show “99% Invisible,” and Raj Chetty, an economist and professor at Stanford University; comments from University of Georgia student Charles Orgbon III on climate challenges; and a discussion on increasing access to space with panelists including NASA Deputy Administrator Dava Newman, Erika Wagner of Blue Origin and Anousheh Ansari, the first female private space explorer.

The conference starts at 1:45 p.m. EST and will be livestreamed on the event website.

For more information, contact Calo at rcalo@uw.edu or 206-543-1580.

91̽project highlights liability of internet ‘intermediaries’ in developing countries (Wed, 29 Jun 2016)

How much liability do website owners and other online service providers have for content posted by other people? If someone posts content on your website that is defamatory, constitutes hate speech, disseminates child pornography or invades someone’s privacy, are you liable?

The answers to such questions can be murky in developing countries. And as internet use expands around the globe, so does the potential liability for the owners of websites, search engines, social media sites and other online platforms, who are subject to laws in each country where their websites and services are accessible.

“As sites such as Instagram and Snapchat have exploded in the number of photos and videos and other information posted, this problem has exponentially increased,” said Sean O’Connor, director of the UW’s Center for Advanced Study and Research on Innovation Policy (CASRIP).

“Each of those platforms has this potential liability hanging out there, with the firehose of content that’s being posted every day.”

To advance understanding of the issue, CASRIP recently commissioned and released a series of reports on the liability facing these kinds of online service providers as “internet intermediaries,” or entities that facilitate online use. Many of these intermediaries provide platforms where content can be posted by users; the best-known include Facebook, Twitter, Snapchat and Instagram.

But the problem also affects search engines, blogs, network operators and even comments sections on websites and blogs. The 16 reports focus on laws concerning hate speech, privacy, child protection and defamation in five countries — Brazil, Russia, India, China and Thailand — that have research ties to the 91̽and are becoming increasingly important players in the internet liability landscape.

The reports detail differences in laws and social norms among the countries. Penalties can range from fines to suspension of business activities, criminal charges and even imprisonment.

In Russia, for example, internet service providers are required to block websites containing information about mass riots or extremist activities; a government “blacklist” of those sites included more than 17,500 entries as of November 2015.

The report on India cites a study that found more than three-quarters of Indian parents were unaware of software available to protect children online, and that half of parents in Delhi allowed their children to spend more than 10 hours a day online.

The project, which received funding from Google, was carried out over several years and involved authors, scholars and students in the five countries. Bakhmetyeva, CASRIP’s program manager, said the reports show that all the countries studied — despite the sometimes strict penalties their laws carry — are striving for a balance between control over internet content and the free flow of information.

“All of the countries want to protect freedom of speech. They want to protect social media and the dissemination of information, but at the same time impose some limitations to protect people’s rights,” said Bakhmetyeva.

“But the question is, can they achieve this balance or not?”

Among the reports’ most positive findings, Bakhmetyeva said, is that the five countries generally do not hold internet intermediaries liable for unlawful content posted by users unless they knew about the content and failed to remove it. Most of the countries grant online service providers immunity, referred to as “safe harbor,” provided they comply with certain rules and remove problematic content quickly.

The reports cite a case in Brazil in which a court concluded that holding an online provider liable “would be the same as holding the post offices liable for written crimes on letters, which would be unreasonable.” At the same time, Bakhmetyeva said, some websites have become known havens for criminal or offensive material. Governments must be careful to balance protections for intermediaries with enforcement against sites that ignore or even encourage hateful and other problematic content, she said.

Intermediary liability has become an issue of heightened focus in recent years, as governments worldwide increasingly expect internet companies to police illegal and other problematic content, and in some cases are holding them legally accountable for doing so. Consequently, O’Connor said, internet companies — particularly those with large numbers of users posting content — have a tremendous amount at stake in determining their potential liability.

“Penalties in some countries are quite severe,” said O’Connor, the Boeing International Professor in the UW law school. “Individuals could potentially go to jail. So this is of great concern to anyone operating in the online space.

“If people understand the stakes, they should be keenly interested in what’s going on in these reports.”

, a faculty director for CASRIP and a UW assistant professor of law, was also involved in the project. UW law students Tyler R. Quillin, Jayme Staten, Christian Kaiser, Harrison Owens, Zachary Parsons and Jason E. Parfet conducted research and helped edit the reports.

For more information, contact O’Connor at soconnor@uw.edu or 206-543-7491, or Bakhmetyeva at annab@uw.edu or 206-221-7110.

]]>
UW to host first of four White House public workshops on artificial intelligence /news/2016/05/19/uw-to-host-first-of-four-white-house-public-workshops-on-artificial-intelligence/ Thu, 19 May 2016 18:27:10 +0000 /news/?p=47969 From self-driving vehicles to social robots, artificial intelligence is evolving at a rapid pace, creating vast opportunities as well as complex challenges.

Recognizing that, the White House Office of Science and Technology Policy is co-hosting four public workshops on artificial intelligence — the first of them May 24 at the UW. Subsequent events will take place in Washington, D.C.; in Pittsburgh; and in New York City.

Put on by the White House Office of Science and Technology Policy and the UW, the workshop will focus on legal and policy issues around artificial intelligence, or AI.

Speakers include:

  • Kellye Testy, law school dean and president of the Association of American Law Schools
  • R. David Edelman, special assistant to the president for economic and technology policy
  • Ed Felten, White House deputy U.S. chief technology officer
  • Ryan Calo, a UW assistant professor of law and co-director of the Tech Policy Lab
  • Pedro Domingos, a UW professor of computer science and engineering and author of “The Master Algorithm”
  • Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence and a UW professor of computer science and engineering
  • Deirdre Mulligan, an associate professor in the School of Information at UC Berkeley and co-director of the Berkeley Center for Law & Technology
  • Kate Crawford, a principal researcher at Microsoft Research New York City and senior researcher at NYU Information Law Institute
  • Jack Balkin, a law professor at Yale Law School
  • Camille Fischer, policy advisor, National Economic Council
  • Terah Lyons, policy advisor, White House Office of Science and Technology Policy

Etzioni will provide an overview of the current state of artificial intelligence, followed by two panel discussions. The first will examine issues around making decisions in the private or public sector using artificial intelligence.

The second panel will focus on logistical aspects of AI applications, such as when the government might reasonably feel comfortable turning mail delivery over to robots, or how safe autonomous flight must be before it is used for deliveries.

The aim of the workshops is to look at the advantages and drawbacks of artificial intelligence. As a White House blog post points out, President Obama’s Precision Medicine Initiative and the Cancer Moonshot will both rely on AI to identify patterns in medical data and help doctors diagnose diseases and determine treatment plans. But others worry the technology will displace human workers, or go so far as to warn that it could pose a threat to the human race.

The UW workshop, free and open to the public, will be held from 1:30 to 5 p.m. May 24 in the Magnuson Jackson Courtroom 138 at the UW School of Law. A reception follows from 5 to 7 p.m. Registration is available online, and the conference will be livestreamed.

The next in the series, about artificial intelligence for social good, is June 7 in Washington, D.C., followed by a June 28 workshop on safety and control for AI at Carnegie Mellon University in Pittsburgh and a July 7 workshop in New York City on the social and economic implications of AI.

For more information, contact Ryan Calo at rcalo@uw.edu or 206-543-1580.

]]>
Life, enhanced: UW professors study legal, social complexities of an augmented reality future /news/2015/11/03/life-enhanced-uw-professors-study-legal-social-complexities-of-an-augmented-reality-future/ Tue, 03 Nov 2015 17:45:14 +0000 /news/?p=39678
A mockup of an augmented reality mobile phone using a curved LED screen that renders data for the wearer/user gathered by cameras mounted on one or both sides. Photo: Leonard Low / Wikimedia Commons

Augmented reality is the enhancement of human perception through overlaying technologies that can expand, annotate and even record the user’s moment-to-moment experience.

Those designing the coming augmented reality systems should make them adaptable to change, resistant to hacking and responsive to the needs of diverse users, according to a white paper by an interdisciplinary group of researchers at the UW’s Tech Policy Lab.

Though still in its relative infancy, augmented reality promises systems that can aid people with mobility or other limitations, providing real-time information about their immediate environment as well as hands-free obstacle avoidance, language translation, instruction and much more. From enhanced eyewear like Google Glass to Microsoft’s wearable HoloLens system, the tech, gaming and advertising industries are already investing in and deploying augmented reality devices and systems.

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, the Information School, the Department of Computer Science & Engineering and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at UbiComp 2015, the ACM International Joint Conference on Pervasive and Ubiquitous Computing.

Ryan Calo, assistant professor of law and Tech Policy Lab co-director, is lead author, together with Batya Friedman of the Information School and Tadayoshi Kohno and Franziska Roesner of computer science and engineering. Other co-authors are Emily McReynolds, UW Tech Policy Lab associate director; Tamara Denning, who graduated from the UW in computer science and engineering and is now an assistant professor at the University of Utah; Bryce Newell, who graduated from the UW Information School and is now a postdoctoral researcher at the University of Tilburg; Information School doctoral student Lassana Magassa; and School of Law alumnus Jesse Woo.

The researchers used a method designed by the Tech Policy Lab for evaluating new technologies, first conferring with experts in the computer science field to define augmented reality as precisely as possible. They then looked to the humanities and social sciences — information science, in this case — to consider the impact of the technology on various end users, convening groups they called “diversity panels.”

Magassa, who organized the diversity panels, said they help to ensure that underrepresented groups are highlighted in a way that makes sense to those who develop technology and its governing policies.

“They also are important in that they increase the likelihood that the people who develop such policies get to hear and consider alternate points of view, concerns and visions as they design and develop technology policies,” he said.

The researchers sorted issues raised by augmented reality into two basic categories: those relating to the collection of information, and those relating to its display.

  • The collection of information raises issues that include a reasonable expectation of privacy, the First Amendment right to free speech, intellectual property and the relaying of information to third parties.
  • The display of information in augmented reality systems prompts questions about harm caused by errors or negligence, product liability and potential discrimination or even digital assault.

The group arrived at a set of recommendations for policymakers that “do not purport to advance any particular vision, but rather provide guidance that can be used to inform the policymaking process.”

Their recommendations, briefly put, were:

Build dynamic systems: Augmented reality systems should be flexible and capable of being updated to reflect changes both technological and cultural, to remain relevant.

Conduct “threat modeling”: Hackers beat systems by finding behaviors that designers didn’t anticipate. Systems should be reviewed with an eye toward who might wish to compromise them and how. This is particularly important because breaches of augmented reality systems could lead to physical harm.

Coordinate with designers: No technology policy should be made in isolation. Designers may not fully appreciate the legal import of a project, and policymakers need to understand the technology in order to make wise decisions.

Consult with diverse potential users: People will use augmented reality in different ways depending on their own experiences and skills. Those planning such systems should consult with diverse populations, and solicit and use their feedback.

Acknowledge trade-offs: Systems open to third-party analysis or additions might promote greater freedom and innovation, but at the cost of harm through malicious applications or coding. Long-term storage, cloud processing or other advanced data processes might give faster performance at the cost of privacy.

Calo called the interdisciplinary analysis of augmented reality law and policy concerns difficult but crucial work.

“We had to come up with a process to blend the technical, legal, design and other elements into a single policy document,” he said. “I hope the finished document proves useful to policymakers of all kinds.”

###

For more information, contact Calo at rcalo@uw.edu or 206-543-1580. Follow Calo on Twitter.

]]>
Robotics and the law: When software can harm you /news/2015/07/13/robotics-and-the-law-when-software-can-harm-you/ Mon, 13 Jul 2015 19:24:48 +0000 /news/?p=37828
An artist’s concept of a NASA robotic refueling mission. Shown here, cameras light the way as a tool from a robotic refueling mission approaches a satellite to cut wire, one of the steps to remotely accessing a satellite’s triple-sealed fuel valve. Photo: NASA / Goddard Space Flight Center

Recent headlines declaring “Robot Kills Man in Germany” are examples of growing news coverage about the impact of robots on society. This is the subject of a new law review article by a UW faculty member.

Twenty years in, the law is finally starting to get used to the Internet. Now it is imperative, says Ryan Calo, assistant professor in the UW School of Law, that the law figure out how to deal effectively with the rise of robotics and artificial intelligence.

“Technology has not stood still. The same private and public institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence,” writes Calo in “Robotics and the Lessons of Cyberlaw.” His article, published in June in the California Law Review, is among the first to examine what the introduction of robotics and artificial intelligence means for law and policy.

Robotics, Calo adds, is shaping up to be the next transformative technology of our time: “Courts that struggled for the proper metaphor to apply to the Internet will struggle anew with robotics.”

Though mention of robotics and artificial intelligence can prompt images of unstoppable Terminators and mutinous HAL 9000 computers, Calo dismisses such drama early on. “And yet,” he adds, “the widespread distribution of robotics in society will, like the Internet, create deep social, cultural, economic and of course legal tensions” long before any such sci-fi-style future.

To Calo, robotics is essentially different from the Internet and so will raise different legal issues.

“Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm,” Calo writes. “Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument.”

But does that mean robotics and artificial intelligence need different treatment under the law, or different laws entirely, than the technologies of which they are made, such as computers?

In the paper and a 2014 piece on the same subject, Calo relates an anecdote about Chicago judge and law professor Frank Easterbrook, who in 1996 famously likened research in Internet law to studying “the law of the horse.” Easterbrook felt any such single-technology approach was doomed to “be shallow and to miss unifying principles.” Calo quotes science fiction writer Cory Doctorow, who in a response to Calo in The Guardian wrote that he could not think of a legal principle applicable to robots that would not also be usefully applied to the computer, and vice versa.

“I disagreed with Easterbrook then and I disagree with Doctorow now,” Calo writes. “Robotics has a different set of essential qualities than the Internet, which animate a new set of legal puzzles.”

Calo’s conclusion is, in a sense, a follow-up to his earlier work: Robotics and artificial intelligence, he finds, have essential qualities unlike any the law has yet faced.

“So I join a chorus of voices, from Bill Gates to the White House, to assume that robotics represents an idea whose time has come. The qualities, and the experiences they generate, occasion a distinct catalogue of legal and policy issues that sometimes do, and sometimes do not, echo the central questions of contemporary cyberlaw.”

Calo, whom Business Insider has named among the most important people in robotics, concludes, “Cyberlaw will have to engage, to a far greater degree, with the prospect of data causing physical harm, and to the line between speech and action. Rather than think of how code controls people, cyberlaw will think of what people can do to control code.”

Calo’s recent paper has already attracted a response from a fellow legal scholar, Yale Law School’s Jack Balkin, who calls it a valuable discussion: “Calo’s account of the problems that robotics present for law is just terrific, and I believe it is destined to be the starting point for much future research in the area.”

###

For more information, contact Calo at 206-543-1580 or rcalo@uw.edu. Follow Calo on Twitter.

  • Read Calo’s Slate article.
  • Watch a video of Calo discussing robotics and cyberlaw at the 2014 “We Robot” conference at the University of Miami.
  • Read an interview with Calo about the June 29, 2015, robot-caused fatality in a German Volkswagen factory.

]]>