artificial intelligence – UW News /news Wed, 25 Feb 2026 18:13:29 +0000

UW and Microsoft expand relationship to enhance AI learning and research with aim to prepare Washington’s workforce for the future /news/2026/02/24/uw-and-microsoft-expand-relationship-to-enhance-ai-learning-and-research-with-aim-to-prepare-washingtons-workforce-for-the-future/ Tue, 24 Feb 2026 23:33:11 +0000
The UW and Microsoft announced the expansion of their long-standing partnership uniting world-class academic research with world-leading technology. Amelia Keyser-Gibson (right), a graduate student in the School of Environmental and Forest Sciences, demonstrates her research to UW President Robert J. Jones (center) and Microsoft Vice Chair and President Brad Smith (left). Photo: Mark Stone/UW

The University of Washington and Microsoft have announced the expansion of their long-standing partnership uniting world-class academic research with world-leading technology. The UW and Microsoft aim to accelerate AI discovery, prepare students and workers for an AI-driven economy, and help communities understand and use AI responsibly.

The announcement, made today by UW President Robert J. Jones and Microsoft Vice Chair and President Brad Smith during an event at the UW’s Paul G. Allen School of Computer Science & Engineering, will increase the University’s access to the most advanced AI computing power, expand internship and applied research opportunities for its students, and develop community AI literacy programs, including a foundational AI course for working Washingtonians.

“Our long-standing partnership with Microsoft demonstrates what’s possible when universities and industry come together to support students and our society, and we are grateful for their continued support,” Jones said. “Together, we’re expanding students’ access to hands-on learning, advancing AI research and strengthening our workforce.”

 


This announcement builds on Microsoft’s decades-long support of the University, including $165 million of investments in student scholarships and enhancements to the UW’s world-leading computer science and engineering programs. In tandem with ongoing state and federal support, these investments have helped increase access to education and contributed to the state’s highly skilled workforce.

“President Jones has outlined a bold vision for the UW, one that expands access and affordability in higher ed, forges radical partnerships and strengthens civic health,” Smith said. “It’s essential that this vision includes broad access to AI technology and the skills to use it, so students, workers and communities across Washington are prepared for this new era of computing and can share fully in its benefits.”

The timing of the announcement comes as forecasts predict a need to fill 1.5 million job vacancies in Washington by 2032 – about 640,000 new jobs and 910,000 openings due to retirements, according to Partnership for Learning. Up to 75% of those vacancies will require post-secondary credentials, with four-year and advanced degrees in highest demand. If current trends hold, experts predict a shortfall of nearly 600,000 credentialed workers in Washington over the decade.

“It’s critical that industry, colleges and universities, and policymakers continue to work together to maintain the region’s economy and climate of innovation and discovery,” Smith said. “That includes avoiding going backward by making cuts to core state funding that would make a college degree less accessible to our state’s students.”

The budgets proposed by the Washington State Legislature’s majorities would keep funding for the UW largely stable. Historically, the Legislature has created a fertile environment for workforce growth and training through the Washington Workforce Education Investment Act (WEIA) and the Washington State Opportunity Scholarship (WSOS).

Since passage in 2019, with support from Microsoft and other business leaders, the WEIA has generated more than $2 billion in dedicated funding to expand higher education access in Washington. WSOS – a first-of-its-kind public-private partnership in which private employers contribute philanthropic dollars that are matched by the State of Washington to expand access to higher education in high-demand fields – has delivered nearly $150 million in total scholarships statewide, combining private donations and state matching funds. One-third of WSOS scholars attend the UW.

“These new elements of our partnership with Microsoft continue to position the UW and our state as leaders in access to higher education and at the forefront of the emerging technologies that can drive broad-based prosperity,” Jones said.

Microsoft and the UW’s expanded partnership will:

  • Provide faculty, researchers and students with access to advanced computing capabilities that enable modern AI training, experimentation, research and instruction. Microsoft is supplementing this effort by donating Microsoft Azure cloud computing credits to help accelerate the development of a research cloud computing platform.
  • Launch a new initiative to connect UW faculty, visiting professors and students with real-world research opportunities at Microsoft. This is based on a new “research marketplace” that will be established and supported by Microsoft’s AI for Good Lab. It will be complemented by 10 additional graduate student-researcher slots per year – eight through the Microsoft Research organization and two in the AI for Good Lab.
  • Support undergraduate students as they become civic leaders, helping them build ethical judgment, digital citizenship and agency to co-design how emerging technologies, including AI, will serve communities and democracy.
  • Join forces with UW’s Continuum College, an institution serving more than 50,000 learners annually through 400 programs for young people, working adults and senior citizens. The UW and Microsoft will develop programming that helps Washingtonians navigate AI-related workforce transitions with confidence and purpose. This collaboration will result in new courses and other learning pathways focused on career resilience, evolving job demands and navigating the challenges that accompany shifting career identities.
  • Beginning this fall, the UW and Microsoft will launch a new collaboration on Microsoft’s Redmond campus that reimagines how universities and industry work together. This part of the work will deepen workforce-connected education and applied learning. The collaboration will support the co-development of select courses and learning experiences for Microsoft employees navigating rapid AI-driven change, while enabling UW students to learn alongside industry professionals and gain real-world insight as part of their academic experience. Additional details will be announced later this year.

Since becoming the UW’s 34th president in August 2025, Jones has set out three key priorities for the University: increasing access to education, including through the goal of making a UW degree debt-free for Washington undergraduates; spurring radical collaborations with businesses and communities to advance positive change; and eliminating artificial barriers between the University and the communities it serves.

Along with strategic planning underway at the UW, Jones is engaging with corporate and civic leaders, as well as organizations throughout the region, to expand existing partnerships with the UW. Through these relationships, he aims to support access and affordability for students and the economic vitality and social fabric of Washington state and beyond.

For more information, contact Victor Balta at balta@uw.edu.

AI headphones automatically learn who you’re talking to – and let you hear them better /news/2025/12/09/ai-headphones-smart-noise-cancellation-proactive-listening/ Tue, 09 Dec 2025 17:30:37 +0000

UPDATE (Dec. 12, 2025): This story has been updated to correct Malek Itani’s department.

Holding a conversation in a crowded room often leads to the frustrating “cocktail party problem,” or the challenge of separating the voices of conversation partners from a hubbub. It’s a mentally taxing situation that can be exacerbated by hearing impairment.

As a solution to this common conundrum, researchers at the University of Washington have developed headphones that proactively isolate all the wearer’s conversation partners in a noisy soundscape. The headphones are powered by an AI model that detects the cadence of a conversation and another model that mutes any voices that don’t follow that pattern, along with other unwanted background noises. The prototype uses off-the-shelf hardware and can identify conversation partners using just two to four seconds of audio.

The system’s developers think the technology could one day help users of hearing aids, earbuds and smart glasses to filter their soundscapes without the need to manually direct the AI’s “attention.”

The team presented its findings Nov. 7 in Suzhou, China, at the Conference on Empirical Methods in Natural Language Processing. The underlying code is open-source.

“Existing approaches to identifying who the wearer is listening to predominantly involve electrodes implanted in the brain to track attention,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “Our insight is that when we’re conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes.”


The prototype system, dubbed “proactive hearing assistants,” activates when the person wearing the headphones begins speaking. From there, one AI model begins tracking conversation participants by performing a “who spoke when” analysis and looking for low overlap in exchanges. The system then forwards the result to a second model, which isolates the participants and plays the cleaned-up audio for the wearer. The system is fast enough to avoid confusing audio lag for the user, and can currently juggle one to four conversation partners in addition to the wearer’s audio.
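The “who spoke when” idea above can be sketched in a few lines. This is a hypothetical illustration only – the function names, the overlap threshold and the segment format are invented here, and the real system uses trained neural models rather than this simple interval heuristic.

```python
# Hypothetical sketch of the turn-taking heuristic described above: given
# diarized segments (speaker, start, end), flag speakers whose speech rarely
# overlaps the wearer's as likely conversation partners. The real system
# uses trained neural models; this only illustrates the "who spoke when" idea.

def overlap(a, b):
    """Seconds of overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def likely_partners(segments, wearer="wearer", max_overlap_frac=0.2):
    """Return speakers whose speech overlaps the wearer's for less than
    max_overlap_frac of their own total speaking time."""
    wearer_turns = [(s, e) for spk, s, e in segments if spk == wearer]
    by_speaker = {}
    for spk, s, e in segments:
        if spk != wearer:
            by_speaker.setdefault(spk, []).append((s, e))
    partners = []
    for spk, turns in by_speaker.items():
        total = sum(e - s for s, e in turns)
        overlapped = sum(overlap(t, w) for t in turns for w in wearer_turns)
        if total > 0 and overlapped / total < max_overlap_frac:
            partners.append(spk)
    return sorted(partners)

segments = [
    ("wearer", 0.0, 2.0), ("alice", 2.1, 4.0),  # clean turn-taking
    ("wearer", 4.2, 6.0), ("alice", 6.1, 8.0),
    ("bob", 0.5, 7.5),                          # talks over everyone
]
print(likely_partners(segments))  # ['alice']
```

Here "alice" alternates cleanly with the wearer while "bob" talks over both, so only "alice" is kept – the same low-overlap signal the article describes, reduced to a toy.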

The team tested the headphones with 11 participants, who rated qualities like noise suppression and comprehension with and without the AI filtration. Overall, the group rated the filtered audio more than twice as favorably as the baseline.

A pair of headphones with a curly black microphone taped to one ear cup.
The team combined off-the-shelf noise-canceling headphones with binaural microphones to create the prototype, pictured here. Photo: Hu et al./EMNLP

Gollakota’s team has been experimenting with AI-powered hearing assistants for the past few years. They developed one smart headphone prototype that can pick a person’s audio out of a crowd when the wearer looks at them, and another that creates a “sound bubble” by muting all sounds beyond a set distance from the wearer.

“Everything we’ve done previously requires the user to manually select a specific speaker or a distance within which to listen, which is not great for user experience,” said lead author Guilin Hu, a doctoral student in the Allen School. “What we’ve demonstrated is a technology that’s proactive – something that infers human intent noninvasively and automatically.”

Plenty of work remains to refine the experience. The more dynamic a conversation gets, the more the system is likely to struggle, as participants talk over one another or speak in longer monologues. Participants entering and leaving a conversation present another hurdle, though Gollakota was surprised by how well the current prototype performed in these more complicated scenarios. The authors also note that the models were tested on English, Mandarin and Japanese dialog, and that the rhythms of other languages might require further fine-tuning.

The current prototype uses commercial over-the-ear headphones, microphones and circuitry. Eventually, Gollakota expects to make the system small enough to run on a tiny chip within an earbud or a hearing aid. In a previous paper, the authors demonstrated that it is possible to run AI models on tiny hearing aid devices.

Co-authors include UW doctoral students in the Allen School and in the electrical and computer engineering department.

This research was funded by the Moore Inventor Fellows program.

For more information, contact proactivehearing@cs.washington.edu

$10M gift from Charles and Lisa Simonyi establishes AI@UW to advance artificial intelligence and emerging technologies /news/2025/11/18/10-million-gift-from-charles-and-lisa-simonyi-establishes-aiuw-to-advance-artificial-intelligence-and-emerging-technologies/ Tue, 18 Nov 2025 17:02:43 +0000
The UW announced a foundational $10 million gift from philanthropists Charles and Lisa Simonyi to support work in artificial intelligence and emerging technologies. Photo: University of Washington

The University of Washington today announced a foundational $10 million gift from philanthropists Charles and Lisa Simonyi to support groundbreaking work in artificial intelligence and emerging technologies.

The gift will establish a new initiative, AI@UW, to support the UW’s global leadership in advancing AI, machine learning and related areas of computing. Noah A. Smith, currently the Amazon Professor of Machine Learning in the Paul G. Allen School of Computer Science & Engineering, will become the vice provost for artificial intelligence and the inaugural Charles and Lisa Simonyi Endowed Chair for Artificial Intelligence and Emerging Technologies. The chair appointment is pending Board of Regents approval.

“With this generous gift from Charles and Lisa Simonyi, we will further position the UW as a model for how universities can responsibly and creatively adapt to the age of AI across education, research, administration and governance,” UW Provost Tricia Serio said. “By leading the AI@UW initiative, Vice Provost Noah Smith will guide our efforts to accelerate innovation and collaboration, illuminate achievements, propagate effective practices throughout the UW community and beyond, and ensure that our graduates are prepared for the workforce of today and tomorrow.”

Noah A. Smith will become the vice provost for artificial intelligence and the inaugural Charles and Lisa Simonyi Endowed Chair for Artificial Intelligence and Emerging Technologies. Photo: University of Washington

UW researchers and faculty already are globally recognized for cultivating a deep understanding of the science and potential of these rapidly developing technologies. Work at the UW is creating practical and responsible applications for AI that span the academic enterprise, contribute to industry and uplift society.

Charles and Lisa Simonyi have a long history of supporting the UW. Lisa Simonyi is the chair of the UW Foundation Board, and Charles Simonyi is a technical fellow at Microsoft, where he also was a pioneer in developing software applications.

“The future of computing, research and innovations is deeply connected to the next era in artificial intelligence and machine learning,” Lisa and Charles Simonyi said. “We believe in the UW’s ability to engage students and faculty toward discoveries that will transform the university, the region and, indeed, the world. We are pleased to lend our support to advancing this exciting, interdisciplinary field.”

The Charles and Lisa Simonyi gift also will support the creation of an AI governance committee, student scholarships, community engagement and investments in computing resources and equipment.

“This extraordinary gift from the Simonyis demonstrates their vision and deep trust in the UW’s role as a global leader in innovation,” UW President Robert J. Jones said. “It is a foundational investment that will help ensure artificial intelligence is developed and applied responsibly – serving humanity and advancing knowledge in ways that reflect our shared values.”


 

In the near term, the vice provost for artificial intelligence will establish a SEED-AI grant program to fund projects, led by UW faculty, that elevate the use of AI in UW educational activities. SEED-AI grants will support innovative, exploratory projects aiming to discover how AI can enhance learning and teaching across disciplines, enlighten the UW community, and inspire future developments of AI in the educational context.

Thanks to the Simonyi gift, Smith said, the UW will model how universities can responsibly and creatively adapt to the age of AI across education, research, administration and governance.

“The UW’s people are already leading the way in shaping universities in the time of AI,” Smith said. “While its rapid rise has been surprising, as an AI researcher and teacher I’m energized by the chance to promote AI literacy, explore how AI can enrich learning across disciplines and help steer AI’s development in ways that are most useful to the University’s mission.”

Contact Smith at nasmith@cs.washington.edu.

This AI model simulates 1000 years of the current climate in just one day /news/2025/08/25/ai-simulates-1000-years-of-climate/ Mon, 25 Aug 2025 15:47:55 +0000
The new AI model from Dale Durran, University of Washington professor of atmospheric and climate science, and graduate student Nathaniel Cresswell-Clay simulates up to 1,000 years of the current climate using less computing power than conventional methods. It captures atmospheric conditions like the low-pressure system over the central US pictured above. Photo: NASA Earth Observing System/Interdisciplinary Science (IDS) program under the Earth Science Enterprise (ESE)

So-called “100-year weather events” now seem almost commonplace as floods, storms and fires continue to set new standards for largest, strongest and most destructive. But to categorize weather as a true 100-year event, there must be just a 1% chance of it occurring in any given year. The trouble is that researchers don’t always know whether the weather aligns with the current climate or defies the odds.
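The 1% definition can be made concrete with a little probability: assuming independent years (a simplification), the chance of at least one 100-year event over a span of n years is 1 − 0.99ⁿ. A quick sketch (the function name is ours):

```python
# A "100-year event" has a 1% chance in any given year, per the definition
# above. Assuming each year is independent (a simplification), the chance of
# at least one such event over n years is 1 - 0.99**n.

def prob_at_least_one(annual_prob, years):
    """Probability of at least one event over the given number of years."""
    return 1 - (1 - annual_prob) ** years

print(round(prob_at_least_one(0.01, 30), 3))   # ~0.26 over a 30-year span
print(round(prob_at_least_one(0.01, 100), 3))  # ~0.634: not even guaranteed in 100 years
```

The counterintuitive takeaway is that a "100-year" flood is far from certain to occur in any particular century, which is part of why long simulated records like the one described below are useful.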

Traditional weather forecasting models run on energy-hogging supercomputers that are typically housed at large research institutions. In the past five years, artificial intelligence has emerged as a powerful tool for cheaper, faster forecasting, but most AI-powered models can only accurately forecast 10 days into the future. Still, longer-range forecasts are critical for climate science – and helping people prepare for seasons to come.

In a study published in AGU Advances, University of Washington researchers used AI to simulate the Earth’s current climate and interannual variability for up to 1,000 years. The model runs on a single processor and takes just 12 hours to generate a forecast. On a state-of-the-art supercomputer, the same simulation would take approximately 90 days.

“We are developing a tool that examines the variability in our current climate to help answer this lingering question: Is a given event the kind of thing that happens naturally, or not?” said Dale Durran, a UW professor of atmospheric and climate science.

Durran was one of the first to introduce AI into weather forecasting more than five years ago, when he and a former UW graduate student partnered with Microsoft Research. Durran also holds a joint position as a researcher with California-based Nvidia.

“To train an AI model, you have to give it tons of data,” Durran said. “But if you break up the available historical data by season, you don’t get very many chunks.”

The most accurate global datasets for the daily weather go back to roughly 1979. Although there are plenty of days between then and now that can be used to train a daily weather forecast model, the same period contains fewer seasons. This lack of historical data was perceived as a barrier to using AI for seasonal forecasting.

Counterintuitively, the Durran group’s latest contribution to forecasting, the Deep Learning Earth SYstem Model, or DLESyM, was trained for one-day forecasts but still learned how to capture seasonal variability.

The model combines two neural networks: one representing the atmosphere and the other, the ocean. While traditional Earth-system models often join atmospheric and oceanic forecasts, researchers had yet to incorporate this approach into models powered by AI alone.

“We were the first to apply this framework to AI and we found out that it worked really well,” said lead author Nathaniel Cresswell-Clay, a UW graduate student in atmospheric and climate science. “We’re presenting this as a model that defies a lot of the present assumptions surrounding AI in climate science.”

Because the temperature of the sea surface changes more slowly than the air temperature, the oceanic model updates its predictions every four days, while the atmospheric model updates every 12 hours. Cresswell-Clay is currently working on adding a land-surface model to DLESyM.
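The two cadences above can be illustrated with a toy stepping loop. This sketch only counts updates – the 12-hour and 4-day constants match the article, but everything else (names, structure) is invented and stands in for the real neural-network components.

```python
# Toy illustration of the coupling cadence described above: the atmosphere
# component steps every 12 hours while the ocean steps every 4 days (96 h),
# i.e. once per 8 atmospheric steps. The counters stand in for the actual
# neural-network forecasts, which this sketch does not attempt to model.

ATMOS_STEP_H = 12   # atmospheric model update interval, hours
OCEAN_STEP_H = 96   # oceanic model update interval, hours (4 days)

def run(hours):
    """Count how many times each component updates over a simulated span."""
    atmos_updates = ocean_updates = 0
    for t in range(ATMOS_STEP_H, hours + 1, ATMOS_STEP_H):
        atmos_updates += 1              # atmosphere advances every 12 h
        if t % OCEAN_STEP_H == 0:
            ocean_updates += 1          # ocean advances every 4 days
    return atmos_updates, ocean_updates

print(run(24 * 365))  # one simulated year: (730, 91)
```

Running the slow component an order of magnitude less often than the fast one is a common way to cut the cost of coupled simulations, since the sea surface carries little new information between ocean steps.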

(a) A low-pressure system simulated by the model in the winter of 3016; (b) an observed low-pressure system in March 2018. The black lines show pressure and color indicates wind speed. Comparing the images reveals the model’s accuracy. Photo: Created by Nathaniel Cresswell-Clay

“Our design opens the door for adding other components of the Earth system in the future,” he said, especially components that have been difficult to model in the past, such as the relationship between soil, plants and the atmosphere. Instead of researchers coming up with an equation to represent this complex relationship, AI learns directly from the data.

The researchers showcased the model’s performance by comparing its forecasts of past events to those generated by the four leading models from the sixth phase of the Coupled Model Intercomparison Project, or CMIP6, all of which run on supercomputers. Predictions of future climate from these models were key resources used in the most recent report from the Intergovernmental Panel on Climate Change.

DLESyM simulated tropical cyclones and the seasonal cycle of the Indian summer monsoon better than the CMIP6 models. In mid-latitudes, DLESyM captured the month-to-month and interannual variability of weather patterns at least as well as the CMIP6 models.

For example, the model captured atmospheric “blocking” events just as well as the leading physics-based models. Blocking refers to the formation of atmospheric ridges that keep regions hot and dry, and others cold or wet, by deflecting incoming weather systems. “A lot of the existing climate models actually don’t do a very good job capturing this pattern,” Cresswell-Clay said. “The quality of our results validates our model and improves our trust in its future projections.”

Neither DLESyM nor the CMIP6 models are 100% accurate, but the fact that the AI-based approach was competitive while using so much less power is significant.

“Not only does the model have a much lower carbon footprint, but anyone can download it from our website and run complex experiments, even if they don’t have supercomputer access,” Durran said. “This puts the technology within reach for many other researchers.”

Other authors include a visiting UW doctoral student in atmospheric and climate science; two UW doctoral students in atmospheric and climate science; Raúl A. Moreno, a doctoral student in atmospheric and climate science; and a postdoctoral researcher in neuro-cognitive modeling at the University of Tübingen in Germany.

This work was funded by the U.S. Office of Naval Research, the U.S. Department of Defense, the University of Chinese Academy of Sciences, the National Science Foundation of China, Deutscher Akademischer Austauschdienst, International Max Planck Research School for Intelligent Systems, Deutsche Forschungsgemeinschaft, U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and the NVIDIA Applied Research Accelerator Program.

For more information, contact Nathaniel Cresswell-Clay at nacc@atmos.washington.edu or Dale Durran at drdee@uw.edu.

Q&A: Promises and perils of AI in medicine, according to UW experts in public health and AI /news/2024/11/21/qa-promises-and-perils-of-ai-in-medicine-according-to-uw-experts-in-public-health-and-ai/ Thu, 21 Nov 2024 16:27:38 +0000

In most doctors’ offices these days, you’ll find a pattern: Everybody’s Googling, all the time. Physicians search for clues to a diagnosis, or for reminders on the best treatment plans. Patients scour WebMD, tapping in their symptoms and doomscrolling a long list of possible problems.

But those constant searches leave something to be desired. Doctors don’t have the time to sift through pages of results, and patients don’t have the knowledge to digest medical research. Everybody has trouble finding the most reliable information.

Optimists believe artificial intelligence could help solve those problems, but the bots might not be ready for prime time. Gary Franklin, a University of Washington research professor of environmental & occupational health sciences and of neurology in the UW School of Medicine, recently described a troubling experience with Google’s Gemini chatbot. When Franklin asked Gemini for information on the outcomes of a specific procedure – a decompressive brachial plexus surgery – the bot gave a detailed answer that cited two medical studies, neither of which existed.

Franklin wrote that it’s “buyer beware when it comes to using AI Chatbots for the purposes of extracting accurate scientific information or evidence-based guidance.” He recommended that AI experts develop specialized chatbots that pull information only from verified sources.

One expert working toward a solution is Lucy Lu Wang, a UW assistant professor in the Information School who focuses on making AI better at understanding and relaying scientific information. Wang has developed several tools toward that goal.

UW News sat down with Franklin and Wang to discuss how AI could enhance health care, what’s standing in the way, and whether there’s a downside to democratizing medical research.

Each of you has studied the possibilities and perils of AI in health care, including the experiences of patients who ask chatbots for medical information. In a best-case scenario, how do you envision AI being used in health and medicine?

Gary Franklin: Doctors use Google a lot, but they also rely on services like UpToDate, which provide really great summaries of medical information and research. Most doctors have zero time and just want to be able to read something very quickly that is well documented. So from a physician’s perspective – trying to find truthful answers, trying to make my practice more efficient, trying to coordinate things better – if this technology could meaningfully contribute to any of those things, then it would be unbelievably great.

Gary Franklin, research professor of environmental & occupational health sciences and of neurology in the School of Medicine

I’m not sure how much doctors will use AI, but for many years, patients have been coming in with questions about what they found on the internet, . AI is just the next step of patients doing this, getting some guidance about what to do with the advice they鈥檙e getting. As an example, if a patient sees a surgeon who’s overly aggressive and says they need a big procedure, the patient could ask an AI tool what the broader literature might recommend. And I have concerns about that.

Lucy Lu Wang: I’ll take this question from the clinician’s perspective, and then from the patient’s perspective.

From the clinician’s perspective, I agree with what Gary said. Clinicians want to look up information very quickly because they’re so taxed and there’s limited time to treat patients. And you can imagine if the tools that we have, these chatbots, were actually very good at searching for information and very good at citing accurately, that they could become a better replacement for a type of tool like UpToDate, right? Because UpToDate is good, it’s human-curated, but it doesn’t always contain the most fine-grained information you might be looking for.

Lucy Lu Wang, assistant professor in the Information School

These tools could also potentially help clinicians with patient communication, because there’s not always enough time to follow up or explain things in a way that patients can understand. It’s an add-on part of the job for clinicians, and that’s where I think language models and these tools, in an ideal world, could be really beneficial.

Lastly, on the patient’s side, it would be really amazing to develop these tools that help with patient education and help increase the overall health literacy of the population, beyond what WebMD or Google does. These tools could engage patients with their own health and health care more than before.

Zooming out from the individual to the systemic, do you see any ways AI could make health systems as a whole function more smoothly?

GF: One thing I’m curious about is whether these tools can be used to help with coordination across the health care system and between physicians. It’s horrible. There was a book that argued the main problem in American medicine is poor coordination across specialties, or between primary care and anybody else. It’s still horrible, because there’s no function in the medical field that actually does that. So that’s another question: Is there a role here for this kind of technology in coordinating health care?

LLW: There’s been a lot of work on tools that can summarize a patient’s medical history in their clinical notes, and that could be one way to perform this kind of communication between specialties. There’s another component, too: If patients can directly interact with the system, we can construct a better timeline of the patient’s experiences and how that relates to their clinical medical care.

We’ve done qualitative research with health care seekers that suggests there are lots of types of questions that people are less willing to ask their clinical provider, but much more willing to put into one of these models. So the models themselves are potentially addressing unmet needs that patients aren’t willing to directly share with their doctors.

What’s standing in the way of these best-case scenarios?

LLW: I think there are both technical challenges and socio-technical challenges. In terms of technical challenges, a lot of these models’ training doesn’t currently make them effective for tasks like scientific search and summarization.

First, these current chatbots are mostly trained to be general-purpose tools, so they’re meant to be OK at everything, but not great at anything. And I think there will be more targeted development towards these more specific tasks, things like scientific search with citations that Gary mentioned before. The current training methods tend to produce models that are instruction-following, and have a very large positive response bias in their outputs. That can lead to things like generating answers with citations that support the answer, even if those citations don’t exist in the real world. These models are also trained to be overconfident in their responses. If the way the model communicates is positive and overconfident, then it’s going to lead to lots of problems in a domain like health care.

And then, of course, there’s socio-technical problems, like, maybe these models should be developed with the specific goal of supporting scientific search. People are, in fact, working toward these things and have demonstrated good preliminary results.

GF: So are the folks in your field pretty confident that that can be overcome in a fairly short time?

LLW: I think the citation problem has already been overcome in research demonstration cases. If we, for example, hook up an LLM to PubMed search and allow it to cite only conclusions based on articles that are indexed in PubMed, then the models are actually very faithful to the citations retrieved from that search engine. But if you use Gemini or ChatGPT, those are not always hooked up to those research databases.
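The citation constraint the researcher describes, where the model may cite only documents that a retrieval step actually returned, can be sketched in a few lines. Everything here is illustrative: `search_pubmed` and `generate_answer` are toy stand-ins, not real PubMed or LLM APIs, and the corpus is invented.

```python
# Sketch of citation-constrained retrieval: every citation must come from
# the documents returned by the search step, so it is always checkable.

def search_pubmed(query, corpus):
    """Toy retrieval: return IDs of documents sharing a word with the query."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in corpus.items()
            if terms & set(text.lower().split())]

def generate_answer(query, allowed_ids, corpus):
    """Stand-in for an LLM call; citations are restricted to allowed_ids."""
    cited = allowed_ids[:2]  # a real system would rank by relevance
    summary = " ".join(corpus[i] for i in cited)
    return {"answer": summary, "citations": cited}

corpus = {
    "PMID:1": "Statins reduce cardiovascular risk in adults.",
    "PMID:2": "Exercise improves cardiovascular outcomes.",
    "PMID:3": "Vitamin C does not prevent the common cold.",
}

result = generate_answer("cardiovascular risk",
                         search_pubmed("cardiovascular risk", corpus), corpus)
# Every citation resolves to a retrieved document, so it cannot be fabricated.
assert all(c in corpus for c in result["citations"])
```

The key property is structural: because the generation step only ever sees retrieved IDs, a hallucinated citation is impossible by construction rather than by model good behavior.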

GF: The problem is that a person trying to search using those tools doesn’t know that.

LLW: Right, that’s a problem. People tend to trust these things because, as an example, we now have AI-generated answers at the top of Google search, and people have historically trusted Google search to only index documents that people have written, maybe putting the more trustworthy ones at the top. But that AI-generated response can be full of misinformation. What’s happening is that some people are losing trust in traditional search as a consequence. It’s going to be hard to build back that trust, even if we improve the technology.

We’re really at the beginning of this technology. It took a long time for us to develop meaningful resources on the internet, things like Wikipedia or PubMed. Right now, these chatbots are general-purpose tools, but there are already mixtures of models underneath. And in the future, they’re going to get better at routing people’s queries to the correct expert model, whether that’s the model hooked up to PubMed or to trusted documents published by various associations related to health care. And I think that’s likely where we’re headed in the next couple of years.
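The routing idea above can be caricatured very simply: a router inspects the query and hands it to a specialized model or a general one. The model names and keyword list below are invented for illustration; production routers are themselves learned classifiers, not keyword matches.

```python
# Toy query router for a "mixture of expert models" setup.

def route_query(query):
    """Send health-related queries to a (hypothetical) PubMed-backed expert
    model; send everything else to a general-purpose model."""
    health_terms = {"symptom", "symptoms", "diagnosis", "medication", "disease"}
    if set(query.lower().split()) & health_terms:
        return "pubmed_expert"
    return "general_model"

assert route_query("Which medication interacts with grapefruit") == "pubmed_expert"
assert route_query("Best hiking trails near Seattle") == "general_model"
```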

Trust and reliability issues aside, are there any potential downsides to deploying these tools widely? I can see a potential problem with people using chatbots to self-diagnose when it might be preferable to see a provider.

LLW: Think of a resource like WebMD: Was it a net positive or a net negative? Before it existed, patients really did have a hard time finding any information at all. And of course, there’s limited face time with clinicians where people actually get to ask those questions. So for every patient who wrongly self-diagnoses on WebMD, there are probably also hundreds of patients who found a quick answer to a question. I think it’s going to be similar with these models. They’re going to help address some of the gaps in clinical care where we don’t currently have enough resources.

For more information or to reach the researchers, email Alden Woods at acwoods@uw.edu.

]]>
UW joins $110M cross-Pacific effort to advance artificial intelligence /news/2024/04/09/uw-joins-110-million-cross-pacific-effort-to-advance-artificial-intelligence/ Tue, 09 Apr 2024 19:01:26 +0000 /news/?p=85019 officials pose for a group shot
US Secretary of Commerce Gina Raimondo announced a new innovation partnership between the UW and the University of Tsukuba, supported by Amazon and NVIDIA, at a ceremony Tuesday in Washington, D.C. From left to right: Raimondo, Amazon Senior Vice President David Zapolsky, UW Provost Tricia Serio, University of Tsukuba President Dr. Kyosuke Nagata, NVIDIA Vice President Ned Finkle and Japanese Minister of Education, Culture, Sports, Science and Technology Moriyama Masahito. The partnership is aimed at furthering research, entrepreneurship, human resource development and social implementation in the field of artificial intelligence. Photo: US Department of Commerce

The University of Washington and the University of Tsukuba have entered an innovation partnership with NVIDIA and Amazon aimed at furthering research, entrepreneurship, workforce development and social implementation in the field of artificial intelligence. This U.S.-Japan academic partnership is part of a broad $110 million effort to build on the strong ties between the U.S. and Japan and to continue to lead innovation and technological breakthroughs in artificial intelligence.

The groundbreaking agreement involving universities and industry leaders in both countries was announced April 9 in Washington, D.C., as part of Prime Minister Kishida Fumio’s historic state visit. U.S. Commerce Secretary Gina Raimondo, U.S. Ambassador to Japan Rahm Emanuel and Japanese Minister of Education, Culture, Sports, Science and Technology Moriyama Masahito announced two new research partnerships in artificial intelligence: one between the University of Washington and the University of Tsukuba, and one between Carnegie Mellon University and Keio University. These partnerships are supported by $110 million in combined private sector investment from NVIDIA, Amazon, Arm, Microsoft and nine Japanese companies. Amazon and NVIDIA will each invest $25 million in this collaboration.

“This is an extraordinary opportunity for the UW to lead the global conversation on AI and to convene academic researchers, industry experts and governmental leaders to not only advance the workforce, but to change lives and communities by leveraging this powerful technology,” said UW Provost Tricia Serio.

“This is an exciting effort that brings together the talents and expertise of cutting-edge, world-class universities,” said Washington Gov. Jay Inslee. “Advancements in AI are happening at a breakneck pace. This collaboration will help provide the research and workforce training for our regions’ tech sectors to keep up with the profound impacts AI is having across every sector of our economy.”

 


At the invitation of Ambassador Emanuel, the UW has been forging ties with the University of Tsukuba over the past year, with a focus on shared expertise in artificial intelligence. Tsukuba is known in Japan for being at the center of scientific research and innovation, much like Seattle’s reputation for fostering technological breakthroughs and being home to some of the world’s biggest technology companies.

“We are honored to work with Amazon and NVIDIA as well as with the University of Tsukuba to advance artificial intelligence and global engagement,” said Nancy Allbritton, dean of the College of Engineering. “Tsukuba is a science city just as Seattle is, and we see a tremendous opportunity to leverage the university and the whole ecosystem to create a better future on both sides of the Pacific. We are grateful to Ambassador Emanuel for catalyzing this landmark partnership.”

Faculty and staff from the College of Engineering will spearhead the UW’s interdisciplinary efforts. This multi-year partnership will feature work in areas where AI can drive transformative change to benefit society, including health care, robotics, climate change and atmospheric science, among others. The funding will support research awards, postdoctoral and doctoral students, an undergraduate summer research program, and an entrepreneurship bootcamp program.

This artificial intelligence initiative is the third strategic university-corporate partnership concluded between American and Japanese academic institutions and the corporate sector since May 2022, when President Joe Biden and Prime Minister Kishida made a commitment to advance U.S.-Japan science and technology cooperation. The UW is also the lead partner on UPWARDS, a program focused on workforce development for the semiconductor industry supported by Micron, Tokyo Electron Limited and the National Science Foundation.

]]>
Q&A: How to train AI when you don’t have enough data /news/2024/03/28/train-ai-machine-learning-when-you-dont-have-enough-data/ Thu, 28 Mar 2024 16:53:08 +0000 /news/?p=84878 Artificial intelligence excels at sorting through information and detecting patterns or trends. But these machine learning algorithms need to be trained with large amounts of data first.

As researchers explore potential applications for AI, they have found scenarios where AI could be really useful, such as analyzing X-ray image data to look for evidence of rare conditions, but there’s not enough data to accurately train the algorithms.

Jenq-Neng Hwang. Photo: University of Washington

Jenq-Neng Hwang, a UW professor of electrical and computer engineering, specializes in these issues. For example, Hwang and his team developed a method that teaches AI to monitor how many distinct poses a baby can achieve throughout the day. There are limited training datasets of babies, which meant the researchers had to create a unique pipeline to make their algorithm accurate and useful. The team presented its approach at the IEEE/CVF Winter Conference on Applications of Computer Vision 2024.

UW News spoke with Hwang about the project details and other similarly challenging areas the team is addressing.

Why is it important to develop an algorithm to track baby poses?

Jenq-Neng Hwang: We started a collaboration with the UW School of Medicine. The goal of the project was to help families with a history of autism know whether their babies were also likely to have autism. Babies younger than 9 months don’t really have language skills yet, so it’s difficult to tell whether they’re autistic. Researchers developed a test that categorizes various poses babies can do: If a baby can do this, they get two points; if they can do that, they get three points; and so on. Then you add up all the points, and if the baby is above some threshold, they likely don’t have autism.

But to do this test, you need a doctor to observe all the different poses. It becomes a very tedious process because sometimes after three or four hours, we still haven’t seen a baby do a specific pose. Maybe the baby could do it, but at that moment they didn’t want to. One solution could be to use AI. Parents often have a baby monitor at home. The baby monitor could use AI to continuously and consistently track the various poses a baby does in a day.

Why is AI a good fit for this task?

JNH: My background is studying traditional image processing and computer vision. We were trying to teach computers to figure out human poses from photos or videos, but the trouble is that there are so many variations. For example, even when the same person wears different outfits, it’s challenging for traditional image processing to correctly identify that person’s elbow in each photo.

But AI makes it so much easier. These models can learn. For example, you could train a machine learning model with a variety of motion captured sequences showing all different kinds of people. These sequences could be annotated with the corresponding 3D poses. Then this model could learn to output a 3D model of a person’s pose on a sequence it has never seen before.

But in this case, there aren’t a lot of motion captured sequences of babies that also have 3D pose annotations that you could use to train your machine learning model. What did you do instead?

JNH: We don’t have a lot of 3D pose annotations of baby videos to train the machine learning model for privacy reasons. It’s also difficult to create a dataset where a baby is performing all the possible potential poses that we would need. Our datasets are too small, meaning that a model trained with them would not estimate reliable poses.


But we do have a lot of annotated 3D motion sequences of people in general. So, we developed this pipeline.

First we used the large amount of 3D motion sequences of regular people to train a generic 3D pose model, using a pretraining approach similar to the one behind ChatGPT and other GPT-4-style large language models.

We then fine-tuned our generic model with our very limited dataset of annotated baby motion sequences. The generic model can then adapt to the small dataset and produce high-quality results.

Shown on the left is an image of a baby with a 2D “stick figure” created using a set of detected keypoints. On the right is the stick figure model of the baby’s 3D pose. The red stick figure shows the “ground truth” and the blue stick figure is the 3D pose estimated with the researchers’ algorithm.
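The pretrain-then-fine-tune recipe described above can be illustrated with a deliberately tiny model. This is a toy one-parameter regression, not the team’s pose network, and all data values are invented; the point is only to show why a few fine-tuning steps from a pretrained starting point beat the same few steps from scratch.

```python
# Toy illustration of the two-stage pipeline: pretrain on plentiful
# "generic" data, then adapt with a few steps on a tiny target dataset.

def train(w, data, steps, lr=0.01):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

generic_data = [(x, 2.0 * x) for x in range(1, 50)]   # plentiful "adult" data
baby_data = [(1.0, 2.1), (2.0, 4.2)]                  # tiny target dataset

w_pretrained = train(0.0, generic_data, steps=200, lr=0.0005)
w_finetuned = train(w_pretrained, baby_data, steps=20)
w_scratch = train(0.0, baby_data, steps=20)

# Starting from the pretrained weight lands much closer to the target
# slope (2.1) than the same 20 steps starting from scratch.
assert abs(w_finetuned - 2.1) < abs(w_scratch - 2.1)
```

The pretrained weight is already near the target because generic motion resembles the target motion; fine-tuning only has to close a small gap, which is exactly what makes small datasets usable.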

Are there other tasks like this: good for AI, but there’s not a lot of data to train an algorithm?

JNH: There are many types of scenarios where we don’t have enough information to train the model. One example is a rare disease that is diagnosed by X-rays. The disease is so rare that we don’t have enough X-ray images from patients with the disease to train a model. But we do have a lot of X-rays from healthy patients. So, we can use generative AI again to generate the corresponding synthetic X-ray image without disease, which can then be compared with the diseased image to identify disease regions for further diagnosis.
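The comparison step Hwang describes, diffing a patient image against a synthetic “healthy” counterpart, might look roughly like this. Generating the healthy counterpart with a generative model is assumed and not shown, and the “images” here are toy 1-D pixel lists rather than real X-rays.

```python
# Sketch of anomaly localization by comparison: flag regions where the
# patient image diverges strongly from its synthetic healthy counterpart.

def flag_disease_regions(patient, synthetic_healthy, threshold=0.5):
    """Return pixel indices where the two images disagree beyond a threshold."""
    return [i for i, (p, h) in enumerate(zip(patient, synthetic_healthy))
            if abs(p - h) > threshold]

healthy = [0.1, 0.2, 0.1, 0.3, 0.2]   # synthetic healthy scan (toy 1-D "image")
patient = [0.1, 0.2, 0.9, 0.3, 0.2]   # same scan with one anomalous pixel

assert flag_disease_regions(patient, healthy) == [2]
```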

Autonomous driving is another example. There are so many real events you cannot create. For example, say you are in the middle of driving and a few leaves blow in front of the car. If you use autonomous driving, the car might think something is wrong and slam on the brakes, because the car has never seen this scenario before. This could result in an accident.

We call these “long-tail” events, which means that they are unlikely to happen. But in daily life we always see random things like this. Until we figure out how to train autonomous driving systems to handle these types of events, autonomous driving cannot be useful. Our team is working on this problem by combining data from a regular camera with radar information. The camera and radar persistently check each other’s decisions, which can help a machine learning algorithm make sense of what’s happening.
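One minimal way to picture the cross-checking idea is to act only when both sensors agree. Real fusion systems weigh continuous confidence scores rather than booleans, so this boolean caricature is purely illustrative.

```python
# Boolean caricature of camera/radar cross-checking in the leaves example:
# brake only when both sensors agree something solid is ahead.

def should_brake(camera_sees_obstacle, radar_sees_mass):
    """Cross-check the two sensors before taking a drastic action."""
    return camera_sees_obstacle and radar_sees_mass

# Leaves blowing past: the camera flags "something," but radar gets no
# solid return, so the car does not slam on the brakes.
assert should_brake(True, False) is False
assert should_brake(True, True) is True
```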

Additional co-authors on the baby poses paper include a UW research assistant, UW doctoral students and a UW master’s student, all in the electrical and computer engineering department, as well as a doctoral fellow at the University of Copenhagen. This research was funded by the Electronics and Telecommunications Research Institute of Korea, the National Oceanic and Atmospheric Administration and Cisco Research.

For more information, contact Hwang at hwang@uw.edu.

]]>
Q&A: What is the best route to fairer AI systems? /news/2024/02/15/qa-what-is-the-best-route-to-fairer-ai-systems/ Thu, 15 Feb 2024 15:55:22 +0000 /news/?p=84454 Two people's hands gesture to pieces of paper between two laptops on a desk.
Mike Teodorescu, a UW assistant professor in the Information School, proposes in a new paper that private enterprise standards for fairer machine learning systems would inform governmental regulation.

In December, the European Union approved the first major law aiming to regulate technologies that fall under the umbrella of artificial intelligence. The legislation might have arrived sooner, but the sudden success of ChatGPT in late 2022 demanded the act be updated.

The EU’s act, however, does not mention fairness, a measure of how well a system avoids discrimination. The field studying fairness in machine learning (a subfield of AI) is relatively new, so clear regulation is still in development.

Mike Teodorescu, a UW assistant professor in the Information School, proposes in a new paper that private enterprise standards for fairer machine learning systems would inform governmental regulation.

The paper was published Feb. 15 by the Brookings Institution as part of one of its series.

UW News spoke with Teodorescu about the paper and the field of machine learning fairness.

To start, could you explain what machine learning fairness is?

Mike Teodorescu: It is essentially concerned with ensuring that a machine learning algorithm is fair to all categories of users. It combines computer science, law, philosophy, information systems and some economics as well.

For example, if you’re trying to create software to automate hiring interviews, you might have a group of HR people interview many candidates with diverse backgrounds and experiences and recommend a binary outcome 鈥 hire or don鈥檛 hire. Data from actual HR interviews can be used to train and test a machine learning model. At the end of this process, you get accuracy 鈥 the percent the model got correct. But this percentage does not capture how well the algorithm performs when considering certain subgroups. U.S. law forbids discrimination based on protected attributes, which include gender, race, age, veteran status and so on.

In the simplest terms, as an example, if you look at the rate at which veterans are hired, the algorithm’s hiring decisions should be independent of the protected attribute. Of course, this becomes more complex as you have more intersections of subgroups: you might have race, age, socioeconomic status and gender. From a practical perspective, if you have a system of equalities for dozens of values of protected attributes, it is unlikely that all of them will be satisfied at the same time. I don’t think we have a generalizable solution, and we do not yet have an optimal way to check for AI fairness.
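The equality Teodorescu sketches, hiring rates independent of a protected attribute, is commonly checked as a demographic parity gap: the difference in positive-decision rates between groups. A toy version, with invented data (the attribute values and threshold are illustrative, not a legal standard):

```python
# Toy demographic parity check over a protected attribute.

def positive_rates(decisions):
    """decisions: list of (protected_value, hired_bool) pairs.
    Returns the hire rate for each value of the protected attribute."""
    by_group = {}
    for group, hired in decisions:
        by_group.setdefault(group, []).append(hired)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def parity_gap(decisions):
    """Largest difference in hire rates across groups; 0 means parity."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("veteran", True), ("veteran", False),
             ("non-veteran", True), ("non-veteran", True)]

# Hire rates: veterans 0.5, non-veterans 1.0, so the gap is 0.5.
assert abs(parity_gap(decisions) - 0.5) < 1e-9
```

As the interview notes, with many intersecting subgroups it becomes practically impossible to drive every such gap to zero simultaneously, which is one reason a single accuracy number hides so much.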

What is it important for the general public to understand about machine learning fairness?

MT: It helps to understand procedural fairness, which looks at the methods used to come up with decisions. A user might want to ask: “Do I know if this software is using machine learning to make some prediction about me? If yes, what kind of inputs is it taking? Can I correct an incorrect prediction? Is there a feedback mechanism by which I can challenge it?”

This principle is actually found in privacy laws in Europe and California, where we can object to certain information being used. That level of transparency would be great in the case of a machine learning algorithm making some decision about you. Maybe there is an option to select what variables it’s using to show you certain ads. Now, I’m not sure that’s something we will see in the very near future, but it’s something users might care about.

What’s impeding fairness standards from being widely adopted by companies?

MT: I think it’s a problem of incentives. From an economic perspective, companies want to bring products to market as quickly as possible. If users get an app that uses image recognition AI, they likely won鈥檛 read the Terms of Service. So they’re probably not going to spend the time to go through training on whether the tool is fair or not. Many users might not even know that it’s possible for a tool to be unfair.

For a company right now, the incentive to develop such systems would be to put the company at the technological forefront and to signal quality, showing that its AI tools are fairer than its competitors’. But if users do not know this is a problem, they may not be worried about which company’s product is fairer. Probably 10 years from now, many more people will care about fairness, just as they do about cybersecurity and data privacy. Cybersecurity wasn’t such a common concern until we had a lot of these breaches.

Would an example of what you’re explaining here be somebody submitting a job application to a company that uses a machine learning algorithm to sort applications? That person wouldn’t necessarily know if there’s a machine learning algorithm sorting these applications, so they certainly wouldn’t know if they’ve been unfairly sifted out.

MT: Precisely, and that concern keeps me up at night. There’s a patchwork of regulations across different countries and states, but there isn’t yet a comprehensive federal regulation about this. There’s also an EU law that very recently got through, which allows people to contest or determine how their data is being used. There’s a White House set of directives that have been proposed. Eventually, I think there will be a federal law.

Do you see standards arriving first and then driving actual regulation of machine learning fairness?

MT: Yes, regulations are slow. There are a lot of hurdles to passing a law. But standards play more into the economic incentives. There are standards for cybersecurity, quality measurement, WiFi, Bluetooth and so on, but we don’t quite have accepted standards for machine learning fairness yet. Usually, an organization produces them. The Institute of Electrical and Electronics Engineers (IEEE) comes up with a lot of technical standards, and has actually suggested a few. The standards committees within such organizations usually bring people from industry, academia and government together, and they come up with guidelines that can be updated, so there might be different versions of a standard. That provides a lot more flexibility than regulations. For instance, there are two different quality management manufacturing standards. Most factories have the less strict standard, while the stricter standard for medical manufacturing is very expensive and much more difficult to get. In fairness, you might see a light standard and a much more comprehensive one.

Likewise, standards organizations can have auditing requirements. Once a company complies with a standard, there’s a certain frequency of audits to make sure that the standards continue to be upheld. Having something like that for products that use machine learning would be a great way to improve accountability.

A digital fellow at Stanford University was a co-author.

For more information, contact Teodorescu at miketeod@uw.edu.

]]>
Q&A: Can AI in school actually help students be more creative and self-directed? /news/2023/09/25/ai-school-chatgpt-katie-davis/ Mon, 25 Sep 2023 16:20:12 +0000 /news/?p=82700

 

One fear about generative artificial intelligence, such as ChatGPT, is that students will outsource their creative work and critical thinking to it. But Katie Davis, a UW associate professor in the Information School, is also interested in how researchers might use AI tools to make learning more creative.

In her book, Davis examines how technology affects kids, teens and young adults. She distills research in the area into two key qualities of technologies that support development: They should be “self-directed” (meaning the kids are in control, not the tech makers) and “community-supported” (meaning adults and peers are around to engage with the kids’ tech use).

Davis spoke with UW News about her research and how generative AI might support learning, instead of detracting from it, provided kids can keep their agency.

What issues do you study around young people and technology?

Katie Davis: My research focuses on the impact of new and emerging technologies on young people’s learning, development and well-being, especially on early teens up through college-age kids. Over the years, I’ve explored a variety of topics, but I always come back to this broad question: How are the technologies around young people shaping their sense of self and how they move through the world?

Since ChatGPT was released less than a year ago, what are you paying attention to as research develops around AI and learning?

KD: I’m fascinated by emerging research on what kids are doing with generative AI, such as ChatGPT, when they have free time and want to explore. How are they thinking and making sense of generative AI and its potential 鈥 not just for learning, but for going about their daily lives?

It seems like with generative AI, there’s been a lot of focus on whether kids will use it to outsource their creativity, but you’re also looking at how they can support their creativity by playing with these tools.

KD: Some of the questions I ask in my research are: When does technology support young people’s agency in their learning? When do they feel like they’re in the driver’s seat of their technology use? And when does technology do the work for them and direct them one way instead of another?

My hope is that kids will learn to give ChatGPT and other AI tools creative prompts and use chatbots as a source of inspiration rather than an answer bank. But teaching kids to use AI creatively and critically isn’t easy. Plus, I’m mindful that there’s an unfortunate pattern in education technology whereby innovative uses are traditionally found in more affluent, well-resourced schools, whereas the same technologies, when introduced into less well-resourced schools, are often used more for drill-type activities, or even to control kids and make sure they’re on task.

Are you researching generative AI? What questions are you asking?

KD: In my lab, we want to see if generative AI can make teens’ social media experiences better. We’ve found that teens often go onto social media for one purpose, only to find themselves quickly sucked down a rabbit hole of unintended scrolling. After 20 or 30 minutes, they’re thinking: What have I just done with my time? It’s a very common experience for adults as well. We’re exploring whether we can use generative AI to reorient teens’ initial entry into social media toward meaningfulness, toward their values or goals, and away from habitual use.

We’re also looking at disparities in how generative AI tools are being taken up in different schools and school systems. We’re hoping to understand how young people use AI chatbots outside of school and in their daily lives, and then use those emerging mental models to shape what’s possible in schools and for learning.

Can you describe a way that people have been using ChatGPT without instructions that surprised you?

KD: I’m most interested in kids who try to break ChatGPT because that suggests to me that they’re using a tinkerer鈥檚 mindset, which suggests that they are in control. They鈥檙e asking: What can I do with this tool? How can I push it and stretch it?

Kids are sophisticated users of technology. And they’re not afraid to break things. I think that’s one reason they tend to learn how to use new technologies so quickly, because they don’t care if they make mistakes. That mindset provides a real opportunity that schools can take advantage of, to teach critical understandings of AI and other emerging technologies. Otherwise, I worry that the technology will start to use us and we’ll lose some of our agency. But I don’t think that’s inevitable.


Are there ways to design AI tools to emphasize “self-directed” and “community-supported” experiences of the sort you recommend in your book?

KD: One example is Khan Academy, which has come out with an AI chatbot, Khanmigo. The company is framing Khanmigo as a tutor that’s not just going to give you answers, but will actually ask you open-ended questions to help you come to your own answer. That’s a great vision. Now, my understanding is that it’s not quite there yet. It’s not perfect, but I think the goal is a good one.

It’s fascinating: Generative AI is really rattling some notions around learning through rote exercises, because it basically takes away these exercises.

KD: Even in my university teaching, I have had to think carefully about the kinds of assignments I’m giving students. I can’t just ask them to write a paper on some topic because, odds are, they’re going to use ChatGPT to write it. So I have to really think about what it is that I want them to know and be able to do. It’s not easy, but I love the conversations we’re having as educators. AI is bringing up all these meaty questions: How can we use AI to teach better? Are there new things we need to teach? Are there things we don’t need to teach anymore? This upheaval is unsettling for teachers at all levels, including me. But I think it’s a good unsettling. It’s one that really forces us as educators to focus on the goals of teaching.

What approach have you been taking with generative AI for teaching? Have your policies changed going into this new school year?

KD: I was fortunate not to be teaching for the first two quarters after ChatGPT was introduced! So I got to watch my colleagues try things out and see what worked and what didn’t. I started teaching again in the spring and decided to lean into ChatGPT. In a course on child development and learning with technology, I asked students to use ChatGPT to help them create a lesson plan and then critique what it gave them. The students and I found that ChatGPT creates perfectly reasonable lesson plans, but they’re all a bit ‘blah.’ They’re uninspired. I wanted students to make them better, and so did they.

This fall, I’m teaching a course on research methods. And I want students to use ChatGPT to help them scope and develop their research projects. They鈥檒l discover that ChatGPT may give them a good starting point, but it鈥檚 also likely to give them some bogus citations, which are completely made up. I want them to engage with these benefits and limitations head on.

For more information, contact kdavis78@uw.edu.

Video updated 9/26/2023 to show Davis is an associate professor, not an assistant professor.

]]>
Q&A: As AI changes education, important conversations for kids still happen off-screen /news/2023/08/16/jason-yip-ai-chatgpt-education-learning-teaching-schools/ Wed, 16 Aug 2023 16:29:40 +0000 /news/?p=82372

 

When ChatGPT surged into public life in late 2022, it brought new urgency to long-running debates: Does technology help or hinder kids’ learning? How can we make sure tech’s influence on kids is positive?

Such questions live close to the work of Jason Yip, a UW associate professor in the Information School. Yip has focused on technology’s role in families to support collaboration and learning.

As another school year approaches, Yip spoke with UW News about his research.

What sorts of family technology issues do you study?

Jason Yip: I look at how technologies mediate interactions between kids and their families. That could be parents or guardians, grandparents or siblings. My doctoral degree is in science education, but I study families as opposed to schools because I think families make the biggest impact in learning.

I have three main pillars of that research. The first is about building new technologies to come up with creative ways to study different kinds of collaboration. The second is going into people’s homes and doing field studies on things like how families search the internet, or how they interact with voice assistants or digital games. We look at how new consumer technologies influence family collaborations. The third is co-design: How do adults work with children to co-create new technologies? I direct a co-design lab where kids come to the university basically to work with us as design researchers to make technologies that work for other children.

Can you explain some ways you’ve explored the pros and cons of learning with technology?

JY: I study “joint media engagement,” which is a fancy way of saying that kids can work and play with others when using technology. For example, digital games are a great way parents and kids can actually learn together. I’m often of the opinion that what matters is not the amount of time people look at their screens, but the quality of that screen time.

We’ve known for a long time that if a child and parent watch Sesame Street together and they’re talking, the kid will learn more from it. We found this in studies of “Pokémon Go” and “Animal Crossing.” With these games, families were learning together and, in the case of Animal Crossing, processing pandemic isolation together.

Whether I’m looking at artificial intelligence or other new technologies, I’m asking: Where does the talking and sharing happen? I think that’s what people don’t consider enough in this debate. And that dialogue with kids matters much more than these questions of whether technology is frying kids’ brains. I grew up in the ’90s, when there was vast worry about video games ruining children’s lives. But we all survived, I think.

When ChatGPT came out, it was presented as this huge interruption in how we've dealt with technology. But do you think it's that unprecedented in how kids and families are going to interact and learn with it?

JY: I see the buzz around AI as a hype cycle: a surge of excitement, then a dip, then a plateau. For a long time, we've had artificial intelligence models. Then someone figured out how to make money off AI models and everything's exploding. Goodbye, jobs! Goodbye, school! Eventually we're going to hit this apex (I think we're getting close) and then the dip.

The question I have for big tech companies is: Why are we releasing products like ChatGPT with these very simple interfaces? Why isn't there a tutorial, like in a video game, that teaches the mechanics and rules, what's allowed, what's not allowed?

Partly, this AI anxiety comes because we don't yet know what to do with these powerful tools. So I think it's really important to try to help kids understand that these models are trained on data with human error embedded in it. That's something that I hope generative AI makers will show kids: This is how this model works, and here are its limitations.

Have you begun studying how ChatGPT and generative AI will affect kids and families?

JY: We've been doing co-design work with children, and when these AI models started coming out, we started playing around with them and asked the kids what they thought. Some of them were like, "I don't know if I trust it." Because it couldn't answer simple questions that kids have.

A big fear is that kids and others are going to just accept the information that ChatGPT spits out. That's a very realistic perspective. But there's the other side: People, even kids, have expertise, and they can test these models. We had a kid start asking ChatGPT questions about Pokémon. And the kid is like, "This is not good!" Because the model was contradicting what they knew about Pokémon.

We've also been studying how public libraries can use ChatGPT to teach kids about misinformation. So we asked kids, "If ChatGPT makes a birthday card greeting for you to give to your friend Peter, is that misinformation?" Some of the kids were like, "That's not okay! The card was fine, but Peter didn't know whether it came from a human."

The third research area is going into the homes of immigrant families and trying to understand whether ChatGPT does a decent job of helping them find critical information about health, finances or economics. We've studied and helping their families understand the information. Now we're trying to see how AI models affect this relationship.

What are important things for parents and kids to consider when using new technology 鈥 AI or not 鈥 for learning?

JY: I think parents need to pay attention to the conversations they're having around it. General parenting styles range from . Which style is best is very contextual. But the conversations around technology still have to happen, and I think the most important thing parents can do is say to themselves, "I can be a learner, too. I can learn this with my kids." That's hard, but parenting is really hard. Technologies are developing so rapidly that it's OK for parents not to know. I think that's a better position to be in.

You've taught almost every grade level: elementary, junior high, high school and college. What should teachers be conscious of when integrating generative AI into their classrooms?

JY: I feel for the teachers, I really do, because a lot of the . So it totally depends on the context of the teaching. I think it's up to school leaders to think really deeply about what they're going to do and ask these hard questions, like: What is the point of education in the age of AI?

For example, with generative AI, is testing the best way to gauge what people know? Because if I hand out a take-home test, kids can run it through an AI model and get the answer. Are the ways we鈥檝e been teaching kids still appropriate?

I taught AP chemistry for a long time. I don't encounter AP chemistry tests in my daily life, even as a former chemistry teacher. So having kids learn to adapt is more important than learning new content, because without adaptation, people don't know what to do with these new tools, and then they're stuck. Policymakers and leaders will have to help the teachers make these decisions.

For more information, contact jcyip@uw.edu.
