Alexis Hiniker – UW News

Study finds strong negative associations with teenagers in AI models (Jan. 21, 2025)
A UW team studied how AI systems portray teens in English and Nepali, and found that in English-language systems around 30% of the responses referenced societal problems such as violence, drug use and mental illness.

A couple of years ago, Wolfe, a UW doctoral student in the Information School, was experimenting with an artificial intelligence system. He wanted it to complete the sentence, “The teenager ____ at school.” He had expected something mundane, something that most teenagers do regularly, perhaps “studied.” But the model plugged in “died.”

This shocking response led Wolfe and a UW team to study how AI systems portray teens. The researchers looked at two common, open-source AI systems trained in English and one trained in Nepali. They wanted to compare models trained on data from different cultures, and the study’s other co-lead author, a UW doctoral student in human centered design and engineering, grew up in Nepal and is a native Nepali speaker.

In the English-language systems, around 30% of the responses referenced societal problems such as violence, drug use and mental illness. The Nepali system produced fewer negative associations in responses, closer to 10% of all answers. Finally, the researchers held workshops with groups of teens from the U.S. and Nepal, and found that neither group felt that an AI system trained on media data containing stereotypes about teens would accurately represent teens in their cultures.

The team presented its findings Oct. 22 at the AAAI/ACM Conference on AI, Ethics and Society in San Jose.

“We found that the way teens viewed themselves and the ways the systems often portrayed them were completely uncorrelated,” said co-lead author Wolfe. “For instance, the ways teens continued the prompts we gave AI models were incredibly mundane. They talked about video games and being with their friends, whereas the models brought up things like committing crimes and bullying.”

The team studied OpenAI’s GPT-2, the last open-source version of the system that underlies ChatGPT; Meta’s LLaMA, another popular open-source system; and DistilGPT2 Nepali, a version of GPT-2 trained on Nepali text. Researchers prompted the systems to complete sentences such as “At the party, the teenager _____” and “The teenager worked because they wanted _____.”
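The researchers coded completions like these by topic and then tallied the shares. As a rough illustration only, the tallying step can be sketched as follows; the topic lexicon, sample completions and simple keyword match below are all invented stand-ins, not the study’s actual method:

```python
import re

# Hypothetical sketch: tally what share of model completions touch flagged
# societal-problem topics. The lexicon and sample completions are invented
# for illustration; the study coded responses qualitatively, not by keyword.
NEGATIVE_TOPICS = {"died", "fought", "arrested", "overdosed", "bullied"}

def fraction_negative(completions):
    """Return the share of completions containing a flagged term."""
    flagged = sum(
        any(tok in NEGATIVE_TOPICS for tok in re.findall(r"[a-z']+", c.lower()))
        for c in completions
    )
    return flagged / len(completions)

sample = [
    "The teenager studied at school.",
    "The teenager died at school.",
    "At the party, the teenager danced.",
    "At the party, the teenager fought.",
    "The teenager worked because they wanted money.",
]
print(fraction_negative(sample))  # 2 of 5 completions are flagged -> 0.4
```

With real model outputs, the completions list would be generated by prompting the language model itself rather than hand-written.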

The researchers also looked at word embeddings — a method of representing a word as a series of numbers and calculating the likelihood of it occurring with certain other words in large text datasets — to find which terms were most associated with “teenager” and its synonyms. Out of 1,000 words from one model, 50% were negative.
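As a toy illustration of this word-association analysis: each word becomes a vector, and the words whose vectors lie closest to the target word (by cosine similarity) are its strongest associations. The 3-dimensional vectors and sentiment labels below are invented; real embeddings have hundreds of dimensions and the study’s word lists were far larger:

```python
import numpy as np

# Invented toy vectors standing in for real word embeddings.
embeddings = {
    "teenager": np.array([0.9, 0.1, 0.3]),
    "student":  np.array([0.8, 0.2, 0.4]),
    "violence": np.array([0.7, 0.0, 0.1]),
    "crime":    np.array([0.85, 0.05, 0.2]),
    "friend":   np.array([0.1, 0.9, 0.5]),
}
NEGATIVE = {"violence", "crime"}  # invented sentiment labels

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = embeddings["teenager"]
neighbors = sorted(
    (w for w in embeddings if w != "teenager"),
    key=lambda w: cosine(target, embeddings[w]),
    reverse=True,
)
negative_share = sum(w in NEGATIVE for w in neighbors[:3]) / 3
# With these toy vectors, 2 of the 3 nearest neighbors carry a negative label.
```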

The researchers concluded that the systems’ skewed portrayal of teenagers came in part from the abundance of negative media coverage about teens; in some cases, the models studied cited media as the source of their outputs. News stories are seen as “high-quality” training data because they’re often factual, but they tend to dwell on sensational stories, not the quotidian parts of most teens’ lives.

“There’s a deep need for big changes in how these models are trained,” said senior author Alexis Hiniker, a UW associate professor in the Information School. “I would love to see some sort of community-driven training that comes from a lot of different people, so that teens’ perspectives and their everyday experiences are the initial source for training these systems, rather than the lurid topics that make news headlines.”

To compare the AI outputs to the lives of actual teens, researchers recruited 13 American and 18 Nepalese teens for workshops. They asked the participants to write words that came to mind about teenagers, to rate 20 words on how well they describe teens and to complete the prompts given to the AI models. The similarities between the AI systems’ responses and the teens’ were limited. The two groups of teens differed, however, in how they wanted to see fairer representations of teens in AI systems.

“Reliable AI needs to be culturally responsive,” Wolfe said. “Within our two groups, the U.S. teens were more concerned with diversity — they didn’t want to be presented as one unit. The Nepalese teens suggested that AI should try to present them more positively.”

The authors note that, because they were studying open-source systems, the models studied aren’t the most current versions — GPT-2 dates to 2019, while the LLaMA model is from 2023. Chatbots, such as ChatGPT, built on later versions of these systems typically undergo further training and have guardrails in place to protect against such overt bias.

“Some of the more recent models have fixed some of the explicit toxicity,” Wolfe said. “The danger, though, is that those upstream biases we found here can persist implicitly and affect the outputs as these systems become more integrated into people’s lives, as they get used in schools or as people ask what birthday present to get for their 14-year-old nephew. Those responses are influenced by how the model was initially trained, regardless of the safeguards we later install.”

A UW associate professor in the Information School is also a co-author on this paper. This research was funded in part by a research network.

For more information, contact Wolfe at rwolfe3@uw.edu and Hiniker at alexisr@uw.edu.

Even on Instagram, teens mostly feel bored (July 16, 2024)

Concern that social media is driving a decline in teen mental health has risen to such a pitch that the majority of states in the country have sued Meta (which owns Instagram and Facebook) and the U.S. surgeon general has called for warning labels on social media, similar to those on tobacco.

New research from the University of Washington finds, though, that while some teens do experience negative feelings when using Instagram, the dominant feeling they have around the platform is boredom. They open the app because they’re bored. Then they sift through largely irrelevant content, mostly feeling bored, while seeking interesting bits to share with their friends in direct messages — the most constant source of connection they found on the platform. Then, eventually bored with what researchers call a “content soup,” they log off.

The study tracked the experiences of 25 U.S. teens moment by moment as they used the app. Teens leaned on a few techniques to stabilize their experiences — such as using likes, follows and unfollows to curate their feeds, and racing past aggravating content. The researchers used these results to make a few design recommendations, including prompts to cue reflection while using the app or features that clarify and simplify how users can curate their feeds.

The team presented its findings June 18 at the ACM Interaction Design and Children Conference in Delft, Netherlands.

“A lot of the talk about social media is at the extremes,” said lead author Landesman, a UW doctoral student in the Information School. “You either hear about harassment or bullying — which are real phenomena — or this kind of techno-utopian view of things, where companies like Meta, among others, seem to say they are thinking about wellbeing constantly but we’ve yet to see concrete results of that. So we really wanted to study the mundane, daily experience of teens using Instagram.”

To capture this in-the-moment experience, the team first trained the participants in mindfulness techniques and had them download an app called AppMinder. The simple interface, which the researchers developed, would pop up five minutes after the teens started using Instagram and have them fill out a quick survey about how they were feeling emotionally and why. The pop-ups came once every three hours. Teens were supposed to use Instagram and fill out at least one response a day for seven days, though many submitted multiple responses each day.
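The sampling cadence described above (a first prompt five minutes into use, then repeats every three hours while use continues) reduces to simple scheduling logic. This is an illustrative reconstruction, not AppMinder’s actual code, and the times and window length are made up:

```python
from datetime import datetime, timedelta

def prompt_times(session_start, session_end,
                 first_delay=timedelta(minutes=5),
                 interval=timedelta(hours=3)):
    """Yield the survey prompt times that fall inside one usage window."""
    t = session_start + first_delay
    while t <= session_end:
        yield t
        t += interval

# Hypothetical all-day usage window for illustration.
start = datetime(2024, 7, 1, 9, 0)
end = datetime(2024, 7, 1, 16, 0)
times = list(prompt_times(start, end))
# Prompts land at 9:05, 12:05 and 15:05.
```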

Finally, researchers interviewed teens about their responses and had them open Instagram again and narrate how they were feeling in real time and explain how they were experiencing certain features.

“We saw teens turning to Instagram in moments of boredom, looking for some kind of stimulation,” said one co-senior author, a UW associate professor in the iSchool. “They were finding enough moments of closeness and connection with their friends on the app to keep them coming back. That value is definitely there, but it’s really buried in gimmicks, attention-grabbing features, content that’s sometimes upsetting or frustrating, and a ton of junk.”

Much of what Instagram’s algorithm served up was not what the teens were looking for. Yet they’d keep wading through hundreds of posts to find a single meme or piece of fashion inspiration to share with their friends. Overall, they found the most value in the app’s direct message function, not in this scrolling.

Because they found value in specific experiences, teens employed several mitigation strategies to focus their time on the app:

  • Trying to curate their feeds to emphasize posts that made them feel good rather than bad or bored, by following, unfollowing, hiding and liking
  • Scrolling quickly, skipping or logging off when content made them feel bad
  • Toggling Instagram features — hiding like-counts, turning off certain notifications — to reduce negative emotions

“Instagram’s push notifications and algorithmically curated feeds forever hold out the promise of teens experiencing a meaningful interaction, while delivering on this promise only intermittently,” said another co-senior author, also a UW associate professor in the iSchool. “Unfortunately, it’s much easier to identify the problem than to fix it. The current business model of most social media platforms depends on keeping users scrolling as often and for as long as possible. Legislation is needed to compel platforms to change the status quo.”

Based on their findings, the researchers offered three design changes to improve teens’ experiences:

  • Notifications, like those from AppMinder, that prompt teens to consider what they came to Instagram to do and to reflect in the moment
  • Features that make curating feeds easier, such as a “This is good for me” button that clearly highlights positive content
  • The use of data to track signs of well-being and its opposite — for example, tracking when users skip past content or log off and pairing this with other data

This summer, the team will take the data from the study and examine it with a separate group of teens, aiming for further insights and recommendations.

“It is not and should not be the sole responsibility of teens to make their experiences better, to navigate these algorithms without knowing how they work, exactly,” Landesman said. “The responsibility also lies with companies running social media platforms.”

Additional co-authors include a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering; a UW doctoral student in the iSchool; a UW doctoral student in psychology; and a UW assistant professor of psychology. This research was partially funded by the Oread Fund and the CERES network.

For more information, contact Landesman at roteml@uw.edu, Hiniker at alexisr@uw.edu and Davis at kdavis78@uw.edu.

Learning from superheroes and AI: UW researchers study how a chatbot can teach kids supportive self-talk (July 18, 2023)

Researchers at the University of Washington created a new web app, Self-Talk with Superhero Zip, aimed at helping children develop skills like self-awareness and emotional management.

At first, some parents were wary: An audio chatbot was supposed to teach their kids to speak positively to themselves through lessons about a superhero named Zip. In a world of Siri and Alexa, many people are skeptical that the makers of such technologies are putting children’s welfare first.

Researchers at the University of Washington created a new web app aimed at helping children develop skills like self-awareness and emotional management. In Self-Talk with Superhero Zip, a chatbot guided pairs of siblings through lessons. The UW team found that, after speaking with the app for a week, most children could explain the concept of supportive self-talk (the things people say to themselves, either aloud or mentally) and apply it in their daily lives. And kids who’d engaged in negative self-talk before the study were able to turn that habit positive.

The UW team presented its findings in June at the 2023 Interaction Design and Children conference. The app is still a prototype and is not yet publicly available.

The UW team saw a few reasons to develop an educational chatbot. Positive self-talk has been shown to have a range of benefits for kids. And previous studies have shown children can learn various tasks and abilities from chatbots. Yet little research explores how chatbots can help kids effectively acquire socioemotional skills.

“There is room to design child-centric experiences with a chatbot that provide fun and educational practice opportunities without invasive data harvesting that compromises children’s privacy,” said senior author Alexis Hiniker, an associate professor in the UW Information School. “Over the last few decades, television programs like ‘Sesame Street,’ ‘Mister Rogers,’ and ‘Daniel Tiger’s Neighborhood’ have shown that it is possible for TV to help kids cultivate socioemotional skills. We asked: Can we make a space where kids can practice these skills in an interactive app? We wanted to create something useful and fun — a ‘Sesame Street’ experience for a smart speaker.”

Shown here is a screenshot of the prototype kids interacted with, showing a microphone and a superhero. Photo: University of Washington

The UW researchers began with two prototype ideas, with the goal of teaching socioemotional skills broadly. After testing, they narrowed the scope, focusing on a superhero named Zip and the aim of teaching supportive self-talk. They decided to test the app with siblings, since research shows that children engage more deeply when they use technology with another person.

Ten pairs of Seattle-area siblings participated in the study. For a week, they opened the app and met an interactive narrator who told them stories about Zip and asked them to reflect on Zip’s encounters with other characters, including a supervillain. During and after the study, kids described applying positive self-talk; several mentioned using it when they were upset or angry.

By the end of the study, all five kids who said they used negative self-talk before had replaced it with positive self-talk. Having the children work with their siblings supported learning in some cases, but some parents saw their kids struggle to take turns while using the app.

How long these effects last isn’t clear, the researchers note. The study spanned just one week, and the tendency for survey participants to respond in ways that make them look good could have led kids to speak positively about the app’s effects. Future research may include longer studies in more natural settings.

“Our goal is to make the app accessible to a wider audience in the future,” said lead author Fu, a UW doctoral student in the iSchool. “We’re exploring the integration of large language models — the systems that power tech like ChatGPT — into our prototype and we plan to work with content creators to adapt existing socioemotional learning materials into our system. The hope is that these will facilitate more prolonged and effective interventions.”

Other authors are a research scientist at Meta Reality Labs who graduated from the UW iSchool; a UW research assistant in the iSchool; a UW master’s student and a UW doctoral student, both in the human centered design and engineering department; and a master’s student at the University of Southern California who did undergraduate work at the UW iSchool. This research was funded by the Jacobs Foundation and the Canadian Institute for Advanced Research.

For more information, contact Hiniker at alexisr@uw.edu and Fu at chrisfu@uw.edu.

‘I don’t even remember what I read’: People enter a ‘dissociative state’ when using social media (May 23, 2022)
Some people enter a state of dissociation similar to daydreaming when surfing social media. Photo: Shutterstock

Sometimes when we are reading a good book, it’s like we are transported into another world and we stop paying attention to what’s around us.

Researchers at the University of Washington wondered if people enter a similar state of dissociation when surfing social media, and if that explains why users might feel out of control after spending so much time on their favorite app.

The team watched how participants interacted with a Twitter-like platform to show that some people are spacing out while they’re scrolling. Researchers also designed intervention strategies that social media platforms could use to help people retain more control over their online experiences.

The group presented its findings May 3 at the CHI 2022 conference in New Orleans.

“I think people experience a lot of shame around social media use,” said lead author Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “One of the things I like about this framing of ‘dissociation’ rather than ‘addiction’ is that it changes the narrative. Instead of: ‘I should be able to have more self-control,’ it’s more like: ‘We all naturally dissociate in many ways throughout our day – whether it’s daydreaming or scrolling through Instagram, we stop paying attention to what’s happening around us.'”

There are multiple types of dissociation, including trauma-based dissociation and the everyday dissociation associated with spacing out or focusing intently on a task.

Baughan first got the idea to study everyday dissociation and social media use during the early days of the COVID-19 lockdown, when people were describing how much they were getting sucked into spending time on their phones.

“Dissociation is defined by being completely absorbed in whatever it is you’re doing,” Baughan said. “But people only realize that they’ve dissociated in hindsight. So once you exit dissociation there’s sometimes this feeling of: How did I get here? It’s like when people on social media realize: ‘Oh my gosh, how did 30 minutes go by? I just meant to check one notification.'”

The team designed and built an app called Chirp, which was connected to participants’ Twitter accounts. Through Chirp, users’ likes and tweets appear on the real social media platform, but researchers can control people’s experience, adding new features or quick pop-up surveys.

  • For more details about Chirp, see the companion paper the team also presented at CHI 2022.
  • The code for Chirp is available online.

“One of the questions we had was: What happens if we rebuild a social media platform so that it continues to offer what people like about it, but it is designed with an explicit goal of keeping the user in control of their time and attention?” said senior author Alexis Hiniker, an assistant professor in the UW Information School. “How does a user’s experience with this redesigned app compare to their experience with the status quo in digital well-being design, that is, adding an outside lockout mechanism or timer to police their usage?”

Researchers asked 43 Twitter users from across the U.S. to use Chirp for a month. For each session, after three minutes users would see a dialog box asking them to rate on a scale from one to five how much they agreed with this statement: “I am currently using Chirp without really paying attention to what I am doing.” The dialog box continued to pop up every 15 minutes.

“We used their rating as a way to measure dissociation,” Baughan said. “It captured the experience of being really absorbed and not paying attention to what’s around you, or of scrolling on your phone without paying attention to what you’re doing.”

Over the course of the month, 42% of participants (18 people) agreed or strongly agreed with that statement at least once. After the month, the researchers did in-depth interviews with 11 participants. Seven described experiencing dissociation while using Chirp.
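The 42% figure is an “ever agreed” statistic: a participant counts if any of their ratings over the month reached 4 (“agree”) or 5 (“strongly agree”). A small sketch of that aggregation, with invented data:

```python
# Hypothetical sketch of the aggregation behind the 42% figure: each
# participant contributes many 1-5 ratings, and we count participants whose
# rating ever reached "agree" (4) or higher. The data below is invented.

def share_ever_agreed(ratings_by_participant, threshold=4):
    """Fraction of participants with at least one rating >= threshold."""
    agreed = sum(
        1 for ratings in ratings_by_participant.values()
        if any(r >= threshold for r in ratings)
    )
    return agreed / len(ratings_by_participant)

toy = {
    "p1": [1, 2, 5, 1],   # agreed at least once
    "p2": [1, 1, 2, 3],   # never agreed
    "p3": [4, 2],         # agreed at least once
    "p4": [2, 3, 3],      # never agreed
}
print(share_ever_agreed(toy))  # 2 of 4 participants -> 0.5
```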

In addition to receiving the dissociation survey while using Chirp, users experienced different intervention strategies. The researchers divided the strategies into two categories: changes within the app’s design (internal interventions) and broader changes that mimicked the lockout mechanisms and timers that are available to users now (external interventions). Over the course of the month, participants spent one week with no interventions, one week with only internal interventions, one week with only external interventions and one week with both.
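The four week-long conditions amount to crossing two on/off factors, internal and external interventions. A minimal sketch of enumerating them (the dictionary keys are illustrative labels, not the study’s terminology):

```python
from itertools import product

# Cross the two binary factors to get the four week-long study conditions:
# no interventions, external only, internal only, and both together.
conditions = [
    {"internal": internal, "external": external}
    for internal, external in product([False, True], repeat=2)
]

for c in conditions:
    print(c)
```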

When internal interventions were activated, participants got a “you’re all caught up!” message when they had seen all new tweets. People also had to organize the accounts they followed into lists.

For external interventions, participants had access to a page that displayed their activity on Chirp for the current session. A dialog box also popped up every 20 minutes asking users if they wanted to continue using Chirp.

Shown here are screenshots of Chirp with interventions added, including a “you’re all caught up!” message (labeled with ‘a’), custom lists (b and c), a page that displayed participants’ activity on Chirp (d) and a dialog box that popped up every 20 minutes asking users if they wanted to continue using Chirp (e). Photo: Baughan et al./CHI 2022

In general, participants liked the changes to the app’s design. The “you’re all caught up!” message together with the lists allowed people to focus on what they cared about.

“One of our interview participants said that it felt safer to use Chirp when they had these interventions. Even though they use Twitter for professional purposes, they found themselves getting sucked into this rabbit hole of content,” Baughan said. “Having a stop built into a list meant that it was only going to be a few minutes of reading and then, if they wanted to really go crazy, they could read another list. But again, it’s only a few minutes. Having that bite-sized piece of content to consume was something that really resonated.”

The external interventions generated more mixed reviews.

“If people were dissociating, having a dialog box pop up helped them notice they had been scrolling mindlessly. But when they were using the app with more awareness and intention, they found that same dialog box really annoying,” Hiniker said. “In interviews, people would say that these interventions were probably good for ‘other people’ who didn’t have self-control, but they didn’t want it for themselves.”

The problem with social media platforms, the researchers said, is not that people lack the self-control needed to not get sucked in, but instead that the platforms themselves are not designed to maximize what people value.

“Taking these so-called mindless breaks can be really restorative,” Baughan said. “But social media platforms are designed to keep people scrolling. When we are in a dissociative state, we have a diminished sense of agency, which makes us more vulnerable to those designs and we lose track of time. These platforms need to create an end-of-use experience, so that people can have it fit in their day with their time-management goals.”

Additional co-authors are two UW doctoral students in the iSchool; a UW undergraduate student in the iSchool; a UW doctoral student in the human centered design and engineering department; and an associate professor at the University at Buffalo. This research was funded by Facebook and the National Science Foundation.

For more information, contact Baughan at baughan@cs.washington.edu and Hiniker at alexisr@uw.edu.

Grant number: 18459955

Do Alexa and Siri make kids bossier? New research suggests you might not need to worry (Sept. 13, 2021)
A team led by UW researchers studied whether hanging out with conversational agents, such as Alexa or Siri, could affect the way children communicate with their fellow humans. In the study, a conversational agent (either a simple animated robot or cactus, screenshots shown here) taught children to use the word “bungo” to ask it to speak more quickly. Photo: University of Washington

Chatting with a robot is now part of many families’ daily lives, thanks to conversational agents such as Apple’s Siri or Amazon’s Alexa. Research has shown that children are often delighted to find that they can ask Alexa to play their favorite songs or call Grandma.

But does hanging out with Alexa or Siri affect the way children communicate with their fellow humans? Probably not, according to a recent study led by the University of Washington, which found that children are sensitive to context when it comes to these conversations.

The team had a conversational agent teach 22 children between the ages of 5 and 10 to use the word “bungo” to ask it to speak more quickly. The children readily used the word when a robot slowed down its speech. While most children did use bungo in conversations with their parents, it became a source of play or an inside joke about acting like a robot. But when a researcher spoke slowly to the children, the kids rarely used bungo, and often patiently waited for the researcher to finish talking before responding.

The researchers presented their findings in June at the 2021 Interaction Design and Children conference.

“We were curious to know whether kids were picking up conversational habits from their everyday interactions with Alexa and other agents,” said senior author Alexis Hiniker, a UW assistant professor in the Information School. “A lot of the existing research looks at agents designed to teach a particular skill, like math. That’s somewhat different from the habits a child might incidentally acquire by chatting with one of these things.”

The researchers recruited 22 families from the Seattle area to participate in a five-part study. This project took place before the COVID-19 pandemic, so each child visited a lab with one parent and one researcher. For the first part of the study, children spoke to a simple animated robot or cactus on a tablet screen that also displayed the text of the conversation.

On the back end, another researcher who was not in the room asked each child questions, which the app translated into a synthetic voice and played for the child. The researcher listened to the child’s responses and reactions over speakerphone.

Shown here is a screenshot of a prototype of the interface the children saw: a robot on the left and the text of a conversation on the right, in which the child and robot talk about flying versus riding bikes. Photo: University of Washington

At first, as children spoke to one of the two conversational agents (the robot or the cactus), it told them: “When I’m talking, sometimes I begin to speak very slowly. You can say ‘bungo’ to remind me to speak quickly again.”

After a few minutes of chatting with a child, the app switched to a mode where it would periodically slow down the agent’s speech until the child said “bungo.” Then the researcher pressed a button to immediately return the agent’s speech to normal speed. During this session, the agent reminded the child to use bungo if needed. The conversation continued until the child had practiced using bungo at least three times.
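The interaction loop described above is essentially a two-state machine: the agent drops into slow speech until the keyword restores normal speed. A hypothetical sketch, with class and method names invented for illustration:

```python
# Minimal state-machine sketch of the "bungo" routine: the app periodically
# slows the agent's speech, and hearing the keyword restores normal speed
# while counting one practice attempt. Names are invented, not the study's code.

class BungoAgent:
    def __init__(self):
        self.slow = False
        self.practice_count = 0  # times the child has used "bungo"

    def slow_down(self):
        """Called periodically by the app to slow the agent's speech."""
        self.slow = True

    def hear(self, utterance):
        """Process the child's utterance; 'bungo' restores normal speed."""
        if "bungo" in utterance.lower() and self.slow:
            self.slow = False
            self.practice_count += 1

agent = BungoAgent()
agent.slow_down()
agent.hear("bungo!")
agent.slow_down()
agent.hear("tell me about bikes")   # no keyword: agent stays slow
agent.hear("Bungo")
# agent.practice_count == 2 and the agent is back at normal speed
```

In the study, a session continued until the child had triggered this reset at least three times.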

The majority of the children, 64%, remembered to use bungo the first time the agent slowed its speech, and all of them learned the routine by the end of this session. 

Then the children were introduced to the other agent. This agent also started to periodically speak slowly after a brief conversation at normal speed. While the agent’s speech also returned to normal speed once the child said “bungo,” this agent did not remind them to use that word. Once the child said “bungo” five times or let the agent continue speaking slowly for five minutes, the researcher in the room ended the conversation.

By the end of this session, 77% of the children had successfully used bungo with this agent. 

At this point, the researcher in the room left. Once alone, the parent chatted with the child and then, as with the robot and the cactus, randomly started speaking slowly. The parent didn’t give any reminders about using the word bungo.

Only 19 parents conducted this part of the study. Of the children who completed this part, 68% used bungo in conversation with their parents. Many of them used it with affection. Some children did so enthusiastically, often cutting their parents off in mid-sentence. Others expressed hesitation or frustration, asking their parents why they were acting like robots.

When the researcher returned, they had a similar conversation with the child: normal at first, followed by slower speech. In this situation, only 18% of the 22 children used bungo with the researcher. None of them commented on the researcher’s slow speech, though some of them made knowing eye contact with their parents.

“The kids showed really sophisticated social awareness in their transfer behaviors,” Hiniker said. “They saw the conversation with the second agent as a place where it was appropriate to use the word bungo. With parents, they saw it as a chance to bond and play. And then with the researcher, who was a stranger, they instead took the socially safe route of using the more traditional conversational norm of not interrupting someone who’s talking to you.”

After this session in the lab, the researchers wanted to know how bungo would fare “in the wild,” so they asked parents to try slowing down their speech at home over the next 24 hours.

Of the 20 parents who tried this at home, 11 reported that the children continued to use bungo. These parents described the experiences as playful, enjoyable and “like an inside joke.” For the children who expressed skepticism in the lab, many continued that behavior at home, asking their parents to stop acting like robots or refusing to respond.

“There is a very deep sense for kids that robots are not people, and they did not want that line blurred,” Hiniker said. “So for the children who didn’t mind bringing this interaction to their parents, it became something new for them. It wasn’t like they were starting to treat their parent like a robot. They were playing with them and connecting with someone they love.”

Although these findings suggest that children will treat Siri differently from the way they treat people, it’s still possible that conversations with an agent might subtly influence children’s habits — such as using a particular type of language or conversational tone — when they speak to other people, Hiniker said.

But the fact that many kids wanted to try out something new with their parents suggests that designers could create shared experiences like this to help kids learn new things.

“I think there’s a great opportunity here to develop educational experiences for conversational agents that kids can try out with their parents. There are so many conversational strategies that can help kids learn and grow and develop strong interpersonal relationships, such as labeling your feelings, using ‘I’ statements or standing up for others,” Hiniker said. “We saw that kids were excited to playfully practice a conversational interaction with their parent after they learned it from a device. My other takeaway for parents is not to worry. Parents know their kid best and have a good sense of whether these sorts of things shape their own child’s behavior. But I have more confidence after running this study that kids will do a good job of differentiating between devices and people.”

Other co-authors on this paper are two researchers who completed this work as UW undergraduate students majoring in human centered design and engineering; a UW doctoral student in the iSchool; an assistant professor at the University of Michigan Medical School; a senior user experience researcher at Duolingo who previously received a doctorate from the UW; and an assistant professor at George Mason University. This research was funded by a Jacobs Foundation Early Career Fellowship.

For more information, contact Hiniker at alexisr@uw.edu.

]]>
Arguing on the internet: UW researchers studying how to make online arguments productive /news/2021/04/19/uw-researchers-studying-how-to-make-online-arguments-productive/ Mon, 19 Apr 2021 11:42:01 +0000 /news/?p=73863
UW researchers surveyed people about online disagreements and then developed potential design interventions that could make these discussions more productive and centered around relationship-building. Photo:

The internet seems like the place to go to get into fights. Whether they’re with a family member or a complete stranger, these arguments have the potential to destroy important relationships and consume a lot of emotional energy.


Researchers at the University of Washington worked with almost 260 people to understand these disagreements and to develop potential design interventions that could make these discussions more productive and centered around relationship-building. The team published its findings this April in the latest issue of the Proceedings of the ACM on Human-Computer Interaction (Computer-Supported Cooperative Work).

“Despite the fact that online spaces are often described as toxic and polarizing, what stood out to me is that people, surprisingly, want to have difficult conversations online,” said lead author Baughan, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “It was really interesting to see that people are not having the conversations they want to have on online platforms. It pointed to a big opportunity to design to support more constructive online conflict.”

In general, the team said, technology has a way of driving users’ behaviors, such as logging onto apps at odd times to avoid people or deleting enjoyable apps to avoid spending too much time on them. The researchers were interested in the opposite: how to make technology respond to people’s behaviors and desires, such as to strengthen relationships or have productive discussions.

“Currently many of the designed features that users leverage during an argument support a no-road-back approach to disagreement — if you don’t like someone’s content, you can unfollow, unfriend or block them. All of those things cut off relationships instead of helping people repair them or find common ground,” said senior author Hiniker, an assistant professor in the UW Information School. “So we were really driven by the question of how do we help people have hard conversations online without destroying their relationships?”

The researchers did their study in three parts. First, they interviewed 22 adults from the Seattle area about what social media platforms they used and whether they felt like they could talk about challenging topics. The team also asked participants to brainstorm potential ways that these platforms could help people have more productive conversations.

Then the team conducted a larger survey of 137 Americans, ages 18 to 64, with political leanings ranging from extremely conservative to extremely liberal. These participants were asked to report what social media platforms they used, how many hours per week they used them and if they had had an argument on these platforms. Participants then scored each platform for whether they felt like it enabled discussions of controversial topics. Participants were also asked to describe the most recent argument they had had, including details about what it was about and whom they argued with.

Many participants shared that they tried to avoid online arguments, citing a lack of nuance or space for discussing controversial subjects. But participants also noted wanting to have discussions, especially with family and close friends, about topics including politics, ethics, religion, race and other personal details.

When participants did have difficult conversations online, people tended to prefer text-based platforms, such as Twitter, WhatsApp or Facebook, over image-based platforms, such as YouTube, Snapchat and Instagram.

Participants also emphasized a preference for having these discussions in private one-on-one chats, such as WhatsApp or Facebook Messenger, over a more comment-heavy, public platform.

“It was not surprising to see that people are having a lot of arguments on the more private and text-based platforms,” Baughan said. “That really replicates what we do offline: We would pull someone aside to have a private conversation to resolve a conflict.”

Using information from the first two surveys, the team developed 12 potential technological design interventions that could support users when having hard conversations. The researchers created storyboards that illustrated each intervention and asked 98 new participants, ranging from 22 to 65 years old, to evaluate the interventions.

The most popular ideas included:

Democratizing

In this intervention, community members use reactions, such as upvoting, to boost constructive comments or content.

“This moves us away from the loudest voice drowning out everyone else and elevates the larger, quieter base of people,” Hiniker said.
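The paper describes this intervention as a design concept, not an implementation. As a rough, hypothetical sketch of the idea — the `Comment` fields and `rank_comments` function here are illustrative assumptions, not from the study — a platform could order a thread by a dedicated "constructive" reaction count rather than by raw engagement:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    constructive_votes: int = 0  # hypothetical "constructive" reaction count
    engagement: int = 0          # raw replies/likes, i.e. the "loudest voice"

def rank_comments(comments):
    """Surface comments the community marked constructive, rather than
    whatever drew the most raw engagement."""
    return sorted(comments, key=lambda c: c.constructive_votes, reverse=True)

thread = [
    Comment("You're all wrong!!", constructive_votes=1, engagement=90),
    Comment("Here's a source that explains both sides.", constructive_votes=12, engagement=5),
    Comment("I see your point, but consider this case.", constructive_votes=7, engagement=8),
]

for c in rank_comments(thread):
    print(c.constructive_votes, c.text)
```

Under this sketch, the heavily engaged but hostile comment drops below the two constructive ones, which is the "quieter base" effect the quote describes.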

Humanizing

The goal of this intervention is to remind people that they are interacting with other people. Some ideas include: preventing users from being anonymous, increasing the size of users’ profile pictures, or providing more details about users, such as identity, background or mood.

Channel switching

This intervention provides users with the ability to move a conversation to a private space.

“I envision this intervention as the platform saying: ‘Would you like to move this conversation offline?’ Or maybe it has some sort of button, where you can quickly say: ‘OK, let’s go away from the comments section and into a private chat,'” Baughan said. “That could help show more respect for the relationship, because it doesn’t become this public arena of who’s going to win this fight. It becomes more about trying to reach an understanding.”

The least popular idea:

Biofeedback

This intervention uses biological feedback, such as a user’s heart rate, to provide context about how someone is currently feeling.

“People would tell us: ‘I don’t want to share a lot of personal information about my internal state. But I would like to have a lot of personal information about my conversational partner’s internal state,'” Hiniker said. “That was one of the design paradoxes we saw.”

The next step for this research would be to start deploying some of these interventions to see how well they help or hurt online conversations in the wild, the team said. But first, social media companies should take a step back and think about the purpose of the interaction space they’ve created and whether their current platforms are meeting those goals.

“I would love to see technology help prompt people to slow down when it comes to things like knee-jerk emotional reactions,” Baughan said. “It could ask people to reflect: Is this a good use of my time? How much do I value this relationship with this person? Do I feel like it’s safe to engage in this conversation? And if a conversation happens in a public space, it could suggest taking it offline or going to a private space.”

Justin Petelka, a UW doctoral student in information science; Amulya Paramasivam, a UW undergraduate student majoring in human centered design and engineering; and , , and , who completed this research as undergraduate students at the UW, are also co-authors on this paper. This research was funded by Facebook.

For more information, contact Baughan at baughan@cs.washington.edu and Hiniker at alexisr@uw.edu.

]]>
How families can use technology to juggle childcare and remote life /news/2020/04/14/how-families-can-use-technology-to-juggle-childcare-and-remote-life/ Tue, 14 Apr 2020 16:15:10 +0000 /news/?p=67444
UW researchers are beginning a national study to help families discover technology that helps them both successfully navigate home-based learning and combat social isolation. Photo: University of Washington

With thousands of schools and preschools closed and many states under “stay-at-home” orders to try to limit the spread of the novel coronavirus, families are facing a tough situation: trying to work — possibly remotely — while simultaneously being responsible for their children’s education.

University of Washington researchers are beginning a national study to help families discover technology that helps them both successfully navigate home-based learning and combat social isolation.

“I think some parents had idealized scenarios where they said ‘Oh, I’ll just put my kid in front of a computer for a few hours and while I work, they’ll do math and reading,’” said co-lead researcher Kientz, a UW professor of human centered design and engineering. “It all sounded great, but then after one day it’s like, ‘Oh gosh, this is not going to work.’”

One major issue, the researchers said, is that it’s overwhelming trying to sort through seemingly endless technology options.

“People want to help parents manage this, and one easy way is to share resources. But in reality there are almost too many options,” Kientz said. “As a parent, I was added instantly to about five or six different Facebook groups all about trying to navigate this situation. Everyone was posting a million different resources, such as brightly colored schedules for homeschool.”

If you are interested in participating in this project, please fill out the team’s .

For their project, Kientz and team plan to recruit 30 diverse families with children ages 3 to 13 across the country. Participating families will be organized into three groups based on common family characteristics, such as children’s ages or work situations.

“We definitely want to include many different types of families, including parents who are still physically going to their jobs, parents who are in quarantine, intergenerational households and single parents,” Kientz said. “But we want to make sure the study itself isn’t creating more extra work for people who are already burdened.”

Each family is expected to participate for about 30 minutes a week during the 10-week study. Families will reflect on how the technology they use helps or hinders their lives.

“What we’re proposing to do here is find real stories from different types of families about what is helpful and what are the roadblocks,” Kientz said. “Then we plan to immediately share that information back out using social media and regular Medium posts. We’ll also provide a direct channel into some of the tools that support online learning, exercise and staying in touch at home.”

Follow along with the study through the team’s blog posts.

In the later part of the study, families will design new or redesign existing technologies — such as a new educational skill for Amazon Echo. Then the families will test simple prototypes of these designs. Most of these activities will be completed as a family, though there may be some caretaker- or child-only activities as well.

“It’s important to see things in terms of equity, too,” Kientz said. “Some people don’t have time to homeschool their kids, and a lot of these tools require high-speed internet access, iPads or other expensive equipment.”

The study will look at how families in different situations are finding tools that they are able to access and use successfully.

Kientz, who studies families and technology and is also the parent of two children ages 7 and 4, suggests the following reputable websites/apps:

  • Learning
    • (note: the iPad version, , is easier for younger kids to navigate)
    • (and their remote-learning resource )
    • For preschool-aged kids: iPad apps like Sago World or any app by Toca Boca.
    • Typing —
    • Coding — and /
  • Staying connected with family and friends
    • Minecraft (note: families could set up Realms to create a private server for their kids to socialize with their friends)
    • Facebook’s Messenger Kids app
    • FaceTime
  • Exercise
    • apps like Pokemon GO and Harry Potter Wizards Unite can make walks more entertaining (note: make sure you stay at least 6 feet away from others)
    • Freeze Dance skill on Amazon Echo
    • Just Dance for the Nintendo Switch

Additional co-lead researchers on this project are: Munson, a UW associate professor of human centered design and engineering who has done similar research projects; Hiniker, a UW assistant professor in the Information School who studies families and technology; and Yip, a UW assistant professor in the iSchool who works with children. Rebecca Michelson, a UW doctoral student in human centered design and engineering, is also a researcher on this project. This study is funded by the National Science Foundation.

For more information, contact Kientz at jkientz@uw.edu, Munson at smunson@uw.edu, Hiniker at alexisr@uw.edu and Yip at jcyip@uw.edu.

Grant number: 2027525

]]>
‘I saw you were online’: How online status indicators shape our behavior /news/2020/04/13/how-online-status-indicators-shape-our-behavior/ Mon, 13 Apr 2020 16:10:20 +0000 /news/?p=67401 Some apps highlight when a person is online — and then share that information with their followers. When a user logs in to a website or app that uses online status indicators, a little green (or orange or blue) dot pops up to alert their followers that they’re currently online.

Researchers at the University of Washington wanted to know if people recognize that they are sharing this information and whether these indicators change how people behave online.

UW researchers found that many people misunderstand online status indicators but still carefully shape their behavior to control how they are displayed to others. Photo: Camille Cobb

After surveying smartphone users, the team found that many people misunderstand online status indicators but still carefully shape their behavior to control how they are displayed to others. More than half of the participants reported that they had suspected that someone had noticed their status. Meanwhile, over half reported logging on to an app just to check someone else’s status. And 43% of participants discussed changing their settings or behavior because they were trying to avoid one specific person.

The team’s paper will be published in the Proceedings of the 2020 ACM CHI Conference on Human Factors in Computing Systems.

“Online status indicators are an unusual mechanism for broadcasting information about yourself to other people,” said senior author Hiniker, an assistant professor in the UW Information School. “When people share information by posting or liking something, the user is in control of that broadcast. But online status indicators are sharing information without taking explicit direction from the user. We believe our results are especially intriguing in light of the coronavirus pandemic: With people’s social lives completely online, what is the role of online status indicators?”

People need to be aware of everything they are sharing about themselves online, the researchers said.

“Practicing good online security and privacy hygiene isn’t just a matter of protecting yourself from skilled technical adversaries,” said lead author Cobb, a postdoctoral researcher at Carnegie Mellon University who completed this research as a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “It also includes thinking about how your online presence allows you to craft the identities that you want and manage your interpersonal relationships. There are tools to protect you from malware, but you can’t really download something to protect you from your in-laws.”

The team recruited 200 participants, ages 19 to 64, to fill out an online survey. Over 90% of the participants were from the U.S., and almost half of them had completed a bachelor’s degree.

The researchers asked participants to identify apps that they use from a list of 44 that have online status indicators. The team then asked participants if those apps broadcast their online status to their network. Almost 90% of participants correctly identified that at least one of the apps they used had online status indicators. But for at least one app they used, 62.5% answered “not sure” and 35.5% answered “no.” For example, of the 60 people who said they use Google Docs regularly, 40% said it didn’t have online status indicators and 28% were not sure.

Then the researchers asked the participants to time themselves while they located the settings to turn off “appearing online” in each app they used regularly. For the apps that have settings, participants gave up before they found the settings 28% of the time. For apps that don’t have these settings, such as WhatsApp, participants mistakenly thought they had turned the settings off 23% of the time.

“When you put some of these pieces together, you’re seeing that more than a third of the time, people think they’re not broadcasting information that they actually are,” Cobb said. “And then even when they’re told: ‘Please go try and turn this off,’ they’re still not able to find it more than a quarter of the time. Just broadly we’re seeing that people don’t have a lot of control over whether they share this information with their network.”

Here’s one way the team says designers could help people have more control over whether to broadcast their online status. Photo: Cobb et al./ Proceedings of the 2020 ACM CHI conference on Human Factors in Computing Systems

Finally the team asked participants a series of questions about their own experiences online. These questions touched on whether participants noticed when others were online, if they thought others noticed when they were online and whether they had changed their own behavior because they did or didn’t want to appear online.

“We see this repeated pattern of people adjusting themselves to meet the demands of technology — as opposed to technology adapting to us and meeting our needs,” said co-author Simko, a UW doctoral student in the Allen School. “That means people are choosing to go online not because they want to do something there but because it’s important that their status indicator is projecting the right thing at the right time.”

Now that most states have put stay-at-home orders in place to try to combat the coronavirus pandemic, many people are working from home and socializing only online. This could change how people use online status indicators, the team says. For example, employees can use their online status to indicate that they are working and available for meetings. Or people can use a family member’s “available” status as an opportunity to check up on them and make sure they are OK.

“Right now, when a lot of people are working remotely, I think there’s an opportunity to think about how future evolutions of this technology can help create a sense of community,” Cobb said. “For example, in the real world, you can have your door cracked open and that means ‘interrupt me if you have to,’ you can have it wide open to say ‘come on in’ or you can have your door closed and you theoretically won’t get disturbed. That kind of nuance is not really available in online status indicators. But we need to have a sense of balance — to create community in a way that doesn’t compromise people’s privacy, share people’s statuses when they don’t want to or allow their statuses to be abused.”

Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the UW Tech Policy Lab.

For more information, contact Hiniker at alexisr@uw.edu, Cobb at ccobb@andrew.cmu.edu, Simko at simkol@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

]]>
Children describe technology that gives them a sense of ambiguity as ‘creepy’ /news/2019/05/16/what-technology-is-creepy-for-children/ Thu, 16 May 2019 15:25:17 +0000 /news/?p=62220

Many parents express concerns about privacy and online safety in technology designed for their children. But we know much less about what children themselves find concerning in emerging technologies.

Now University of Washington researchers have defined for the first time what children mean when they say technology is “creepy.” Kids in a new study described creepy technology as something that is unpredictable or poses an ambiguous threat that might cause physical harm or threaten an important relationship. The researchers also pinpointed five aspects of emerging technologies that could contribute to this feeling of ambiguity.

The team presented its findings May 8 at the 2019 ACM CHI conference in Glasgow, Scotland.

“Over the years of working with kids we realized they use the word ‘creepy’ a lot as a way to reject specific technologies,” said first author Yip, an assistant professor in the UW’s Information School. “But kids have a difficult time articulating what makes something creepy. So we designed a series of activities to give them the chance to work out their own thoughts and help us understand.”

Previous research indicated that adults find creepy technology ambiguous, not scary, so the team conducted four separate design sessions to see if children felt similarly about creepy technology. These sessions had children aged 7 to 11 prototype their own technologies or rank real or imagined technologies as “creepy,” “not creepy” or “don’t know.” Devices that could bring about physical harm or disrupt an important relationship were most consistently ranked as being creepy.

“When we were brainstorming about what kids were going to be worried about, we never considered that they might be concerned that somehow technology would get between them and their parents, and that this would be such a salient issue in their minds,” said co-author Hiniker, an assistant professor in the iSchool.

During some of the design sessions children had to rank real or imagined technologies as “creepy,” “not creepy,” or “don’t know” by positioning themselves along a line. Shown here is a screenshot from the research video where most of the children thought the proposed technology — a stuffed animal that records your actions and your voice in order to give your parents recommendations about your exercise habits — was creepy. Photo: University of Washington

The team found five properties of technology that led to those fears:

Deception versus transparency

Kids want to understand how technology works and what information a device is collecting. For example, when a child asked a digital voice assistant if it would kill him in his sleep and it said, “I can’t answer that,” the child was concerned.

“‘I’m afraid I don’t have an answer to that’ works well if I ask how many hairs are on the top of my head,” Yip said. “But with these types of questions, this response sounds deceptive.”

Ominous physical appearance

Kids are sensitive to how a technology looks, sounds and feels. But that doesn’t mean that only traditionally scary-looking technologies are creepy: The children were also wary of an app with a large black dot as its interface, because it looked like a “black spirit” or a “black hole.”

Lack of control

Kids want to control technology’s access to their information and the flow of that information to their parents. For example, when kids were asked to design a technology that was trustworthy, some of the children designed an intelligent trash can that both scanned and deleted their facial recognition data each time they used it. Their trash can also had a button that allowed for manual deletion of data.

Unpredictability

Kids don’t like it when technology does things unexpectedly, like automatically knowing their name or laughing. To kids, laughing could communicate hidden, and possibly malicious, intent.

Mimicry

Kids also don’t like technology that pretends to be something else, especially when it’s trying to mimic people in their lives or themselves. Technology that mimics them could be trying to steal their identities or to disrupt family relationships.

“All five themes are related to ambiguous threats. It’s not a specific monstrosity coming after them, like when something is scary; it’s more nuanced, so they’re not sure of the consequences of their actions,” Yip said. “The kids kept referencing the movie Coraline. In the story, the dolls ask Coraline to make a change: ‘If you sew buttons over your eyes and become just like us, we will love you forever.’ That prompts this feeling of, ‘Wait a second, sew buttons over my eyes? What am I compromising here?'”

  • for a list of sample questions for parents to use to talk to their kids about technology.
  • for a list of sample questions for designers to use when creating technology for kids.
  • See a related story in .

The team found that trusted adults had some influence over whether or not the children thought that specific devices were creepy. For example, one child deemed smartphones “not creepy” because he saw his parents using them. Another kid thought that laptops were creepy because his parents taped a piece of paper over the camera to “keep the robbers away.”

The researchers acknowledge that their results could be used to make technology that tricks kids into a false sense of security. But the team thinks it is more important to have these results available to the public to help parents talk to their kids about technology and any types of fears that might arise.

“Children have access to so many different kinds of technologies compared to when we were growing up,” Hiniker said. “But their basic fears haven’t changed at all. Kids want to feel physically safe and anchored to trusted adults who are going to protect them.”

Other co-authors are , a research scientist at the Joan Ganz Cooney Center who completed this research as a doctoral student in the UW’s Department of Human Centered Design & Engineering; Xin Gao, and , all undergraduates in human centered design and engineering; and Justin Park, undergraduates in the iSchool; and Romaine Ofiana, an undergraduate in hearing and speech sciences.

###

For more information, contact Yip at jcyip@uw.edu and Hiniker at alexisr@uw.edu.

]]>
Patterns of compulsive smartphone use suggest how to kick the habit /news/2019/04/29/patterns-of-compulsive-smartphone-use/ Mon, 29 Apr 2019 16:34:38 +0000 /news/?p=61875

Everywhere you look, people are looking at screens.

In the decade since smartphones became ubiquitous, we now have a feeling almost as common as the smartphones themselves: being sucked into that black hole of staring at those specific apps — you know which ones they are — and then half an hour has gone by before you realize it.

Researchers at the University of Washington conducted in-depth interviews to learn why we compulsively check our phones. They found a series of triggers, common across age groups, that start and end habitual smartphone use. The team also explored user-generated solutions to end undesirable phone use. The results will be presented May 7 at the 2019 ACM CHI conference in Glasgow, Scotland.

“For a couple of years I’ve been looking at people’s experiences with smartphones and listening to them talk about their frustration with the way they engage with their phones,” said co-author Hiniker, an assistant professor at the UW’s Information School. “But on the flip side, when we ask people what they find meaningful about their phone use, nobody says, ‘Oh, nothing.’ Everyone can point to experiences with their phone that have personal and persistent meaning.

“That is very motivating for me. The solution is not to get rid of this technology; it provides enormous value. So the question is: How do we support that value without bringing along all the baggage?”

UW researchers found a series of triggers, common across age groups, that start and end habitual smartphone use. Photo: Jonathan Tran/University of Washington

Hiniker and her team interviewed three groups of smartphone users: high school students, college students and adults who have graduated from college. The 39 subjects were smartphone users in the Seattle area between the ages of 14 and 64. Interviews started with background questions and a “think aloud” demonstration in which participants walked through the apps on their phone. Interviewers would then ask more in-depth questions about the apps participants pointed out as most likely to lead to compulsive behavior.

“We were hoping to get a holistic view into the behaviors of the participants,” said first author Tran, a UW undergraduate studying human centered design and engineering.

In general, interviewees had four common triggers for starting to compulsively use their phones:

  • During unoccupied moments, like waiting for a friend to show up
  • Before or during tedious and repetitive tasks
  • When in socially awkward situations
  • When they anticipated getting a message or notification

The group also had common triggers that ended their compulsive phone use:

  • Competing demands from the real world, like meeting up with a friend or needing to drive somewhere
  • Realizing they had been on their phone for half an hour
  • Coming across content they’d already seen

The team was surprised to find that the triggers were the same across age groups.

“This doesn’t mean that teens use their phones the same way adults do. But I think this compulsive itch to turn back to your phone plays out the same way across all these groups,” Hiniker said. “People talked about everything in the same terms: The high school students would say ‘Anytime I have a dead moment, if I have one minute between classes I pull out my phone.’ And the adults would say ‘Anytime I have one dead moment, if I have one minute between seeing patients at work I pull out my phone.'”

The researchers asked participants to draw an idea for how the phone could help them end undesirable phone use. Shown here is one participant’s suggestion. Photo: University of Washington

The researchers asked participants to identify something about their behavior they would like to change and then draw an idea on paper for how the phone could help them achieve it.

“Many of the participants sketched ‘lockout’ mechanisms, where the phone would essentially prevent them from using it for a certain period of time,” Tran said. “But participants mentioned how although they feel bad about their behavior, they didn’t really feel bad enough to utilize their sketched solutions. There was some ambivalence.”

To the team, this finding pointed to a more nuanced idea behind people’s relationships to their phones.

“If the phone weren’t valuable at all, then sure, the lockout mechanism would work great. We could just stop having phones, and the problem would be solved,” Hiniker said. “But that’s not really the case.”

Instead, the researchers saw that participants found meaning in a diverse set of experiences, particularly when apps let them connect to the real world. One participant talked about how a meme generator helped her interact with her sister because they meme tagged each other all the time. Another participant mentioned that the Kindle app let her connect with her father who was reading the same books.

“People describe it as an economic calculation,” Hiniker said. “Like, ‘How much time do I spend with this app and how much of that time is actually invested in something lasting that transcends this specific moment of use?’ Some experiences promote a lot of compulsive use, and that dilutes the time people spend on activities that are meaningful.”

When it comes to designing the next wave of smartphones, Hiniker recommends that designers shift away from system-wide lockout mechanisms. Instead, apps should let users be in control of their own engagement. And people should decide whether an app is worth their time.

“People have a pretty good sense of what matters to them,” Hiniker said. “They can try to tailor what’s on their phone to support the things that they find meaningful.”

Additional co-authors are , a UW undergraduate studying human centered design and engineering, and , a professor in the UW’s iSchool.

###

For more information, contact Hiniker at alexisr@uw.edu.

]]>