Martin Saveski – UW News
Social media research tool can reduce polarization — it could also lead to more user control over algorithms
/news/2025/12/03/social-media-research-tool-can-reduce-polarization-it-could-also-lead-to-more-user-control-over-algorithms/
Wed, 03 Dec 2025
Icons for social media apps on a smartphone.
A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. Photo:

A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform.

The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms.

The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party.

The researchers published the findings Nov. 27 in Science.

“Social media algorithms direct our attention and influence our moods and attitudes, but until now, only platforms had the power to change their algorithms’ design and study their effects,” said co-lead author Martin Saveski, a UW assistant professor in the Information School. “Our tool gives that ability to external researchers.”

In an experiment, about 1,200 volunteer participants used the tool over 10 days during the 2024 election. Participants who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberal and for those who identified as conservative.

“Previous studies intervened at the level of the users or platform features — demoting content from users with similar political views, or switching to a chronological feed, for example. But we built on recent advances in AI to develop a more nuanced intervention that reranks content that is likely to polarize,” Saveski said.

For this study, the team drew from previous sociology research identifying categories of antidemocratic attitudes and partisan animosity that can be threats to democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include statements that show rejection of any bipartisan cooperation, skepticism of facts that favor the other party’s views, and a willingness to forgo democratic principles to help the favored party.

The researchers tackled the problem from a range of disciplines including information science, computer science, psychology and communication.

The team created a browser extension coupled with a large language model that scans posts for these types of antidemocratic and extremely negative partisan sentiments. The tool then reorders the posts in the user’s X feed in a matter of seconds.
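The reranking step can be illustrated with a minimal sketch. This is a toy illustration only: `score_post` and its keyword check are hypothetical stand-ins for the study’s LLM classifier, but the core move is the same — flagged posts sink lower in the feed via a stable sort, and nothing is removed.

```python
# Toy sketch of downranking flagged posts in a feed.
# score_post is a hypothetical stand-in for the study's LLM classifier.

def score_post(post: dict) -> float:
    """Return 1.0 if the post expresses antidemocratic or extreme
    partisan hostility, else 0.0. A keyword check stands in for
    the LLM used by the actual tool."""
    flagged_terms = ("jail the opposition", "deserve violence")
    text = post["text"].lower()
    return 1.0 if any(t in text for t in flagged_terms) else 0.0

def rerank(feed: list[dict]) -> list[dict]:
    """Stable sort: flagged posts move lower, all other posts keep
    their original relative order. No post is removed."""
    return sorted(feed, key=score_post)  # Python's sort is stable

feed = [
    {"id": 1, "text": "They deserve violence for this."},
    {"id": 2, "text": "Great game last night!"},
    {"id": 3, "text": "New paper out today."},
]
print([p["id"] for p in rerank(feed)])  # [2, 3, 1]
```

The stable sort is what keeps the intervention gentle: unflagged posts never change order relative to each other, so the feed still looks like the user’s own feed.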

Then, in separate experiments, the researchers had a group of participants view their feeds with this type of content downranked or upranked over seven days and compared their reactions to a control group. No posts were removed, but the more incendiary political posts appeared lower or higher in their content streams.

The impact on polarization was clear.

“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” said co-lead author , an assistant professor at Johns Hopkins University. “When they were exposed to more, they felt colder.” 

Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. The attitudes among the participants who had the negative content downranked improved on average by two points — equivalent to the estimated change in attitudes that has occurred among the general U.S. population over a period of three years.

The researchers are now looking into other interventions using a similar method, including ones that aim to improve mental health. The team has also made the code of the current tool available, so other researchers and developers can use it to create their own ranking systems independent of a social media platform’s algorithm.

“In this work, we focused on affective polarization, but our framework can be applied to improve other outcomes, including well-being, mental health and civic engagement,” Saveski said. “We hope that other researchers will use our tool to explore the vast design space of potential feed algorithms and articulate alternative visions of how social media platforms could operate.”

Additional co-authors on this study include of Northeastern and , and of Stanford.

This work was supported in part by the National Science Foundation, the Swiss National Science Foundation and a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence.

For more information, contact Saveski at msaveski@uw.edu.

This story was adapted from a release by Stanford University.

Community Notes help reduce the virality of false information on X, study finds
/news/2025/09/18/community-notes-x-false-information-viral/
Thu, 18 Sep 2025
Icons for social media apps on a smartphone.
A University of Washington-led study of X found that posts with Community Notes attached were less prone to going viral and got less engagement. After getting a Community Note, on average, reposts dropped 46% and likes dropped 44%. Photo:

In 2022, after Elon Musk bought what’s now X, the company laid off 80% of its content moderation team and made Community Notes the platform’s main form of fact-checking. Previously a pilot program at Twitter, Community Notes lets users propose attaching a comment to a specific post — usually to add context or correct an inaccuracy. If other users with diverse views vote that the comment is useful, as measured by X’s algorithm, then the note is appended to the post.
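The “diverse views” requirement can be sketched with a toy rule. This is a simplification, not X’s algorithm: the real system scores notes with matrix factorization over rating patterns, while the hypothetical `note_is_attached` below just demands helpful votes from raters on both sides of an assumed viewpoint split.

```python
# Toy "bridging" rule, loosely inspired by Community Notes.
# X's real scoring uses matrix factorization over rating patterns;
# the viewpoint labels and threshold here are hypothetical.

def note_is_attached(ratings, min_per_side=2):
    """ratings: list of (viewpoint, helpful) pairs, viewpoint in {"A", "B"}.
    Attach the note only if at least min_per_side raters from EACH
    viewpoint marked it helpful."""
    helpful = {"A": 0, "B": 0}
    for side, is_helpful in ratings:
        if is_helpful:
            helpful[side] += 1
    return all(count >= min_per_side for count in helpful.values())

# Helpful to one side only: not attached.
print(note_is_attached([("A", True), ("A", True), ("B", False)]))  # False
# Helpful across the divide: attached.
print(note_is_attached([("A", True), ("A", True), ("B", True), ("B", True)]))  # True
```

The point of requiring cross-viewpoint agreement is to keep notes from becoming one faction’s downvote button.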

A UW-led study of X found that posts with Community Notes attached were less prone to going viral and got less engagement. After getting a Community Note, on average, reposts dropped 46% and likes dropped 44%.

“We found that Community Notes are effective when attached, especially in reducing engagement that signals support for the content, such as reposts and likes,” said senior author Martin Saveski, a UW assistant professor in the Information School. “But the spread of misinformation on social media is complex and multifaceted, and it requires multiple approaches working together to effectively curb it. Systems like Community Notes are an important addition to the platforms’ toolbox.”

The team published the findings Sept. 18 in Proceedings of the National Academy of Sciences of the United States of America.

Between March and June of 2023, the researchers tracked 40,000 posts for which a note was suggested. Of those, 6,757 notes were deemed helpful and were attached. The team followed each post for 48 hours after its note was attached and compared posts with notes to those without on two key aspects: engagement, such as likes and reposts, and diffusion.

Diffusion accounts for how a post spreads through the social network — essentially its virality. For example, do only people who follow an account engage with a post?

“We know from other studies that false information typically spreads faster, broader and more virally than true information does,” said lead author , a UW doctoral student in the Information School. “We found that Community Notes significantly change the way information spreads through a network. People who are distant in the social network from the person who posted the misinformation are much less likely to interact with the post. But people close to the source — followers, for instance — tend to be less affected by the note.”
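Distance from the source, as described above, can be sketched as a breadth-first search over the follower graph. The data and function names below are hypothetical, and the study’s actual diffusion measures are more involved; this only shows the basic idea of counting follower-graph hops between the poster and each engaging user.

```python
# Sketch of measuring how "far" engagement sits from the poster,
# via BFS hop-distance in a follower graph (hypothetical data).
from collections import deque

def distances_from(source, followers):
    """followers: adjacency dict mapping a user to the set of users
    who follow them. Returns BFS hop-distance from the poster for
    every reachable user."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        user = queue.popleft()
        for neighbor in followers.get(user, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[user] + 1
                queue.append(neighbor)
    return dist

# Toy graph: alice is followed by bob and carol; bob by dave.
followers = {"alice": {"bob", "carol"}, "bob": {"dave"}}
dist = distances_from("alice", followers)
engagers = ["bob", "dave"]  # users who reposted
print([dist[u] for u in engagers])  # [1, 2]
```

In this framing, the finding is that after a note appears, engagement from users at distance 2 and beyond falls off much more sharply than engagement from direct followers at distance 1.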

On average, the team found that after notes were added, engagement dropped 46% for reposts, 44% for likes, 22% for replies and 14% for views. Over posts’ whole lifespans, including engagement before notes were attached, the drops were 12% for reposts, 13% for likes, 7% for replies and 6% for views.

“We think views were less affected because what users see is mostly decided by X’s feed algorithm,” Saveski said. “From the public release of the algorithm, we know that X does not explicitly deemphasize posts with notes attached, but that could change in the future.”

The study was also able to get granular data on what affected posts’ spread. Notes added to altered media, like fake photos and videos, affected those posts more than they did text-based posts. Notes on very popular posts led to greater reductions in engagement. And getting notes appended quickly was vital.

“Content spreads rapidly across X, and if a note comes too late, few users will get a chance to see it,” Slaughter said. “Notes that take 48 hours or so to go up have almost no effect.”

Saveski’s lab at UW is now developing potential tools to speed up how quickly notes can be attached to posts, to increase their effectiveness.

The authors looked only at posts that had notes proposed in early 2023, and X has significantly updated its Community Notes methods since then. But X has since restricted researchers’ access to its data, making further academic studies infeasible. The paper also looked only at X, not at other social media platforms.

“Whether this kind of moderation is sustainable as many separate systems across different platforms, as it’s now being used, is really an open question,” Saveski said. “If someone is adding notes on X, does that make them less likely to do so on TikTok or Instagram? There’s also the question of how much platforms should collaborate and share data, which could help this scale. X has made its code and data available, but none of the other platforms have committed to opening up their systems yet.”

Co-authors include of Stanford University and of Yale University. This research was funded in part by a UW Information School Strategic Research Fund award and an Army Research Office Multidisciplinary University Research Initiative award.

For more information, contact Saveski at msaveski@uw.edu and Slaughter at is28@uw.edu.

How can social media be better? Four UW researchers compare strategies
/news/2023/10/24/better-social-media-research-twitter-bluesky-linkedin-threads/
Tue, 24 Oct 2023
A silhouette of a person looking at a phone.
The turmoil at large tech platforms has many people reconsidering what they want out of social media. Four researchers at the University of Washington are exploring different approaches to improve people’s experiences. Photo:

Major social media platforms are in upheaval. Bluesky and Meta’s Threads want to be Twitter. Meanwhile, Twitter has become X, and X wants to be an “everything app” — possibly including job listings, payment and ride-hailing. Amid this, essay after essay announces the impending death of social media.

The turmoil has many people reconsidering what they want out of social media at scale: Can it be better? Four researchers at the University of Washington have approached this question from different angles.

Amy X. Zhang, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering, was the senior author of two papers presented in Minneapolis last week. One looked at where articles shared on social media came from, with the aim of curbing misinformation; the other looked at how dissociation affects social media users.

For the last couple of years, Katherine Cross, a UW doctoral student in the Information School who researches online harassment, has written a column that often focuses on the trouble with social media. Amanda Baughan, a UW doctoral student in the Allen School, studies how people enter dissociative states on social media. Martin Saveski, a UW assistant professor in the iSchool, is researching tools to reduce polarization on social media.

UW News talked with the four of them about what’s wrong with social media and how it might improve.

What are some significant problems you see with major social media platforms?

Amy X. Zhang:  A big problem to me is the centralization of power — that the platforms can decide what content should be shown and what should get posted to the top of one feed for millions of people. That brings up issues of accountability and of localization to specific communities or cultures. A singular perspective — oftentimes coming from, for example, workers in Silicon Valley — won’t fit for lots of people. Alongside this is the homogenization of our digital social experiences, which don’t come close to the richness and vividness of our actual social lives.

Katherine Cross: Amy is quite right. I would add that open platforms — which anyone can join and on which everyone talks to everyone, constantly — allow for the most rapid acceleration of virality, far beyond anything that has existed previously. It also means that if someone is trying to start a harassment campaign, they can easily spread it virally to thousands of users. Those of us who remember LiveJournal know that earlier iterations of the Internet were no stranger to drama and harassment. But the design of earlier platforms provided a great many speed bumps for toxicity and abuse. A lot of that friction has gone away as a condition of the design of open platforms. So whether it’s Tumblr or Twitter or Facebook, these platforms allow for the most rapid acceleration of the worst aspects of our internet use.

Amanda Baughan: Some other problems are the many mechanisms that seek to draw people in and keep them on a site. These can be notifications that are personalized to the content that you like or the time you normally open the app; the infinite feeds that keep you scrolling; and the rewards structure that keeps you on the hunt for content that might scratch your brain in the way that you find most appealing. Even though social media could be a great tool for connection or self-expression, people are often in an adversarial relationship with these interfaces that are trying to keep them stuck.

Martin Saveski: I will add that these platforms are designed for very shallow connections. Right now, I’m asking: How can we design platforms with scale but still provide an environment where people can communicate and connect more deeply? After Twitter open-sourced its feed algorithm and many of the Facebook files were released, we know what we’d previously guessed: They primarily optimize for engagement. So how do we do that better? It’s clear that there is value in engagement. But perhaps there are other things that we could be thinking about when designing the experience.

How are you trying to make large social media platforms better for the people using them?

KC: My work is trying to do at least two things, practically. I’m looking at the lives and travails of content moderators, the people whose jobs it is to make the internet more usable for ordinary people. They deserve better working conditions and more mental health support. The second part is — I hate to make it seem so simple — almost an exhortation to spend less time on open platforms. As long as we have open platforms, the only effective solution for a number of problems is to simply get people to use these platforms less.

AB: I’ve been thinking a lot about our experiences online as dissociative, rather than addictive. Dissociation can be part of healthy cognitive functioning. Daydreaming, for example, is considered dissociation. But when you combine people’s reduced self-reflection and self-monitoring on a platform designed to keep them on a site, people start to sink more time into the platform than they really want to. This explains part of why people have these fraught relationships with their social media — neither satisfied, nor willing to quit. So I’ve looked at designs that might help people re-engage their self-monitoring and disrupt dissociation. For example, platforms could separate content into smaller chunks, which is currently available on X; add a “you’re all caught up” label; or tell users they’ve been scrolling for a certain amount of time.
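One of the designs Baughan mentions, the “you’re all caught up” label, can be sketched as a simple feed transformation. The data model and function names below are hypothetical (real platforms track read state server-side); the sketch just inserts a marker at the boundary between new posts and ones the user has already seen.

```python
# Sketch of a "You're all caught up" marker that interrupts
# infinite scrolling. Hypothetical data model.

CAUGHT_UP = {"type": "marker", "text": "You're all caught up"}

def with_caught_up_marker(feed, seen_ids):
    """feed: newest-first list of post dicts with an 'id' key.
    Insert the marker just before the first previously seen post,
    or at the end if every post is new."""
    out = []
    placed = False
    for post in feed:
        if not placed and post["id"] in seen_ids:
            out.append(CAUGHT_UP)
            placed = True
        out.append(post)
    if not placed:
        out.append(CAUGHT_UP)
    return out

feed = [{"id": 5}, {"id": 4}, {"id": 3}]
seen = {3, 4}
print([p.get("id", "marker") for p in with_caught_up_marker(feed, seen)])
# [5, 'marker', 4, 3]
```

The marker gives the scroller a natural stopping point, which is exactly the kind of self-monitoring cue the dissociation research argues for.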

AXZ: I’ve been looking at what it would mean to decentralize these major platforms’ power by building tools for users or communities who don’t have lots of time and resources. For instance, if you are getting harassed and you’re developing word lists and blocking harassers, can we build a tool that lets you share that work with people in a similar situation? I’m also really interested in encrypted messaging platforms, like WhatsApp or Signal. Right now, because of encryption, nobody’s moderating content. The platform can’t do it, and there aren’t tools for users or communities to do it. So you just have massive issues with abuse on these platforms.

MS: Recently I’ve worked with collaborators at Stanford to think about how to encode democratic values into feed algorithms. Intentionally or not, algorithms reflect values. We found that if we encode democratic values in platforms’ algorithms, we see a reduction in polarization, but people are still reasonably engaged. Now we’re launching a larger field experiment to study how people are affected if we sort their feeds differently or remove some types of information from them.

What do you see as the potential for large social media?

AXZ: I’ve always had a love-hate relationship with Twitter. It has been great for my career in many ways. I used to spend lots of time sharing my research, hearing about other people’s research, sometimes even starting collaborations. Twitter has been the de facto place for academic sharing and conversation, but should it be? Is it a place where junior scholars feel welcome to participate? Is it inclusive of everyone’s voices? Is it what we really want out of a forum for scholarly communication? In some ways, yes. But in many ways, no. Twitter has had so many problems over the years with harassment. If we were to design something that reflects the values of an academic community, which does want to be inclusive and to share its research with the world, what could that look like? I don’t know exactly, but I do think it takes some rethinking.

KC: Again, I agree completely with Amy. Twitter could, in theory, be good for sharing articles. Occasionally, when an article of mine really caught fire, it was partially because it was getting shared a lot on a platform like Twitter. But I’ve watched online harassment dynamics play out between journalists or academics. For example, I followed a lot of epidemiologists and public health experts, all of whom had expertise on COVID-19. And I watched as their excessive use of Twitter led them to degenerate into these warring camps. I’ve spoken to many of these people privately, and they said that it corroded actual academic relationships. That’s where I feel that the professional benefits are sometimes overstated.

These platforms can also be good for interpersonal relationships. I’ve made a lot of friends through Twitter. It has occasionally helped my career. It’s useful for networking in very small minority communities, like the transgender community, or any number of other groups of people who make up 1% of the population. It’s also been great for private crowdfunding because of the ease of virality on an open platform. But I still think that there is something to be said for recouping some of these benefits on smaller, more closed platforms.

Given all the turmoil with major platforms lately, are you hopeful about any of the changes you’re seeing either in platforms or in how the public is relating to these platforms?

MS: In an interesting way, the fact that Musk closed Twitter’s data access has encouraged researchers to think beyond Twitter. I’m personally very excited about new social media platforms — especially Bluesky, because people can own their data and also control what they see in their feeds without it being so centralized. Hopefully, that will lead to a better version of whatever we’ve had.

AB: The recent changes at Twitter have shown how big an impact platform design and governance can have on people’s experiences. I’ve seen the quality of my feed get much worse, and it’s led me to log off much more quickly. So I hope that this has led people — not just social media researchers — to question how these platforms are made and how they want to use them.

KC: I effectively stopped using Twitter when Musk took over, but earlier this year, I gave up on it completely. I think that, like Amanda, I take hope from the fact that a lot of people are clearing away their preconceptions about social media being inevitable and fixed. I always try to teach my students that no technology’s form is inevitable. We have a say over its shape.

AXZ: When I started grad school, Facebook was the dominant thing. It was so hard for me to imagine a world without it, or without the social networking paradigm of people following each other. I just assumed that this was the future. Now we’re in this fragmented landscape. People are leaving Facebook for other platforms, then leaving those platforms for even other platforms. We lose something with that fragmentation, for sure. When Twitter first appeared, there was some excitement about its role for democracy, that it could be “the global town square.” It was perhaps naive of us to think that, and we’ve learned the downsides. Now we’re correcting toward a fragmented landscape, which is maybe more reflective of how we interact socially and is perhaps healthier.

KC: In my dissertation, I argue that social media has often been anti-political. During the Iranian protest movement in 2009, for instance, there was so much hope that Twitter and open platforms like it were going to be self-organizing networks that could change the world. What we began to get were things like the Arab Spring, during which governments were toppled, but not the endurance of democracy, because the latter requires a public to be able to deliberate. In theory, Twitter can get masses of people out onto the streets, which is extraordinarily important. But it gives them no mechanism for deciding what to do with all that power that they have gained. And it’s why these movements often dissolve. These platforms are very good at provoking internecine conflict, but not good at providing a space for safe, effective deliberation to do or become something new as a collective.

For more information, contact Baughan at baughan@cs.uw.edu, Cross at kcross1@uw.edu, Saveski at msaveski@uw.edu and Zhang at axz@cs.uw.edu.
