Many survey respondents rated seeking out sexually explicit ‘deepfakes’ as more acceptable than creating or sharing them
UW News, Aug. 8, 2024

A computer keyboard is illuminated by the screen in a dark space. In a survey of 315 people, respondents largely found creating and sharing sexually explicit “deepfakes” unacceptable, but far fewer respondents strongly opposed seeking out these media.

Content warning: This post contains details of sharing intimate imagery without consent that may be disturbing to some readers.

While much public attention on sexually explicit “deepfakes” has focused on high-profile cases, these non-consensual sexual images and videos generated with artificial intelligence can target anyone. As text-to-image AI models grow more sophisticated and easier to use, the problem is escalating. That escalation led Google to announce last week that it will curb sexually explicit deepfakes in its search results, and federal legislation allowing victims to seek legal damages from deepfake creators has advanced in Congress.

Given this rising attention, researchers at the University of Washington and Georgetown University wanted to better understand public opinions about the creation and dissemination of what they call “synthetic media.” In a survey, 315 people largely found creating and sharing synthetic media unacceptable. But far fewer respondents strongly opposed seeking out these media — even when they portrayed sexual acts.

Yet research has shown that other people viewing image-based abuse, such as nudes shared without consent, significantly harms the victims. And in many U.S. states, creating and sharing such nonconsensual content is a crime.

“Centering consent in conversations about synthetic media, particularly intimate imagery, is key as we look for ways to reduce its harms — whether that’s through technology, public messaging or policy,” said lead author Natalie Grace Brigham, who was a UW master’s student in the Paul G. Allen School of Computer Science & Engineering while completing this research. “In a synthetic nude, it’s not the subject’s body — as we’ve typically considered it — that’s being shared. So we need to expand our norms and ideas about consent and privacy to account for this new technology.”

The researchers will present their findings Aug. 13 at the 20th Symposium on Usable Privacy and Security in Philadelphia.

“In some sense, we’re at a new frontier in how people’s rights to privacy are being violated,” said co-senior author Tadayoshi Kohno, a UW professor in the Allen School. “These images are synthetic, but they still are of the likeness of real people, so seeking them out and viewing them is harmful for those people.”

The survey, which the researchers conducted online through a platform that pays people to respond to surveys on a variety of topics, asked U.S. respondents to read vignettes about synthetic media. The team varied details in these scenarios, such as who created the synthetic media (an intimate partner, a stranger), why they created it (for harm, entertainment or sexual pleasure) and what action was shown (the subject performing a sexual act, playing a sport or speaking).

The respondents then rated various actions around the scenarios — creating the video, sharing it in different ways, seeking it out — from “totally unacceptable” to “totally acceptable” and explained their responses in a sentence or two. Finally, they filled out surveys on consent attitudes and demographic information. The respondents were over the age of 18; 50% were women, 48% men, 2% non-binary and 1% agender.

The survey respondents rated various actions around the synthetic media scenarios; their responses to each were shown in an accompanying graph.

Overall, respondents found creating and sharing synthetic media unacceptable. Across scenarios, the median share rating these actions “totally” or “somewhat” unacceptable was 90% for creating the media and 94% for sharing them. But the median share finding it unacceptable to seek out synthetic media was only 53%.
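As a purely illustrative sketch (the data, labels and function names below are invented for this example, not taken from the study), "median unacceptable" figures like these can be computed from raw Likert responses as follows:

```python
# Illustrative only: computing the median share of "unacceptable" ratings
# across scenarios. The responses here are made up; the study's real data
# and analysis code are not shown in this article.
from statistics import median

def pct_unacceptable(ratings):
    """Percentage of respondents choosing either 'unacceptable' option."""
    bad = sum(r in ("totally unacceptable", "somewhat unacceptable")
              for r in ratings)
    return 100 * bad / len(ratings)

# One list of ratings per scenario; the reported figure is the median
# of the per-scenario percentages.
scenarios = [
    ["totally unacceptable"] * 9 + ["somewhat acceptable"],   # 90% unacceptable
    ["totally unacceptable"] * 8 + ["neutral"] * 2,           # 80% unacceptable
]
median_pct = median(pct_unacceptable(s) for s in scenarios)
```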

Men were more likely than respondents of other genders to find creating and sharing synthetic media acceptable, while respondents who had favorable views of sexual consent were more likely to find these actions unacceptable.

“There has been a lot of policy talk about preventing synthetic nudes from getting created. But we don’t have good technical tools to do that, and we need to simultaneously protect consensual use cases,” said co-senior author Elissa Redmiles, an assistant professor of computer science at Georgetown University. “Instead, we need to change social norms. So we need things like deterrence messaging on searches — we’ve seen that be effective elsewhere — and consent-based education in schools focused on this content.”

Respondents found scenarios in which intimate partners created synthetic media of people playing sports or speaking for the intent of entertainment the most acceptable. Conversely, nearly all respondents found it totally unacceptable to create and share sexual deepfakes of intimate partners with the intent of harm.

Respondents’ reasoning varied. Some found synthetic media unacceptable only if the outcome was harmful. For example, one respondent wrote, “It’s not harming me or blackmailing me… [a]s long as it doesn’t get shared I think it’s okay.” Others, though, centered their right to privacy and right to consent. “I feel it’s unacceptable to manipulate my image in such a way — my body and how it looks belongs to me,” wrote another.

The researchers note that future work in this space should explore the prevalence of non-consensual synthetic media, the pipelines for how it’s created and shared, and different methods to deter people from creating, sharing and seeking out non-consensual synthetic media.

“Some people argue that AI tools for creating synthetic images will have benefits for society, like for the arts or human creativity,” said co-author Miranda Wei, a doctoral student in the Allen School. “However, we found that most people thought creating synthetic images of others in most cases was unacceptable — suggesting that we still have a lot more work to do when it comes to evaluating the impacts of new technologies and preventing harms.”

This research was funded in part by the National Science Foundation and the Google PhD Fellowship Program.

For more information, contact Brigham at nbrigham@uw.edu, Wei at weimf@cs.washington.edu, Kohno at yoshi@cs.washington.edu and Redmiles at elissa.redmiles@georgetown.edu.

Political ads during the 2020 presidential election cycle collected personal information and spread misleading information
UW News, Nov. 8, 2021

UW researchers found that political ads during the 2020 election season used multiple concerning tactics, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates. Photo: UW

Online advertisements are frequently splashed across news websites. Clicking on these banners or links provides the news site with revenue. But these ads also often use manipulative techniques, researchers say.

UW researchers were curious about what types of political ads people saw during the 2020 presidential election. The team looked at more than 1 million ads from almost 750 news sites between September 2020 and January 2021. Of those ads, almost 56,000 had political content.

Political ads used multiple tactics that concerned the researchers, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates.

The researchers presented these findings Nov. 3 at the ACM Internet Measurement Conference 2021.

“The election is a time when people are getting a lot of information, and our hope is that they are processing it to make informed decisions toward the democratic process. These ads make up part of the information ecosystem that is reaching people, so problematic ads could be especially dangerous during the election season,” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

The team wondered if or how ads would take advantage of the political climate to prey on people’s emotions and get people to click.

“We were well positioned to study this phenomenon because of our previous research on misleading information and manipulative techniques in online ads,” said Tadayoshi Kohno, a UW professor in the Allen School. “Six weeks leading up to the election, we said, ‘There are going to be interesting ads, and we have the infrastructure to capture them. Let’s go get them. This is a unique and historic opportunity.'”

The researchers created a list of news websites that spanned the political spectrum and then used a web crawler to visit each site every day. The crawler scrolled through the sites and took screenshots of each ad before clicking on the ad to collect the URL and the content of the landing page.

The team wanted to make sure to get a broad range of ads, because someone based at the UW might see a different set of ads than someone in a different location.

“We know that political ads are targeted by location. For example, ads for Washington candidates will only be featured to viewers browsing from the state of Washington. Or maybe a presidential campaign will have more ads featured in a swing state,” said lead author Eric Zeng, a UW doctoral student in the Allen School.

“We set up our crawlers to crawl from different locations in the U.S. Because we didn’t actually have computers set up across the country, we used a VPN to make it look like our crawlers were loading the sites from those locations.”

The researchers initially set up the crawlers to search news sites as if they were based in Miami, Seattle, Salt Lake City and Raleigh, North Carolina. After the election, the team also wanted to capture any ads related to the Georgia special election and the Arizona recount, so two crawlers started searching as if they were based in Atlanta and Phoenix.

The team continued crawling sites throughout January 2021 to capture any ads related to the Capitol insurrection.
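The multi-city setup described above can be sketched as a simple crawl schedule. The class and function names below are our own illustration, not the study's actual code; the real system drove a full browser whose traffic was routed so that each crawler appeared to browse from its assigned city.

```python
# Hypothetical sketch (our own names, not the study's code): one crawl job
# per (city, site) pair, so each vantage point collects the ads targeted
# at viewers in that location.
from dataclasses import dataclass
from itertools import product

# Cities named in the article: four initial vantage points, plus Atlanta
# and Phoenix added after the election.
CITIES = ["Miami", "Seattle", "Salt Lake City", "Raleigh",
          "Atlanta", "Phoenix"]

@dataclass(frozen=True)
class CrawlJob:
    city: str   # location the crawler's traffic is made to appear from
    site: str   # news site to visit that day

def make_daily_jobs(sites, cities=CITIES):
    """Build the day's schedule: every site is visited from every city."""
    return [CrawlJob(city, site) for city, site in product(cities, sites)]

# Each job would then drive a real browser to load the site, scroll,
# screenshot each ad, click it, and record the landing page's URL and content.
jobs = make_daily_jobs(["example-news-site.com"])
```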

Some political ads posed as a poll to collect people’s personal information. Examples included a poll asking whether Trump should concede, an ad asking people to sign a thank-you card for Dr. Fauci, an ad reading “Sign the petition that Nancy Pelosi hates” and a poll about whether illegal immigrants should get unemployment benefits. Photo: UW

The researchers used natural language processing to classify ads as political or non-political. Then the team went through the political ads manually to further categorize them, such as by party affiliation, who paid for the ad or what types of tactics the ad used.
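The article does not describe the team's actual classifier, so as a toy illustration only, a first-pass political/non-political split could look like the keyword heuristic below. The keyword list and function name are our own assumptions, far cruder than real natural language processing:

```python
# Toy sketch, not the researchers' method: flag an ad as possibly political
# if it mentions any term from a hand-picked keyword list. A real pipeline
# would use a trained NLP model plus manual review, as the article notes.
import re

POLITICAL_TERMS = {"trump", "biden", "pence", "pelosi", "election", "senate",
                   "congress", "ballot", "vote", "democrat", "republican"}

def looks_political(ad_text: str) -> bool:
    """Return True if the ad text contains any political keyword."""
    words = set(re.findall(r"[a-z]+", ad_text.lower()))
    return bool(words & POLITICAL_TERMS)
```

Ads flagged this way would still go to human reviewers for the finer categories (party affiliation, funder, tactics) mentioned above.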

“We saw these fake poll ads that were harvesting personal information, like email addresses, and trying to prey on people who wanted to be politically involved. These ads would then use that information to send spam, malware or just general email newsletters,” said co-author Miranda Wei, a UW doctoral student in the Allen School. “There were so many fake buttons in these ads, asking people to accept or decline, or vote yes or no. These things are clearly intended to lead you to give up your personal data.”

Ads that appeared to be polls were more likely to be used by conservative-leaning groups, such as conservative news outlets and nonprofit political organizations. These ads were also more likely to be featured on conservative-leaning websites.

The most popular type of political ad was click-bait news articles that often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. The team observed more than 29,000 of these ads, and the crawlers often encountered the same ad multiple times. Similar to the fake poll ads, these were also more likely to appear on right-leaning sites.

“One example was a headline that said, ‘There’s something fishy in Biden’s speeches,'” said Roesner, who is also the co-director of the UW Security and Privacy Research Lab. “I worry that these articles are contributing to a set of evidence that people have amassed in their minds. People probably won’t remember later where they saw this information. They probably didn’t even click on it, but it’s still shaping their view of a candidate.”

Click-bait news articles often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. Examples included Pence making an “eyebrow raising declaration after DC siege,” “Joe Biden goes on head-turning rant, fires off at reporter” and Ted Cruz making a “head turning statement to Trump about the riot.” Photo: UW

The researchers were surprised and relieved, however, to find a lack of ads containing explicit misinformation about how and where to vote, or who won the election.

“To their credit, I think the ad platforms are catching some misinformation,” Zeng said. “What’s getting through are ads that are exploiting the gray areas in content moderation policies, things that seem deceptive but play to the letter of the law.”

The world of online ads is so complicated, the researchers said, that it’s hard to pinpoint exactly why or how certain ads appear on specific sites or are viewed by specific viewers.

 

  • This paper was one of three runners-up for the best paper award at the ACM Internet Measurement Conference.

 

“Certain ads get shown in certain places because the system decided that those would be the most lucrative ads in those spots,” Roesner said. “It’s not necessarily that someone is sitting there doing this on purpose, but the impact is still the same — people who are the most vulnerable to certain techniques and certain content are the ones who will see it more.”

To protect themselves from problematic ads, the researchers suggest, web surfers should be careful about taking content at face value, especially if it seems sensational. People can also limit how many ads they see by installing an ad blocker.

Theo Gregersen, a UW undergraduate student studying computer science, is also a co-author on this paper. This research was funded by the National Science Foundation and the John S. and James L. Knight Foundation.

For more information, contact badads@cs.washington.edu.

Grant number: CNS-2041894
