Research Compendium

This research compendium (still in development!) curates and translates University of Pennsylvania research on the quantitative study of the information ecosystem and its impact on democracy. It brings together empirical work examining issues such as information integrity, media ecosystems, political speech, and democratic resilience, with the aim of making rigorous academic research more accessible and actionable.

The compendium features research from seven schools and centers at Penn—the Annenberg School for Communication, the Annenberg Public Policy Center, the Wharton School, SAS Political Science, SEAS Computer and Information Science, the School of Social Policy and Practice, and the Carey Law School—and is designed to serve journalists, media leaders, civil society organizations, and policymakers seeking evidence-based insights into the functioning and governance of digital information systems.

New: How deceptive online networks reached millions in the US 2020 elections
Social Media & Platforms · Misinformation · Political Polarization · 2020 Meta Election Project
Sandra González-Bailón & Deen Freelon
Key Takeaway
At least 40 million American Facebook and Instagram users were exposed to deceptive online networks in the lead-up to the 2020 election. However, this content constituted a very small share of overall content consumption (only 0.3% for exposed users), and much of the exposure was driven by everyday users rather than large consolidated networks.

Overview

This latest research paper from the 2020 Meta election project examined the scale, reach, and effects of deceptive online networks that targeted US users on Facebook and Instagram during the 2020 election. The authors define deceptive online networks as "coordinated efforts that take place online where audiences are misled about the identity of the people behind the network." The dataset included 49 deceptive networks found to be active between 26 June 2020 and 15 February 2021. Here’s what they found:

  1. Deceptive networks achieved substantial reach, driven by a small number of actors. At least 37 million Facebook and 3 million Instagram users were exposed to content from deceptive networks during the study period. Over 70% of all users who viewed content from deceptive online networks were exposed to it via only three Facebook networks. Overall, the authors find that exposure to deceptive online networks “accounts for a very small share of users’ overall political content consumption.”
  2. During the study period, for deceptive networks engaging in political discourse, Meta identified three times as many financially motivated networks as purely politically motivated ones. The financially motivated networks also reached more users and appeared to share substantial political content in service of their financial goals.
  3. Most users saw deceptive content indirectly, that is, by viewing posts from regular user accounts “unaffiliated with the networks resharing deceptive network content.” These everyday users often unknowingly amplified such content to audiences beyond the networks’ direct reach.
  4. The study found that users who were older, more conservative, previously exposed to false news, and heavier users of Facebook were especially susceptible to these deceptive online networks.

Why Is This Important?

This paper points to the importance of understanding why everyday social media users share deceptive content. That understanding can inform targeted interventions, attuned to individual preferences and behaviors, that curb misinformation and uphold election integrity.

Read Paper →
New: Animosity is for the Audience: How Social Context Shapes Expressions of Political Hostility
Political Polarization
Yphtach Lelkes
Key Takeaway
Expressions of partisan animosity are sometimes used as a tool for forging ties and building trust within groups that share similar political views.

Overview

The authors examine whether partisan animosity reflects deep-rooted hatred or serves as a means to signal "loyalty and conformity" to one's social group. They conducted two studies to test this claim: first, they analyzed interview data from the 2012 American National Election Studies (ANES) to see whether respondents adjusted their feelings of animosity based on the partisanship of the interviewer. Second, they conducted a real-time online news-sharing experiment with 1,510 participants to see if animosity was amplified when participants were randomly partnered with someone with similar political leanings. Here are some key findings:

  1. The analysis of the ANES data revealed that respondents expressed more hostility toward the opposing party when they believed the interviewer shared their political views. This suggests that partisan animosity partly depends on who the audience is and serves as a social signal rather than only reflecting political beliefs.
  2. In the online news-sharing experiment, respondents were more likely to express negative views about an opposing political party if their "partner's partisanship was disclosed and matched their own." However, the researchers could not determine whether respondents were exaggerating their beliefs to match their partner's or simply felt more comfortable expressing them.

Why Is This Important?

This research suggests that online expressions of partisan animosity depend on who is watching—people express hostility in front of an audience that rewards it, using it as a way to signal trust and belonging within their political group. The anger we see online may therefore reflect not only what people believe, but also what they feel socially motivated to express.

Read Paper →
The Effects of Political Advertising on Facebook and Instagram before the 2020 US Election
Political Polarization · Social Media & Platforms · 2020 Meta Election Project
Sandra González-Bailón & Deen Freelon
Key Takeaway
Political ads in the final weeks before the 2020 U.S. presidential election—when voter opinions are already largely fixed—have little to no effect on political beliefs.

Overview

The researchers looked at the impact of political advertising on 36,906 Facebook users and 25,925 Instagram users at the time of the 2020 US election. The paper analyzes internal data from Meta covering the six weeks before the election to see whether removing political ads from participants' feeds had any effect on electoral outcomes. Here are the key findings:

  1. Political Ads Are Targeted toward Party Supporters: Political ads during the 2020 presidential election were not primarily aimed at persuading undecided voters: 46% sought donations, 26% intended to persuade, and 17% sought to collect voter information.
  2. No Effect of Political Ads on Political Attitudes: Removing political ads entirely had no detectable effect on candidate favorability, political knowledge, polarization, or confidence in the legitimacy of the election—even though such ads had a substantial presence in users' feeds (23 ads per week for the average user).

Why Is This Important?

Political advertisements faced intense scrutiny around the time of the 2020 election: "over three fourths of Americans said that it is not acceptable for social media companies to use data to target political ads, and over half said that social media companies should not allow political ads at all." This paper provides evidence that political advertisements preceding a major electoral race are catered primarily to a candidate's own supporters as a fundraising effort, since by that point voter opinions are largely fixed.

Read Paper →
New: A large-scale evaluation of commonsense knowledge in humans and large language models
LLMs & Civic Discourse · Guiding the Field
Duncan Watts
Key Takeaway
Common sense in humans is subjective, yet traditional benchmarks for evaluating common sense in LLMs mainly measure accuracy relative to a single assumed ground truth.

Overview

Aligning LLMs to mirror how humans think and process information is quite challenging. LLM developers tend to evaluate this “common sense” knowledge using answers that human annotators have marked as true. This, however, assumes that common sense is homogeneous across humans, when in fact what is common sense to one person may not apply to the larger population. In this paper, the researchers propose a way to evaluate commonsense knowledge in LLMs without assuming some “ground truth” that may only reflect the views of the model developer. They do so in two ways.

  1. First, the researchers treat each LLM like an individual survey respondent, asking it to give yes or no answers to thousands of statements. They measure first whether the model's responses align with the human majority, and second whether it can accurately predict what most people would say (see the sketch after this list). On these metrics, more than two-thirds of models scored below the median human, and the best-performing model only exceeded the commonsense aptitude of 64% of human participants. Surprisingly, smaller open-weight models, such as Mistral-7B, outperformed larger frontier models, such as GPT-4.
  2. The researchers also tested whether LLMs could simulate how a large group of humans thinks about common sense. The underlying idea is that the more frequently a statement is endorsed by humans in an LLM's training data, the more likely the model is to accept it, meaning LLMs may effectively represent “the average human contributor of that data.” To test this, they repeatedly prompted each LLM to get a range of responses (which the authors call a “silicon sample”) and compared their responses to those of real human participants. They found that humans tend to agree more on factual, literal statements than on figures of speech, and most LLMs also reflected this pattern. However, most LLMs tended to rate statements as either commonsensical or not, with little middle ground. The authors attribute this to extensive fine-tuning, which pushes models toward confident, decisive answers and away from the more uncertain responses that better reflect how humans actually think.
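
To make these metrics concrete, here is a minimal sketch of the survey-style scoring, with toy statements and hard-coded model answers standing in for real survey items and API calls; it is an illustration of the idea, not the authors' code:

```python
# A toy sketch (not the authors' code) of the two survey-style metrics:
# (1) does the model's own answer match the human majority, and
# (2) can the model predict what most people would say.
from collections import Counter

# Toy human survey: statement -> yes/no votes from individual respondents.
human_votes = {
    "A triangle has three sides.": [True, True, True, True],
    "It is rude to arrive early to a party.": [True, False, False, False],
}

# Hard-coded stand-ins for two LLM prompts; in practice each is an API call.
model_own_answer = {  # "Do you agree with this statement?"
    "A triangle has three sides.": True,
    "It is rude to arrive early to a party.": True,
}
model_predicted_majority = {  # "Would most people agree with this statement?"
    "A triangle has three sides.": True,
    "It is rude to arrive early to a party.": False,
}

def majority(votes: list[bool]) -> bool:
    """Most common answer among human respondents."""
    return Counter(votes).most_common(1)[0][0]

def score(model_answers: dict[str, bool]) -> float:
    """Fraction of statements where the model matches the human majority."""
    hits = sum(model_answers[s] == majority(v) for s, v in human_votes.items())
    return hits / len(human_votes)

print(f"alignment with human majority: {score(model_own_answer):.2f}")       # 0.50
print(f"predicting the human majority: {score(model_predicted_majority):.2f}")  # 1.00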

Why Is This Important?

Assuming a single “ground truth” and evaluating LLMs against it may be a straightforward way to measure progress in AI alignment. However, this “one truth myth” has misguided AI research and creates an “illusion of success” when LLMs pass tests that don't account for the subjectivity in how humans process cultural information. LLMs are already deployed in many subjective contexts, like toxicity detection and image classification, where this can have unintended consequences. This paper makes an important contribution by showing how model developers can employ “human judgments as a normative standard for human-like AI benchmarking.”

Read Paper →
How Well Do Large Language Models Understand African American Language? Causes and Implications
LLMs & Civic Discourse
Desmond Patton
Key Takeaway
This review paper shows that LLMs, by virtue of how they are trained and deployed, “are in part beholden to what language—and importantly, whose language—is being modeled in ways that have effects on the real speaker communities who use the technology.”

Overview

This review paper discusses empirical research examining how LLMs interpret African American Language (AAL) and whether they “distort the communicative intent” of its speakers. The authors synthesize research explaining why LLMs are worse at interpreting AAL than White Mainstream English (WME) and the downstream impact on African American users. They identify three sources of bias that may explain the disparity in LLM performance on AAL vs. WME:

  1. Data: Research demonstrates that AAL is severely underrepresented in LLM training data. Studies of the widely used C4 corpus found AAL constituted only 0.07% of documents, compared to 97.8% for WME. Even among documents representing AAL, research found that the text frequently “reinforced stereotypes, and/or represented appropriated speech.” Research attributes this imbalance to automated quality filters that disproportionately remove AAL texts, filtering out 42% of AAL documents versus only 6.2% of WME documents. LLMs also perform poorly on AAL because pretraining data includes what is easily available on the web, which for AAL is often performative online contexts like Twitter and hip hop lyrics rather than naturalistic speech.
  2. Annotations: How content is labeled is shaped by the characteristics of the annotator. This is relevant for AAL because when data has been annotated without consideration of the annotators’ race, “the resulting labeled data may not reflect the views of African Americans.” For instance, research found that conservatives who also tend to “hold racist beliefs” are more likely to label AAL posts as offensive, clearly showing that the political views of the annotators can act as a source of bias against AAL.
  3. Model: The authors note that data quality alone doesn't explain bias—the models themselves also introduce problems. Better training data can't fully solve the issue because language constantly changes, and biased models can worsen existing biases when retrained. Research shows models have more trouble understanding AAL texts than WME texts and rate AAL documents as lower quality. The inability of LLMs to accurately interpret AAL has significant impacts: AAL speakers have to work harder to be understood by LLMs, which can make some Black users “feel that their culture and language are not valued.”

Why Is This Important?

This paper shows that LLMs are trained and deployed in ways that don't account for the linguistic traditions of African Americans. LLMs should aim to interpret AAL better, especially when they are used in essential sectors like healthcare or financial services. At the same time, the authors raise concerns about LLMs that understand AAL engaging in cultural appropriation: “LLMs capable of generating AAL risk enabling malicious users to impersonate Black people online and potentially further perpetuate stereotypes of AAL.”

Read Paper →
Perceiving Politics on TikTok: A User-Centered Approach to Understanding Political Content on TikTok
Social Media & Platforms
Danaé Metaxa
Key Takeaway
A majority of political content on TikTok is associated with positive sentiment, and news content on TikTok rarely comes from official accounts. If researchers use news content as a proxy for examining political content, the approach may be "challenging to implement and miss the bigger picture if applied to TikTok."

Overview

TikTok's popularity has grown substantially in recent years, with recent Pew research suggesting that four in ten young adults in the US get their news from TikTok. Yet little is known about the prevalence and impact of political content on the platform. In this paper, the authors used a "novel browser-based tool to track users' exposure to and perceptions of political content" among 368 U.S.-based participants. Using a combination of user annotations and natural language processing, the authors identify and categorize the political topics TikTok users encounter. They find that most users have a positive experience on TikTok and largely do not see political content. Other key findings include:

  1. When users were initially asked to describe political content, they defaulted to broad terms like "politician" or "election." But when reacting to actual videos in their feeds, they identified specific topics like race, gender rights, and policing.
  2. Most political topics evoked positive sentiment and relatively few negative responses (anger or sadness). However, the strongest positive and negative sentiments concerned content related to "COVID" and "TRUMP." Thus, political content on TikTok overall "often leans positive," but polarizing topics are still portrayed with strong sentiments across party lines.
  3. The authors found that even when users encountered content that appeared to be from news broadcasts, it was rarely posted by official news media accounts. The authors suggest this indicates that "traditional news media accounts have yet to fully breach the TikTok platform."

Why Is This Important?

Public and legislative anxieties about TikTok do not accurately reflect what users actually experience on the platform. This study matters because it grounds that debate in real behavioral data. By capturing what ordinary users saw and flagged as political in real time, it reveals a meaningful gap between how people describe political content in the abstract and what they actually perceive as political while watching—a distinction with direct implications for how platforms and policymakers define and regulate political content.

Read Paper →
The Effect of Deactivating Facebook and Instagram on Users' Emotional State
Social Media & Platforms · 2020 Meta Election Project
Sandra González-Bailón & Deen Freelon
Key Takeaway
Deactivating Facebook and Instagram in the weeks before the 2020 US presidential election modestly improved people's mental health—the effect was more pronounced for Facebook.

Overview

This paper examines whether deactivating Facebook or Instagram improves users' emotional wellbeing. In an experiment 20 times larger than prior studies, the researchers recruited 19,857 Facebook users and 15,585 Instagram users and paid a subset of participants to deactivate their accounts for the six weeks leading up to the 2020 US presidential election. They then measured self-reported levels of happiness, depression, and anxiety at the start and end of the study. Here are some of the findings:

  1. Deactivating Facebook improved emotional wellbeing: Users who deactivated their accounts reported a small improvement in overall emotional state compared to those who did not, with the largest effects seen in users over 35.
  2. Deactivating Instagram improved wellbeing for young women: While there was no overall change in users' emotional state, women aged 18-24 reported modest improvements in emotional wellbeing.
  3. The improvements were not explained by increased offline time: The study found that users migrated to other platforms rather than going offline entirely, suggesting the improvement in mental health was not solely a consequence of reduced screen time.

Why Is This Important?

Despite growing concern over social media's impact on mental health, the evidence remains mixed. This paper contributes to the debate by isolating the effects of Facebook and Instagram deactivation, revealing that their emotional impact varies by age and gender—with Facebook most affecting users over 35 and Instagram most affecting young women.

Read Paper →
Computational Social Science: Past, Present, and Future
Guiding the Field
Duncan Watts
Key Takeaway
Computational social science is "a collection of compelling and consequential problems, grounded in the rise of socio-technical systems: systems of devices, platforms, algorithms, and data that are equally social and computational, and that cannot be understood or managed through either lens in isolation of the other."

Overview

This introductory chapter lays out Duncan Watts and David Lazer's "subjective appraisal of the field of computational social science (CSS)." They first trace the field's history, beginning in the late 90s, and its growth in popularity during the Web 2.0 revolution, which gave researchers access to unprecedented amounts of online data as daily life came to rely on digital tools. In 2007, Duncan Watts argued that, if handled well, data about online communications could "revolutionize our understanding of collective human behavior." Incidentally, 2007 was also the year when people began interpreting CSS as a field that could "produce an understanding of the global network on which many global problems exist: SARS and infectious disease, global warming, strife due to cultural collisions, and the livability of our cities.”

What is the current state of CSS? Watts and Lazer argue that the field has produced a substantial number of empirical insights that would have been difficult or impossible in the pre-digital era. Examples include large-scale validation of network science theories using online social networks, measuring “structural virality” across billions of social media cascades, and predicting poverty levels from mobile phone metadata. But has the field's impact extended beyond the realm of scientific publishing? The authors conclude that the potential exists but the evidence thus far is "inconclusive."

Looking to the future, the authors identify key challenges that will shape the field in the next decade. First, the field's overreliance on industry data creates a power imbalance wherein platforms decide which data to share with researchers and what data remains "locked up." Second, CSS researchers will continue grappling with "the difference between what is measured and what would ideally be measured." Third, CSS should pursue solution-oriented research intended to tackle real-world problems. Finally, embracing open science and developing ethical frameworks for digital data will be essential to ensure research is reliable, transparent, and socially responsible.

Read Paper →
The Post-API Age of Social Media Data Access: Past, Present, and Future
Social Media & Platforms · Guiding the Field
Deen Freelon
Key Takeaway
The last 20 years have witnessed a trend toward reduced researcher access to social media data, access that is also "permanently contingent on factors over which researchers have little or no control."

Overview

This research article provides a historical overview of how social media data access has evolved over the last twenty years. The authors identify four key periods starting in 2006: a "laissez-faire" era (2006–2011), when platforms offered relatively open, free access that researchers had not yet widely adopted; an "authentication period" (2011–2018), when platforms began tightening restrictions and requiring credentials, just as social scientists were realizing the value of this data for studying online communication; a "limited options period" (2018–2020), triggered by Meta shutting down academic API access following the Cambridge Analytica scandal; and an "academic cooperation period" (2020–2023), wherein social media platforms "implemented academic-only data sources in response to public scrutiny about their role in society." Based on this historical assessment, the paper evaluates the “data access options” currently provided by the six major technology companies. Here’s what they found:

  1. Laissez-Faire API: The oldest and most open data access regime, offering free, on-demand access with no application or institutional affiliation required. Currently, YouTube and Reddit operate under this model (illustrated in the sketch below).
  2. Academic API: Emerging after a decade of the laissez-faire approach, these APIs impose more control over who can access platform data through lengthy, manually reviewed applications. Currently, TikTok and Reddit's enhanced tier fall into this category, requiring lengthy applications and academic affiliation.
  3. Walled Garden: Free and restricted to academics, but limits what data can be exported. Facebook and Instagram only allow data to be downloaded above certain visibility thresholds, with lower-visibility content confined to Meta's online "clean room."
  4. Pay-to-play API: Access to data requires payment. X/Twitter is currently the only platform in this category, having eliminated its free academic API after Elon Musk acquired the company.

The authors close with recommendations urging platforms to make data more freely downloadable, outsource access decisions to independent academic bodies, and revisit data management rules that were written with commercial developers in mind and may compromise research rigor in some cases.
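
As a concrete illustration of the laissez-faire tier, the short sketch below reads Reddit's public JSON listing without any credentials. This is our illustrative example, not code from the paper; the endpoint is rate-limited, governed by Reddit's terms of service, and, as the article stresses, could be restricted at any time:

```python
# Minimal illustration (ours, not the paper's) of "laissez-faire" access:
# Reddit's public JSON listings can be read without credentials.
import json
import urllib.request

url = "https://www.reddit.com/r/politics/top.json?limit=5&t=day"
req = urllib.request.Request(url, headers={"User-Agent": "research-demo/0.1"})

with urllib.request.urlopen(req) as resp:
    listing = json.load(resp)

# Each child is one post; print its score and title.
for post in listing["data"]["children"]:
    print(post["data"]["score"], post["data"]["title"])
```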

Why Is This Important?

Data access is, and will continue to be, shaped by companies' policy shifts, public scandals, and legislation. Researchers therefore cannot rely solely on platform data for their research. As the authors note, researchers' ability to “analyze future platform data” will depend as much on innovation from the research community as on the “whims of the platforms’ corporate owners.”

Read Paper →
A Megastudy of Behavioral Interventions to Increase Voter Registration Ahead of the 2024 U.S. Presidential Election
Political Polarization · Persuasion & Behavior Change
Emily B. Falk
Key Takeaway
This megastudy points to the “intention-behavior gap” for voter registration and shows that when prior motivation is low, “interventions that successfully boost intentions may be insufficient to prompt action.”

Overview

This paper tests the efficacy of ten “expert-crowdsourced, theoretically-based psychological interventions” in bolstering electoral participation with a sample of eligible unregistered US voters ahead of the 2024 presidential election. These interventions drew on established behavioral science principles—such as correcting misperceptions about registration difficulty; emphasizing the moral basis for civic participation; and using escalating commitment techniques that combine multiple forms of social pressure, such as telling people that voting records are public. The authors measure the impact of these interventions across different stages of electoral participation, from stated voting intentions to clicking on voter registration websites to actual registration and turnout. The control condition in this case was an intervention unrelated to voting. Here are some key findings:

  1. Eight out of ten interventions significantly increased intentions to vote, and the “Escalating Commitments” intervention had the strongest effect, boosting voting intentions by eight percentage points.
  2. Five interventions led to increased click rates for voter registration websites compared to the control. Intuitively, interventions were less effective on participants who reported low political interest and voting intention prior to the study, showing that how people feel about voting from the outset is a stubborn determinant of electoral participation.
  3. Despite having positive effects on voting intentions and, in some cases, click rates, none of the interventions “had a significant impact on voter registration.” The same applies to voter turnout: no intervention significantly improved actual turnout compared to the control condition.

Why Is This Important?

A large proportion of America’s eligible electorate is unregistered. In fact, in the last two presidential elections, “nearly a quarter of eligible Americans were unregistered and therefore did not participate.” Most prior research has looked at how to improve turnout among registered voters rather than at what makes people register in the first place. This paper addresses that gap and shows that to encourage registration and turnout, interventions must “pair efforts to increase motivation with efforts to simplify registration and voting processes, such that motivation can be more easily translated into action.”

Read Paper →
Rethinking news framing with large language models
LLMs & Civic Discourse · Persuasion & Behavior Change
Duncan Watts
Key Takeaway
This study shows that the “selective presentation of truthful information” can influence both feelings and opinions about important news events. Moreover, by using an LLM to create a set of articles that are factually accurate but differ in tone (i.e., in what content is included and what is left out), the paper highlights the “ease with which bias can be introduced to otherwise typical news coverage as well as its impact on readers.” Grasping the role of misinformation in society must therefore go beyond addressing falsehoods to also address “biased, yet factually accurate, reporting practices prevalent in mainstream media.”

Overview

This paper employs LLMs to generate “synthetic news articles” in order to study the effects of biased news coverage across a range of events pertaining to politics, the economy, and culture more broadly. The LLM-generated articles incorporate changes in the “selection and tone of the content while holding factual accuracy and other features constant.” In other words, they mimic the ways different media outlets may adopt a positive, neutral, or negative tone and selectively present certain facts when reporting the news (a hypothetical prompt sketch follows the findings below). The authors conducted a randomized experiment to evaluate how these “alternative framings” affected participants' assessments of each article's tone and informativeness, along with how the article made them feel about its subject. Here are some key findings:

  1. Both positively and negatively framed news articles “significantly influence” how participants feel about the subject of the article, but negative framing had a substantially larger impact, with an “average treatment effect of –18.5 percentage points.”
  2. The authors also examined whether these biased news articles shaped deeply held opinions about the content presented and found that articles with negative framing significantly shifted opinion while there was “no significant effect in the positive direction.” This result indicates that biased news articles not only “alter feelings toward the subject”, but can “also influence how people perceive the facts associated with these events.”
  3. The negative framing had a “substantially larger effect” among participants who self-reported as less informed. This indicates that people who don't consider themselves well versed in the news cycle are more susceptible to selective reporting and tonal shifts in news coverage.
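
As a rough illustration of how such stimuli can be generated, the sketch below builds one prompt per framing condition. The template and facts are hypothetical, not taken from the paper, but they show the core idea: vary tone and fact selection while constraining the model to a fixed set of true statements:

```python
# A hypothetical prompt template (not the authors' exact prompt) showing how
# an LLM can be asked to vary tone and fact selection while being constrained
# to a fixed set of true statements.
FACTS = [
    "The unemployment rate fell from 4.1% to 3.9% in March.",
    "Labor-force participation was unchanged.",
]

def framing_prompt(tone: str) -> str:
    """Build a generation prompt for one experimental framing condition."""
    facts = "\n".join(f"- {f}" for f in FACTS)
    return (
        "Write a short news article using ONLY the facts below.\n"
        f"Adopt a {tone} tone; you may emphasize or omit facts, "
        "but do not invent new ones.\n"
        f"Facts:\n{facts}"
    )

# One prompt per framing condition in the experiment.
for tone in ("positive", "neutral", "negative"):
    print(framing_prompt(tone), end="\n\n")
```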

Why Is This Important?

Traditional research about the impact of media bias on public opinion has faced two important limitations. First, researchers have tended to manually draft news articles, making it difficult to change one factor (e.g., tone or content) while keeping everything else (e.g., writing style and factual accuracy) the same. Second, prior work has focused on single issues such as immigration, gun control, or civil protests, limiting the ability to determine whether findings apply more broadly. This study addresses both limitations by using LLMs, which can “discern subtle variations in tone, emphasis, and narrative structure” in news articles, allowing for more generalizable findings about how media framing affects public opinion. The study also speaks to the growing role of automated journalism in news production. Decisions about which data are included and how models are prompted "can have significant downstream effects on how events are framed."

Read Paper →
Using a mental model approach to undercut the effects of exposure to mRNA vaccination misconceptions: Two randomized trials
Misinformation · Persuasion & Behavior Change
Kathleen Hall Jamieson
Key Takeaway
This paper showed that presenting conceptual scientific knowledge that counters vaccine misinformation without directly refuting false claims can be more effective than merely correcting false claims.

Overview

This research paper tested whether teaching people how mRNA vaccines and cellular DNA protection actually work could reduce susceptibility to vaccine misinformation. Rather than directly correcting false claims (as in traditional fact-checking), the researchers conducted two studies that employed a "mental model approach", providing detailed explanations about how vaccines work. Two preregistered experiments tested this approach with U.S. adults. The first study graphically displayed how mRNA vaccines work (the vaccine model) and how human cells protect themselves from foreign DNA (the cell protection model), along with additional material on vaccine safety. The second study used an animated video to explain the cell protection model, either by itself or combined with the first study’s materials. Both experiments intentionally avoided directly refuting false claims about the vaccines. The researchers also examined if exposure to misinformation would increase misconceptions, specifically using Florida Surgeon General Joseph Ladapo's false claim that DNA fragments in mRNA vaccines could integrate into recipients' DNA. They further tested whether the proposed “mental model approach” could protect against this false claim. Here are some key findings:

  1. Exposure to Ladapo’s false claims about DNA integration reduced accurate responses in both studies, showing that vaccine misinformation from seemingly authoritative sources can have a negative impact. Conversely, subjects who did not see this content and were exposed to either mental model gave more accurate responses than the control group.
  2. Participants exposed to the mental models along with Ladapo's problematic claims showed more accurate responses than those who only saw Ladapo's claims, regardless of presentation order.
  3. Preemptive positioning may be slightly more effective than correction: the authors found that presenting the mental models before exposure to misconceptions was somewhat more protective than presenting them afterward as a rebuttal.

Why Is This Important?

This study addresses some of the issues with traditional fact-checking regarding vaccine misinformation by introducing a “mental model approach” that shows how vaccines actually work, thereby contradicting false claims without directly responding to them. As the authors note, this approach has practical applications, as it can be “introduced in a live debate or in educational, clinical, or public health settings long before misconception exposure.”

Read Paper →
Efficiency and effectiveness of net neutrality rules in the mobile sector: Relevant developments and state of the empirical literature
Guiding the Field
Christopher Yoo
Key Takeaway
There is a lack of empirical evidence to support net neutrality regulations, suggesting that policy decisions in Europe and the US about net neutrality "have largely been driven by ideological views and political considerations rather than economics and evidence."

Overview

Net neutrality has been debated for nearly three decades, with the US, EU, and UK adopting different regulatory approaches. However, little empirical research exists on the efficacy of net neutrality rules and their economic impact. This paper fills that gap by reviewing the economic and regulatory landscape of net neutrality with a specific focus on mobile broadband, assessing both whether the rules achieve their intended goals and whether their benefits justify their costs. Here are some key findings and concrete policy recommendations:

  1. Net neutrality rules are increasingly ineffective: Large technology firms, including Google and Meta, have built private networks and content delivery networks (CDNs) that bypass regulated public networks entirely, meaning a growing share of internet traffic already operates outside net neutrality rules. By 2023, 70% of all internet traffic was delivered through CDNs that fall outside the scope of net neutrality rules.
  2. The empirical evidence does not support net neutrality's claimed benefits: The authors conduct a comprehensive review of all available empirical studies and conclude that "no empirical study supports the arguments of proponents" of net neutrality. However, empirical evidence does suggest that net neutrality regulations have led to "significant reductions in telecom investment."
  3. The costs of net neutrality regulation are substantial: Beyond reduced investment, the authors identify the loss of innovative services, "high transaction costs" from monitoring and compliance that weigh particularly heavily on smaller providers, and inefficiencies as companies resort to costly technological workarounds like private CDNs. In the US, the on-again/off-again nature of net neutrality rules has "greatly complicated industry participants' ability to engage in long-term planning."
  4. Policy recommendations: The "first-best" recommendation is to remove net neutrality rules altogether, because costs outweigh demonstrated benefits. Where full removal is not politically feasible, they recommend giving ISPs greater flexibility in pricing and network management, subject to existing competition law and consumer protections, or adopting a principles-based framework with limited ex-ante obligations. The authors point to the UK regulator Ofcom's 2023 revised guidelines as a concrete step in this direction.

Why Is This Important?

Net neutrality regulation represents a substantial market intervention whose economic consequences remain poorly understood. This paper provides a comprehensive review of both the effectiveness and efficiency of net neutrality rules in the mobile sector, at a moment when the EU, UK, and US are all actively reconsidering their approaches. The authors' findings on net neutrality's negative economic impact and their two concrete policy recommendations offer policymakers an evidence-based foundation for these decisions.

Read Paper →
Brain activity explains message effectiveness: A mega-analysis of 16 neuroimaging studies
Persuasion & Behavior Change
Emily B. Falk
Key Takeaway
This study shows that neural indicators of reward, language, and emotion processing are associated with the persuasiveness of messages, both for individuals and at scale.
Full summary & details

Overview

This paper uses a large-scale mega-analysis of 16 fMRI studies to examine the neural mechanisms underlying why some messages are more persuasive than others across domains such as health, marketing, and political communication. By pooling raw neuroimaging data across the 16 studies, the authors test whether the neural correlates of persuasive messaging apply both at the individual level and among large groups "of message receivers who did not undergo neuroimaging." Here are some key findings:

  1. Messages that elicit greater activation in brain systems associated with reward and social cognition are more likely to be effective. Moreover, message effectiveness at scale was correlated with greater activation in the VTA, a part of the dopaminergic reward system related to the "anticipation and receipt of personal rewards" and social conformity.
  2. Mentalizing, which refers to the process by which "people understand themselves and the minds of others," was associated with strong effects in the "dorsomedial prefrontal cortex, and cerebellar regions."
  3. Supplementary analyses found that activity in brain regions associated with language processing and emotion was positively associated with message effectiveness.

Why Is This Important?

Persuasive messaging shapes outcomes in public health, advertising, and political communication, yet research on what makes messages effective is often siloed by domain or method. By pooling available data from 16 different neuroimaging studies, this paper shows how "certain basic mechanisms may be active across different messaging contexts and may inspire novel strategies targeting these mechanisms."

Read Paper →
Identity-related Speech Suppression in Generative AI Content Moderation
LLMs & Civic Discourse
Danaé Metaxa
Key Takeaway
This paper provides a methodological framework to test automated moderation systems for incorrect speech suppression and in doing so, shows that there will be a tradeoff between “filtering out undesired content and ensuring that other speech is allowed.”
Full summary & details

Overview

This paper examines how automated content-moderation systems may incorrectly suppress identity-related speech. Using both traditional short-form user-generated text and “longer generative-AI-focused data” introduced in the paper, the researchers created a benchmark to measure speech suppression for nine identity groups. Here’s what they found:

  1. Identity-related speech is more frequently suppressed than other forms of speech across both traditional and automated moderation services.
  2. Identity-related speech has a higher likelihood of being suppressed for both marginalized and non-marginalized groups, except for those identified as “straight” and/or “Christian.”
  3. The reasons for speech suppression by the content moderation systems tested in this paper differ based on the stereotypes and text associations attached to specific identity groups. For example, non-Christian content had a higher likelihood of being incorrectly flagged as hateful. (A simplified sketch of the audit logic follows this list.)
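
Here is a simplified sketch of that audit logic, with a toy stand-in for the moderation API and invented test sentences; a real audit would use the paper's datasets and the five commercial APIs it evaluates:

```python
# A simplified sketch of the audit logic (not the authors' benchmark code):
# send benign, identity-related sentences to a moderation service and compare
# incorrect-flag (false-positive) rates across identity groups.
from collections import defaultdict

def moderation_flagged(text: str) -> bool:
    """Placeholder for any moderation API call; toy rule so the sketch runs."""
    return "atheist" in text.lower()

# Benign test sentences tagged with the identity group they mention.
benign_samples = [
    ("Christian", "Our Christian book club meets on Sundays."),
    ("atheist", "My atheist uncle hosted a lovely dinner party."),
    ("Muslim", "I celebrated Eid with my Muslim neighbors."),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, text in benign_samples:
    counts[group][0] += moderation_flagged(text)
    counts[group][1] += 1

for group, (flagged, total) in counts.items():
    print(f"{group}: {flagged}/{total} benign sentences incorrectly flagged")
```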

Why Is This Important?

Generative AI is being rapidly integrated into platform moderation. Companies like Meta and TikTok have laid off trust and safety workers and contracted moderators in favor of automated systems, potentially to cut costs and improve efficiency while also reducing reliance on human moderators, who experience severe psychological distress from exposure to streams of problematic content. Given these incentives, it is crucial to understand the gaps and potential pitfalls of relying on LLM-based content moderation. While a lot of AI safety research has looked into how to prevent these systems from producing “undesired outcomes,” less attention has been paid to “making sure appropriate text can be generated.” This paper addresses that gap by providing “the first comprehensive bias audit of generative AI speech suppression across five automated content moderation APIs” and shows how identity-based stereotypes may permeate LLMs and inadvertently suppress permissible speech.

Read Paper →
Experimental evidence of the effects of large language models versus web search on depth of learning
LLMs & Civic Discourse
Shiri Melumad
Key Takeaway
LLMs reduce the effort it takes to find information about a topic, but often at the expense of developing a deeper understanding of it. This, in turn, makes LLM-assisted analyses less informative.

Overview

In this research paper, the authors examined “how one’s ability to learn about a topic” is impacted by the use of LLMs versus traditional web search. They found that while LLMs may make accessing information easier, that ease may come at the cost of the “depth of knowledge users may develop.” In a series of experiments, the researchers had participants use different search methods (ChatGPT vs. Google, LLM summaries vs. linked articles, or Google's standard results vs. AI Overviews) to learn about a topic and draft advice based on what they learned. Independent evaluators, blind to the search method used, rated the participant-generated advice to assess the downstream consequences of using LLMs versus traditional search. Here are some key findings:

  1. Those who used ChatGPT spent less time on the task and reported that “they learnt fewer new things about the subject.” Participants learning via ChatGPT also put less effort into creating advice and thus felt less “personal ownership” of what they created.
  2. Even when the same information was presented as an LLM summary rather than a series of web links, participants reported putting less effort into learning and developed only a “shallow understanding of the topic.”
  3. Evaluators found advice based on learning from Google's AI Overviews (vs. web links) to be less helpful and less informative. They also believed less effort had been put into writing the advice, found it less trustworthy, and were less willing to adopt it themselves or recommend it to others.

Why Is This Important?

The rise of ChatGPT and Google’s AI Overviews reflects how LLMs are reshaping how we search for information. According to a recent Pew Research Center report, users are less likely to click through to webpages when presented with Google’s AI summary. While much attention has gone to the accuracy of these summaries and their potential for spreading falsehoods, this paper examines their adverse effects on learning. If LLM-mediated search is the future, this paper argues, it may make learning a more “passive” activity.

Read Paper →
Persuasion and dissuasion in political campaigns: Communication and media coverage in Senate races
Political Polarization · Persuasion & Behavior Change
Pinar Yildirim
Key Takeaway
The effects of campaign speeches on electoral performance are constrained by what media organizations choose to amplify, and this research finds that the media prefers polarizing content.

Overview

Politicians tend to switch between partisan and centrist rhetoric based on perceived electoral gains, and which type of rhetoric yields votes depends in part on what media platforms choose to amplify. In this paper, the researchers set out to examine how (1) candidates’ incentives to target audiences to win votes and (2) the media’s incentives to cover political campaigns “interact to shape campaign speech and campaign performance.” This is empirically challenging because the electorate is not homogenous: the same campaign remark can persuade some voters but dissuade others. Most previous research has therefore looked at “net effects,” limiting our understanding of “the underlying trade-offs that candidates face.” To address this, the authors developed a method to separately measure persuasion and dissuasion effects. They used text analysis to classify more than 200,000 newspaper stories from U.S. Senate races (1980–2012) as either partisan-voter-oriented or swing-voter-oriented, linking those classifications to high-frequency polling data. Here's what they found:

  1. Candidates prefer that partisan rhetoric “fly under the radar,” whereas the media prefers more polarizing content. Conversely, candidates would like rhetoric aimed at swing voters (centrist appeals) to be covered by the media, but the media tends to ignore it.
  2. Democratic appeals to the party's voter base are “four times more effective” at mobilizing turnout than Republican appeals. However, this partisan rhetoric by Democrats triggers “twice the level of backlash” from swing voters. On average, 56% of remarks by Democratic candidates and 45% of remarks by Republican candidates aim to mobilize partisans.
  3. Media coverage for both parties is balanced. Because media outlets prefer partisan content but Democrats produce less of it (due to swing-voter penalties), both parties receive similar overall coverage rates.
  4. In states where Democrats make up a smaller share of the electorate, swing voters reward Democratic centrist appeals more strongly. Competitiveness also matters: as polling gaps narrow, Republican speech targeting swing voters becomes more effective.

Why Is This Important?

Algorithms curate content based on engagement and not newsworthiness. The authors contend that by allowing candidates to microtarget campaign passages via social media, these platforms may lower the swing-voter backlash that once disciplined Democratic rhetoric. At the same time, if social media-driven polarization is shrinking the pool of persuadable swing voters, future campaigns may become even more partisan and weaken the “moderating role of the press.”

Read Paper →
Culturally-Aware Conversations: A Framework & Benchmark for LLMs
LLMs & Civic Discourse · Guiding the Field
Lyle Ungar
Key Takeaway
LLMs developed in the West struggle to adapt to cultural nuances in specific conversational settings. The framework proposed in this paper can help guide future “AI systems that better understand, respect, and adapt to diversity in communication.”

Overview

AI chatbots are used by people from diverse cultural backgrounds, but are they “culturally aware”? In this paper, the authors introduce the “first framework and benchmark designed to evaluate LLMs in realistic, multicultural conversational settings.” In consultation with cultural experts, they designed six conversational situations where the LLM’s ideal response should be culturally sensitive. For example, when discussing personal accomplishments, some cultures may view celebration as confidence while others may view it as arrogance. The authors used an OpenAI model to generate a dataset of 48 conversations, each with five possible responses that vary stylistically while conveying the same underlying message, and recruited 24 annotators from eight countries to determine which responses are most culturally appropriate in each situation.

After evaluating five models from OpenAI, Google, and Anthropic on this framework, the researchers found that all models perform best in Western cultural contexts; across the board, the highest accuracy scores were for the United States and the Netherlands. This is worrying because LLMs have become quite popular in non-Western contexts, and this research suggests they are “less likely to align with local users’ communication practices.”

Why Is This Important?

Most cultural benchmarks for LLMs are “factual”, lacking focus on conversational style. This paper provides a way to assess LLMs in “realistic, multicultural conversational settings.”

Read Paper →
Learning Human-Perceived Fakeness in AI-Generated Videos Via Multimodal LLMs
LLMs & Civic Discourse · Misinformation
Chris Callison-Burch and Dan Roth
Key Takeaway
Existing AI models are reasonably good at classifying videos as either real or fake. That said, most models have a bias toward "real": they identify real videos more accurately than fake ones.

Overview

The rise and growing sophistication of AI-generated videos has made identifying what is real more challenging. The authors argue that despite advances in video generation models, the question of whether humans can identify, and explain why, a video is machine generated has been largely overlooked. To address this, the researchers introduce DEEPTRACEREWARD, a benchmark built on 4,300 expert annotations across 3,300 AI-generated videos from seven state-of-the-art video generators. Each annotation identifies the portion of the video containing the sign, or trace, that the video is fake, the time at which it occurs, and a natural-language explanation of what gave it away. From these annotations, the researchers identify nine categories of cues through which humans recognize a video as AI-generated, including object and background distortions, sudden blurring, and objects unnaturally disappearing mid-scene. The researchers then evaluated 13 existing AI models on their ability to identify these human-perceived deepfake traces, asking: do current AI models possess human-level visual intelligence to identify deepfake traces, and if not, can they be taught to do so using DEEPTRACEREWARD? Here are some key results:

  1. While frontier models such as GPT-5 and Gemini 2.5 Pro achieved over 70% accuracy when classifying videos as fake or real, their performance on identifying exactly where and why a video is fake was below 36%.
  2. The researchers also fine-tuned an existing AI model (Llama 3) on their benchmark; it achieved an overall score of 70.2%, outperforming GPT-5 by nearly 35 percentage points. Across all models, a consistent pattern emerged: binary classification was easiest, followed by natural-language explanation, then spatial localization, with temporal localization being hardest (see the toy scoring sketch below).
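
To see why the localization sub-tasks are harder to score well on, consider this toy sketch. Intersection-over-union is a standard localization measure that we use here for illustration; it is an assumption on our part, not necessarily the benchmark's exact metric:

```python
# A toy scoring sketch (our assumption, not the DEEPTRACEREWARD metric):
# classification earns credit with one bit, but a predicted trace window
# must overlap the annotated window to earn localization credit.

def binary_correct(pred_fake: bool, is_fake: bool) -> bool:
    """One bit of credit for calling the video real or fake."""
    return pred_fake == is_fake

def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    """Intersection-over-union of two time windows, in seconds."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

# The model correctly calls the video fake, but places the glitch too early.
print(binary_correct(True, True))                      # True
print(round(temporal_iou((1.0, 2.0), (2.5, 4.0)), 2))  # 0.0: no overlap
print(round(temporal_iou((2.0, 3.5), (2.5, 4.0)), 2))  # 0.5: partial credit
```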

Why Is This Important?

This paper shows that existing methods to evaluate AI-generated videos ignore the “crucial role of human perception” in determining authenticity. The researchers help address this research gap by introducing DEEPTRACEREWARD, the first large-scale benchmark with expert human annotations explaining how and why a video is machine generated, with the goal of helping future AI video generation models better reflect how humans see and judge authenticity.

Read Paper →
Why Depolarization is Hard: Evaluating attempts to decrease partisan animosity in America
Political Polarization
Yphtach Lelkes
Key Takeaway
Online interventions aimed at “depolarization” by themselves are not a “scalable solution for reducing societal conflict” and efforts must be made to study “elite behaviors and structural incentives that fuel partisan conflict.”

Overview

This paper introduces two new experiments and a meta-analysis examining the efficacy of online interventions designed to reduce “partisan animosity.” Here’s what the authors found:

  1. Depolarization is difficult to implement and scale. In their meta-analysis, the researchers find that these interventions have small effects, “a 5-point shift on a 101-point scale,” and become weaker with time.
  2. Efficacy of depolarization does not improve with repeated exposure, indicating that solutions to online polarization require going beyond user-level interventions.

Why Is This Important?

Partisan divides on both traditional and online media platforms are often framed as a consequence of “echo chambers,” wherein people are less likely to view or engage with disagreeable content. The tendency toward homogeneity in social discourse has led some to tout depolarization interventions as an effective way to reduce partisan divides. But does exposure to contrary views bring people closer together? This is an empirical question with important policy implications. This paper challenges the assumption, providing empirical evidence that the gains of depolarization may be overstated, and suggests shifting focus toward understanding the societal-level incentives that encourage online partisanship.

Read Paper →
Changing beliefs or changing behavior? Understanding the belief-to-behavior process and intervening to curb the impact of misinformation
Misinformation · Persuasion & Behavior Change
Dolores Albarracín
Key Takeaway
To effectively combat misinformation, interventions must go beyond describing what is true or false. If the goal is to change behavior in this regard, it is critical to recognize "when, how, and why a belief matters for behavior, and when behavior must be addressed in a direct way."

Overview

Prior research has shown that the relationship between beliefs and behavior is often weak, variable, and highly context-dependent. This paper examines this "belief-to-behavior inference model" and challenges the assumption that "misinformation must be corrected to change behavior." The authors look at both individual and societal level interventions to examine if and to what extent belief change can prompt behavior change, especially when it comes to combating misinformation. Here are a few important findings from their review:

  1. Beliefs are more likely to drive behavior change when specific goals are activated and "inferential paths are short." Based on the authors' proposed framework, misinformation-countering interventions like prebunking will be better at changing behavior if they focus on outcome beliefs or include "direct calls to action."
  2. The authors also offer promising alternatives to prebunking and fact-checking when behavior change is the goal. For instance, they find that self-affirmations reduce defensiveness, especially when misinformation targets a social identity, and that "bypassing," which highlights alternative viewpoints without directly confronting misinformation, is effective. The key is that interventions must "shift the locus of change from belief accuracy to behavior change."
  3. Legal and administrative sanctions aimed at increasing trust in institutions have "negligible effects" on behavior change, while broader societal interventions, like providing support networks and upholding social norms, are important.

Why Is This Important?

Combating misinformation is a priority for journalists, platforms, and policymakers, and strategies like fact-checking are often framed as the solution. But as this paper shows, if the goal is behavior change, merely educating people about what is true or false may not be enough. The authors show that interventions are more likely to shape behavior if "beliefs are behaviorally engaged, inferentially accessible, and contextually relevant." Overall, this paper provides the groundwork to ensure that future interventions to combat misinformation are better calibrated to the realities of human behavior.

Read Paper →
The persistence of cross-cutting discussion in a politicized public sphere
Political Polarization
Diana C. Mutz
Key Takeaway
There has been a significant increase in political discussions in the United States, largely due to “increases in like-minded political discussion partners.” However, conversations across the political aisle (cross-cutting conversations) are "no less common than they were 25 years ago," and the paper concludes that elites rather than the mass public may be greater contributors to widespread political intolerance.
Full summary & details

Overview

In this research paper, Diana Mutz uses "two identical pre-election surveys" from 1996 and 2020 to examine how Americans' political discussion networks changed over the intervening 25 years, and what those changes imply for political participation and partisan tolerance. Here are some key findings:

  1. When comparing results from both pre-election surveys (1996 and 2020), the study finds that the number of political discussants, that is, the people someone talks to about politics, increased by 22% among all respondents. However, to account for changes in survey best practices and ensure comparability, the paper also compared the 1996 results with the subset of "respondents randomly assigned to take the survey by telephone in 2020, as was originally done in 1996" and found an even larger increase in political discussants (33%) relative to 1996.
  2. Despite the increase in political conversations among Americans, the study found no significant increase in political discussions with those that have opposing views. When looking at the composition of Americans' political discussion networks, the study finds that the number of "like-minded discussants" increased the most and "oppositional discussants were largely unchanged." This indicates that on average, political networks of Americans have only grown more homogenous over time.
  3. In 1996, women reported more cross-cutting conversations, that is, conversations with people who hold different political views, but by 2020, "this relationship had completely reversed, with women now reporting systematically less cross-cutting discussion than men."
  4. The study finds a 9% decrease in political tolerance from 1996 to 2020. Political participation, however, increased alongside the greater homogeneity of political discussion networks; the study suggests that the number of "like-minded discussants" in one's network is a salient predictor of political participation. Put more simply, people are more likely to engage in political action when they feel validated in their views.

Why Is This Important?

A lot of interventions in recent years aimed at curbing polarization have focused on increasing cross-cutting political discussions, on the assumption that "more cross-cutting contact in the mass public could stem the tide of rising polarization and violations of democratic norms." However, this paper shows that Americans dislike engaging in such discussions and that, even so, the prevalence of cross-cutting discussion has remained largely unchanged. Thus, solutions for curbing political polarization that rely on incentivizing communication with "out-partisans" may not be highly effective.

Read Paper →
Model-Dependent Moderation: Inconsistencies in Hate Speech Detection Across LLM-based Systems
LLMs & Civic Discourse
Yphtach Lelkes
Key Takeaway
Automated moderation systems based on out-of-the-box LLMs, despite their sophistication, may perpetuate and not mitigate "existing social inequities in online spaces."
Full summary & details

Overview

Social media platforms are using large language models in content moderation, especially for hate speech detection. But is hate speech classification consistent across models? Using a novel synthetic dataset of over 1.3 million sentences spanning 125 demographic groups, the researchers tested how seven leading automated content moderation systems evaluate and classify hate speech (a simplified sketch of this kind of cross-system comparison follows the findings below). Here are some key findings:

  1. Content moderation systems are inconsistent: The authors found substantial differences in how different models evaluated and classified hate speech. The Mistral moderation endpoint flagged content at a high rate and classified it consistently, meaning that it labeled more posts as hate speech than its competitors. OpenAI's moderation endpoint was found to be more inconsistent in hate speech classification, while OpenAI's GPT-4 and Google's Perspective API were the "most measured", meaning they had comparatively lower rates of finding hate speech violations.
  2. Demographic variations: Across models, the researchers found that hate speech targeting race, gender, and sexual orientation was more easily detected than hate speech targeting groups defined by attributes such as education, showing that the models "generally recognize hate speech targeting traditional protected classes more readily than content targeting other groups." Content targeting "woke people" was flagged as hate speech by some models but not by others. For content targeting Christians, the Mistral Moderation Endpoint classified it as hate speech, while the Perspective API assigned the identical content a substantially lower score. For content containing the most severe anti-Black slur, some models assigned the maximum possible hate speech score, while others classified the same content as less hateful.
  3. Implicit hate speech detection: The researchers also tested how systems handled sentences that paired positive language with slurs, such as "All [slur] are great people." Positive sentences containing an anti-Black slur received the highest average hate speech scores across models, while positive statements about ideological groups like "commies" received "lower hate values" despite identical sentence structure. The disagreements were starkest for statements about "alt-right members," where the Mistral Moderation Endpoint classified the content as near-maximum hate speech while GPT-4o assigned it a score of zero. This reveals a fundamental disagreement between systems: some, like Claude 3.5 Sonnet, treat slurs as harmful regardless of positive context, while "less sensitive systems" prioritize "overall positive sentiment."
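
To make the kind of comparison the authors run concrete, here is a minimal sketch that sends the same template-generated sentence to two of the systems named above. It assumes API keys in the OPENAI_API_KEY and PERSPECTIVE_API_KEY environment variables; the template, groups, and choice of score attributes are illustrative stand-ins, not the paper's actual 1.3-million-sentence harness.

```python
# Minimal sketch: score identical synthetic sentences with two moderation
# systems and compare. The template and groups below are hypothetical.
import os
from openai import OpenAI
from googleapiclient import discovery

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
perspective = discovery.build(
    "commentanalyzer", "v1alpha1",
    developerKey=os.environ["PERSPECTIVE_API_KEY"],
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def openai_hate_score(text: str) -> float:
    """Hate score (0-1) from OpenAI's moderation endpoint."""
    return openai_client.moderations.create(input=text).results[0].category_scores.hate

def perspective_score(text: str) -> float:
    """IDENTITY_ATTACK score (0-1) from Google's Perspective API."""
    body = {"comment": {"text": text},
            "requestedAttributes": {"IDENTITY_ATTACK": {}}}
    resp = perspective.comments().analyze(body=body).execute()
    return resp["attributeScores"]["IDENTITY_ATTACK"]["summaryScore"]["value"]

template = "All {group} are terrible people."  # hypothetical template
for group in ["women", "Christians", "commies"]:
    sentence = template.format(group=group)
    print(f"{sentence!r:45} openai={openai_hate_score(sentence):.2f} "
          f"perspective={perspective_score(sentence):.2f}")
```

Divergent scores for the same sentence across endpoints are precisely the inconsistency the paper documents at scale.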

Why Is This Important?

As platforms increasingly delegate content moderation to automated AI systems, the inconsistencies across models in hate speech classification revealed in this paper raise serious concerns about fairness and accountability. Marginalized groups may receive uneven protection depending on the models used for moderation. The authors call for standardized benchmarks, greater transparency in how these systems are built, and industry-academic collaboration to establish more consistent and equitable moderation standards.

Read Paper →
Talking Point based Ideological Discourse Analysis in News Events
LLMs & Civic Discourse
Daniel J. Hopkins
Key Takeaway
This paper shows that LLMs can be a powerful tool to analyze news discourse at scale, and may lead to a better understanding of ideological competition in the information ecosystem. The proposed framework addresses limitations in LLMs' ability to "integrate contextual information required for understanding abstract ideological views."
Full summary & details

Overview

This paper introduces and evaluates an LLM-based framework for analyzing the ideological discourse pertaining to news events. The authors do so by representing news articles based on their talking points, capturing how the media frames the particular topic of discussion. The framework uses an LLM to extract "prominent talking points (PTPs)" from news events. Each PTP is "infused with ideological information," which reveals left-leaning versus right-leaning viewpoints, referred to as partisan perspectives. The researchers develop and release a dataset of 6,141 news articles sourced from 126 outlets, covering 24 events related to 4 politically contested topics. For each event, the framework generates a PTP and the aggregate partisan perspectives. The researchers evaluated the framework's ability to generate these perspectives via both automated tasks and human validation. The paper then assesses whether the LLM-based framework can predict the partisan leaning (left or right) of news articles that relate to the events but were not part of the initial dataset. For each new article, the framework identified the three most similar left- and right-leaning partisan perspectives and asked an LLM to determine which group the article aligned with most closely (a simplified sketch of this step follows the findings below). The paper compares this method with simply prompting an LLM to provide ideological labels. Here are some key findings:

  1. The authors' classification approach outperformed directly prompting an LLM, suggesting the framework effectively captures ideological signals across many articles about the same event.
  2. When the researchers used the partisan perspectives as training data to fine-tune a model, the fine-tuned model outperformed the base model on ideology classification, which indicates that the framework's generated partisan perspectives encode "ideology-specific nuances."
  3. The researchers also conducted a human evaluation to measure the "quality of generated partisan perspectives" and found that these viewpoints can be incorrect, especially when the LLM produces inaccurate summaries of news articles.
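
As a rough illustration of the retrieve-then-ask step described above, the sketch below ranks stored partisan perspectives by similarity to a new article and assembles the comparison prompt for an LLM. TF-IDF cosine similarity is a stand-in for whatever representation the authors actually use, and all perspective strings and the article text are hypothetical.

```python
# Sketch of the classification step: retrieve the k most similar left- and
# right-leaning perspectives, then ask an LLM which group the article fits.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k(article: str, perspectives: list[str], k: int = 3) -> list[str]:
    """Return the k perspectives most similar to the article (TF-IDF cosine)."""
    vec = TfidfVectorizer().fit(perspectives + [article])
    sims = cosine_similarity(vec.transform([article]),
                             vec.transform(perspectives))[0]
    order = sims.argsort()[::-1][:k]
    return [perspectives[i] for i in order]

# Hypothetical pools of event-level partisan perspectives.
left = ["Stricter gun laws reduce violence.", "Background checks save lives."]
right = ["Gun laws infringe on constitutional rights.", "Enforcement, not new laws."]
article = "Full text of a new article covering the same event..."

prompt = (
    "Which group of perspectives does this article align with most closely?\n"
    f"Article: {article}\n"
    f"Group A (left-leaning): {top_k(article, left)}\n"
    f"Group B (right-leaning): {top_k(article, right)}\n"
)
# `prompt` would then be sent to an LLM, whose answer is the predicted leaning.
```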

Why Is This Important?

Through this paper, the authors provide an LLM-based framework to analyze ideological discourse about news events. Moreover, by analyzing "highly contested repeating themes," the paper offers fresh insights regarding areas of consensus and polarization. The authors have also released the dataset and model from this paper to the broader community to facilitate further research.

Read Paper →
Unraveling a “Cancel Culture” Dynamic: When, Why, and Which Americans Sanction Offensive Speech
Political Polarization
Matt Levendusky
Key Takeaway
Whether “cancel culture” is perceived as harmful or beneficial depends on how citizens prioritize competing values. That is, a commitment to “unfettered speech” versus the goal of protecting “marginalized groups in the public sphere.”
Full summary & details

Overview

This paper empirically examines how often Americans actually sanction (or “cancel”) others for offensive speech, why they do so, and how accurately they perceive others' canceling behavior. The researchers administered a nationally representative survey (N=1,752), asking if respondents engaged in specific “cancelling behaviors” and the extent to which they perceived others to do so. Subsequently, respondents participated in an experiment, reading four hypothetical scenarios wherein speakers made potentially offensive statements. Each scenario randomly varied the speaker's partisanship or race, their role, and what they said. Respondents rated their likelihood of engaging in different canceling behaviors for each scenario. Here are some key findings:

  1. The paper finds that Americans inflate by at least a factor of two “how often their fellow citizens cancel others.” In particular, the research found that respondents were “10 times more likely” to witness someone else engage in doxing than to have done it themselves, showing that people’s perceptions of how prevalent canceling behavior is far exceed how often people actually do it.
  2. Contrary to popular belief and media coverage regarding cancel culture, this research suggests that Americans tend to cancel offensive speech that counters their "ideological leanings, regardless of who says them.” As the authors note, “generally, citizens do not care who makes offensive statements.”
  3. Democrats and Republicans are “similarly likely to engage in cancelling behavior” due to “ideologically disagreeable ideas.” But why does this finding contradict reports suggesting Democrats cancel more often than Republicans? The authors contend that the “supply of offensive statements” during data collection likely had a “right leaning bias.”

Why Is This Important?

This research paper is “among the first to empirically investigate the prevalence and motives of canceling among the American public.” Moreover, the paper’s findings regarding the politicized misperceptions about who cancels whom and why “could exacerbate partisan animus and discourage cross-party dialogue.”

Read Paper →
More platforms, less attention to news? A multi-platform analysis of news exposure across TV, web, and YouTube in the United States
Social Media & Platforms
Sandra González-Bailón
Key Takeaway
This study shows that in a world where most people tend to avoid news, a "committed minority" of news consumers have a disproportionate influence on how news is disseminated.
Full summary & details

Overview

This paper explores how news consumption is shaped by a "multi-platform media environment" and whether exposure to multiple media platforms "alleviates or exacerbates observed inequalities in attention to news." To do so, the researchers tracked and analyzed news exposure across TV, the web, and YouTube for 55,000 unique panelists across 39 months. Results from this study provide insights into the fraction of news consumers specific to each platform and their demographic profiles. Further analyses look at whether multi-platform news exposure affects time spent engaging with news sources. Here are some key findings:

  1. TV has the largest reach when it comes to news exposure, with 80% of panelists exposed to at least one news channel. Less than half of the panelists accessed news via the web, while only 5% of overall visits to YouTube were to news sources.
  2. When looking at time spent accessing news, the study finds that "the skewness of the distribution becomes more prominent as we move from TV to the web and to YouTube." This trend indicates the possibility of "news fatigue" among TV and web users, while YouTube news consumption is on the rise despite being a very small component of overall YouTube traffic.
  3. Young people are comparatively less interested in news consumption and multi-platform news consumers tend to be older and more educated.
  4. Lastly, exposure to more media platforms only increases news consumption for "the unrepresentative minority of news consumers, who generate most engagement with online news."

Why Is This Important?

Based on their findings, the researchers importantly conclude that "online media is amplifying the already high levels of interest of cross-platform consumers, setting them farther apart from the average citizen." Moreover, with results showing that TV is still the most common avenue for accessing news, this research makes the case that scholarly interest in online news exposure is disproportionate to the impact these sources have relative to TV.

Read Paper →
Listen for a change? A longitudinal field experiment on listening’s potential to enhance persuasion
Persuasion & Behavior Change
Erik Santoro
Key Takeaway
When it comes to conversations about policy-level disagreements, “listening may not reliably enhance persuasion efforts.” Thus, from a practical standpoint, “adding listening to persuasive appeals may not be worth the added costs if persuasion is the goal.”
Full summary & details

Overview

This paper empirically investigates whether actively listening to someone's views on a topic before trying to convince them of an alternative viewpoint is effective. Put simply, does listening enhance persuasion? To address this question, the researchers conducted a field experiment with 1,485 participants who had 10-minute video conversations with trained canvassers about unauthorized immigration policy. Participants were randomly assigned to four conditions: 1) canvassers listened to participants' views; 2) canvassers listened and then shared a persuasive narrative; 3) canvassers shared a persuasive narrative without listening; or 4) a placebo where canvassers did neither. The main outcome of interest was whether participants reported a “reduction in exclusionary attitudes”, that is, lower “prejudice toward undocumented immigrants and support of anti-undocumented immigrant policies.” Here are some key findings:

  1. Persuasive narratives changed attitudes substantially, whether or not canvassers listened first. Participants who heard a persuasive narrative showed reductions in anti-immigrant prejudice and opposition to pro-immigrant policies, with effects persisting five weeks later. Thus, while a persuasive appeal “effectively changed prejudice and policy attitudes,” “adding listening to the persuasive appeal did not change attitudes any further.”
  2. Contrary to popular belief, this research found little evidence to support the claim that listening alone can change political attitudes, with the study only finding “marginal differences” between the “listening only condition” and the placebo.
  3. Participants who were listened to before being persuaded reported feeling less defensive and had more favorable views of the canvasser, but this did not significantly change their attitudes. This shows that people tend to change their attitudes in response to persuasive appeals even from those they dislike.

Why Is This Important?

Politicians and advocates alike have touted the role of listening as a means to facilitate common ground and bridge political divides. The idea being that when people feel heard, they are more likely to be receptive to opposing viewpoints. This research paper tests this claim empirically and finds that the role of listening in enhancing persuasion may be overstated. Thus, these findings can help guide future interventions designed to bridge divides.

Read Paper →
Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube
Political PolarizationSocial Media & Platforms
Dean Knox
Key Takeaway
While recommendation algorithms can shape what users choose to engage with, this paper finds no evidence that these algorithms radicalize users by pushing extreme content to them in the short-term.
Full summary & details

Overview

Academic research and media coverage alike have argued that recommendation algorithms used by companies like YouTube are optimized to drive engagement and have thus driven political polarization by creating “filter bubbles” and “rabbit holes.” Rabbit holes differ from filter bubbles in that recommendations become more extreme over time. In this paper, the researchers address these claims by creating an interface that mimics how YouTube presents videos and recommendations to users. They simulate filter bubbles and rabbit holes by presenting participants with ideologically balanced and more partisan content. The goal is to see whether the recommendations “alter users’ media consumption decisions and, indirectly, their political attitudes.” Here are some key findings:

  1. The study finds that while changes to recommendation algorithms shape “user demand” by changing the types of videos consumed and time spent on the platform, they did not substantially change political attitudes in the short term. For example, when the algorithm recommended more videos matching the ideology of what users had just watched (rather than showing balanced recommendations from both sides), the share of liberal videos chosen increased by 6 percentage points among liberals and decreased by 12 percentage points among conservatives, yet these changes in viewing behavior did not translate into meaningful shifts in political attitudes.
  2. On the question of whether recommendation algorithms put users in rabbit holes where they see more extreme content over time, the authors found no significant effects. As the authors note, “any algorithmic effect for rabbit holes that exists is likely far smaller than simply watching conservative or liberal video sequences.”

Why Is This Important?

This paper provided participants with choices about media consumption, based on actual YouTube recommendations, through “9000-person randomized controlled trials,” which according to the authors “represents the most credible test of the phenomenon to date.” Given its empirical rigor, this paper challenges notions about the potential of platforms like YouTube to radicalize users. If, as this paper suggests, these effects are overstated, that has significant implications for how policymakers and civil society view algorithmic curation as a driver of polarization.

Read Paper →
The Diffusion and Reach of (Mis)Information on Facebook during the US 2020 Election
Social Media & PlatformsMisinformation2020 Meta Election Project
Sandra González-Bailón & Deen Freelon
Key Takeaway
During the 2020 election, misinformation spread slowly on Facebook, "powered by a tiny minority of users who tend to be older and more conservative."
Full summary & details

Overview

The paper, the most ambitious analysis to date on how content propagates on social media, examined how misinformation spread during the U.S. 2020 Election on Facebook. The authors studied the diffusion of more than one billion posts between July 2020 and February 2021. They further explored the extent to which Facebook's content moderation policies were effective in dealing with misinformation in that time period. They found that:

  1. On Facebook, most information spreads through public pages that broadcast to a larger audience at once. However, misinformation is more likely to spread through private exchanges between users, and was most often found among users who were older and conservative.
  2. Viral misinformation on Facebook takes longer to gain traction and is more likely among content classified as "political news." Most reshares take place 24 hours after the original post and come from a relatively small subset of users—"even if the views it accumulates still amount to millions."
  3. Content moderation efforts, particularly emergency "break the glass" measures implemented in the weeks before the election, proved highly effective at curtailing the spread of misinformation. The authors found that the number of misinformation posts reaching large audiences declined steadily from July onward, and viewership "plummeted to near zero" in the two weeks before Election Day.

Why Is This Important?

Concerns about misinformation on social media were widespread in the leadup to the 2020 election. This study showed that on average, misinformation on Facebook has "higher virality" but gathers fewer reshares over time, "contrary to what past research has claimed about misinformation."

Read Paper →
Sanctioning political speech on social media is driven by partisan norms and identity signaling
Political PolarizationSocial Media & Platforms
Matt Levendusky & Yphtach Lelkes
Key Takeaway
This paper shows that sanctioning offensive speech on social media has a performative element: it allows users to signal their party allegiance and loyalty.
Full summary & details

Overview

Social media platforms are rife with online firestorms, wherein groups of users shame people for offending political sentiments. In this paper, the authors empirically examine why certain individuals partake in this sanctioning behavior. While shaming people with opposing political beliefs is not novel, social media enables people to get approval from fellow partisans for such behavior (via likes, comments and reshares). Despite these incentives, past research shows that relatively few people engage in such behavior. Through three experiments, the researchers tested whether sanctioning political speech online signals partisan identity, how group approval shapes willingness to sanction, and whether people are more likely to engage in this behavior if they are not the first to do so. Here’s what they found:

  1. Tweets that signal partisan identity without explicitly stating party affiliation affect attitudes “nearly as strongly as” explicitly partisan tweets. For example, a Democrat reading a tweet critical of transgender people is almost as likely to assume the author is a Republican as if they had read a tweet actively campaigning for a Republican candidate. While it is expected that tweets explicitly stating party affiliation signal partisanship, it is worth noting that implicit messaging is nearly as impactful.
  2. Participants believe that publicly criticizing political opponents online is something their fellow partisans both approve of and expect them to do.
  3. People are reluctant to be the first to sanction someone online, but will readily pile on once others have acted. Respondents shown an offensive tweet accompanied by existing critical replies were significantly more willing to sanction the speaker than those shown the same tweet with no replies.

Why Is This Important?

This paper helps shed light on the dynamics and prevalence of online cancel culture. The authors found that most people will readily “pile on” to existing online criticism, but are more reluctant to be the first to criticize. Likes, comments and reshares are visible on social media platforms, and algorithms tend to amplify this moralized content, which is why cancel culture seems to spread rapidly online, even if relatively few people engage in such behavior.

Read Paper →
Bypassing Versus Correcting Misinformation: Efficacy and Fundamental Processes
Misinformation
Dolores Albarracín
Key Takeaway
Bypassing is an effective method to mitigate the impact of misinformation, and it is more effective than correction in a fast-paced information environment dominated by short-form content.
Full summary & details

Overview

This paper presents a new method to attenuate the impact of misinformation, as an alternative to the predominant method of "correction" or fact-checking (which directly refutes misinformation). Through a series of six iterative experiments, the paper demonstrates that "bypassing"—presenting an accurate alternative statement that does not directly refute the false claim—is a more effective method. For example, when a reader encounters the headline "genetically modified foods have health risks," a bypassing counter-statement would be "genetically modified foods help the bee population." The three key advantages of bypassing over correction that the paper highlights are:

  1. Subverts the Discomfort of Confrontation: Corrections are inherently adversarial, putting the reader at odds with a previously held belief, resulting in defensiveness. Bypassing overcomes this by not engaging with the belief.
  2. Reduces Cognitive Load: Correction requires the reader to consciously revisit the original statement of misinformation from memory and then alter perception. Bypassing overcomes this by presenting the reader with alternative statements to rely on, without the added task of deliberative reasoning.
  3. Why Bypassing Works: Bypassing outperforms correction when the reader's focus is on the accuracy of the newly provided information, as opposed to changing already-formed attitudes; when the goal is attitude change, bypassing remains as ineffective as correction.

A limitation of bypassing is that it proved less effective when readers were presented with real headlines, though it still produced better results than correction in that setting. Notably, when it came to real headlines, neither correction nor bypassing changed attitudes or intended behavior on specific policies.

Why Is This Important?

Correction as the only line of defense against misinformation is not enough: it requires deliberate, open-minded engagement, which is less feasible in an overwhelming information ecosystem. It is critical to identify alternative strategies to combat the consequences of misinformation, and this research offers bypassing as a potential solution. The practical implication is significant: presenting the public with accurate positive statements, without explicitly correcting false claims, is a better way to inoculate them against misinformation.

Read Paper →
Megastudy testing 25 treatments to reduce antidemocratic attitudes and partisan animosity
Political Polarization
Matt Levendusky
Key Takeaway
This megastudy reveals that reducing partisan division and protecting democratic norms are related but distinct challenges—and that conflating them leads to the wrong interventions. Reducing partisan animosity through effective interventions remains critical, because it has broader implications for participatory democracy, and shapes phenomena like polarization, erosion of trust, and political segregation.
Full summary & details

Overview

This paper tests the efficacy of 25 crowdsourced behavioral interventions aimed at reducing partisan animosity and antidemocratic attitudes among US voters. The interventions drew on a wide range of strategies—from showing people relatable individuals across party lines, to correcting wildly exaggerated beliefs about the other side, to presenting footage of democratic collapse. Researchers measured impact across three distinct fronts: how much people dislike the opposing party, whether they support actions that undermine democratic norms, and whether they endorse political violence. Here are some key findings:

  1. Most interventions reduced partisan hostility: 23 of 25 treatments significantly reduced how much participants disliked the opposing party (by up to 10.5 percentage points), with the strongest effects coming from humanizing the other side (short videos of relatable people with different political views) and emphasizing shared identities ("we're all Americans").
  2. Reduced support for undemocratic practices and undemocratic candidates went hand in hand: 6 of 25 treatments significantly reduced support for undemocratic norms (by up to 5.8 percentage points), largely driven by correcting misperceptions of the other party's beliefs, for example, a video highlighting that supporters of the opposing party did not dehumanize them. The second effective strategy highlighted real-world consequences of democratic backsliding, operationalized through a video compilation of violent civic unrest from around the world.
  3. A few treatments were less effective and some had adverse effects: 4 of the 25 interventions that decreased partisan animosity increased support for undemocratic practices, for example, describing a likable member of the opposing party. Similarly, one intervention, a video compilation of civic unrest that reduced support for undemocratic norms, also increased endorsement of political violence. The authors suggest this is because it ended with footage of the January 6th unrest, which some people believed was a legitimate form of protest.

Why Is This Important?

This paper offers two important lessons about participants' democratic and partisan beliefs. First, two weeks after the experiment, any change was effectively reversed across all fronts; durable impact likely requires structural and institutional interventions (changes to the media environment, elite rhetoric, and civic education), not just individual-level nudges. Second, it shows that a long-held belief among practitioners and scholars alike, that reducing partisan animosity also reduces support for undemocratic practices, is false. This reveals the need to tackle these two issues as distinct categories rather than a singular construct.

Read Paper →
The Rise of and Demand for Identity-Oriented Media Coverage
Political PolarizationSocial Media & Platforms
Daniel J. Hopkins
Key Takeaway
News outlets are increasingly covering stories through the lens of social identities, and that shift is partly driven by growing audience demand for such coverage.
Full summary & details

Overview

This paper examines whether news content highlighting core social identities like race, gender, religion, and political affiliation is more likely to generate audience engagement, and whether that audience demand helps explain why such news coverage has grown in recent years. The researchers analyzed tweets from 19 major media outlets (2008–2021) and 553,078 news URLs shared on Facebook. They used AI text classifiers to track identity language over time and to measure its relationship to audience engagement (a toy version of this measurement appears after the findings below). Here's what they found:

  1. Identity-oriented news coverage has grown substantially: The share of tweets referencing social identities increased steadily over time—rising from 6.6% of tweets before 2015 to 10.3% in 2015 and beyond. Mentions of racial and partisan identities roughly tripled since 2007, with notable spikes around the 2016 election, #MeToo, and the killing of George Floyd.
  2. Identity-oriented content drives engagement: Both Facebook and Twitter data show that posts featuring identity-related language consistently received more likes, shares, and retweets than those without, across nearly every outlet studied.
  3. The relationship between identity-oriented coverage and engagement is causal: Through experiments that randomized exposure to identical news stories "with and without identity cues," the authors found that stories received significantly more clicks if the headline referenced a social identity.
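
As a toy version of the measurement pipeline, the sketch below tags hypothetical posts for identity language with a simple keyword matcher (a crude stand-in for the paper's AI text classifiers) and compares average engagement across the two groups.

```python
# Toy identity-language measurement: keyword matching stands in for the
# authors' AI text classifiers; the posts and engagement counts are made up.
import re
import pandas as pd

# Hypothetical identity terms; the paper's classifiers are far richer.
IDENTITY_TERMS = re.compile(
    r"\b(race|racial|gender|women|Christian|Muslim|Democrat|Republican)\b",
    re.IGNORECASE,
)

posts = pd.DataFrame({
    "text": [
        "New bill targets racial disparities in housing",
        "City council approves new bridge construction",
        "Republican lawmakers respond to the court ruling",
    ],
    "retweets": [340, 55, 410],
})

posts["identity"] = posts["text"].str.contains(IDENTITY_TERMS)
# Mean engagement for posts with vs. without identity language.
print(posts.groupby("identity")["retweets"].mean())
```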

Why Is This Important?

The findings reveal a significant shift in the political information environment. Media outlets now have both the means to learn about audience preferences through real-time engagement metrics and the economic motive to cater to them. As identity-oriented content proves more engaging, outlets have an incentive to produce more of it. Identity-focused coverage may exacerbate polarization by invoking primal "us vs. them" notions, making people more reliant on group stereotypes when making decisions, "even if these frames do not make explicit racist, sexist, or partisan appeals."

Read Paper →
Causally estimating the effect of YouTube’s recommender system using counterfactual bots
Social Media & Platforms
Duncan Watts
Key Takeaway
While recommendation algorithms may shape content exposure and user preferences on online platforms like YouTube, this paper suggests that narratives about widespread algorithmic manipulation may be overstated.
Full summary & details

Overview

This research paper seeks to causally estimate the effect of YouTube’s recommendation algorithm on consumption of “partisan content” on the platform. The authors accomplish this by comparing bots that replicate the YouTube consumption patterns of real users with what they call “counterfactual bots,” whose consumption choices “rely exclusively on recommendations” from YouTube’s algorithm (a toy sketch of this design follows the findings below). Here are some key findings:

  1. Relying solely on YouTube’s recommendation system “results in a more moderate experience on YouTube relative to the real user.”
  2. When YouTube users shift from consuming partisan content to more moderate content, the sidebar is quick to reflect the change in content preferences, while “homepage recommendations react more slowly.”
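
The logic of the design can be shown with a toy simulation. The recommender below is a hypothetical stand-in that nudges recommendations toward the moderate center; the real study drives actual YouTube sessions, so only the replica-versus-counterfactual comparison is sketched here.

```python
# Toy sketch of the counterfactual-bot comparison. Slant runs from -1 (far
# left) to +1 (far right); the recommender is a made-up stand-in.
import random

def recommend(current_slant: float) -> list[float]:
    """Stand-in recommender: slants of suggested videos, pulled toward 0."""
    return [0.7 * current_slant + random.gauss(0, 0.1) for _ in range(5)]

def counterfactual_bot(start: float, steps: int) -> list[float]:
    """Counterfactual bot: after the first video, always take the top recommendation."""
    watched = [start]
    for _ in range(steps):
        watched.append(recommend(watched[-1])[0])
    return watched

random.seed(0)
real_history = [0.9] * 20                  # replica bot replays a heavily partisan diet
cf_history = counterfactual_bot(0.9, 19)   # same start, algorithm-led thereafter

print("real-user mean slant:     ", sum(real_history) / len(real_history))
print("counterfactual mean slant:", sum(cf_history) / len(cf_history))
# The gap between the two means is the algorithmic effect the paper estimates.
```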

Why Is This Important?

With over 2.5 billion monthly active users, YouTube is one of the biggest online platforms in the world. While the platform has in some ways democratized video sharing and consumption, it has also been criticized for hosting radical content, much like other Big Tech platforms such as Meta's. But is the consumption and proliferation of radical content a consequence of user choice, or of recommendation algorithms optimized to drive engagement? Disentangling the effects of algorithmic amplification from user intentions is difficult, and this paper provides a framework for doing so. By causally estimating the role of recommendation algorithms in driving partisan content consumption, this paper’s findings have important implications for policymakers seeking to hold platforms accountable for the content they host.

Read Paper →
Asymmetric ideological segregation in exposure to political news on Facebook
Social Media & PlatformsPolitical PolarizationMisinformation2020 Meta Election Project
Sandra González-Bailón & Deen Freelon
Key Takeaway
News that a Facebook user saw on their feed during the US 2020 election depended on their own political leanings, and, overall, political news on Facebook leaned conservative.
Full summary & details

Overview

This paper analyzes whether the political ideology of Facebook users shapes the news content they see in their feeds. The researchers compared all political news available on Facebook during the US 2020 election with what the 208 million US Facebook users actually saw on their own feeds. These are a few of their findings:

  1. News on Facebook exists in silos: The news users see on their feeds depends on their ideology which is amplified both by the platform's algorithm and the content users choose to engage with. Ideological segregation is more visible on Facebook Groups and Pages compared to individual user posts.
  2. More conservative domains circulate on Facebook: When looking at news websites and articles shared within political bubbles on Facebook, far more of those sources existed within conservative circles than liberal ones — a pattern consistent with findings from other social media platforms.
  3. Higher levels of misinformation are present in conservative circles: Most content flagged as misinformation by Meta's Third-Party Fact-Checking program was found in conservative circles, meaning "that conservative audiences are more exposed to unreliable news."

Why Is This Important?

This study examines whether political polarization extends to social media feeds by analyzing individual users' feeds and the news content they were exposed to. It confirms that user feeds are curated to an individual's political leanings, while also revealing a conservative lean in both Facebook's content overall and the content flagged as false.

Read Paper →