A MEDIATED Feed

News, announcements, and insights from the
Center on Media, Technology, and Democracy.

President Emerita Amy Gutmann Joins Penn MEDIATED as Faculty Advisor

Penn MEDIATED is honored to welcome one of the foremost scholars of democratic governance, Penn President Emerita Amy Gutmann, as a faculty advisor. She is the longest-serving president in the University of Pennsylvania's history and an appointee of both President Barack Obama and President Joe Biden.

Penn MEDIATED is honored to welcome one of the foremost scholars of democratic governance, Penn President Emerita Amy Gutmann, as a faculty advisor. The longest-serving president in the University of Pennsylvania's history and an appointee of both President Barack Obama and President Joe Biden, Professor Gutmann brings decades of civic leadership, as well as a rare combination of scholarly rigor and real-world impact, to Penn MEDIATED.

Executive Director Alex Engler reflects:

"From seminal scholarship on democratic deliberation; to advancing global dialogue as Penn's President; to strengthening alliances and combatting violent extremism as the U.S. Ambassador to Germany—the information ecosystem has been a guiding theme of her exemplary career. We are thrilled to welcome President Gutmann to the Center's leadership."

In joining Penn MEDIATED, Professor Gutmann says:

"The health of our democracy depends on the health of our information ecosystem. Seeking truth and understanding is not a utopian ideal; it is a prerequisite for a functioning republic. I am proud to join Penn MEDIATED and contribute to its vision of translating our rigorous research into democratic well-being."

As a scholar, Gutmann has made seminal contributions on democratic education, deliberative democracy, and the necessity of compromise for governance. As a member of the Knight Commission on Trust, Media, and Democracy, Gutmann contributed to the Commission's flagship report diagnosing the collapse of a shared information foundation as a core threat to American self-governance.

Penn MEDIATED Co-Director and Penn Integrates Knowledge Professor Duncan Watts observes:

"President Emerita Gutmann has a lifetime of experience transforming academic research into real world change—we are grateful for her help now, when it is more essential than ever to foster a healthier information ecosystem."

From 2022 to 2024, Gutmann served as the U.S. Ambassador to Germany, where she strengthened the U.S.-German relationship through expanded support for Ukrainian defense, increased trade, and bolstered resistance to extremism. She has since rejoined Penn, where she holds the Christopher H. Browne Distinguished Professorship in Political Science and is a Professor of Communication at the Annenberg School for Communication.

Christopher Yoo, Penn MEDIATED Co-Director and Imasogie Professor in Law and Technology, adds:

"Amy Gutmann has always been an authoritative voice and unflinching champion for a stronger and more deliberative democracy. We look forward to Amy opening our inaugural Penn MEDIATED conference at the end of August."

About the Faculty Advisor
Amy Gutmann
Christopher H. Browne Distinguished Professor of Political Science, School of Arts and Sciences
Professor of Communication, Annenberg School for Communication
President Emerita, University of Pennsylvania

Amy Gutmann is the Christopher H. Browne Distinguished Professor of Political Science at the School of Arts and Sciences and Professor of Communication at the Annenberg School for Communication. She is also President Emerita, having had a transformative impact as the longest-serving president of the University of Pennsylvania. From 2022 to 2024, Gutmann served as the U.S. Ambassador to Germany, where she strengthened the U.S.-German relationship through expanded support for Ukrainian defense, increased trade, and bolstered resistance to extremism.

As a scholar, Gutmann has made seminal contributions on democratic education, deliberative democracy, and the necessity of compromise for governance. Gutmann served as a member of the Knight Commission on Trust, Media, and Democracy (2017–2019), informing a groundbreaking report on the crisis of democratic backsliding in the United States and recommendations for restoring public trust. Further, in 2009, President Obama appointed Gutmann chair of the Presidential Commission for the Study of Bioethical Issues, where she helped guide national conversations on pressing public health challenges. Gutmann has also served as an Executive Committee member of the National Constitution Center, and in 2018 she was named one of Fortune Magazine's "World's 50 Greatest Leaders."

Introducing the Updated LLMs and Civic Discourse Research Dashboard

We're launching a project on LLMs and Civic Discourse — and a new research dashboard featuring 71 papers on how large language models are reshaping the online information ecosystem.

Large language models (LLMs) play an increasingly substantial role in the information ecosystem and in civic discourse: they shape how political topics are discussed, how politicians are described, and how election information is conveyed. LLMs relay political information in chatbots and search engines, moderate content on social media, and are being tested as fact-checkers; collectively, they likely reach billions of people on a regular basis.

We believe the growing influence of LLMs will be central to the public discourse in the coming decades, which is why we’re launching a project on LLMs and Civic Discourse.

We kicked off the project on March 4th, when Penn MEDIATED hosted an online convening of prominent academic researchers and civil society organizations to share results from, and approaches to, monitoring and evaluating LLM politics. As part of this, we’ve documented and categorized much of the research in this field in the Center’s new LLMs and Civic Discourse Research Dashboard. As of today, it features 71 papers (with more to come!), with a focus on LLM chatbots, content moderation, and fact-checking. This resource is designed to help researchers, advocates, and policymakers keep up with the latest computational research on how LLMs are reshaping the online information ecosystem.

Each entry is organized by theme and paired with a plain-language summary of key findings to help distill complex research insights on this topic in an accessible manner. The dashboard covers research on how LLMs handle social and political topics, their role in content moderation and fact-checking, their potential to influence political beliefs and behavior, and their implications for democratic participation more broadly.

This convening and dashboard are one part of Penn MEDIATED's broader effort to bridge the research-to-impact gap on how LLMs are shaping civic discourse. We are also supporting novel research projects in this space through our grants program, and we are working closely with Penn MEDIATED faculty advisor Danaé Metaxa, in collaboration with partners at Princeton University, to build the technical infrastructure for large-scale, longitudinal monitoring of LLMs on political issues. Because LLM responses are constantly evolving, in some cases due to undisclosed policy updates by platforms, longitudinal monitoring is critical to ensure that researchers have the data needed to tackle these issues in real time.

Explore the dashboard here → https://infodem.upenn.edu/llm-civic-discourse/

Penn MEDIATED's Partnership with the Atlantic Council's Digital Forensics Research Lab for the 2026 Digital Sherlocks

Penn MEDIATED is excited to officially announce our partnership with the Atlantic Council's Digital Forensics Research Lab for the 2026 Digital Sherlocks program — training the practitioners on the front lines of information integrity and democratic resilience.

Penn MEDIATED Atlantic Council Digital Forensics Research Lab

In 2026, the most influential indices of democratic health—the Varieties of Democracy Index and the Freedom in the World Report—found continued declines in democracy globally. 74% of people worldwide now live in autocracies, and electoral and liberal democracies continue to lose ground, including the U.S., which slipped from a stable democracy to an eroding one.

Penn MEDIATED believes it is essential for universities to step up in the active defense of democracy and of those who work to protect and advance democratic values. To that end, Penn MEDIATED is excited to officially announce our partnership with the Atlantic Council's Digital Forensics Research Lab for the 2026 Digital Sherlocks program.

The DFRLab, incubated at the Atlantic Council in 2016, maintains technical and policy expertise on disinformation, connective technologies, democracy, and the future of digital rights, while also conducting investigations exposing influence operations and emerging digital threats. The group's Digital Sherlocks program furthers this mission by training practitioners—members of civil society, journalists, researchers, and human rights defenders—on foundational concepts and frameworks to understand online influence operations, as well as open-source investigative techniques, tools, and methodologies.

Penn's faculty, including Professors Danaé Metaxa and Jane Esberg, will contribute to the training, directly informing practitioners on the front lines of information integrity and democratic resilience. Professor Metaxa focuses on bias and representation in sociotechnical systems and has developed longitudinal audits of generative AI to track how models discuss (or decline to discuss) social issues. Professor Esberg's scholarship examines authoritarian repression and censorship, particularly in Latin America and Spain, exploring why authoritarian regimes censor non-political content and how repressing opponents appeals to their supporters. Drawing from their empirical research and expertise, each will teach a module that offers attendees a deeper understanding of the information ecosystem and its manipulation by autocratic governments.

Digital Sherlocks Program

"Since its inception in 2019, the Digital Sherlocks Program has grown to support a global community of more than 3,500 alumni from over 140 countries. These Digital Sherlocks, committed to monitoring and protecting information environments in their respective regions, are the heart of our movement."

Penn MEDIATED's partnership with the DFRLab offers practitioners the chance to hear directly from academics with deep subject matter expertise, while giving Penn the opportunity to learn about the challenges, limits, and concerns that defenders on the front lines face. Furthermore, DFRLab and Penn MEDIATED share a common vision: to develop and expand a network of expertise focused on the information ecosystem that can have real-world impact. Working toward this goal by bringing expert academics and practitioners together under the trusted Digital Sherlocks umbrella is an opportunity we are elated to see come to fruition.

New Research Demonstrates Extent of Deceptive Online Networks

At least 40 million American Facebook and Instagram users were exposed to deceptive online networks ahead of the 2020 election — plus how we're addressing the limits of quantifying falsehoods.

At least 40 million American Facebook and Instagram users were exposed to deceptive online networks in the lead-up to the 2020 election, though this constituted only 0.3% of these users' total content consumption. The new paper, How deceptive online networks reached millions in the US 2020 elections, contains the latest findings from the Meta 2020 Election project, by a prominent research team including Penn MEDIATED affiliated faculty Sandra González-Bailón and Deen Freelon.

The study identified and tracked 49 deceptive online networks that reached 15% and 2% of active adult users in the U.S. on Facebook and Instagram, respectively. For context, this reach was less than a third of the estimated 126 million Facebook users reached by the Russian Internet Research Agency over two years.

Of these 49 networks, Meta identified 13 as politically motivated and 36 as financially motivated, and found that the financially motivated networks reached more users. Mirroring past research, the study found that older users, more conservative users, and users previously exposed to false news were more susceptible to these deceptive online networks. Moreover, a significant number of users were exposed to these networks indirectly, by viewing reshares from regular user accounts that often unknowingly amplified inauthentic content to audiences beyond the networks' direct reach.

Despite these concerns, the authors found that content from these networks "accounts for a very small share of users' overall political content consumption," mirroring other studies, such as one from MEDIATED Co-Director Duncan Watts finding that fake news comprises only 0.15% of Americans' daily media diet.

Findings from the Meta 2020 Election Project

The Meta 2020 Election project is an expansive collaborative effort between Meta and external academics to critically examine political attitudes and behaviors on the company's platforms and their impact on the 2020 U.S. election. Professors Sandra González-Bailón and Deen Freelon have been contributing to this project for years, leading to essential research results including:

  • A study of the diffusion of more than one billion posts during the 2020 election found that misinformation spread slowly on Facebook, "powered by a tiny minority of users who tend to be older and more conservative."
  • The news a Facebook user saw on their feed during the 2020 US election depended on their own political leanings, and, overall, political news on Facebook leaned conservative.
  • Deactivating Facebook and Instagram in the weeks before the 2020 US presidential election modestly improved people's mental health—the effect was more pronounced for Facebook.

The breadth of this research demonstrates what cooperation between academics and private sector companies can enable. But this model is threatened both by concerns that Meta has selectively emphasized some results in a self-preferencing manner and by platforms' restrictions on researcher access to platform data, as thoroughly documented by Professor Freelon.

Advancing New Computational Measures of False Narratives

Also published on April 6th, a new paper from the Center for Information, Technology, and Public Life provides an alternative framework to measure and assess the impact of disinformation. On this view, studying disinformation must go beyond looking at false facts and treat it instead as narratives that permeate across platforms. These narratives are deeply linked to social identities and hold emotional resonance for those who believe them. Countering disinformation thus requires understanding its socio-cultural and moral context, which can be obscured when counter-disinformation efforts narrowly focus on refuting false claims or limiting their exposure on a single platform.

This emphasis on narratives as the key to understanding the dynamics of online disinformation mirrors our perspective at Penn MEDIATED. Already, Professor Watts has found that the selective use of facts can be as effective as false information in changing political beliefs. We're investing in new research to expand on this by developing new computational approaches to measure media narratives using large language models. The new research from the Meta 2020 Election project provides valuable insight into the extent of the impact of deceptive online networks, showing both that their reach is limited and that we need new methods to understand disinformation narratives at scale and across platforms.

Launching the Research Compendium

We're excited to formally launch our Research Compendium — an ongoing effort to comprehensively curate Penn's research on the information ecosystem and its impact on democracy.

Research Compendium preview

We're excited to formally launch our Research Compendium—an ongoing effort to comprehensively curate Penn's research on the information ecosystem and its impact on democracy. With categorical tags and search, we want to make it easier for you to find essential research about polarization, misinformation, persuasion, social media, LLMs, and more. For each paper, we provide an overview of the research, its key findings, and a note on why it matters beyond academia. Check it out here!

Announcing our Affiliated Faculty

We're proud to introduce Penn MEDIATED's affiliated faculty members — distinguished by their rigorous empirical research advancing our collective understanding of the information ecosystem and its impact on democracy.

Affiliated Faculty

We're proud to introduce Penn MEDIATED's affiliated faculty members—distinguished by their rigorous empirical research advancing our collective understanding of the information ecosystem and its impact on democracy. This required real consideration of what exactly we do at Penn MEDIATED—what does it mean to study the information ecosystem? Our faculty are distinguished by their rigorous empirical and computational work, but also by a focus on those mediating actors that make up the information landscape: especially individuals from peers to elites; institutions of media and government; and technologies from podcast platforms to large language models.

Emily Falk's (Annenberg School) research looks at how messages affect individuals, for instance using neuroimaging to understand why some messages are more persuasive than others. Dolores Albarracín (Annenberg Public Policy) demonstrates which interventions to change beliefs can actually impact people's behavior. Diana Mutz (Annenberg School) discovered a significant increase in political discussions in the U.S., but no increase in conversations across the political aisle. Erik Santoro (Wharton School) examines this peer-to-peer dialogue more closely, with recent research challenging the notion that listening to those who disagree with you helps to persuade them. Matt Levendusky (Political Science) finds that disagreeable statements and opinions motivate cancel culture. Chris Callison-Burch (Computer and Information Science) is working with Duncan Watts to monitor the evolution of TV news bias. Deen Freelon (Annenberg School) is documenting how social media platforms enable or stymie research—like analyses from Dean Knox (Wharton School), whose results find no evidence that YouTube's algorithms radicalize users in the short term. Shiri Melumad's (Wharton School) work asks whether relying on LLMs adversely impacts learning outcomes compared to using search engines.

Penn MEDIATED also welcomes John Lapinski (Political Science); Blake Miller (Political Science); Pinar Yildirim (Wharton School); Michael Morse (Carey Law); Lyle Ungar (Computer & Information Science); Dennis Culhane (Social Policy and Practice); and Dan Roth (Computer & Information Science). These esteemed faculty join the Center's two faculty directors and six faculty advisors—you can read more on our new affiliated faculty page.

Learn more →

Penn MEDIATED Research Grants

In the inaugural year of the Penn MEDIATED Research Grant Program, the Center funded 12 grants totaling $160,000 — including projects that will build essential datasets, develop new software tools, and experimentally test new interventions. The next RFP will open in late summer or early fall 2026.

The 2025 Penn MEDIATED Research Grants have been awarded. Please check back for our next RFP in late summer or early fall 2026.

12
Grants Awarded
$160K
Total Funding
50%
Cross-School Collaborations

In the inaugural year of the Penn MEDIATED Research Grant Program, the Center funded 12 grants for a total of $160,000, including projects that will build essential datasets, develop new software tools, define novel taxonomies, apply emerging computational and AI methods, and experimentally test new interventions.

These projects demonstrate our Center's commitment to understanding and strengthening our information ecosystem at a moment of profound technological change and extraordinary democratic crisis. They make critical contributions across three dimensions of information and democracy research: unpacking how media ecosystems shape public understanding, examining AI's expanding role as an information intermediary, and investigating communication strategies that enable persuasion and common ground. The grant program has already been successful in its goal of promoting interdisciplinary collaboration at Penn: half of the grants are jointly led by two or more researchers at different Penn schools.

Unpacking the Media Ecosystem

These research projects examine systematic patterns in media coverage, from what gets reported to who controls the outlets doing the reporting. One project, led by Center Director Duncan Watts and Knight Postdoctoral Fellow Amir Tohidi, investigates which crimes receive media attention and how coverage compares to actual crime statistics, helping explain the troubling disconnect between declining crime rates and persistently elevated public concern about crime. A second funded project looks into how media ownership structures and takeovers influence civic coverage and orientation toward incumbent governments, revealing when economic interests may be shaping democratic discourse.

We also funded projects centered on developing data infrastructure. For instance, a project led by Professors Marc Meredith and Matthew Levendusky will create a comprehensive, searchable dataset of digital ads that have run in the United States since 2018. Similarly, the Election Administration Media Dashboard project will build a comprehensive repository of media coverage on election administration, offering scholars, policymakers, and citizens critical data to understand how media narratives affect public confidence in elections.

When AI Mediates Information

Four projects in this year's grants contend with the rise of LLMs as information gatekeepers. Assistant Professor and Center Advisor Danaé Metaxa's "AI Watchman" project provides an open-source monitoring system that tracks when and how AI systems refuse to answer questions on politically sensitive topics. Another project, from Knight PhD Fellow Elliot Pickens, maps how users discuss political topics with chatbots, tracing conversations back to original news articles to document how information is fabricated, overgeneralized, or selectively emphasized during these interactions. A third project examines which factors enable users to bypass protections designed to prevent AI systems from generating harmful political content. Finally, researchers are investigating whether the conversational design of LLMs encourages confirmation bias.

Persuasion and Common Ground

Four projects focus on addressing conversational divides and analyzing the foundations of persuasive communication. One proposal will test 21 widely recommended interventions for improving dialogue across political divides. Another project led by Professor and Center Advisor Sandra González-Bailón examines how interpersonal discussions about controversial topics shape individuals' moral and political views. Two additional projects grapple with how LLMs are reshaping information access and public trust, including cross-national research in the United States and India.

All Center-Funded Grants

  • Interpersonal Discussions and Tipping Points in Social Networks

    Examines how interpersonal discussions shape individuals' moral and political views and how these influences spread through repeated interactions within social networks.

    Sandra González-Bailón, Diego Reinero, James Houghton
  • Integrative experiment to explore the effect of conversation interventions on dialogue across disagreement

    A large-scale experiment testing 21 interventions designed to improve dialogue across lines of socio-political difference.

    James Houghton, Dean Knox, Yphtach Lelkes, Matthew Levendusky, Erik Santoro, Erin Walk, Duncan Watts
  • AI Watchman: Longitudinally Auditing Generative AI Content Moderation of Social Issues

    Introduces an open-source interactive monitoring system to improve transparency on how LLMs moderate content, especially for socially and politically contested issues.

    Emma Lurie, Sorelle Friedler, Danaé Metaxa
  • Information Density and Narrative Persuasion in AI Chatbots: Cross-National Evidence from India and the United States

    Tests how conversational styles of AI chatbots shape persuasion, trust, and information sharing across the United States and India.

    Neil Sehgal, Sharath Chandra Guntuku, Andy Tan
  • Documenting how National News Media Depict Crime in the US

    Uses large language models to analyze national mainstream media coverage of crime, examining which crimes receive attention, how they are framed, and what solutions are promoted.

    Baird Howland, Billy Pierce, Amir Tohidi, Duncan Watts
  • How AI Transforms News: Measuring Bias and Distortion During LLM Conversations

    Maps how users discuss political topics with chatbots, tracing conversations back to original news articles to document how information is fabricated, overgeneralized, or selectively emphasized.

    Elliot Pickens, Duncan Watts, Chris Callison-Burch
  • Narrative License in Science Communication in the Era of Large Language Models

    Studies Narrative License — when scientific claims outrun the evidence — leveraging LLMs to detect it in published work, test its effects on readers, and devise interventions to limit its spread.

    Calvin Isch, Phil Tetlock, Duncan Watts
  • Archiving Digital Political Advertising Content

    Builds a comprehensive, searchable dataset of digital political ads run in the United States since 2018 to support research on platform policies, campaign rhetoric, and issue priorities across election cycles.

    Andrew Arenge, Marc Meredith, Matthew Levendusky
  • Who Controls the Media? Measuring Media Orientation in Civic Coverage

    Uses LLMs to track how news outlets frame coverage for national incumbents when reporting on civic matters.

    Ezgi Yilmaz, Zung-Ru Lin, Mina Rulis, Erik Wibbels
  • Adversarial Testing of Misalignment in Frontier LLMs When Asked to Create Anti-Democratic Campaign Materials

    Conducts adversarial testing on 19 frontier models to examine how safety guardrails can be bypassed to generate anti-democratic campaign content.

    Gayoung Jeon, Neil Fasching, Deen Freelon
  • Unpacking How Context (Conversation History) Shifts the Framing of LLM Outputs

    Examines whether LLM-powered search systems generate responses that align with users' preexisting political beliefs, potentially reinforcing echo chambers.

    Vishwanath Emani Venkata, Sandra González-Bailón
  • Election Administration Media Dashboard

    Builds a comprehensive repository of media coverage on election administration, offering scholars, policymakers, and citizens critical data to understand how media narratives affect public confidence in elections.

    Liz Stark, Marc Meredith, Michael Morse

An Introduction to the Center from Co-Directors Duncan Watts and Christopher Yoo

Co-Directors Duncan J. Watts and Christopher S. Yoo share their vision for the Penn Center on Media, Technology, and Democracy — why they built it, what they are learning, and where they hope it can go. Originally shared in the Center newsletter.

Dear colleagues,

As faculty co-directors of the new Penn Center on Media, Technology, and Democracy, we wanted to take a moment to tell you more about the Center—why we built it, what we are learning, and where we hope it can go.

A line from a book that one of us wrote in 2011, Everything Is Obvious, anticipated what we are trying to accomplish:

Just as the invention of the telescope revolutionized the study of the heavens, so too by rendering the unmeasurable measurable, the technological revolution in mobile, Web, and Internet communications has the potential to revolutionize our understanding of ourselves and how we interact.

In the almost fifteen years since, a number of people have (appropriately) questioned the telescope metaphor, but the reality is that we have made enormous progress in measuring—and to some degree understanding—social behavior at scale. The challenge now is not whether we can measure things, but whether we are measuring the right things.

Take one question that many people worry about: Has social media driven us into partisan echo chambers? The short answer: probably not in the way people think.

When we actually look at the data, we find that television, not social media, remains the dominant source of news for Americans. And it is there—not on Facebook or X—that we see the strongest evidence of ideological segregation. From 1980 to 2022, broadcast news viewership fell by over half, while cable news—especially partisan outlets—grew steadily. Between 2016 and 2019, 17% of Americans lived in TV echo chambers, compared to just 4% of those consuming news primarily online. There is also a clear asymmetry: Fox News has become increasingly partisan over time, while MSNBC and CNN fluctuate but remain less extreme overall.

Using large language models as analytical tools, we can now measure partisanship at the level of individual sentences in TV transcripts and news articles. This allows us to map not just what stories are told, but how they're told. In one study, we found that political headlines are systematically less polarized than the articles they introduce, suggesting an editorial layer that subtly moderates tone before publication.
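To make the idea concrete, here is a minimal sketch of how sentence-level partisanship scoring might be organized. The `classify` callback stands in for an actual LLM call, and the [-1, 1] scoring scale, the function names, and the toy classifier are all illustrative assumptions, not the Center's published method.

```python
import re
from statistics import mean

def split_sentences(text: str) -> list[str]:
    # Naive splitter on sentence-ending punctuation; real pipelines
    # would use a proper NLP sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def document_partisanship(text: str, classify) -> float:
    # `classify` maps one sentence to an assumed score in [-1, 1]
    # (-1 = strongly left-leaning, +1 = strongly right-leaning, 0 = neutral).
    # Scoring each sentence separately lets us compare, for example,
    # a headline's score against the mean score of the article body.
    scores = [classify(s) for s in split_sentences(text)]
    return mean(scores) if scores else 0.0

# Toy stand-in classifier for demonstration only; in practice this
# would be an LLM prompt that rates a single sentence.
def toy_classify(sentence: str) -> float:
    return 0.5 if "radical" in sentence.lower() else 0.0

article = "The bill passed today. Critics called it a radical overreach."
print(document_partisanship(article, toy_classify))  # mean of [0.0, 0.5] -> 0.25
```

Aggregating per-sentence scores this way is what allows headline-versus-body comparisons like the one described above: score the headline alone, score the body, and compare the two numbers.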

Our data increasingly show that misinformation is not the core problem. More often, the issue lies in framing—how information is selected, contextualized, and repeated. Experiments show that biased framing of true facts can be just as persuasive as outright falsehoods.

Looking Ahead

Understanding the information ecosystem—and improving it—requires new infrastructure, methods, and collaborations. The Penn Center on Media, Technology, and Democracy will focus on these aspects:

  • Taking a broad view: We want to study how people actually encounter information — including social media, the internet, TV, podcasts, and radio. We are already building datasets to track these domains.
  • Data Infrastructure: We need longitudinal, shared datasets that can support systematic analysis. With support from the Knight Foundation, we are expanding access to PennMAP data for large-scale data sharing across research teams.
  • Quantitative Methods: Machine learning, network analysis, and computational linguistics are essential tools for making sense of the information ecosystem at scale. Through the Information & Democracy Research Grants, we are investing in this capacity across Penn.
  • Interdisciplinary Collaboration: Understanding information systems requires input from communications, engineering, law, political science, business, and social policy. Our research seminar series and cohort of Knight PhDs and Postdocs bring together this community.
  • Public impact: Research matters only if it informs the public sphere. We are committed to convening an annual conference and other initiatives to share what we learn with policymakers and civil society.

We are just getting started, but we believe that careful measurement and open collaboration can help us move beyond speculation—and toward real understanding of how information shapes democracy.

Duncan J. Watts and Christopher S. Yoo
Co-Directors, Penn Center on Media, Technology, and Democracy

Exploring the Democratic Repercussions of Media Fragmentation

On October 21st, 2025, the Center hosted its first public event, on the Democratic Repercussions of Media Fragmentation.

The media landscape continues to fragment, with an ever-expanding number of political influencers, podcasters, online platforms, and generative AI systems, all while traditional mass media has become more ideologically aligned. This event brought together leading experts—Penn Professors Duncan Watts and Sandra González-Bailón, joined by media executive S. Mitra Kalita—to explore how media fragmentation relates to American democratic decline.

Read more in the coverage from the Daily Pennsylvanian or watch the event recording.

Welcoming Our New Staff

Penn MEDIATED is thrilled to introduce its core staff: Executive Director Alex Engler, Engagement and Policy Manager Alexis Frisbie, and Communications and Research Manager Prithvi Iyer.

Penn MEDIATED Staff

Alex Engler, Executive Director

Alex was most recently the Director for Democracy and Technology at the National Security Council, and before that the Assistant Director of AI Policy at the Office of Science and Technology Policy. Before his time at the White House, Alex was a fellow at the Brookings Institution and the Center for European Policy Studies, where he worked on AI policy and online platform governance. Previously, Alex spent ten years as a data scientist in policy research organizations and governments (the Urban Institute, Congress, and D.C. local government) and as teaching faculty in that space (at the University of Chicago and Georgetown University).

Alexis Frisbie, Engagement and Policy Manager

Alexis has extensive expertise in technology policy, especially combating foreign information manipulation that threatens democratic processes. She was previously a Foreign Affairs Officer and Senior Technology Advisor at the State Department for roughly six years, where she led on international technology challenges, private sector engagement, and coordination on emerging technology threats. Before State, Alexis worked in a variety of roles on security, democracy, and human rights issues. Alexis holds a B.A. in International Affairs from Lewis & Clark College and an M.A. in International Security from the University of Denver.

Prithvi Iyer, Communications and Research Manager

Prithvi Iyer was most recently the Program Manager at Tech Policy Press, a non-profit media venture at the intersection of technology and democracy. In this role, he managed the organization's fellowship program, helped execute various community engagement initiatives and wrote over 50 pieces summarizing emerging technology policy research for general audiences. He has also served as Research Assistant at the Observer Research Foundation, where he published extensively on topics including mental health implications of political conflict, the role of behavioural science in shaping foreign policy, and technology's role in exacerbating inter-group conflict in South Asia. At ORF, Prithvi helped launch an annual nationwide foreign policy opinion poll project and organize the Raisina Dialogue, India's flagship geopolitics conference. Prithvi holds an MA in Global Affairs from the University of Notre Dame and a BA in Psychology from Ashoka University in India.