research
- Economic risk framing increases intention to vaccinate among Republican COVID-19 vaccine refusers. Wei Zhong and David A. Broniatowski. Social Science & Medicine, 2023.
Objective: To determine whether framing communications about COVID-19 vaccines in economic terms can increase Republicans’ likelihood of getting vaccinated. Methods: We examined Twitter posts between January 2020 and September 2021 by Democratic and Republican politicians to determine how they framed the COVID-19 pandemic. Based on these posts, we carried out a survey study between September and November 2021 to examine whether motivations for COVID-19 vaccine uptake matched message frames that were widely used by these politicians. Finally, we conducted a randomized controlled experiment to examine how these frames (economic vs. health) affected intentions to vaccinate among vaccine refusers in both parties. Results: Republican politicians were more likely to frame the pandemic in economic terms, whereas Democrats predominantly used health frames. Accordingly, vaccinated Republicans’ choices were more likely to be motivated by economic considerations (β = 0.25, p = 0.02) and personal financial rationales (β = 0.24, p = 0.03). Among vaccine refusers, Republicans exposed to messages using economic rationales to encourage vaccination reported higher vaccination intentions than those exposed to messages using public health rationales (F(1, 119) = 4.16, p = 0.04). Conclusion: Messages highlighting economic and personal financial risks could increase intentions to vaccinate among vaccine-hesitant Republicans. Public health implications: Agencies should invest in developing messages that are congruent with frames already widely used by co-partisans. Social media may be helpful in eliciting these frames.
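The experimental result reported above, F(1, 119) = 4.16, is a between-subjects comparison of vaccination intentions across message-frame conditions among Republican vaccine refusers. The minimal sketch below shows how such a two-group comparison could be run; the data file, column names, and condition coding are hypothetical placeholders, not the authors' materials.

```python
# Illustrative re-analysis sketch (not the authors' code): a one-way ANOVA
# comparing vaccination intentions across message-frame conditions among
# Republican vaccine refusers. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_experiment.csv")  # hypothetical data file
refusers = df[(df.party == "Republican") & (df.refuser == 1)]

economic = refusers.loc[refusers.frame == "economic", "vaccination_intention"]
health = refusers.loc[refusers.frame == "health", "vaccination_intention"]

f_stat, p_value = stats.f_oneway(economic, health)  # with two groups, df1 = 1
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```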
- Keep Your Heads Held High Boys!: Examining the Relationship between the Proud Boys’ Online Discourse and Offline Activities. Catie Bailard, Rebekah Tromble, Wei Zhong, and 3 more authors. American Political Science Review, 2024.
How does online communication by right-wing extremist groups relate to their offline behavior? In this paper, we analyze the relationship between the online communication of one prominent right-wing extremist group, the Proud Boys, and its offline activities, using the long-standing and well-developed collective action framing literature as the theoretical lens driving our approach. To investigate this relationship, we apply cutting-edge computational techniques to an extensive and novel data set of Telegram activity by the Proud Boys, which we merge with U.S. Crisis Monitor data on violent and non-violent events that members of this group participated in over a 31-month period. Our findings demonstrate that social media platforms do more to mobilize members of extremist groups than simply serve as forums for calls to action or discussions of logistics and planning. Rather, online discussions among members of this extremist group that feature intensifying expressions of grievances and/or motivational appeals to group pride, duty, or solidarity share a reciprocal relationship with participation in offline events.
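The abstract describes a reciprocal relationship between online discourse and offline events but does not spell out the model; one standard way to probe such two-way dynamics between time series is a vector autoregression with Granger-causality tests. The sketch below is purely illustrative under that assumption; the variable names and data file are hypothetical and this is not the authors' specification.

```python
# Purely illustrative sketch: probe a reciprocal relationship between two
# hypothetical daily series (online grievance/motivational posting and
# offline event counts) with a VAR and Granger-causality tests.
import pandas as pd
from statsmodels.tsa.api import VAR

ts = pd.read_csv("proud_boys_daily.csv", parse_dates=["date"], index_col="date")
# hypothetical columns: "grievance_posts", "offline_events"

model = VAR(ts[["grievance_posts", "offline_events"]])
results = model.fit(maxlags=14, ic="aic")  # lag order chosen by AIC

# Does online discourse help predict offline events, and vice versa?
print(results.test_causality("offline_events", ["grievance_posts"]).summary())
print(results.test_causality("grievance_posts", ["offline_events"]).summary())
```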
- Proud Boys on Telegram. Wei Zhong, Catie Bailard, David Broniatowski, and 1 more author. Journal of Quantitative Description: Digital Media, 2024.
Utilizing an original data set of public Telegram channels affiliated with a right-wing extremist group, the Proud Boys, we conduct an exploratory analysis of the structure and nature of the group’s presence on the platform. Our study considers the group’s growth, organizational structure, connectedness with other far-right and/or fringe factions, and the range of topics discussed on this alternative social media platform. The findings show that the Proud Boys have a notable presence on Telegram, with a discernible spike in activity coinciding with Facebook’s and Instagram’s 2018 deplatforming of pages and profiles associated with this and other extremist groups. Another sharp increase in activity was precipitated by the attack on the U.S. Capitol Building on January 6, 2021. By February 2022, we identified 92 public Telegram channels explicitly affiliated with the Proud Boys, which constitute the core of a well-connected network with 131,953 subscribers. These channels, primarily based in the United States, also include international presences in Australia, New Zealand, Canada, the UK, and Germany. Our data reveal substantial interaction between the Proud Boys and other fringe and/or far-right communities on Telegram, including MAGA Trumpist, QAnon, COVID-19 misinformation, and white-supremacist communities. Content analyses of this network highlight several prominent and recurring themes, including opposition to feminism and liberals, skepticism toward official information sources, and propagation of various conspiracy beliefs. This study offers the first systematic examination of the Proud Boys on Telegram, illuminating how a far-right extremist group leverages the latitude afforded by a relatively unregulated alternative social media platform.
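The network description above (92 channels forming a well-connected core) can be illustrated with a simple channel graph built from message-forwarding ties. The sketch below is a minimal example of that kind of summary; the edge-list file and column names are hypothetical, and this is not the authors' pipeline.

```python
# Illustrative sketch: build a directed graph of Telegram channels from
# message-forwarding links and summarize its connectivity.
import pandas as pd
import networkx as nx

edges = pd.read_csv("telegram_forwards.csv")  # hypothetical columns: source, target, n_forwards
G = nx.DiGraph()
for row in edges.itertuples(index=False):
    G.add_edge(row.source, row.target, weight=row.n_forwards)

print(f"{G.number_of_nodes()} channels, {G.number_of_edges()} forwarding ties")
print("density:", nx.density(G))

# Size of the largest weakly connected component as a rough measure of how
# well-connected the channel network is.
largest = max(nx.weakly_connected_components(G), key=len)
print("largest weakly connected component:", len(largest), "channels")
```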
- Fragmentation Dynamics in Electoral Assessments: Evolving Voter Criteria in U.S. Presidential Elections, 1984-2020. Wei Zhong, Maggie Zhang, Simin Chen, and 1 more author. Under review.
This study investigates the underexplored and evolving dynamics of how voters evaluate U.S. presidential candidates, analyzing open-ended responses from the American National Election Studies (1984-2020). By focusing on voters’ likes and dislikes about candidates and employing a BERT deep learning model, we identify significant shifts in the evaluative criteria applied to Republican and Democratic candidates. Our analysis focuses on longitudinal changes in the diversity of evaluative criteria, the alignment of criteria preferences among voters, and the consistency with which criteria are applied to candidates from different political affiliations. We observe a notable decrease in the diversity of evaluative criteria over time, alongside increasing divergence in criteria preferences among voters. Additionally, voters increasingly apply inconsistent criteria across major-party candidates. These findings suggest a trend toward more individualized and reductive decision-making, posing challenges for consensus-building and potentially leading to greater electoral polarization.
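As a rough illustration of the pipeline described above, the sketch below scores open-ended responses with a fine-tuned BERT classifier and then tracks the diversity of evaluative criteria per election year with Shannon entropy. The model path, label set, data columns, and the use of entropy as the diversity measure are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch, assuming a fine-tuned BERT classifier and hypothetical data:
# label each open-ended like/dislike response with an evaluative criterion,
# then compute criterion diversity (Shannon entropy) by election year.
import numpy as np
import pandas as pd
from transformers import pipeline

clf = pipeline("text-classification", model="path/to/finetuned-bert-criteria")  # hypothetical model
anes = pd.read_csv("anes_open_ended.csv")  # hypothetical columns: year, response

anes["criterion"] = [out["label"] for out in clf(anes["response"].tolist())]

def shannon_entropy(labels: pd.Series) -> float:
    p = labels.value_counts(normalize=True).to_numpy()
    return float(-(p * np.log(p)).sum())

diversity_by_year = anes.groupby("year")["criterion"].apply(shannon_entropy)
print(diversity_by_year)
```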
- Subdued but Unbroken: Examining the Cohesion of Far-Right Extremist Followers After Twitter Deplatforming. Wei Zhong and Maggie Zhang. R & R.
On August 10, 2018, Twitter deplatformed the “Proud Boys,” a far-right extremist group, and its affiliated accounts. While deplatforming is an increasingly utilized strategy to combat online harm, its effects on the followers of extremist groups are not well documented. Our research bridges that knowledge gap by exploring the impact of deplatforming on group cohesion. Specifically, we investigate whether deplatforming leads to fragmentation or strengthens unity among the group’s followers. We assess cohesion through three theoretical lenses: task commitment, social commitment, and sense of belonging. Analyzing over 12 million tweets from approximately 9,000 Proud Boys supporters between August 1, 2017, and September 1, 2019, we find that the deplatforming of Proud Boys accounts had a limited impact on group cohesion among their followers. Instead, it may have pushed followers to seek broader networks and external interactions, leaving their overall group cohesion largely unaffected.
- Picturing Protest: A Deep Dive into the Visual Representation of Protests in Authoritarian Countries’ Media on Twitter. Wei Zhong, Bin Chen, Fan Liang, and 1 more author. Extended abstract accepted; under review at Digital Journalism.
This study explores how media in authoritarian regimes frame protests, focusing on the visual representation of social protests on Twitter/X. Drawing on the "protest paradigm" theory, which traditionally examines how democratic media portray protests, we extend this analysis to authoritarian contexts where state control complicates media coverage. Leveraging over 331 million tweets from 9,331 news outlet accounts across 144 countries, we identify 123,375 protest images from 38 authoritarian countries. Through computer vision analysis, we classify these images into four protest paradigms: riot, confrontation, spectacle, and debate. Our findings reveal that spectacle and debate are the most frequently depicted paradigms in authoritarian contexts. We also find that domestic protests are more likely to be framed as spectacles or debates, while foreign protests are more often associated with riots and confrontations. Furthermore, variations in press freedom within these regimes significantly influence how protests are visually portrayed, with greater press freedom leading to more confrontational depictions. This study contributes to the literature by highlighting the critical role of visual media in shaping public perceptions of protests in authoritarian contexts and provides new insights into how media systems influence the framing of dissent.
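To make the image-classification step concrete, the sketch below assigns an image to one of the four paradigms with a fine-tuned convolutional network. The paper's exact model is not stated in the abstract; the choice of ResNet-50, the checkpoint path, and the image file are assumptions for illustration only.

```python
# Illustrative sketch: classify a protest image into one of four paradigms
# (riot, confrontation, spectacle, debate) with a fine-tuned CNN.
# The checkpoint and image paths are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["riot", "confrontation", "spectacle", "debate"]

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))
model.load_state_dict(torch.load("protest_paradigm_resnet50.pt"))  # hypothetical weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify(path: str) -> str:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return LABELS[int(logits.argmax(dim=1))]

print(classify("tweet_image_001.jpg"))  # hypothetical image file
```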
- Unveiling Engagement and Platform Algorithmic Biases in Social Media Data Collection and Analysis: An Experimental Study. Jiyoun Suk, Wei Zhong, Yini Zhang, and 4 more authors. R & R.
In computational social science (CSS), the use of social media data has revolutionized the exploration of social phenomena at unprecedented scale, depth, and granularity. However, the reliance on engagement metrics such as likes, shares, and comments in data construction and measurement introduces significant biases that challenge the validity of these studies, and platform algorithms that incentivize engaging content can further exacerbate such biases. Employing a novel experimental design built on a customized online social networking site, we identify, quantify, and address engagement and platform algorithm biases in social media data across political and non-political content settings. We show that engagement-based feed algorithms widen the discrepancy between attention and likes as well as shares, especially for political content. Reverse-chronological feed algorithms lower overall time spent on posts and reduce the discrepancy between attention and engagement. We argue that engagement metrics do not accurately reflect true public attention and interest, especially when combined with the influence of platform algorithms, thereby affecting the validity and reliability of CSS findings. We offer methodological suggestions for researchers and practical implications for the platform industry.
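The central quantity above is the discrepancy between attention and engagement across feed conditions. The sketch below shows one simple, rank-based way to operationalize that discrepancy from experimental logs; the log file, column names, and the rank-difference measure itself are assumptions for illustration, not the authors' exact measure.

```python
# Minimal sketch, assuming hypothetical experiment logs: compare the gap
# between attention (dwell time) and engagement (likes) across feed-algorithm
# conditions and content types. Not the authors' code.
import pandas as pd

logs = pd.read_csv("feed_experiment_logs.csv")
# hypothetical columns: condition ("engagement_ranked" / "reverse_chronological"),
# content_type ("political" / "non_political"), post_id, dwell_seconds, liked

post = logs.groupby(["condition", "content_type", "post_id"]).agg(
    attention=("dwell_seconds", "mean"),
    engagement=("liked", "mean"),
)

# Rank-based discrepancy: how differently posts rank on attention vs. likes.
post["attn_rank"] = post.groupby(["condition", "content_type"])["attention"].rank(pct=True)
post["eng_rank"] = post.groupby(["condition", "content_type"])["engagement"].rank(pct=True)
post["discrepancy"] = (post["attn_rank"] - post["eng_rank"]).abs()

print(post.groupby(["condition", "content_type"])["discrepancy"].mean())
```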
- Twitter’s Architecture Undermined Medical Misinformation Removal Policies. David Broniatowski, Wei Zhong, Joseph Simons, and 2 more authors. Submitted to PNAS Nexus.
Did Twitter’s COVID-19 misinformation removal interventions, introduced at the height of the pandemic (March 1, 2021), successfully reduce misinformative content and accounts? To answer this question, we collected over 400M English-language tweets related to COVID-19 using over 100 relevant keywords between February 6, 2020 and December 15, 2022. Focusing on our sample’s top 20% most prolific accounts (N = 40,835), we extracted and labeled the misinformativeness of each Twitter account and its posts, based on third-party lists of low-credibility news sources. We used a comparative interrupted time series design, comparing content from more misinformative accounts with content from less misinformative accounts, and comparing more misinformative posts with less misinformative posts regardless of account type. We found that none of these series experienced an immediate drop after the policy intervention on March 1, 2021; instead, the decrease started in early 2022. More importantly, Twitter’s interventions were not associated with a statistically significant decrease in more misinformative accounts’ content relative to less misinformative accounts’ content. Similarly, we did not detect a statistically significant reduction in more misinformative posts compared to less misinformative posts. Taken together, Twitter’s policies removing COVID and vaccine misinformation were not associated with a statistically significant relative reduction in misinformative posts or accounts’ content. These results call into question the ability of large social media companies, such as Twitter, to control the spread of misinformation on their platforms via content and account deletion during the COVID-19 pandemic.
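The comparative interrupted time series (CITS) design named above estimates level and slope changes at the intervention date and tests whether those changes differ between more and less misinformative accounts. The sketch below is a minimal regression form of that design; the weekly panel file, column names, and grouping labels are hypothetical, and this is not the authors' exact model.

```python
# Minimal CITS sketch on a hypothetical weekly panel: post-intervention level
# shift, slope change, and their differences between account groups.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("weekly_posts.csv", parse_dates=["week"])
# hypothetical columns: week, group ("more_misinfo" / "less_misinfo"), post_count

intervention = pd.Timestamp("2021-03-01")
panel["time"] = (panel["week"] - panel["week"].min()).dt.days / 7
panel["post"] = (panel["week"] >= intervention).astype(int)
panel["time_after"] = panel["time"] * panel["post"]
panel["treated"] = (panel["group"] == "more_misinfo").astype(int)

# Interactions with `treated` capture the *relative* change for the more
# misinformative group, which is the quantity of interest in a CITS design.
model = smf.ols(
    "post_count ~ time + post + time_after"
    " + treated + treated:time + treated:post + treated:time_after",
    data=panel,
).fit()
print(model.summary())
```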