Sarah Shugars, Ph.D. Candidate

shugars.s@northeastern.edu

Northeastern University

Country: United States (Massachusetts)

About Me:

Sarah is a computational political scientist who uses network analysis and natural language processing to study political conversations and political reasoning. Her research focuses on developing a network methodology for deliberation: modeling the way an individual reasons as a network of interconnected ideas, and studying deliberation as a process in which groups exchange ideas and collectively create new solutions. A doctoral candidate in Northeastern's Network Science program, she received her BA cum laude in Physics from Clark University and her MA in Integrated Marketing Communications from Emerson College. She currently serves as senior editor for The Good Society: The Journal of Civic Studies and as a core organizer of the conference on Politics and Computational Social Science (PaCSS). She previously worked at Tufts University's Tisch College of Civic Life for nearly a decade.

Research Interests

Political Communication

Public Opinion

Political Participation

Research Methods & Research Design

Text as Data

Countries of Interest

United States

My Research:

A functional democracy demands the sound reasoning of its citizens. Efforts to strengthen democratic regimes must therefore include an understanding of the everyday political conversations through which average citizens formulate their own views, exchange factual and normative information, and reason together about matters of common concern. These conversations generate public opinion and build democratic legitimacy, yet they are not well understood. When forming political opinions, how do people navigate their own values and interpret the beliefs of others? How do people reason together -- or fail to reason together -- to identify workable solutions to complex social problems? These questions get to the very core of political behavior, to the localized mechanisms through which people attempt to identify and address their collective challenges.

My research develops an empirical framework aimed at understanding how average citizens express their political views, interpret the views of others, and reason together through everyday political conversations. This reasoning process is fundamentally networked in nature: when speaking with others, we raise ideas that seem connected to what they said; when thinking to ourselves, we move from idea to connected idea; and when assessing a complex issue, we weigh the pros and cons as well as their interconnections in order to arrive at a final judgment. To address these questions, then, I develop theoretically grounded network methods and measures for individual and small group political reasoning.
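The idea of treating reasoning as a network of interconnected ideas can be illustrated with a minimal sketch. This is not the author's actual methodology; the ideas, edges, and the choice of degree centrality below are all hypothetical, chosen only to show what an idea network might look like as a data structure.

```python
from collections import defaultdict

# Hypothetical idea network for one speaker: each edge links two ideas
# the speaker raised in connection with one another.
edges = [
    ("school funding", "property taxes"),
    ("school funding", "teacher pay"),
    ("property taxes", "housing costs"),
]

# Build an undirected adjacency structure.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree (number of connections) is one simple way to see which ideas
# anchor the speaker's reasoning.
degree = {idea: len(neighbors) for idea, neighbors in adjacency.items()}
print(sorted(degree.items(), key=lambda kv: -kv[1]))
```

In this toy example, "school funding" and "property taxes" each connect to two other ideas, marking them as the most central concepts in the hypothetical speaker's reasoning.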

Publications:

Journal Articles:

(2019) Why Keep Arguing? Predicting Engagement in Political Conversations Online, SAGE Open

Individuals acquire increasingly more of their political information from social media, and ever more of that online time is spent in interpersonal, peer-to-peer communication and conversation. Yet many of these conversations can be either acrimoniously unpleasant, or pleasantly uninformative. Why do we seek out and engage in these interactions? Who do people choose to argue with, and what brings them back to repeated exchanges? In short, why do people bother arguing online? We develop a model of argument engagement using a new dataset of Twitter conversations about President Trump. The model incorporates numerous user, tweet, and thread-level features to predict user participation in conversations with over 98% accuracy. We find that users are likely to argue over wide ideological divides, and are increasingly likely to engage with those who are different from themselves. Additionally, we find that the emotional content of a tweet has important implications for user engagement, with negative and unpleasant tweets more likely to spark sustained participation. Though often negative, these extended discussions can bridge political differences and reduce information bubbles. This suggests a public appetite for engaging in prolonged political discussions that are more than just partisan potshots or trolling, and our results suggest a variety of strategies for extending and enriching these interactions.

(2018) Games for Civic Renewal, The Good Society

In this article, we summarize the first civic games contest: its rules, its process, and the results. We describe the fields of civics and games, and argue that there is a fruitful intersection to be had between them. Finally, we introduce the winning games.

(2018) Microblog conversation recommendation via joint modeling of topics and discourse, NAACL

Millions of conversations are generated every day on social media platforms. With limited attention, it is challenging for users to select which discussions they would like to participate in. Here we propose a new method for microblog conversation recommendation. While much prior work has focused on post-level recommendation, we exploit both the conversational context and user content and behavior preferences. We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics. Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.

(2017) Winning on the merits: The joint effects of content and style on debate outcomes, Transactions of the Association for Computational Linguistics

Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model’s combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.