Professors Andrew Pattison and William Cipolli III explore how people talk politics online

Twitter isn’t exactly known as a bastion of civil discourse. But the very nature of the social media platform — in which relative strangers send short, sometimes inflammatory, messages into a virtual “town square” — makes it an ideal forum to measure public policy sentiment. This is according to a pair of research papers by Andrew Pattison, associate professor of environmental studies, and William Cipolli III, associate professor of mathematics and co-founder of the Data Science Collaboratory at Colgate University.

“On Facebook, you are more likely to know who you are talking to,” Pattison says. Most people in your network are friends, family, and acquaintances, so you are likely to at least try to keep things civil. “Twitter is more anonymous — you can talk about anything.”

Pattison and Cipolli have used that unfettered quality of Twitter (renamed X last year) to analyze how people try to influence public policy and how users respond when they do — or don’t — get their way.

Conducted with political scientist Jose Marichal of California Lutheran University, the research uses the controversial issue of fracking in New York State as a case study to examine these phenomena. Short for “hydraulic fracturing,” fracking is the practice of injecting pressurized fluid — typically water mixed with sand and chemicals — deep underground to access otherwise unreachable deposits of oil and natural gas. It can lead to increased jobs and productivity, but also to detrimental health and environmental effects. Democratic Gov. Andrew Cuomo banned the practice in New York in 2014.

“The reason fracking worked well as an issue to analyze is that it’s emotive on a visceral level,” says Pattison. At the same time, the outcome wasn’t uniform; different states, such as Texas and Oklahoma, have come to different conclusions on the issue.

To get a handle on what different Twitter users were saying about the issue, the researchers applied the Narrative Policy Framework, which interprets public policy issues through a storytelling lens, with traditional narrative elements of plot, villains, and heroes. Think of each tweet as a miniature story, with good guys and bad guys packed into 140 or 280 characters.

“You can find narrative in that many characters,” Pattison says. “It doesn’t require an 800-word newspaper article or 10,000-word peer-reviewed journal article.”

In their first paper, published in January 2022 in the Review of Policy Research, the colleagues took 500 of the most prominent users who mentioned “New York” and “fracking” between 2008 and 2018 and divided them into pro-fracking, anti-fracking, and neutral accounts.

At the same time, they coded individual users and conducted a sentiment analysis based on a list of English words compiled by the National Research Council of Canada. That list divided words into eight emotions — anger, fear, sadness, disgust, joy, anticipation, surprise, and trust — and either positive or negative sentiment. This rubric created a basic but useful sense of the sentiment expressed by users on both sides of the issue at different points in time.
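The word-list approach can be sketched in a few lines of Python. The mini-lexicon below is an invented stand-in for the actual NRC list, which tags thousands of English words with the eight emotions and two sentiments named above; only the counting logic reflects the method described.

```python
from collections import Counter

# Toy stand-in for the NRC Emotion Lexicon; these entries are invented
# for illustration, not taken from the real word list.
TOY_LEXICON = {
    "poison": {"fear", "disgust", "negative"},
    "danger": {"fear", "negative"},
    "jobs": {"anticipation", "positive"},
    "growth": {"joy", "anticipation", "positive"},
    "family": {"trust", "positive"},
}

def emotion_profile(tweet):
    """Count how many times each emotion/sentiment category is hit."""
    profile = Counter()
    for word in tweet.lower().split():
        for category in TOY_LEXICON.get(word.strip(".,!?"), ()):
            profile[category] += 1
    return profile

profile = emotion_profile("Fracking brings jobs and growth!")
```

Summing these per-category counts over all of a user's tweets, and over all users on each side, yields the kind of sentiment-over-time picture the researchers describe.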

In a follow-up paper, co-written with Christopher Cherniakov ’24 and published in Politics & Policy in October 2023, they expanded this analysis by training a machine learning model to categorize users — nearly 6,000 users in all — as neutral, pro-, or anti-fracking, and they applied natural language processing software to further analyze the stories users were telling.
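The papers don't specify the model here, but a minimal stance classifier in the same spirit can be sketched with a bag-of-words Naive Bayes. The tiny labeled "training set" below is invented, standing in for the thousands of hand-coded accounts.

```python
import math
from collections import Counter, defaultdict

# Invented mini training set; labels and texts are illustrative only.
TRAIN = [
    ("fracking means jobs money and energy independence", "pro"),
    ("drilling creates work and economic growth", "pro"),
    ("fracking poisons our water and harms family health", "anti"),
    ("ban this toxic practice protect the environment", "anti"),
    ("state reviews fracking report this week", "neutral"),
    ("public hearing on drilling scheduled today", "neutral"),
]

def train_nb(examples):
    """Tally per-label word counts for a multinomial Naive Bayes."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest add-one-smoothed log score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total_docs)
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train_nb(TRAIN)
label = classify("new wells bring jobs and money", word_counts, label_counts)
```

Even this toy version shows the economic-versus-health framing doing the classification work: “jobs” and “money” pull a tweet toward the pro side, while “toxic” and “health” pull it toward the anti side.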

They found that pro- and anti-fracking users were spinning very different narratives, talking past each other rather than engaging directly. The more conservative pro-fracking users were more likely to use words associated with work and money, suggesting an economic framing of the issue, while liberal anti-fracking users emphasized family and health, focusing on the environmental aspects of the practice. 

In general, anti-fracking users were more passionate and emotional; however, pro-fracking users were more likely to use a mental health frame. That confused the researchers until they realized pro-fracking users were engaging in ad hominem attacks by referring to their opponents as “crazy” liberals.

By analyzing the content of tweets before and after the ban went into effect, the researchers were also able to examine a pair of well-known phenomena in political science called devil shifting and angel shifting. The former refers to a propensity people have to demonize opponents and ascribe more power to them than they actually hold, while the latter refers to the opposite: idealizing one’s allies and downplaying their power.

When a group loses a fight, they tend to do more devil shifting; when they win, they tend to do more angel shifting. The data bears it out — up to a point. 

After the ban, pro-fracking users did use less inflammatory language about Cuomo, using fewer words that referred to fear, anger, or disgust. “They weren’t trashing their enemy as much once they saw themselves as losing,” Pattison says.
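The mechanics of that before-and-after comparison can be illustrated with a toy script. All tweets and dates below are invented; only the logic, splitting the corpus at the ban date and comparing the rate of fear, anger, and disgust words, mirrors the analysis described.

```python
from datetime import date

# Invented tweets from hypothetical pro-fracking accounts; the real
# corpus covered 2008-2018.
TWEETS = [
    (date(2014, 6, 1), "cuomo is a disaster this ban talk is outrageous"),
    (date(2014, 7, 1), "furious at these reckless anti-energy zealots"),
    (date(2015, 2, 1), "disappointed by the ban, jobs will move elsewhere"),
    (date(2015, 3, 1), "the state chose poorly, we move on"),
]

# Toy stand-in for the NRC words tagged with fear, anger, or disgust.
HOSTILE_WORDS = {"disaster", "outrageous", "furious", "reckless", "zealots"}

BAN_DATE = date(2014, 12, 17)  # New York announced its fracking ban

def hostile_rate(tweets):
    """Fraction of all words that appear in the hostile-emotion list."""
    words = [w.strip(".,") for _, text in tweets for w in text.split()]
    return sum(w in HOSTILE_WORDS for w in words) / len(words)

before = hostile_rate([t for t in TWEETS if t[0] < BAN_DATE])
after = hostile_rate([t for t in TWEETS if t[0] >= BAN_DATE])
```

A drop in the hostile-word rate after the ban, as in this toy data, is the signature of reduced devil shifting among the losing side.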

At the same time, however, he and his colleagues did not see a corresponding increase in angel shifting among anti-fracking users. “They didn’t grant the kind of benefits to Cuomo that he might have gotten from initiating the ban,” he says. He attributes that, perhaps, to the psychology of “loss aversion”: people respond more powerfully to losing something than to gaining something. “People don’t like having things taken away from them,” says Pattison. “When we lose, it hurts more.” That could serve as a cautionary tale for politicians — that they may not reap the electoral benefits they expect from supporters, even when they give constituents what they want.

In general, Pattison and Cipolli say, this kind of analysis can help public policy analysts by giving them a snapshot of the sentiments of people on both sides of an issue, and perhaps by identifying narratives that might work in op-eds or lobbying campaigns to influence ongoing political battles.

In future research, Pattison and Cipolli hope to test these techniques on other controversial policy issues, such as immigration and climate change. They also plan to see if they find similar results on more obscure issues, where there might not be as broad an awareness.

“The methane rules in the Clean Air Act are very impactful to climate change, but you’re probably narrowing the people who mention it,” Pattison says. “So, in that case, are you just talking to experts?” In addition, they hope to analyze other social media platforms such as Facebook and perhaps Threads, to see if the way people talk about issues is different — and to apply new AI tools to the analysis to see if they can garner more insight into the narratives that people spread.

“We’ve already talked about using ChatGPT and other large language models,” Cipolli says, “which would probably be a big technical step forward.” In all these ways, the researchers hope they can better understand the stories people tell in the virtual world — and how they affect the real one.