WHAT IS THE POLARIZATION LAB?

The Duke Polarization Lab is a group of seven faculty members, 21 graduate students, and four undergraduate students who are working to develop new technology to combat political polarization online.

 

EXECUTIVE SUMMARY

Social media sites are often blamed for contributing to political polarization because they encourage people to segregate themselves from those with opposing political views. Yet in a recent landmark study, the Duke Polarization Lab discovered that exposing social media users to opposing political views can make political divisions even worse. Our team of social scientists, computer scientists, and statisticians believes this backfire occurs because of the nature of social media itself. We are building a new social media platform for public discussion of politics that is informed by the latest advances in social psychology, political science, and machine learning. In the first stage of our work, we are using this platform to conduct research that will help us determine how to connect Republicans and Democrats so that they can have more productive political debates. In the second stage, we will release a public version of our platform that implements the matching strategies identified in our research and incentivizes users to participate by awarding them reputation points for influencing people who do not share their political views.


BACKGROUND 

 

Political divisions are deepening in the United States and elsewhere. Though social media once promised to democratize public debate by allowing virtually anyone to join a public discussion about politics, many now believe it has the opposite effect. Social media platforms enable people to surround themselves with those who share their political views—a human predisposition that is amplified by algorithms embedded within many of the world’s most popular platforms. In order to combat political polarization, many people—including Twitter CEO Jack Dorsey—have proposed that social media platforms should nudge people to engage with those who do not share their views.

 

OUR RESEARCH 

In October 2017, the Duke Polarization Lab—a team of social scientists, computer scientists, and statisticians—set out to test this idea. We recruited approximately 1,200 Republicans and Democrats who visit Twitter regularly to complete a survey about their political views. One week later, we paid half of them to follow a bot we created that retweeted 24 messages each day from opinion leaders of the opposing political party. Unfortunately, when we measured these study participants’ political views one month later, we discovered that our intervention did not decrease political polarization. Instead, it made polarization worse: Democrats who followed our Republican bot became slightly more liberal, and Republicans who followed our Democratic bot became substantially more conservative.
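For illustration, the core of a bot like this can be sketched in a few lines of Python. The snippet below is a minimal sketch only, assuming the tweepy library (v3-style REST API); the credentials, handle list, and function name are placeholders, not the lab’s actual implementation.

    import random
    import tweepy

    # Placeholder credentials from a Twitter developer account
    # (assumption: tweepy's v3-style OAuth flow).
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    # Hypothetical handles; the study curated a real list of opinion
    # leaders (politicians, pundits, media outlets) from the opposing party.
    OPINION_LEADERS = ["example_senator", "example_pundit", "example_outlet"]

    def retweet_daily_batch(n=24):
        """Sample n recent tweets from the opinion-leader list and retweet them."""
        candidates = []
        for handle in OPINION_LEADERS:
            candidates.extend(api.user_timeline(screen_name=handle, count=20))
        for tweet in random.sample(candidates, min(n, len(candidates))):
            api.retweet(tweet.id)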


THE BACKFIRE EFFECT

 

Why didn’t people become more moderate when they were exposed to opposing political views? Previous research shows that people who are exposed to messages that conflict with their worldview experience cognitive dissonance that can lead to fear or even anger. These emotions typically provoke a process known as “motivated reasoning,” in which people begin to counter-argue information that conflicts with their worldview, selectively generating even more reasons to disagree with the message than they would have had if they had never encountered it in the first place. Recent research indicates this process can even be observed in fMRI scans of the brains of people whose political identities are threatened by unfamiliar messages. Together, this growing body of evidence indicates that our identities act as filters that prevent us from listening to the other side. In order to disrupt this dangerous feedback loop, we believe a new vision for social media is needed.


OUR CURRENT RESEARCH

 

Over the next year, our lab will launch a series of studies designed to determine how to redesign social media for more productive political conversations. We have created a mobile chat platform that allows us to run experiments in which we peel back the layers of social media in order to learn how to make it better. This new research tool allows us to “turn on” and “turn off” different components of social media in an experimental setting, including a) the level of anonymity; b) the criteria used to recommend whom to friend or follow; and c) the existence of a social reputation system. Because study participants complete surveys before we pay them to use our platform, we can match them in carefully curated pairs to discuss a political issue, according to their political interests, non-political interests, or other personal attributes. All of these data will then be analyzed with machine learning models in order to learn how to match the people who are most likely to have productive political conversations.
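As a rough sketch of how these experimental conditions and matching criteria might be represented in code (the class and function names below are illustrative assumptions, not our platform’s actual code):

    from dataclasses import dataclass

    @dataclass
    class ChatCondition:
        """One experimental condition: each field is a platform component
        that can be switched on, switched off, or varied."""
        anonymous: bool          # (a) do participants see each other's identity?
        match_on: str            # (b) pairing criterion, e.g. "politics" or "hobbies"
        reputation_system: bool  # (c) is a social reputation score displayed?

    def pair_participants(republicans, democrats, similarity):
        """Greedily pair each Republican with the remaining Democrat who
        scores highest on the chosen similarity measure (for example,
        overlap in non-political interests from the pre-study survey).
        Assumes len(democrats) >= len(republicans)."""
        pairs, unmatched = [], list(democrats)
        for r in republicans:
            best = max(unmatched, key=lambda d: similarity(r, d))
            unmatched.remove(best)
            pairs.append((r, best))
        return pairs

A greedy matcher like this is only a baseline; the machine learning analysis described above would replace the hand-written similarity function with a learned model of which pairings produce productive conversations.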

 

GOING PUBLIC

 

Though we are conducting scientific research in the short term, our long-term mission is to release our social media platform to the public to create real, measurable impact in combating political polarization. Our public uptake strategy has two cornerstones. First, the public release of our app will include a reputation system that rewards users who persuade people who do not share their political position to moderate their views. We believe that a reputation as an effective bipartisan communicator will be a valuable social asset in a range of fields in the coming years. Yet we also realize that many people are disaffected with politics. Our second public uptake strategy thus employs insights from behavioral economics. We hope to pay 10,000 Americans to use our app each day for one month, in the hope that this nudge, combined with the aforementioned reputation system, will convince them to share it with their friends and colleagues, creating a powerful scaling mechanism.
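To make the reputation mechanism concrete, here is one minimal sketch of how points might be awarded; the 7-point attitude scale, the neutral midpoint, and the point weight are illustrative assumptions rather than the system’s final design:

    def award_points(pre_score, post_score, neutral=4.0, weight=10):
        """Award reputation points to a persuader when their conversation
        partner's issue position (1-7 scale, 4 = neutral) moves toward the
        midpoint between the pre- and post-conversation surveys."""
        if abs(post_score - neutral) < abs(pre_score - neutral):
            return round(weight * abs(post_score - pre_score))
        return 0  # no points for pushing a partner further from the center

    # Example: a partner who moves from 6 ("agree") to 5 ("somewhat agree")
    # earns the persuader 10 points; a move from 5 to 6 earns nothing.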