Habit forming —

For Facebook addicts, clicking is more important than facts or ideology

Once clicking "share" becomes habitual, the content of what's shared matters less.

[Image: a figure in a hoodie with the face replaced by the Facebook logo. Credit: Aurich Lawson | Getty Images]

It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.

If we're going to take any measures to address this—something it's not clear that social media services are interested in doing—then we have to understand why sharing misinformation is so appealing to people. An earlier study indicated that people care about making sure that what they share is accurate, yet they often fail to check. A new study builds on that work by getting at why this disconnect develops: For many users, clicking "share" becomes a habit, something they do without any real thought.

How vices become habits

People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternatively, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't account for much of the misinformation that gets shared.

The researchers created a mock Facebook entry for an article with a title and a graphic and showed it to participants, asking them to decide whether they'd share it; the headlines were evenly divided between accurate stories and misinformation. Overall, accurate stories were shared at a much higher rate (32 percent of accurate headlines versus just 5 percent of false ones). But a subset of subjects who shared the most stories—those with the strongest Facebook habit—shared fake and real stories at roughly equal rates. As a result, just 15 percent of the participants were responsible for nearly 40 percent of the fake stories that were shared.

To the researchers, this suggested that sharing misinformation isn't necessarily indicative of bias; instead, it's a problem of a subset of users who just habitually click share (with habit defined as involving "limited reflection, inattention"). So the team designed an experiment to force people to do some reflection, asking participants to rate a headline's accuracy before deciding whether to share it (a control group made these decisions in the reverse order). This worked, but only partially. Habitual Facebook sharers reduced their sharing of false headlines but still ended up sharing a quarter of the total, while less frequent sharers were far less likely to share something false.

The researchers then repeated the experiment using headlines that were either consistent with or opposed to the participants' self-described political affiliation (all headlines were accurate). A similar pattern emerged: non-habitual participants shared politically congenial headlines at about seven times the rate of contrary ones. By contrast, those with a strong Facebook habit were far less discerning, sharing compatible headlines at only about three times the rate of incompatible ones. So again, even with prompting, the habitual users were far less discriminating.

Changing incentives

A lot of research has indicated that the response to sharing something—likes and further reshares—functions as a reward for social media users. This encourages people to adopt habitual sharing since sharing everything increases the odds of experiencing a reward. So, the researchers changed the reward process.

In a training period, participants were assigned to share either accurate headlines or misinformation in exchange for points (they received points for sharing whichever type they had been assigned). After extensive training, the participants were told to share stories as they preferred. Those who had been trained to share misinformation went on to share it about as often as accurate stories. But those who had been rewarded for accuracy shared accurate stories at roughly three times the rate of false ones, even though there was no longer a reward for doing so.

In questions fielded before the training, those identified as habitual sharers were more likely to indicate that their main goal was attracting the attention of other users and were less likely to rate sharing accurate information as important. So the training seems to have significantly reordered priorities, and the effect lasted even after there was no longer any reward for getting it right.

So the good news is that many people don't seem committed to intentionally sharing misinformation, even when it's favorable to their political views. But that's about the end of the good news. On the bad news side, developing a social media habit seems to incidentally boost the sharing of misinformation, and social media companies have a very strong incentive to create habitual users. The researchers also note that users may self-select to mostly follow politically biased news sources. While they may not be biased toward sharing any particular ideological slant, the sources of what they could share may be extremely biased.

In any case, it would be trivial for social media companies to sporadically give users some sort of positive feedback for sharing accurate information—essentially repeating the study's training often enough to keep its effects from fading. The research suggests that doing so wouldn't really lower habitual use but would instead shift what the habit gets rewarded for. The big challenge is that social media companies have no incentive to do so.

PNAS, 2023. DOI: 10.1073/pnas.2216614120
