Thu 25 Apr 2024

Don’t panic about social media harming your child’s mental health – the evidence is weak

We’re told the internet destroys children’s mental health – but Stuart Ritchie read all the relevant studies and saw little to support the claim

If you’ve been on social media recently, you won’t have been able to avoid people talking about how bad social media is.

For more than a decade, across many countries, children’s and teens’ mental health has been getting worse: more anxiety, more depression, and lower happiness in general. It’s often noted that the trend looks even worse for girls.

Several commentators with big audiences are claiming that social media – along with the smartphones that enable it – is now known to be a cause of these growing mental health problems in children and teens.

First, the psychologist Jonathan Haidt wrote a detailed article on Substack called “Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls. Here’s the Evidence”. Other writers on Substack explained how they’d changed their minds and now agreed with Haidt, and how “honestly, it’s probably the phones” that cause teenage mental health problems.

Then John Burn-Murdoch at the Financial Times wrote a column with a headline that stated the case in no uncertain terms: “Smartphones and social media are destroying children’s mental health”.

The message of the new articles is broadly the same: a few years ago, it would’ve been acceptable to remain agnostic about the effects of smartphones and social media on young people’s mental health. But now the evidence is in – and to quote Burn-Murdoch – it makes an “overwhelming” case that they’re having a “catastrophic” effect.

I don’t agree. Having read all the relevant studies in this area, I think a lot of the evidence is shaky and unclear – and it’s okay to still be undecided. This article explains why.

Three things you shouldn’t do

Before we get into the studies, I want to point out three misleading tactics people use in the debate over the effects of smartphones and social media. I’m not arguing that these are done deliberately to mislead, but they’re misleading all the same.

1: Drawing vertical lines on graphs. You can’t read about this debate for long without seeing a graph showing children’s or teens’ mental health over time, with a big vertical line (or sometimes a shaded area) showing “the introduction of iPhones” or “the smartphone era”. We’re supposed to look at the graphs and see that obviously something went wrong at the time smartphones were introduced – after all, the rate of mental health problems went up just after it.

There’s no careful statistical inference being made here. Everyone who understands basic statistics knows that, for “correlation-isn’t-causation” reasons, you’re not allowed to imply that one thing causes another just because they happen at the same time. As it is, these graphs are little more than innuendo.

Worse, proponents can’t even agree where one should draw that line. In a famous 2017 article, the psychologist Jean Twenge drew the line at 2007: “iPhone released”. But Burn-Murdoch puts it at 2010 – apparently the start of the “smartphone era”. This shows how arbitrary the reasoning is – and that’s not even mentioning all the other major events that happened around the same time that could easily have affected people’s wellbeing (global financial crisis, anyone?).

2: Vote-counting. This is where people who are making a scientific case tot up the studies that say one thing, do the same for the studies that say the opposite, and then declare that one side has won the debate simply because it has more studies.

But this doesn’t take into account the quality of the studies. In a world where twenty tiny, low-quality studies say X is true, and a single, large, well-designed study says not-X is true, I’d go with not-X any day. Science isn’t a democracy where one study gets one vote: sometimes a study is so bad it doesn’t get to have a say.

Moreover, if scientists (and scientific journals) are more likely to publish studies that show positive results, or results that agree with one particular side of a debate (and they are – this is so-called “publication bias”), it’s no surprise that the mere number of studies that say an effect exists will be higher.
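A quick simulation can make this concrete. The sketch below is purely illustrative, with made-up numbers rather than anything from the studies under discussion: it runs many small studies of an effect that doesn’t exist, and roughly 5 per cent of them come out “significant” by chance alone. If journals publish mainly those, the published record looks like consistent evidence for an effect that isn’t there.

```python
import random
import statistics

random.seed(42)

def small_null_study(n=30):
    """Two groups drawn from the SAME population: any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    # Call the result 'statistically significant' at roughly p < .05
    return abs(diff / se) > 1.96

studies = [small_null_study() for _ in range(1000)]
published = [s for s in studies if s]  # journals favour positive results
print(f"{len(published)} of {len(studies)} null studies look 'significant'")
```

Vote-counting over the published studies alone would then favour an effect, even though every single underlying study measured pure noise.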

And yet Jonathan Haidt, in particular, does vote-counting again and again. Despite acknowledging twice that “the ‘winning side’ is not determined by a simple count” and “you can’t just count up the studies”, he proceeds to do so on at least six separate occasions in his article.

This is another kind of innuendo: you’re not actually making a case – just implying that your side is stronger because of the number of studies that agree with you. It’s tempting, and psychologically powerful. But it’s not how science works.

3: Claiming causality from longitudinal studies. When researchers run a longitudinal observational study – where they give people questionnaires, say, over a period of several months or years – it isn’t the same as running an experiment. That is, you can’t look at the results and conclude that any one thing they measured caused anything else, because it’s still just a series of observations.

Haidt seems to disagree:

“If there is, on average, a change in happiness the week after people quit or reduce their social media time, then we can infer that the change in mood was caused by the change in behaviour the prior week.”

But that would only be true if those people had been made to quit or reduce social media time, and if you’re comparing their mood change to that from people who kept on using it.

If those people decided of their own accord to quit or reduce social media, you can’t make an inference about causes. Why did they reduce their social media time? Maybe it was because their mood changed – meaning you’d have it completely backwards if you assumed social media caused changes in their mood.
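To see how this backwards inference can happen, here’s a hypothetical simulation – not based on any real dataset – in which quitting has no effect on mood at all. People simply tend to quit after an unusually bad week, and their mood then returns to its usual level. A naive before-and-after comparison of the kind described above would still show quitters’ moods “improving”:

```python
import random
import statistics

random.seed(1)

# Hypothetical model: mood fluctuates randomly week to week, and a bad
# week makes quitting more likely. Quitting itself changes nothing.
improvements = []
for _ in range(10_000):
    mood_before = random.gauss(0, 1)            # this week's mood
    quit = mood_before < -1 and random.random() < 0.5
    mood_after = random.gauss(0, 1)             # next week: unaffected by quitting
    if quit:
        improvements.append(mood_after - mood_before)

print(f"Average mood change after quitting: {statistics.mean(improvements):+.2f}")
```

The quitters’ average mood rises substantially after they quit – not because quitting helped, but because they quit at a low point and regressed to their usual level.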

Correlation and causation

That mention of causality leads us on to the studies themselves. In what follows, I’m not going to discuss the correlational studies: the ones that say that social media or smartphone use is correlated positively or negatively with mental health problems.

That’s a debate that’s been had over and over again, with scientists disagreeing over the size of the correlation, the direction of the correlation (is social media positive or negative for people’s well-being? There are studies in both directions), whether the correlation has got stronger over time, and what the correlation even means (if you find a negative correlation, could it be that social media causes mental health problems? Or could it be that people with mental health problems are more likely to use social media to a problematic degree? Could it be that some other, third variable causes people to use social media more often and be at higher risk of mental health problems?).

There’s plenty to read on these questions. But although they’re interesting and necessary, for the above reasons correlational studies can only take you so far. Here, I’m interested in the causal studies: the experiments and quasi-experiments that allow us, at the end, to declare that X must have caused Y.

Or at least, if they had strong, clear results, they’d allow us to say that. As we’ll see, the data from the available studies is rarely so clear-cut.

Two studies about Facebook

The first thing I’ll do is look at two recent studies that are considered to provide the very strongest evidence in this area. They’re both about Facebook.

Let’s first consider what I’ll call the “Facebook arrival” study. The researchers took advantage of the fact that, when it first appeared, Facebook (or back then, “The Facebook”) was limited to university students, and had a staggered rollout between 2004 and 2006. That is, Facebook arrived at some universities before others – and that allows for a serendipitous natural experiment.

The researchers matched up data from a mental health questionnaire for undergrad students, which was done over a period of time that overlapped with the Facebook rollout, with specifically when Facebook arrived at each student’s university. And they found that, just after the Facebook introduction, the average student’s self-reported well-being got worse.

It’s an impressive study: but there’s a big issue with the results that I haven’t seen mentioned anywhere else.

When you use standard statistical methods for separating signals from noise – working out what’s “statistically significant” – you always run the risk of finding false-positive results. It’s just the way these methods work: you want to be alerted to real results if they exist, but there might also be some false alarms.

The more statistical tests you run, the more likely you are to find those spurious, misleading results – this is called the problem of “multiple comparisons”, and it’ll come up quite a few times as we go on. Researchers who run a lot of tests can either ignore this problem and risk being misled by mere statistical fluctuations, or be more conservative, essentially raising their standards for what they accept as a “real” result so that they get fewer spurious ones.
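As a purely illustrative back-of-the-envelope calculation – not taken from any of the studies discussed here – this is how quickly the false-alarm risk grows with the number of independent tests, and what a Bonferroni-style correction does about it:

```python
# Each test has a 5% false-alarm rate under the usual p < .05 convention.
alpha = 0.05
for m in (1, 5, 20):
    # Chance that AT LEAST ONE of m independent null tests comes up 'significant'
    p_any_false_positive = 1 - (1 - alpha) ** m
    print(f"{m:2d} independent tests -> {p_any_false_positive:.0%} chance of a fluke")

# A Bonferroni-style correction raises the bar: each individual test must
# instead clear the stricter threshold alpha / m.
print(f"Corrected per-test threshold for 20 tests: {alpha / 20:.4f}")
```

With 20 tests, the chance of at least one fluke “result” is roughly 64 per cent – which is why conservative corrections of this kind can wipe out results that looked significant one at a time.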

And here’s the thing: when the authors of the “Facebook arrival” study raised their standards in this way, running a correction for multiple comparisons, all the results they found for well-being were no longer statistically significant. That is, a somewhat more conservative way of looking at the data indicated that every result they found was statistically indistinguishable from a scenario where Facebook had no effect on well-being whatsoever.

Now let’s turn to the second study, which was a randomised controlled trial where 1,637 adults were randomly assigned to shut down their Facebook account for four weeks, or go on using it as normal. Let’s call it the “deactivating Facebook” study. This “famous” study has been described as “the most impressive by far” in this area, and was the only study cited in the Financial Times as an example of the “growing body of research showing that reducing time on social media improves mental health”.

The bottom-line result was that leaving Facebook for a month led to higher well-being, as measured on a questionnaire at the end of the month. But again, looking in a bit more detail raises some important questions.

First, the deactivation happened in the weeks leading up to the 2018 US midterm elections. This was quite deliberate, because the researchers also wanted to look at how Facebook affected people’s political polarisation. But it does mean that the results they found might not apply to deactivating Facebook at other, less fractious times – maybe it’s particularly good to be away from Facebook during an election, when you can avoid hearing other people’s daft political opinions.

Second, just like the other Facebook study, the researchers tested a lot of hypotheses – and again, they used a correction to reduce false positives. This time, the results weren’t wiped out entirely – but almost. Of the four questionnaire items that showed statistically significant results before the correction, only one – “how lonely are you?” – remained significant after correction.

It’s debatable whether even this result would survive if the researchers corrected for all the other statistical tests they ran. Not only that, but they also ran a second model, controlling for the overall amount of time people used Facebook, and this found even fewer results than the first one.

Third, as well as the well-being questionnaire at the end of the study, the participants got daily text messages asking them how happy they were, among other questions. Oddly, these showed absolutely no effect of being off Facebook – and not even the slightest hint of a trend in that direction.

It’s been suggested that, had people stayed off Facebook for even longer, we might’ve seen even larger, more convincing effects. That’s possible. But it’s also possible that the novelty of being away from social media would wear off after a while, and people would return to how happy they were to begin with. We’ll only know if researchers do studies with longer follow-up periods, to examine the longer-term effects of being off social media.

The broadband studies

You’ll have noticed that the studies discussed above involved undergraduate students (in the first case) and adults (in the second). There are also several other studies of adults that I don’t have space to cover here – you can read about them in Jonathan Haidt’s Google Doc, where he and his colleagues collect together all the available evidence.

But Haidt’s big claim is specifically about the effects on “teen girls”. The other articles I referenced at the start talk about “teen depression”, “teenage unhappiness”, and the catastrophic effects on “children”.

So, for the remainder of this article we’ll discuss causal studies that have specifically involved children and teens. As Haidt says, “it’s hard to get parental consent to do experiments on minors” – so most of the studies here are of the “natural experiment” variety.

That natural experiment often comes in the form of broadband. It turns out that, when countries upgrade their internet speeds, they tend to do so at different times in different regions. And that allows researchers to ask whether people in regions that got faster internet earlier showed noticeable effects on their mental health.

The logic is the following: the internet speed increases in the area in which you live; it’s less frustrating to go online; kids start to spend more time on the internet; this damages their mental health.

There are four broadband studies that included kids and young teens. They all claim that the rollout of the internet harmed people’s mental well-being. And all four, in my view, have big problems.

The first study is from the UK. This was a difficult paper to read: it includes a lot of eye-strainingly complex tables. The researchers looked at other outcomes as well as mental health; in one of those tables, we find that, according to the statistical model, faster broadband speed strongly improves children’s performance in their age 10-11 SATs – but strongly reduces their performance in their age 16 GCSE exams. This, along with some other strange results in the tables, seems rather implausible to me – and makes me doubt the results relating to mental health.

Next there’s a study from Italy. In this case the data came from hospital reports, so we’re talking about serious psychiatric disorders. Their dataset included people born between 1974 and 1995, and the hospitalisations took place from 2001 to 2013.

When they ran the analysis over the entire sample, there weren’t any effects. But then, when they split the data into a younger and an older group (people born in 1974-84 and 1985-95, respectively), and then split them again by sex, they started to see some effects. Since I can’t find any plan the researchers posted online before the study started, it’s not clear whether they always intended to make this split (unplanned analyses can lead to yet more false positives) – but let’s assume they did.

The first thing to note is that there’s no effect on depression or anxiety for females. That’s despite the fact they claim these effects were “for both males and females” in the study’s abstract summary. If you just read that summary, you get the impression that this result fits with Haidt’s “teen girls” thesis, when in fact it doesn’t.

You might have noticed another big problem, too: the period of the hospitalisations (2001-13) hardly covers any of the “smartphone era” from 2010 onwards that people are worried about, and contains a lot of data from before modern social media was even a glint in Mark Zuckerberg’s eye.

Then, there’s a study from the fibre broadband rollout in Spain. This one was over a more appropriate period of time, when sites like Instagram were becoming popular (2007-19), and looks like it provides good evidence for Haidt’s thesis, because it’s only girls who seem to suffer the mental health effects of the internet rollout.

But the study suffers from our old friend “multiple comparisons”: when the authors ran a more conservative correction to the numbers, none of the results for mental health were statistically significant at the normally-accepted level. Again, the results were very fragile, only appearing when you look at the data in a certain way.

Finally, there’s a study from Canada, which purports to show an effect on severe (but, interestingly, not moderate) mental health problems in girls. The study contains lots of graphs that apparently show this – but it provides so few numerical details that I found it impossible to properly evaluate it.

I don’t see how anyone can look at these studies and think they provide a coherent, convincing line of evidence for the damaging effects of social media (and remember, they’re inherently quite removed from social media itself, because the main thing they’re measuring is broadband speed).

The broadband studies aren’t the only causal research done in children and teens. For example, there’s an experiment where teens were shown Instagram selfies and asked about their immediate feelings about body image. But it’s not clear how much this translates to the real-world use of social media. There’s even at least one randomised trial – albeit a small one – where teens’ well-being was reduced when they took a short break from Facebook.

In conclusion

The most hackneyed phrase in science is: “more research is needed”. It always feels like a cop-out to say it when you’re discussing a controversial question – can’t you just make up your mind? But in this case, we’re being pushed, almost browbeaten, into making our minds up when the evidence is actually rather weak overall.

Don’t get me wrong – there are certainly lots of suggestive studies. And the overall phenomenon of increasing mental health problems, especially among girls, does call out for an explanation. But digging into the details of the studies that are often used to stir up the social media panic reveals that the research is far more ambiguous than we’ve been led to believe.

Is social media a major cause of the teen mental health crisis? I think it’s fine, for the time being, to hedge your bets.
