
Technology may be making us unhealthy and miserable

Senior Research Associate Dr Sarah Steele, College Postdoctoral Associate Dr Christopher Markou (2014) and PhD student Tyler Shores organised the recent Intellectual Forum event 'Is technology making you miserable?'. Here, they consider what governments could do to help.

Social media and screens are omnipresent. Many are concerned about the amount of time we, and our children, spend on devices. Soon to be a father, Prince Harry has said that "social media is more addictive than drugs and alcohol, yet it's more dangerous because it's normalised and there are no restrictions to it".

But worries are not just limited to personal use. Many schools and workplaces increasingly deliver content digitally, and even use game-playing elements like point scoring and competition with others in non-game contexts to drive engagement.

This "always on" lifestyle means many of us can't just "switch off". There are now claims that many of us are at risk of "burnout" as we find ourselves chronically stressed by hyper-connectivity. But is there evidence that so-called "screen time" is, in fact, bad for us? Or worse: is it making us miserable?

To answer this, the UK government has reviewed what we know about the impact of technology use on children, drawing on a nascent but robust body of research exploring these questions. The Australian government has done the same, with a focus on screen time's physical effects. Governments around the world are drawing together evidence.

We know, for example, that there is a connection between screen use and poorer attention span and academic performance, delayed development in children, greater stress and depressive symptoms, and a range of other physical and mental health problems.

When to act

While there are clearly correlations between increased screen use and psychosocial and physical health issues, correlation doesn't mean causation. But without definitive scientific evidence, can we afford to ignore them? Should we refrain from making recommendations or regulations until there is direct proof, as the UK's Royal College of Paediatrics and Child Health recently suggested?

From a public health perspective, the answer is a firm no. While evidence-based public health policy remains the gold standard, we have enough information to know that action is required. Definitive scientific evidence of a causal link between technology use, or "screen time", and negative health outcomes is unnecessary to justify appropriate action, because what is ultimately at stake is public safety, health, and well-being. And of course, we might never find such evidence.

The "precautionary principle" gives us a basis to act. It holds that, even without scientific consensus, governments have a duty to protect the public from harm: policy interventions are justifiable wherever there is even a plausible risk of harm. With correlations mounting, harm is more than plausible. Some governments are already acting. But what should be done? A few obvious actions stand out.

Moving forward

YouTube, to start with, has been described as "the great radicalizer" because of how its content recommendation engine leads people towards increasingly extreme material. Its algorithms have "learned" that people are drawn towards ever more extreme versions of whatever they started out searching for; we keep watching, hoping the next video will deliver what we are looking for. This problem could be addressed by regulating content recommendation systems and disabling YouTube's "autoplay" feature by default.

We also know that tech companies use elaborate strategies to keep eyes on screens. By exploiting the brain's reward system, they have mastered how to keep people scrolling, clicking, and liking, and potentially make them addicted. Online marketing and product design effectively weaponise neuroscience, using that same reward system to drive continuous engagement.

Similar techniques are used in the workplace, where competition and gamified approaches like targets or step counters drive increased performance levels, as in Amazon's warehouses. This is something employment and human rights law will need to address, and that government should investigate, especially as children are thought to be particularly vulnerable.

A broader problem, as tech writer Shoshana Zuboff has masterfully documented, is the way big data is collected and used against us. We know that Google, Facebook, Amazon, and other tech giants constantly collect our data, then use it to target individuals and drive particular behaviours and responses.

With "surveillance capitalism" now the business model of the internet, there are no easy solutions. What we urgently need is courage from government to rein in big tech's excesses and most insidious harms. Of course, tech companies will resist: lobbying and advocacy will be their weapons of choice for influencing laws and sustaining profitability. But it is critical that politicians and professional organisations prioritise public health over industry money.

An issue for governments

Thankfully, several governments have expressed a desire to "make the online world a safer place" and to take concrete steps towards regulating which techniques big tech can use on the public. A major step will be restricting behavioural advertising, as Germany has begun to do.

Of course, given that advertising accounted for the majority of Google's revenue in 2018, we shouldn't expect it to respond with anything but hostility when its core business model is threatened. It is encouraging that the UK government is taking the lead, with a white paper calling for a new regulator and for social media bosses to be legally liable for the harms their platforms cause. This would be a bold step in the right direction.

We can also restrict what personal data can be used to sell products to people, and how advertisements are presented, allowing users greater control over what they see. A return to contextual advertising, where users only see ads related to what they are searching or browsing for, would be a more modest but nonetheless important step.

We should expect these tech firms to use the playbook established by other industries, such as tobacco, that have fought regulation. So transparency mechanisms and robust reporting requirements must be put in place. We must also discuss what further options we have for dealing with these companies.

It is key that we take a precautionary approach to industry-funded research by these tech giants, just as we have done with tobacco-industry-funded research and bodies. While technology is part of our lives, how we understand it, and how we regulate it, must be in the interests of public health at large.

The opinions expressed are those of the authors.