Trying to understand YouTube’s recommendation system

Nicolas Suzor
Feb 27, 2019 · 5 min read

Edit (March 2020): Early in 2019, it looked like YouTube made major changes to its ‘up next’ recommendations. We planned to continue monitoring YouTube’s recommendations, but YouTube’s API has been unreliable since the Christchurch massacre. The API is still broken now, making it impossible for us to generate the same large-scale random sample we were previously using. As many people have noted, the lack of good, detailed data makes it really hard to understand what, exactly, is going on when social media platforms moderate, amplify, and hide the content that users post online.

We can see a sharp decline in recommendations for ‘alternative influence’ channels.

We used the 80+ YouTube channels listed by Rebecca Lewis in her report on Alternative Influence Networks on YouTube as a starting point. These are influential ‘alt-right’ channels that push reactionary views — “a general opposition to feminism, social justice, or left-wing politics”, “from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism.”

Using a random sample of 3.6 million videos over the last month, we counted the number of times that a video from one of these channels was recommended by YouTube.
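For readers interested in the mechanics, the counting step itself is simple once the recommendations have been collected. The sketch below is illustrative only, not our collection pipeline: it assumes the ‘up next’ data has already been flattened into records pairing each sampled video with the channel of each video recommended alongside it (the field names and channel IDs here are hypothetical).

```python
# Minimal sketch of the counting step (illustrative only).
# Assumes recommendations were already collected into records like:
#   {"video_id": ..., "recommended_channel_id": ...}
# and that `flagged_channels` holds the channel IDs from
# Rebecca Lewis's Alternative Influence report.

from typing import Iterable


def recommendation_rate(records: Iterable[dict], flagged_channels: set) -> float:
    """Fraction of sampled videos with at least one recommendation
    from a flagged channel."""
    all_videos = set()
    videos_with_hit = set()
    for rec in records:
        all_videos.add(rec["video_id"])
        if rec["recommended_channel_id"] in flagged_channels:
            videos_with_hit.add(rec["video_id"])
    return len(videos_with_hit) / len(all_videos) if all_videos else 0.0


# Hypothetical usage:
sample_records = [
    {"video_id": "v1", "recommended_channel_id": "UC_flagged_example"},
    {"video_id": "v1", "recommended_channel_id": "UC_other_example"},
    {"video_id": "v2", "recommended_channel_id": "UC_other_example"},
]
flagged_channels = {"UC_flagged_example"}
print(f"{recommendation_rate(sample_records, flagged_channels):.1%}")  # 50.0%
```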

Our preliminary results are stunning. For the first two weeks of February, YouTube recommended a video from at least one of these major alt-right channels alongside more than one in every thirteen randomly selected videos (7.8%). Since February 15, that figure has dropped to less than one in every two hundred and fifty (0.4%).

A bad month for YouTube

This change comes after a very controversial month for YouTube. Over the last few weeks, it has been criticized for encouraging anti-vaccination disinformation, allowing links to child exploitation in its comments section, and hosting cartoons that include spliced-in suicide tips in its YouTube Kids program.

For years, commentators have worried that YouTube’s recommendation systems create a fertile breeding ground for alt-right conspiracy theories. On 25 January 2019, YouTube announced that it would take steps to mitigate the spread of disinformation through its recommendation algorithm. In an official blog post, YouTube explained its efforts to reduce the visibility of disinformation and promised to try to stop recommending conspiracy videos, including:

“recommendations of borderline content and content that could misinform users in harmful ways — such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

Social media companies have been heavily scrutinized for their role in the hosting and spread of divisive, harmful, and misleading content online.

When you watch a YouTube video, a list of recommended videos to watch next is displayed on the right-hand side of the screen. By default, YouTube automatically plays the first video on this list after the current video ends.

These recommendations are designed to keep users on the platform — YouTube’s business model depends on it. As Guillaume Chaslot, an advisor at the Center for Humane Technology and former YouTube employee, points out, YouTube’s complex artificial intelligence is used to pursue a simple goal: “maximize watch time.”

Given that the majority of content users watch is driven by the platform’s recommendations, these suggestions shape users’ experiences on the service. Certainly, this feature may optimise the user experience. But when YouTube recommends misleading, harmful, or extremist content, it has the potential to harm users.

“Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it.” — Ysabel Gerrard and Tarleton Gillespie

YouTube under pressure from users and advertisers

Advertisers are becoming more sensitive to the content of the videos and recommendations their brands are associated with. AT&T, Hasbro, Nestlé, and Epic Games recently pulled all of their advertising from YouTube in the wake of revelations that it was allowing links to child sexual abuse in its comments section. The Sydney Opera House was recently surprised to find that its advertisements were running against anti-feminist videos that criticized working mothers.

A search for ‘feminism’ returns a video of Emma Watson, with advertising from Oral B, and an ‘Up next’ recommendation for an anti-feminist video from an alt-right channel.

These stories are becoming more common, and YouTube is under a lot of pressure from brands to improve its recommendation and advertising systems.

The problem of personalization

YouTube may have reduced the number of times it recommends videos from certain channels, but it seems like it still has a long way to go. The problem is not just that YouTube recommends alt-right content, but that it potentially helps radicalise people who stumble upon these videos.

Take another recent controversy: anti-vaccination videos. Australia was declared measles-free in 2014. However, over the past two months, 14 people have been diagnosed with measles in Australia.

A recent report published by The Royal Society for Public Health (RSPH) raises important concerns that anti-vaccination movements have grown on social media platforms. The report found that half of all parents with children under five had been exposed to negative messages about vaccinations ‘often or sometimes’ on social media.

Shirley Cramer, Chief Executive of the RSPH, argues that:

With the dawn of social media, information — and misinformation — about vaccines can spread further and faster than ever before and one of the findings of this report is that this may, unfortunately, be advantageous for anti-vaccination groups.

When we looked to see whether YouTube was recommending anti-vaccination videos, we noticed very few results in our random sample. We identified 30 of the largest channels dedicated to spreading anti-vaccination messages and counted the number of recommendations again. We found that these channels were not being promoted by YouTube at a high rate:

In absolute terms, YouTube does not often recommend videos from anti-vax channels to anonymous users.

But the problem is much more complicated than it first appears. For example, when we looked specifically at videos with keywords like ‘flu shot’, we found that anti-vaccination videos frequently turned up in the list of recommended videos. A random sample of 20 videos from this list returned titles including: ‘Flu Shot Dangers’; ‘You Don’t Want to Know What’s in Your Flu Shot!’; and ‘Flu Shot Warning! Watch This Before Getting One!’
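The keyword check works in much the same way as the channel counting above. The sketch below is again illustrative rather than our exact analysis code; it assumes each collected record also carries the titles of the source video and the recommended video (the field names and example titles are hypothetical).

```python
# Illustrative sketch of the keyword check (not the exact analysis code).
# Assumes each record carries the source video's title and the title of
# one video recommended alongside it.

import random


def sample_recommendations_for_keyword(records, keyword, k=20, seed=0):
    """Randomly sample up to k recommended-video titles shown next to
    videos whose own title contains the keyword."""
    matching = [
        rec["recommended_title"]
        for rec in records
        if keyword.lower() in rec["video_title"].lower()
    ]
    random.seed(seed)
    return random.sample(matching, min(k, len(matching)))


# Hypothetical usage:
records = [
    {"video_title": "Should I get a flu shot?", "recommended_title": "Flu Shot Dangers"},
    {"video_title": "Flu shot facts", "recommended_title": "What's in a vaccine?"},
]
for title in sample_recommendations_for_keyword(records, "flu shot", k=2):
    print(title)
```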

While YouTube may not be driving traffic to anti-vaccination channels at a high rate overall, its personalized recommendation systems are clearly still leading people to content of very dubious factual accuracy.

“The problem is, YouTube’s recommendation algorithm has been trained over the years to give users more of what it thinks they want. So if a user happens to watch a lot of far-right conspiracy theories, the algorithm is likely to lead them down a dark path to even more of them.” — Issie Lapowsky.

