As disinformation and hate thrive online, YouTube quietly changed how it moderates content

YouTube, the world's largest video platform, appears to have changed its moderation policies to allow more content that violates its own rules to remain online.

The change happened quietly in December, according to The New York Times, which reviewed training documents for moderators. Under the new guidance, a video can stay online as long as the offending material makes up no more than 50 per cent of its duration, double the threshold under the previous guidelines.

YouTube, which sees 20 million videos uploaded a day, says it updates its guidance regularly and that it has a "long-standing practice of applying exceptions" when it suits the public interest or when something is presented in an educational, documentary, scientific or artistic context.

"These exceptions apply to a small fraction of the videos on YouTube, but are vital for ensuring important content remains available," YouTube spokesperson Nicole Bell said in a statement to CBC News this week.

But at a time when social media platforms are awash with misinformation and conspiracy theories, there are concerns that YouTube is only opening the door for more people to spread problematic or harmful content — and to make a profit doing so.

YouTube isn't alone. Meta, which owns Facebook and Instagram, dialled back its content moderation earlier this year, and Elon Musk sacked Twitter's moderators when he purchased the platform in 2022 and rebranded it as X. 

"We're seeing a race to the bottom now," Imran Ahmed, CEO for the U.S.-based Center for Countering Digital Hate, told CBC News. "What we're going to see is a growth in the economy around hate and disinformation."

WATCH | Experts warn Meta's moderation move will likely increase misinformation: Facebook and Instagram’s parent company Meta is getting rid of fact checkers on the platforms and will instead rely on users to comment on accuracy, but experts warn the move will likely increase the spread of misinformation.

Public interest vs public harm

YouTube's goal is "to protect free expression," Bell said in her statement, explaining that the changes to its community guidelines "reflect the new types of content" on the platform.

For example, she said, a long-form podcast containing one short clip of violence may no longer need to be removed. 

The Times reported Monday that examples presented to YouTube staff included a video in which someone used a derogatory term for transgender people during a discussion about hearings for U.S. President Donald Trump's cabinet appointees, and another that shared false information about COVID-19 vaccines but that did not outright tell people not to get vaccinated.

A platform like YouTube does have to make some "genuinely very difficult decisions" when moderating content, says Matt Hatfield, executive director of the Canadian digital rights group OpenMedia.

LISTEN |  How Canada has come to play an outsized role in far-right misinformation:

Day 6 | 9:42 | What's behind Canada's outsized influence in the world of far-right misinformation

He believes platforms do take the issue seriously, but he says there's a balance between removing harmful or illegal content, such as child abuse material or clear incitements to violence, and allowing content to stay online, even if it's offensive to many or contains some false information. 

The problem, he says, is that social media platforms also "create environments that encourage some bad behaviour" among creators, who like to walk the line of what's acceptable.

"The core model of these platforms is to keep you clicking, keep you watching, get you to try a video from someone you've never experienced before and then stick with that person."

And that's what concerns Ahmed.

He says these companies put profits over online safety and that they don't face consequences because there are no regulations forcing them to limit what can be posted on their platforms.

He believes YouTube's relaxed policies will only encourage more people to exploit them.

How well is YouTube moderating?

In a recent transparency report, YouTube said it had removed nearly 2.9 million channels containing more than 47 million videos for community guideline violations in the first quarter of the year, a period that came after the reported policy change.

The overwhelming majority of those, 81.8 per cent, were considered spam, but other reasons included violence, hateful or abusive material and child safety.

LISTEN | Why you're being tormented by ads, algorithms and AI slop:

Front Burner | 38:12 | The internet sucks now, and it happened on purpose

Hatfield says there is a public interest in having harmful content like that removed, but that doesn't mean all controversial or offensive content must go.

However, he says YouTube does make mistakes in content moderation, explaining that it judges individual videos in a sort of "vacuum" without considering how each piece of content fits into a broader context.

"Some content can't really be fairly interpreted in that way."

Regulations not a perfect solution

Ahmed says companies should be held accountable for the content on their platforms through government regulation.

He pointed to Canada's controversial but now-scuttled Online Harms Act, also known as Bill C-63, as an example. It proposed heavier sentences, new regulatory bodies and changes to a number of laws to tackle online abuse. The bill died when former prime minister Justin Trudeau announced his resignation and prorogued Parliament back in January. Ahmed says he hopes the new government under Prime Minister Mark Carney will enact similar legislation. 

Hatfield says he liked parts of that act, but his group ultimately opposed it after it tacked on some other changes to the Criminal Code and Human Rights Act that he says were unrelated to the platforms.

He says groups like OpenMedia would have liked to see a strategy addressing business models that encourage users to post and profit off of "lawful but awful" content. 

"We're not going to have a hate-free internet," he said. "We can have an internet that makes it less profitable to spread certain types of hate and misinformation." 

WATCH | How people can become more discerning news consumers: Canada Research Chair Timothy Caulfield's new book, The Certainty Illusion: What You Don’t Know and Why It Matters, explores how people can become more discerning and critical news consumers. He says the ability to understand where information comes from and being able to learn and change opinions has never been more important.