By Ruth Vitale

Say what you will about YouTube, but one thing is certain – they’ve got chutzpah. It’s not just on another level. Heck, it’s in another dimension.

Consider the extent to which the Google-owned video behemoth has gone to spread fake news and harmful misinformation. And how it has enabled and rewarded conspiracy-spewing bigots even after their words inspired real-world violence from their followers. And how it has helped undermine democracy by happily hosting content from hostile foreign actors intent on meddling in our elections. And how it has exposed our own children to disturbing, addictive, mind-altering trash while monetizing them via Orwellian-level data collection and surveillance practices.

And consider how severely YouTube has damaged the ability of creators to make a living, even while putatively “increasing access” to their works – so much so that U.S. Senator Elizabeth Warren was recently moved to say, “We must help America’s content creators — from local newspapers and national magazines to comedians and musicians — keep more of the value their content generates, rather than seeing it scooped up by companies like Google and Facebook.”

You’ve got to wonder, in the face of all this, at YouTube’s astonishing ability to keep patting itself on the back. We just took a look at their recent self-congratulatory blog post, “Continuing our work to improve recommendations on YouTube.” 

It’s a response to the much-deserved criticism that YouTube’s “Up Next” recommendation algorithm has a bad habit of sending viewers down rabbit holes of extremist, hateful, and/or downright false content that can lead to extremely dangerous and disturbing places.

Not surprisingly, YouTube’s post fails to acknowledge those dark aspects of the harmful content on its platform. It doesn’t want to remind us that, for example, videos painting the Sandy Hook massacre as a hoax led to life-ruining real-world harassment of victims’ families. Or that its algorithms enabled a secret “wormhole” for pedophiles, or that it hosts videos with millions of views that are helping fuel pseudo-scientific movements like anti-vaxxers. 

No, no, no, you’ve got it all wrong. All YouTube wants to do is “to make sure [they’re] suggesting videos that people actually want to watch.” And how are they doing that? Not by cleaning up their platform. Instead, YouTube is “reducing recommendations of borderline content and content that could misinform users in harmful ways.” Content, the post continues, such as “videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”

Uh-huh. So, it will reduce its recommendations of gross and dangerous stuff. But anyone can still find that stuff, by subscribing to the bizarre YouTube channels that carry it or simply by searching for it. YouTube says that keeping most of its "borderline" content, while steering its users to that junk a little less, will strike "a balance between maintaining a platform for free speech and living up to our responsibility to users."

We’ve seen this act before – and YouTube’s sense of “balance” is about as good as a toddler on an ice rink. We’ve watched YouTube enable and empower widespread copyright infringement for years, so we know that the only “balance” YouTube strikes is between what’s good for its bottom line and what isn’t. 

This is why it keeps feeding its users copyrighted materials posted by people who don't own them – piracy, in short – and fights off every effort to hold it responsible for cleaning up its platform. Just one example: as I write this, shills for YouTube are busily painting the mandatory infringement filters proposed by the European Copyright Directive as a form of censorship.

YouTube doesn’t want to draw a hard line in the content moderation sand and stick to it. That’s why its blog post alludes obliquely to “borderline content” and to “work[ing] with human evaluators and experts” to help the recommendation machine work better. We’ll be curious to see where they draw the “borderline” when it comes to pedophilia rings, or Holocaust denials, or election tampering. And, because we’re a pro-copyright organization fighting on behalf of the millions of Americans who depend on copyright to make a living, we will be curious to see if YouTube is any less aggressive in protecting against “borderline” content than it is in protecting against infringing content. 

The truth is that YouTube is playing an endless game of bending language to make it seem like they are enacting real change on their broken system – but the reality is, their core principles are the same as they've always been.

YouTube doesn’t believe in censorship, unless it benefits them to do so. 

YouTube is a proud proponent of free speech, unless it doesn’t benefit them to be so.

YouTube lacks a moral center, which benefits their bottom line – at least until Congress does something serious to hold them accountable.

Left to its own devices, YouTube will only continue to make perfunctory tweaks to a business model that profits from harmful content. It has no real obligation to implement systemic change, and it fights any effort to impose one. That needs to change, and soon.