Big Tech companies have a knack for wielding the “free speech” argument as a shield to avoid tackling the systemic criminal behavior on their platforms. This tendency bleeds into the realms of copyright disputes and content protection, where Big Tech’s deep-pocketed, anti-copyright advocacy network often vilifies efforts to fight piracy online as dire threats to our cherished internet liberties.
This tactic, already deeply misleading, reaches new levels of bad faith when these groups turn to “fair use,” which they repeatedly weaponize to make baseless attacks on automated content filtering solutions such as YouTube’s Content ID and Content Verification Program.
At a Senate DMCA hearing in June of last year, for instance, Public Knowledge’s Meredith Rose argued that automated filtering systems chill free speech “all the time” by giving users too much power to take down works that qualify as fair use, such as commentary or parody. And in December, the EFF – a notorious Big Tech bulldog – echoed that claim even more forcefully, arguing that one of YouTube’s automated tools “discourages fair use and dictates what we see online.”
Perhaps the cruelest irony facing creatives who have had their work pirated on YouTube and are left with no recourse is that YouTube has the tools to find and fight infringement. But YouTube just won’t give them to most individuals. And the purported concerns over “fair use” are a big reason why.
In fact, YouTube has an entire suite of content protection tools, including Content ID and the Content Verification Program (CVP), which automatically detect unauthorized works on the platform and give rightsholders options to remove or monetize them. CVP is less sophisticated than Content ID but provides users the ability to quickly root out unlicensed uploads of their works and file takedown notices in large batches.
Then there is Copyright Match, which uses the same matching technology as Content ID but is only useful for finding full uploads of the user’s original videos – and YouTube will only allow use of Copyright Match if the creatives have previously uploaded a full version of their copyrighted content to the platform themselves.
Finally, there is YouTube’s “default” copyright protection “tool,” a cumbersome Copyright Takedown Webform that must be filled out anew for each and every new alleged copyright violation. When the same pirated work pops up elsewhere on YouTube, even after an earlier version was taken down, the form must be filled out all over again. It’s like Groundhog Day.
Alas, most creatives are relegated to using the webform since YouTube is notoriously stingy about who receives access to its higher-level offerings. In the case of Content ID – by far the most powerful tool of them all – this withholding makes sense, as its complicated dashboard is designed for large-scale copyright owners (such as movie studios and music labels) who may have to manage thousands of titles across hundreds of international territories.
But the very effective Content Verification Program seems tailor-made for creatives with fewer copyrights and less complicated management scenarios. CVP has been proven to be a boon to creatives who own their copyrighted work outright and simply want an automated tool for finding – and easily removing – their content from the platform. Expanding access to CVP could be enormously beneficial to thousands of creative individuals and small businesses with smaller but frequently pirated catalogues.
Unfortunately, very few creatives are granted access to CVP, and YouTube provides no set guidelines explaining who gets it and who doesn’t – other than a hazy suggestion that “If you often need to remove content and have previously submitted many valid takedown requests, you may be eligible for our Content Verification Program.”
But even when creatives meet both these vague criteria, they are routinely denied not only CVP but even the significantly less effective Copyright Match. That leaves them the (non-)choice of hunting down and removing every last unauthorized upload on their own, one Copyright Takedown Webform submission at a time, and doing it over and over and over.
The Myth of Fair Use Abuse
Why does YouTube not offer the appropriate existing content protection tools to more creatives? Shouldn’t every rightsholder have the chance to quickly and easily protect their own work from being exploited by others on a platform worth many billions of dollars?
EFF would like you to think this is the reason: Giving copyright owners free access to reasonable content protection tools would deprive users of their free speech rights because copyright owners would file copyright claims en masse on content that actually qualifies as fair use.
Rebecca Tushnet, an outspoken copyright critic from Harvard, provided a particularly succinct summary of this deceptive viewpoint in her written testimony for a February 2020 DMCA Senate hearing: “Automated systems [like Content ID] don’t respect fair use and other limits on copyright,” she wrote, “harming the creators copyright is supposed to serve.”
This is Orwellian reasoning. The Tushnet contingent believes that the most effective tools we have for stopping the theft of creative works on the world’s biggest video platform are actually harming creatives. And their reasoning, in part, involves a purported plague of fair use abuse.
But there is no plague. And you don’t have to take our word for it, because Google (YouTube’s parent company) agrees. At yet another DMCA hearing, in December 2020, Katherine Oyama, Google’s Global Head of IP Policy, testified that “less than 1% of all Content ID claims made in the first half of 2020 were disputed.” And within that tiny amount, according to Google’s own YouTube data, “60% resolve in favor of the uploader.” Do the math and that leaves 40% of this dispute total resolving in outcomes that do not favor uploaders, some of which, yes, surely involve the denial of those uploaders’ fair use claims.
To summarize: Only one percent of all Content ID claims are disputed to begin with, and within that one percent, only 40% of the underlying claims, fair use or otherwise, are ruled legitimate – that is, they resolve against the disputer.
To simplify: That’s four-tenths of one percent of all infringement claims that could potentially be wrongful fair use takedowns.
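The back-of-the-envelope arithmetic behind that figure, using the percentages from Google’s testimony, can be sketched as follows (the variable names are ours, purely for illustration):

```python
# Illustrative arithmetic based on Google's December 2020 testimony:
# less than 1% of Content ID claims are disputed, and 60% of those
# disputes resolve in the uploader's favor.
disputed_share = 0.01          # upper bound: 1% of all claims disputed
resolved_for_uploader = 0.60   # 60% of disputes favor the uploader

# The remaining 40% of disputes resolve against the uploader -- the
# only pool that could even potentially contain wrongful fair use
# takedowns. Express it as a share of ALL Content ID claims.
potentially_wrongful = disputed_share * (1 - resolved_for_uploader)

print(f"{potentially_wrongful:.1%}")  # prints 0.4%
```

Even treating every one of those unfavorable dispute outcomes as a wrongful fair use takedown, the ceiling is four-tenths of one percent of all claims.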
Doesn’t sound like Content ID is suppressing our internet freedoms “all the time”… does it?
One Big Problem
Of course, even a broken clock is right twice a day. The anti-copyright cohort is correct that the use of automated copyright protection services sometimes leads to legitimate fair uses being wrongly taken down. But, as demonstrated above, the volume of these takedowns is microscopic in relation to the overwhelming amount of piracy that could be, and already is, stopped by tools like Content ID and the Content Verification Program.
Furthermore, in those rare instances where a user can legitimately assert fair use in response to a takedown, YouTube has an appeals system that works to reverse false or invalid claims and restore the affected content as quickly as possible. Yes, even in the tiny fraction of a percent of cases where a takedown claim is invalid, the perceived threat can be mitigated by YouTube’s built-in resolution system.
So it’s time to get to the truth: the risk that YouTube’s content filtering tools will inadvertently take down content backed by a legitimate fair use claim is minuscule. And the harms to creatives that are prevented by the use of filtering tools vastly outweigh the inconveniences imposed on a very small number of creatives by the use of these tools. In short, fair use abuse is no excuse to bar quality content filtering tools, and access to these tools should be in the hands of more creatives.