By Justin Sanders

The typical Facebook or YouTube comment thread is often a swamp of hate speech, bullying, misinformation, and outright lies. Sometimes it is obvious that whoever is writing these comments is clueless, has never even experienced the thing they’re commenting on, or may simply be malicious. Comments should be a useful source of information, but these platforms get loaded up with trash. Why shouldn’t, say, Google users have to prove they actually had a room in a particular hotel before they bash it? Why does Google’s YouTube accept comments on videos regardless of whether the commenter has actually watched them? And why does Facebook let its users call us disgusting names even when it’s obvious the users haven’t read our posts?

Until recently, the answers to these burning questions could be boiled down to: “Because that’s the internet. Uninformed hot takes are part and parcel of the experience!” But now, one important website is showing us that maybe, just maybe, there is another way.

Influential critic aggregator and movie website Rotten Tomatoes recently launched “verified ratings and reviews,” a tool that encourages visitors to prove they have purchased a ticket to a given film before they review it. Going forward, the Audience Score for all new movies will comprise only Verified Ratings given by users who have offered such proof. Unverified ratings will be dropped into a separate bucket called “All Ratings” that must be clicked on to access. So, no one is banned from “reviewing” a movie, but those who include proof of purchase will receive a “verified” checkmark badge; this will help readers to separate the wheat from the chaff.
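To picture how this two-bucket system works, here is a rough sketch in Python. To be clear, this is our illustration, not Rotten Tomatoes’ actual code, and details like the positive-rating threshold are assumptions on our part:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    user: str
    stars: float     # 0.5 to 5.0 stars
    verified: bool   # True only if the user submitted proof of ticket purchase

def audience_score(ratings: list[Rating]) -> int | None:
    """Percentage of VERIFIED ratings that are positive (3.5+ stars here, an
    illustrative threshold). Unverified ratings never touch this number."""
    verified = [r for r in ratings if r.verified]
    if not verified:
        return None  # no verified ratings yet
    positive = sum(1 for r in verified if r.stars >= 3.5)
    return round(100 * positive / len(verified))

def all_ratings_bucket(ratings: list[Rating]) -> list[Rating]:
    """The separate 'All Ratings' view: every rating, verified or not."""
    return list(ratings)

# A review-bombing wave of unverified one-star ratings...
ratings = [Rating(f"troll{i}", 0.5, verified=False) for i in range(100)]
# ...plus a handful of verified ticket-holders who liked the film.
ratings += [Rating(f"fan{i}", 4.0, verified=True) for i in range(10)]

print(audience_score(ratings))            # 100 -- the trolls don't move the score
print(len(all_ratings_bucket(ratings)))   # 110 -- but nothing is deleted, either
```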

Rotten Tomatoes announced this change earlier this year, after Captain Marvel became the target of users who objected to star Brie Larson’s public statement about the film being feminist, as well as remarks she made about seeking out underrepresented journalists in her efforts to promote it. In retaliation, a thuggish gang of disgruntled nobodies conspired to harm the movie by swarming not only Rotten Tomatoes, but also IMDb and YouTube, with negative reviews and comments. These coordinated attacks occurred before Rotten Tomatoes had unveiled its new proof-of-purchase system, but there was no doubt that few, if any, of these jokers had seen Captain Marvel before savaging it – the film was still a month away from its premiere.

Rotten Tomatoes’ thoughtful response to its trolling problem is clever because it doesn’t exclude anyone. Users who wish to comment on a movie without proving they have seen it still can. Users who, for some strange reason, want to read the thoughts of commenters who have not actually seen the movie they are opining about, still can. There is a place for the informed and the uninformed alike on Rotten Tomatoes; it’s just that the site has taken steps to ensure that the uninformed are no longer featured by default and don’t factor into a film’s Audience Score.

It’s a concept that seems so obvious, it’s difficult to imagine why something similar has not been adopted in all corners of the internet. At the very least, it shows that platforms like Facebook and YouTube could be doing much more to rein in the toxicity they host. The simple act of requiring users to actually watch or read whatever story, video, song, or link they wish to comment on would have an immediate impact. Those just looking to troll would lose much of their power, leaving behind (mostly) those who are actually interested in the content and want to say something thoughtful, positive or negative, about it.

Sure, verification of user comments would be slightly trickier to implement than requiring, say, a barcode from a ticket, but requiring users to authenticate their identity before receiving access to comment threads would be one way to go. Whether courtesy of bots or human imposters, all of the major social platforms are rife with fake accounts, many of which are created specifically to foment chaos online through commenting and sharing. 

Facebook claims to be fighting the problem, but the problem only seems to be getting worse. One approach could be to confirm that users have actually clicked through to a link on a post, if there is one, before they can comment on it. Or, if the platforms wanted to get a little more ambitious, they could develop a technology akin to the CAPTCHA tests that serve to distinguish between a human user and a bot. Where CAPTCHA requires users to do things like read and type distorted text, or select images from a group based on whether they contain a certain object (e.g., traffic lights), the new comment verification feature could, say, offer a brief multiple-choice question whose correct answer is a phrase, image, or other element that actually appears in the linked content.
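Since we’re spitballing anyway, here is what such a gate might look like in code. Everything below is hypothetical (the click log, the quiz format, all of it) and has no relation to any real Facebook or YouTube system:

```python
# Hypothetical click log; a real platform would track this server-side.
click_log: set[tuple[str, str]] = set()  # (user_id, url) pairs the user has opened

def record_click(user_id: str, url: str) -> None:
    click_log.add((user_id, url))

def can_comment(user_id: str, post_url: str | None = None,
                quiz: dict | None = None, answer: str | None = None) -> bool:
    """Gate the comment box behind two illustrative checks:
    1. the user clicked through to the post's link, if it has one; and
    2. the user answered a CAPTCHA-style question about the content."""
    if post_url is not None and (user_id, post_url) not in click_log:
        return False  # never opened the article or video being discussed
    if quiz is not None and answer != quiz["correct"]:
        return False  # failed the content question
    return True

# A multiple-choice question whose answer appears only in the linked piece.
quiz = {
    "prompt": "Which actor stars in the linked trailer?",
    "choices": ["Brie Larson", "Tom Hanks", "Meryl Streep"],
    "correct": "Brie Larson",
}

record_click("reader1", "https://example.com/trailer")
print(can_comment("reader1", "https://example.com/trailer", quiz, "Brie Larson"))  # True
print(can_comment("troll9", "https://example.com/trailer", quiz, "Brie Larson"))   # False: no click
```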

But we’re just spitballing here. We’re not product designers, and we aren’t pretending to be; those are jobs for Facebook and YouTube to oversee. And if Rotten Tomatoes can figure out something that works within its infrastructure, we have no doubt that Facebook and YouTube can, too. What’s more, it’s imperative that they do. The billions and billions of comment threads across their platforms are major vessels for the hate speech, fake news, bullying, and horrors such as pedophilia and terrorism that have turned too much of social media into a social blight.

Rotten Tomatoes has shown one way of dealing with this toxic component of internet culture. It’s time for Google and Facebook to follow suit, and go one better.