Facebook and Twitter Need Spoiler Tags
When the latest episode of Disney’s Marvel show WandaVision dropped Friday morning, spoiler-laden headlines were already littering Facebook and Twitter by Friday afternoon, hinting at some dramatic turn or at how the series might end.
But it doesn’t have to be this way. Platforms could build spoiler tag features into their products — and in doing so, help solve problems far more consequential than ruined TV shows, because the same feature could empower communities to moderate access to a wide array of content.
The best examples of existing spoiler tag features are those created by Reddit and Discord. These are robust systems for tagging text, images, and links that will hide spoilers from users unless they specifically click on them. Rather than relying on automated tools like word filters, which can be imperfect and difficult to maintain, both sites let users offer a warning to the people who might come across their content.
Using these features is entirely voluntary, but they immediately enable and encourage a culture of courtesy. Communities like the /r/MarvelStudios subreddit often adopt internal rules for when spoiler tags are required. The honor system works surprisingly well: staying silent about your favorite show on social media, or resorting to convoluted workarounds, is hard, but clicking an extra button to blur an image or hide text is an easy way to be courteous to other members of a community.
However, an interesting thing happens on platforms with spoiler tag features: Users start spoiler tagging things that aren’t spoilers.
In Discord servers where users discuss sensitive topics like abuse, participants will sometimes label messages with content warnings to let others know they’re about to describe something that could be hard for other community members to hear. Images of violence and police brutality at protests are important to share, but spoiler tags are often used to let members decide for themselves how much violent imagery they’re able to view.
In other Discord servers where pornography or other NSFW material is allowed, a spoiler tag serves the double function of making it possible for members to keep their chats open at work. Discord also makes it possible to flag an entire channel as NSFW, after which users need to confirm they’re older than 18 to enter.
Reddit makes this purpose explicit with a separate NSFW tag, even though the feature performs a similar function as the spoiler tag. A user who subscribes to both regular and pornographic subreddits will see NSFW posts in their main feed, but images will be blurred until they click on them. Adult content — which is easier for Reddit to identify when its users are all voluntarily labeling it as such — can also be blocked at the account level. Overall, users have the tools to customize their experience.
Existing spoiler tag systems aren’t perfect, but they are incredibly useful. Which raises the question: Why don’t more sites use them?
On a technical level, adding a feature that hides content behind an extra click is easy; in fact, some sites have already built one. Facebook has a sensitive content warning that blurs images until a user chooses to click on them, but only Facebook can apply this warning. Users don’t have the option of adding it themselves.
Twitter, which has a much more permissive policy toward violent or pornographic content, also has features to hide sensitive media. Tweets can be hidden behind a single click, giving the user a chance to opt out (a feature Twitter briefly co-opted to hide Trump tweets containing misinformation about the 2020 election). However, the only way for a user to add this warning voluntarily is to change a setting that marks all media an account tweets as sensitive. It can’t be done on a per-tweet basis.
These platforms could provide cover for the more nuanced areas of their content policies by giving their users the tools to self-govern their own content. For example, if Facebook users could tag images of nude bodies that are allowed under the company’s policies, Facebook might not need to stir up controversy by stepping in to remove content as frequently.
If platforms wanted to take it a step further, they could even make it possible for web designers to add content or spoiler warnings into the code for articles shared online. Websites already embed tons of information in their pages to describe their content to sites like Facebook and Google. Embedded metadata tells Facebook, for example, whether a URL is a news article or a video, which will change how it’s displayed. It would be possible to use this same sort of metadata to flag content that contains spoilers. This way, even if an article had spoilers in its headline, users on Facebook wouldn’t see them unless they choose to click.
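As a rough sketch of how this could look: sites already declare Open Graph metadata, such as `og:type`, in their page headers so that platforms like Facebook know how to display a shared link. A spoiler flag could ride alongside those tags. Note that the `article:spoiler` property below is invented for illustration; it is not part of any current standard.

```html
<head>
  <!-- Real Open Graph tags that platforms like Facebook already read -->
  <meta property="og:type" content="article" />
  <meta property="og:title" content="That Big WandaVision Twist, Explained" />

  <!-- Hypothetical spoiler flag: not part of any existing standard -->
  <meta property="article:spoiler" content="true" />
</head>
```

A platform that honored such a flag could blur the article’s headline and preview image in the feed until the reader clicked through, exactly the way Reddit treats posts its users mark as spoilers.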
Everyone hates having their favorite show spoiled, and a button to hide those spoilers behind an extra click would likely be a welcome feature. But it would also work similarly enough to sensitive content warnings that it could make sharing more serious media easier and more courteous across the entire internet.
Discord and Reddit have already shown that it’s relatively simple to implement spoiler tag features. So what’s the rest of the internet waiting for?