Twitter and Facebook's Moderation Rules Set Them Up for Failure

Inconsistent and selectively enforced rules led to the fiasco surrounding the decision to block a New York Post report on Joe Biden’s son

Twitter and Facebook’s decision last week to block the New York Post’s reporting on Hunter Biden, the son of Democratic nominee Joe Biden, has made a joke of their arbitrary and capriciously enforced moderation policies. It’s also put the legal safeguards they enjoy in jeopardy.

For both companies, the goal was to avoid becoming the story of the 2020 election, as they were after the 2016 election. That has backfired spectacularly. Instead, their decisions last week have made them an even bigger story heading into this year’s election. That’s a direct result of having rules that few understand, that are enforced seemingly at random and that can appear to change on a whim.

What’s at stake? User confidence in their platforms, for one thing. Some users could ultimately ditch Facebook or Twitter if they believe the companies are playing political favorites. Regulatory crackdowns are also in play. The FCC has announced it’s looking to “clarify” the framework of Section 230 of the Communications Decency Act, the broad legal shield that allows tech companies to moderate content as they see fit. Any changes to this quarter-century-old law could severely dent Twitter and Facebook’s businesses.

If you missed it, the Post last Wednesday published an article suggesting that Joe Biden used his position to financially benefit his son, Hunter, while Biden was vice president. Rudy Giuliani, the former mayor of New York and current lawyer for President Trump, shared the information with the Post. The credibility of the article has been widely called into question.

Within hours of the report’s publication, Facebook said it was “reducing” the story’s distribution in users’ News Feeds. Why? As of Monday, we’re still not sure. Facebook Policy Communications Director Andy Stone, the man behind the decision, has not responded to multiple requests for comment.

Twitter went even further, blocking users from sharing the report altogether. Users who tried to share it were greeted with a notification that the story was “potentially harmful.” Twitter said the report was blocked because it violated the company’s policy against sharing hacked materials.

The result was a mess.

Many pointed out that the policy would’ve blocked iconic works of journalism, including the Pentagon Papers, from being shared. Inconsistency was another issue: the policy, which barred users from sharing “possibly illegally obtained materials,” hadn’t been applied to other hot-button reports in recent years, including The New York Times’ report on President Trump’s taxes last month. On top of that, the Twitter accounts of White House Press Secretary Kayleigh McEnany and Trump’s reelection team were locked until they deleted tweets tied to the Post’s report.

The blowback was so intense, and the shortcomings of the policy so apparent, that Twitter reversed its hacked-materials policy on Friday, with CEO Jack Dorsey admitting the company had taken the “wrong” approach by censoring the report.

The damage had been done by that point, though. Republicans were irate, with Sen. Ted Cruz calling the move a “brazen attempt to manipulate the election outcome” by Twitter. Others were simply concerned about the precedent being set, in which a handful of faceless Silicon Valley workers decide what hundreds of millions (or, in Facebook’s case, billions) of users can read or talk about.

Reporter Matt Taibbi called the moves “Orwellian,” and pointed out the optics couldn’t be worse for Twitter and Facebook. Stone has a background in Democratic politics, including being the press secretary for former California Senator Barbara Boxer, and Twitter’s Senior Comms Manager Nicholas Pacilio is the former press secretary for Kamala Harris. By leveraging arcane and inconsistent rules to swiftly block a story that could harm a Democratic frontrunner, Twitter and Facebook opened themselves up to complaints they’re playing political favorites.

“The lines between fake news and bad news, between actual misinformation and information that is merely politically adverse, have been blurred,” Taibbi said. “It’s no longer clear that some of these people see a meaningful distinction between the two ideas.”

For anyone watching closely over the last four years, this was a natural progression.

Facebook and Twitter were skewered for not doing enough to weed out misinformation and Russian trolls during the 2016 election cycle. (A study from Oxford University later found little evidence that Russian ads played a major role in shaping the results of the 2016 election. Russian trolls working for the St. Petersburg-based Internet Research Agency spent less than $75,000 on Facebook ads between 2015 and 2017, the researchers found.) Since then, both companies have expanded their moderation policies, beefed up their review staffs and cracked down on more posts they deem unfit for their platforms. Twitter’s hacked-materials policy, for example, was introduced in 2018.

Now, Twitter and Facebook find themselves in content moderation purgatory: If they censor posts, critics ask what gives them the authority to determine what is and isn’t misinformation. And when they don’t censor posts, critics from the other side ask why they’re not doing enough to police their platforms.

Aggressively moderating posts, especially when half the country views it as politically motivated, has consequences. That’s a lesson Twitter and Facebook just learned after FCC Chairman Ajit Pai said the commission would revisit Section 230 of the Communications Decency Act. His announcement coincides with bipartisan support for revamping the law.

“As elected officials consider whether to change the law, the question remains: What does Section 230 currently mean?” Pai said. “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230.”

Whether the FCC even has the authority to change Section 230 is up for debate; one legal scholar recently told TheWrap there is “no basis” for the FCC to revamp the law. That’s a topic for another day, though. The reality is that Twitter and Facebook’s ham-handed approach to moderating their platforms has led to increased regulatory scrutiny. Where that leads remains to be seen. In an apparent attempt to right their perceived wrongs from 2016, both companies may have set themselves up for failure.

Sean Burch