Editorials from Washington Post, New York Times and others
Jan. 21: The Washington Post on the need for transparency from social media platforms. Meta CEO Mark Zuckerberg has taken a lot of heat since he announced last week that he is pulling his company out of the fact-checking business and curtailing content moderation on its platforms. The criticism is understandable, given the uncertainty over how Meta's new rules will handle misinformation and other harmful material.
Keep in mind, however, that the company's content-moderation strategies - and indeed those of practically all social media platforms - have not worked as intended. As Zuckerberg noted in a video about the changes, Meta's automated content screening often got things wrong. And even when it correctly identified "misinformation" - a nebulous term that's far more difficult to define than many people want to admit - it struggled to remove the stuff, given the volume and persistence of bad actors.
In any case, the problems that social media poses for its users run much deeper than content moderation. Bigger concerns stem from how platforms disseminate content. Tech companies should help address these concerns by doing far more to reveal their algorithms to the public, allowing for greater scrutiny of their operations. The companies should also grant access to their data so that researchers and policymakers alike can study the effects that social media networks have on users and society.
The need for such transparency has become more evident since Zuckerberg's announcement. Rather than use third-party fact-checkers to determine the accuracy of content on its platforms, the CEO said, Meta will adopt a system similar to X's Community Notes feature, which crowdsources the work of debunking false claims. Meanwhile, the company will loosen its content filters to prioritize screening "illegal and high-severity violations," such as terrorism, child sexual exploitation and fraud.