
We need content moderation: Meta is out of step with public opinion

This is a bad moment for fact-checking.

On his first day in office, Trump signed an executive order titled “Restoring Freedom of Speech and Ending Federal Censorship,” which targets social media platforms’ use of fact-checkers to moderate misinformation.

And earlier this month, Meta — which owns Facebook, Instagram and WhatsApp — announced an end to its partnership with independent fact-checking organizations in the U.S., claiming that Trump’s victory shows Americans prioritize free speech over combating misinformation. Mark Zuckerberg, Meta’s chief executive, acknowledged that these changes will allow more “bad stuff” on its sites, with the promised benefit of reducing the amount of “censorship” on the platforms.

While Republican leaders have been railing against fact-checking for years, that does not mean these changes reflect the will of the public. In polls of thousands of Americans, we found the opposite — there is broad bipartisan support for platforms taking action against harmfully misleading content, and relying on the judgment of experts to make such decisions. Meta’s actions are out of step with the desires of its users.

From 2016 until recently, Facebook and Instagram posts deemed inaccurate by fact-checking partners certified through the nonpartisan International Fact-Checking Network received warning labels and were demoted in users’ feeds, so that fewer people would see the false content. Meta’s recent announcement signals an end to this status quo and a plan to move to a crowdsourced fact-checking model similar to X’s Community Notes, in which it is up to users to classify posts as misleading.

These changes are the latest in a series of corporate and political moves to restrict tech platforms’ efforts to moderate content and suppress misinformation. After Elon Musk acquired Twitter (now X), the company quickly ended its policies prohibiting users from sharing false information about COVID-19 or vaccines, dissolved Twitter’s Trust and Safety Council and moved the platform’s content moderation efforts to largely rely on its fledgling Community Notes system.

Soon after, similar rollbacks of content moderation efforts occurred at Alphabet (the parent company of Google and YouTube) and Meta. For instance, in 2023 YouTube reversed its policy disallowing content advancing claims of widespread fraud in the 2020 presidential election. And Meta enacted layoffs drastically reducing its trust and safety team and curtailing the development of fact-checking tools.

These changes are a fairly clear response to efforts by Republicans to pressure tech platforms to stop moderating false content. Lawmakers in Florida and Texas have attempted to pass laws prohibiting social media platforms from banning or moderating posts from political candidates, claiming censorship of conservative voices.

At the same time, Republicans in Congress, led by House Judiciary Committee Chair Jim Jordan (R-Ohio), have put academics researching misinformation under legal scrutiny over alleged targeting of right-wing political views. This jeopardizes the ability of academics to evaluate the online information landscape and the effects of waning moderation efforts. Trump’s new executive order is the latest round of such efforts.

But what does the American public actually want in terms of content moderation? Along with our colleagues Adam Berinsky, Amy Zhang and Paul Resnick, we first assessed this question in summer 2023 through a nationally representative poll of nearly 3,000 Americans. We asked respondents whether, in general, social media companies should try to reduce the spread of harmful misinformation on their platforms. Americans overwhelmingly agreed — 80 percent indicated that the companies should indeed be trying to reduce harmful misinformation on their platforms. And while this was especially the case for Democrats (93 percent), the majority of Republicans (65 percent) also agreed.

We again examined public opinion on this issue shortly after Meta announced its policy change this month. We asked a new set of nearly 1,000 respondents if they thought social media companies should try to reduce the spread of harmfully misleading content on their platforms. Again, the vast majority (84 percent) agreed — including majority support across Democrats (97 percent), independents (78 percent) and Republicans (65 percent). We also found that a clear majority of respondents (83 percent), including the majority of Republicans (63 percent), supported attaching warning labels that say “false information” to posts evaluated as such by independent fact-checkers and including links to sources with verifiably correct information.

And although Zuckerberg claimed that fact-checkers “have destroyed more trust than they created,” we found in a large online experiment that even Republicans perceived fact-checkers as more legitimate arbiters of content moderation than ordinary social media users. These findings may foretell a decline in confidence in Meta’s content moderation procedures as the company pivots from professional fact-checkers to user-based community notes.

Indeed, in our most recent public opinion survey from this month, relying solely on community fact-checking was very unpopular across respondents. We asked which group social media platforms should use to evaluate whether online posts are false — independent fact-checkers, users, a combination of the two or neither. Only 8 percent of respondents (and 11 percent of Republicans) selected the policy using only users to flag and fact-check each other’s posts. In contrast, about 39 percent of respondents chose the policy using only independent fact-checkers, and another 40 percent advocated for the policy combining professional fact-checkers and users.

There is an appetite among the mass public for social media companies to continue using moderation policies targeting misleading content. Even the majority of Republicans want these companies to reduce misleading content online and support policies such as the labeling of harmfully misleading content about issues like election integrity. And while user-based content moderation approaches like Community Notes have shown promise, they serve best as a complement to, rather than a replacement for, other tools for mitigating falsehoods, such as fact-checker warning labels and downranking misinformation.

Rather than a rollback of moderation efforts, Americans want progress on platform governance. Instead, Trump’s executive order and the recent changes from Meta and other tech giants reflect a major political bias in policy: a bias toward the beliefs of tech billionaires and conservative political elites, and away from what the broad public wants.

David Rand is the Erwin H. Schell Professor and professor of Management Science and Brain and Cognitive Sciences at MIT. Cameron Martel is a PhD candidate at MIT.
