Why online speech is moderated, despite what Elon Musk wants
When the World Wide Web was opened to the public in 1991, its enthusiasts heralded a new era of unfiltered, free expression. That was before the internet in general, and social media platforms in particular, proved to be such effective places to spread misinformation about important topics such as Covid-19 and vaccines, disinformation (intentional lies) about politics and elections, as well as conspiracy theories and hate speech of every kind. Social media platforms have come under scrutiny over what content they silence and what content they amplify. That is the backdrop as Elon Musk pursues his deal to take over Twitter, promising to prioritize free speech.
ISN’T THERE A RIGHT TO FREE SPEECH ON THE INTERNET?
No. The First Amendment to the United States Constitution prohibits censorship by government, not censorship by private corporations.
In fact, like newspapers, book publishers and TV stations, online gathering places like Twitter and Facebook have constitutional protections for deciding what to moderate and filter. Section 230 of the Communications Decency Act of 1996 gives them broad protection against the kinds of liability publishers traditionally face for defamatory content, as well as broad leeway to moderate discussions and to remove posts or leave them alone.
HOW DO COMPANIES MODERATE SPEECH?
Facebook, Twitter, Instagram and YouTube routinely remove posts deemed to violate standards on violence, sexual content, privacy, harassment, impersonation, self-harm and other concerns. Most of these actions happen automatically, through decisions made by artificial intelligence. (This has led to complaints of over-enforcement, with content removed that may not have violated the rules.)
Facebook and Google partner with third-party fact checkers to verify posts and news that might be suspicious. Twitter labels certain posts with misleading or disputed claims in certain categories, such as Covid-19 or elections.
More rarely, platforms ban users outright: radio provocateur Alex Jones, for example, was removed from Facebook, Twitter, YouTube and Apple for hate speech.
Then-President Donald Trump’s Facebook and Twitter accounts were frozen following the Jan. 6 riot by his supporters at the United States Capitol. Twitter banned him permanently; Facebook says it could reinstate him in 2023 if “the risk to public safety” subsides.
WHAT IS MUSK’S VIEW?
He calls himself a “free speech absolutist” and promises to take a minimalist approach to restrictions. “By ‘free speech,’ I just mean what’s within the law,” he tweeted a day after he struck a deal to buy Twitter for $44 billion.
“I am against censorship that goes far beyond the law. If people want less free speech, they will ask the government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people.”
A few weeks earlier, during a panel discussion at the TED2022 conference, Musk had said he would be “very reluctant to take things down” and “very cautious with permanent bans – timeouts, I think, are better.” He talked about opening Twitter’s content algorithms – which decide which posts get promoted or demoted – to public scrutiny, and about trying to “authenticate all real humans” as a way to tell bots apart from legitimate accounts.
Musk also suggested allowing longer tweets, breaking past Twitter’s current limit of 280 characters.
CAN HE DO ALL OF THIS?
It won’t be easy, even for the richest man in the world. “He launched rockets into space and he helped solve the global energy crisis,” said social media consultant and industry analyst Matt Navarra. “He’s about to find that tackling content moderation on social media platforms is harder than those two things.”
Evelyn Douek, a senior fellow at Columbia University’s Knight First Amendment Institute, writes in The Atlantic that Musk is destined to learn that user-generated platforms will lose users, advertisers and technology partners if they don’t moderate the scams, pornography and other offensive material that arrives in droves.
“Many of those who thought an anything-goes internet run by its users alone was a good idea have come to regret their naivety,” she writes.
WHO ELSE HAS BEEN DISSATISFIED WITH CONTENT MODERATION?
Trump condemned social media platforms for “suppressing conservative voices and hiding information and news that is good” and launched his own platform, Truth Social, but its rollout faltered.
The 2016 presidential election, when Trump used Twitter as a megaphone, brought a torrent of criticism of social media companies over what many saw as hands-off policies for politicians. That criticism grew as Trump, as president, used Twitter to issue threats, mock opponents and spread falsehoods. (Cornell University researchers found Trump was “likely the largest driver” of pandemic misinformation.)
Frances Haugen, who worked as a Facebook product manager for nearly two years, mostly on a team dedicated to fighting election misinformation, has provided fresh ammunition for critics of all political persuasions. In revelations to the Wall Street Journal and testimony to Congress, she said a 2018 change Facebook made to its proprietary algorithm boosted the visibility of toxic, divisive and objectionable content that sparked outrage and anger among readers, resulting in greater engagement with the service.
HOW DO OTHER COUNTRIES DEAL WITH THIS PROBLEM?
In China, Russia, and other countries under authoritarian rule, governments actively censor the Internet, including blocking or severely restricting access to American-owned social media sites.
Some democracies are moving faster than the United States to enforce stricter social media rules. India has placed Twitter, Facebook and others under direct government scrutiny, issuing regulations requiring internet platforms to help law enforcement identify those posting “malicious information”.
The European Union’s Digital Services Act, approved on April 23, gives member states new powers to remove illegal content such as hate speech and terrorist propaganda, and presses platforms to do more to tackle harmful content.
Companies like Twitter must submit annual reports to the EU detailing how they deal with systemic risks posed by content such as racial slurs or posts glorifying eating disorders.
After Musk sealed his takeover of Twitter, EU Internal Market Commissioner Thierry Breton sent a warning to the billionaire: “Whether it’s cars or social media, any company operating in Europe must comply with our rules, regardless of its shareholding,” he said in a tweet. “Mr. Musk knows that.”