As the world adjusts to a Twitter without @realdonaldtrump, the next big question is: “Now what?”
Major tech platforms, long accused of giving President Donald Trump special treatment not afforded to regular users, have shown him the door in the wake of his incitement of violence by supporters at the U.S. Capitol on Jan. 6. He’s gone from Twitter, Facebook, Snapchat — even Shopify.
But in many ways, booting the president was the easy part.
Will companies now hold other world leaders to the same standard? Will they wade further into deciding what is and isn’t allowed on their platforms, potentially alienating large swaths of their user base? Will all this lead to further online splintering, pushing those flirting with extreme views to fringe sites and secret chat groups?
Although they’ve long sought to remain neutral, Facebook, Twitter and other social platforms are slowly waking up to the active role they and their algorithms have played in shaping a modern world filled with polarized, angry groups and huge factions falling for bogus conspiracies and misinformation about science, politics and medicine.
“What we’re seeing is a shift from the platforms from a stance of free-speech absolutism, towards an understanding of speech moderation as a matter of public health,” said civic media professor Ethan Zuckerman of the University of Massachusetts-Amherst.
None of this can be fixed soon, if ever. Certainly not by blocking a president with just a few days left in his term.
But there are blueprints for future action. Remember “Plandemic”? That was the slickly produced, 26-minute, misinformation-ridden video promoting COVID-19 conspiracies that emerged seemingly out of nowhere and racked up millions of views in a matter of days. Facebook, Twitter and YouTube scrambled to take it down — too late. But they were ready for the sequel, which failed to attract even a fraction of the attention of the first.
“Sharing disinformation about COVID is a danger because it makes it harder for us to fight the disease,” Zuckerman said. “Similarly, sharing disinformation about voting is an attack on our democracy.”
Unsurprisingly, it’s been easier for tech giants to act decisively on matters of public health than on politics. Corporate bans of the U.S. president and his supporters have led to loud, if generally unfounded, cries of censorship as well as charges of left-wing bias. They’ve even attracted criticism from European leaders such as German Chancellor Angela Merkel — not exactly a friend of Trump’s.
Merkel’s spokesman, Steffen Seibert, said freedom of opinion is a fundamental right of “elementary significance.”
“This fundamental right can be intervened in, but according to the law and within the framework defined by legislators — not according to a decision by the management of social media platforms,” he told reporters in Berlin. “Seen from this angle, the chancellor considers it problematic that the accounts of the U.S. president have now been permanently blocked.”
From that German perspective, it should be the government, not private companies like Facebook and Twitter, that decides what counts as dangerous speech on social platforms. That approach might be feasible in Europe, but it’s much more complicated in the U.S., where the First Amendment of the U.S. Constitution protects freedom of expression from government interference, although not from corporate policy on privately owned communication platforms.
Governments, of course, remain free to regulate tech companies, another area of ferment. Over the past year, Trump, other Republicans and some Democrats have called for revoking a fundamental 1996 legal provision known as Section 230. That protects social platforms, which can host trillions of messages, from being sued into oblivion by anyone who feels wronged by something someone else has posted. But so far there’s been more heat than light on the issue.
Still, few are happy with the often sluggish, after-the-fact, three-strikes takedowns and suspensions that have characterized Twitter and Facebook for years, particularly in light of the Capitol insurrection, the deadly Charlottesville rally in 2017 and live-streamed mass shootings.
Sarita Schoenebeck, a University of Michigan professor who focuses on online harassment, said it might be time for platforms to reevaluate how they approach problematic material on their sites.
“For years, platforms have evaluated what kinds of content are appropriate or not by evaluating the content in isolation, without considering the broader social and cultural context that it takes place in,” she said. “We need to revisit this approach. We should rely on a combination of democratic principles, community governance and platform rules to shape behavior.”
Jared Schroeder, an expert in social media and the First Amendment at Southern Methodist University, thinks the Trump bans will encourage his base of followers to move toward other social platforms where they can organize and communicate with fewer — if any — restrictions.
“It’s likely the bans will fuel the us-against-them narrative – and it’s also likely other forums will get a boost in traffic, as we saw after the 2020 election,” he said. “The bans have taken away the best tools for organizing people and for Trump to speak to the largest audiences, but these are by no means the only tools.”