This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.
Big internet companies are finally taking misinformation “superspreaders” seriously. (All it took was a global health crisis and the big lie of a rigged election.)
I’ve written about influential people, including former President Donald J. Trump, who were instrumental in spreading false information online about important issues like electoral integrity and vaccine safety. Some of these people have repeatedly distorted what the public believes – and internet companies have largely given them a pass.
Let’s examine why habitual misinformation peddlers matter and how internet companies are targeting them – including the new rules Facebook introduced this week.
Facebook, Twitter, and YouTube deserve credit for starting to target repeat misinformation offenders. But I also want people to be aware of the limits of what companies do and understand the challenge of applying these guidelines fairly and transparently.
How big is the problem with people who repeatedly post untrue things?
A lot of what people say online isn’t strictly true or false. We want room for the messy middle. The concern is when information is completely wrong and we know that some of the same people are responsible for amplifying that misinformation over and over again.
Last fall, a coalition of misinformation researchers found that roughly half of all retweets related to multiple widespread false claims of election fraud came from just 35 Twitter accounts, including those of Mr. Trump and the conservative activist Charlie Kirk. A research group recently identified about a dozen people, including Robert F. Kennedy Jr., whose accounts repeatedly – sometimes for years – spread discredited information about vaccines or, more recently, false “cures” for Covid-19.
Until recently, it mostly didn’t matter whether someone posted junk health information or a false election conspiracy theory once or 100 times, or whether the person was Justin Bieber or your cousin with five Facebook followers. Internet companies usually judged the content of each message in isolation. That didn’t make much sense.
How company policies are targeting these repeat offenders
The Jan. 6 riot at the U.S. Capitol showed the danger of falsehoods being repeatedly told to a public inclined to believe them. Internet companies began to grapple with the outsize influence of people with large followings who commonly spread false information.
Facebook said Wednesday that it would impose stricter penalties on individual accounts that repeatedly post things that the company’s fact-checkers have found misleading or untrue. Posts from repeat offenders are circulated less in Facebook’s news feed, which means that other people are less likely to see them. A similar policy was adopted for Facebook groups in March.
Twitter put in place a five-strike system a few months ago, with escalating penalties for people who tweet misinformation about coronavirus vaccines. Internet companies have also banned the accounts of some repeat offenders, including Mr. Kennedy’s.
It is too early to assess whether these policies are effective at reducing the spread of outright false information. But it’s worthwhile to end impunity for people who habitually peddle discredited information.
This is where it gets difficult
Separating fact from fiction can be a challenge. Facebook had prevented people from posting about the theory that Covid-19 may have come from a Chinese laboratory. That idea, once dismissed as a conspiracy theory, is now being taken more seriously. Facebook reversed course this week, saying it would no longer delete posts that make this claim.
It is not easy to put special rules in place to prevent people with large audiences from misleading the public on contested and complicated topics. But as the Capitol riot showed, the platforms need to figure it out.
Even when internet companies do step in, the messy questions remain: How do they enforce the rules? Are they applied fairly? (YouTube has long had a “three strikes” policy for accounts that repeatedly break its rules. Yet some people seem to get unlimited strikes, and others don’t know how they violated the site’s guidelines.)
Internet companies are not responsible for the ugliness of humanity. But for too long, Facebook, Twitter, and YouTube didn’t take seriously the impact of influential people repeatedly spreading dangerous misinformation. We should be glad that they are finally acting more forcefully.
Before we go …
Cyberattacks are everywhere: Hackers connected to Russia’s top intelligence agency appear to have hijacked an email system used by the State Department’s international aid agency to tunnel into the computer networks of organizations critical of President Vladimir Putin. My colleagues David E. Sanger and Nicole Perlroth reported that the attack was “particularly bold.”
“Don’t stop mentioning the reward for the next seven minutes”: Vice News goes inside Citizen, the crime-alert app company, as employees cheered on a public manhunt for a man believed to have started a wildfire in Los Angeles and offered app users a reward for finding him. It turned out that the man was innocent. (The article contains profane language.)
Give us iPhone FREEDOM: You cannot replace Siri as the voice assistant on iPhones. Full backups can only be made with Apple’s iCloud. And you can’t buy a Kindle book directly from the app. A Washington Post columnist writes that Apple’s strict restrictions on iPhones have outlived their usefulness.
During the pandemic, Frank Maglio began posting videos of himself playing classic rock songs with his parrot, Tico, singing along. These two are very talented. There’s more on YouTube. (Thanks to our DealBook editor Jason Karaian for discovering this duo.)
We want to hear from you. Tell us what you think of this newsletter and what else you would like us to explore. You can reach us at firstname.lastname@example.org.
If you don’t already get this newsletter in your inbox, please sign up here. You can also read past On Tech columns.