If your time is short

– In the year since Elon Musk purchased Twitter for $44 billion, the platform now known as X has removed guardrails designed to restrict the flow of mis- and disinformation. It stripped away a free account verification process designed to combat impersonation and replaced it with paid “blue check” accounts whose posts X’s algorithm prioritizes.

– The platform has instituted processes that experts say elevate and encourage the spread of misleading content, including sharing ad revenue with its largest content creators — accounts that have paid for “blue checks” to expand their posts’ reach.

– Data shows that these measures and others have sparked increased sharing of misinformation and hate speech. At the same time, X has instituted cost-prohibitive monthly fees for limited access to Twitter’s application programming interface, or API — data that third-party researchers commonly use to study and measure critical trends about the influence of social media.

Hours after federal filings showed entrepreneur Elon Musk had offered about $43 billion to buy Twitter, Musk told a TED conference audience in Vancouver about his vision for the social media platform.

“My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization,” Musk said April 14, 2022.

Musk closed the Twitter deal Oct. 27, 2022, for $44 billion. A year into Musk’s ownership, however, experts say the platform formerly known as Twitter has, through its practices, eroded trust and fanned misinformation. It disabled features that helped users avoid being duped by false information and established new systems that promote confusion and encourage the spread of false claims.

Musk personally has sown misinformation, too. The day after Hamas militants invaded Israel Oct. 7, killing more than a thousand people, Musk directed his millions of followers to two accounts that he described as “good” sources for “real-time” information about the war — both known for publishing unverified stories and falsehoods. Musk later took down the tweet, but it had been seen 11 million times. Three days later, he posted a laughing emoji on a post that falsely suggested CNN had faked an attack in Israel.

That the anniversary of Musk’s acquisition coincides with the outbreak of the Israel-Hamas war provides a real-time snapshot of the platform’s health. Pre-Musk Twitter was hardly a bastion of reliable information. And the flood of Israel-Hamas war misinformation on X — the new name Musk gave Twitter in late July — isn’t unique to this violence or unprecedented on social media.

But taken together, experts told PolitiFact that the changes Musk has ushered in — sometimes erratically and based on the outcomes of user polls — have worsened the information ecosystem on a platform once revered as a go-to place for breaking news.

“Under Elon Musk’s ownership, misinformers are emboldened and lent an air of legitimacy,” said Jack Brewster, enterprise editor with NewsGuard, a company that tracks online misinformation. “Rather than achieving the goal of leveling the playing field, Musk’s alterations, which include a major overhaul of the platform’s verification system and a reduction in content moderation, have instead fostered an environment in which bad actors can flourish.”

When PolitiFact contacted X for comment about this story, we received an auto-reply that said, “Busy now, please check back later.”

An aerial view shows a newly constructed X sign on the roof of the headquarters of the social media platform previously known as Twitter, in San Francisco, on July 29, 2023.

Research shows rising misinformation, hate speech on X, even with limited data

Analysts have documented some notable shifts on the platform since Musk took over:

The Atlantic Council’s Digital Forensic Research Lab found in October that a network of pro-Saudi Arabia Twitter accounts was coordinating in an apparent attempt to persuade Musk to reinstate the account of a banned user who, according to reports, helped orchestrate the 2018 murder of Jamal Khashoggi, a Saudi journalist and Washington Post columnist who was critical of Saudi Arabia’s crown prince.

The number of tweets containing slurs has spiked, as has the volume of engagement with those tweets, the Center for Countering Digital Hate reported in December 2022. The Center compared the number of tweets containing certain slurs on an average day in 2022 — from Jan. 1 to Oct. 27 — with the average number of daily tweets containing those slurs from Musk’s Oct. 28 takeover to Nov. 29, 2022. It found that, depending on the word, the average number of daily posts using slurs shot up between 33% and 202%. Musk has since sued the group, accusing it of a “scare campaign to drive away advertisers from the X platform.” The case is ongoing.

Russian, Chinese and Iranian state media outlets known for spreading disinformation gained followers on Twitter, the Digital Forensic Research Lab reported in April. Its analysis relied on data from Meltwater Explore, a social media monitoring platform, to analyze Twitter data from Jan. 1 to April 19 for accounts labeled “state-affiliated.” Twitter under Musk took deliberate steps to stop reducing the reach of accounts from such state-sponsored sources, NPR reported.

In the 90 days after April 21, when X removed labels identifying content from state-affiliated accounts, engagement with English-language accounts of Russian, Chinese and Iranian state media surged 70% compared with the previous 90-day period, NewsGuard found.

But X itself has thwarted researchers’ efforts to analyze critical trends, despite Musk’s stance that its policy is to keep things “open source and transparent.” In February, the platform started charging cost-prohibitive fees of $42,000 to $210,000 per month for limited access to Twitter’s application programming interface, or API. The interface gives third-party researchers access to data they can analyze to better understand how information spreads. Researchers have used such data to learn more about social media’s role in election misinformation spread, democracy, COVID-19 discourse and social justice advocacy.

There is also a $100 per month API access option, but researchers say the data it provides is far too limited. The board for the Coalition for Independent Technology Research, a group of academics, journalists, civil society researchers and community scientists interested in advancing research on technology’s impact on society, criticized the changes. It said that prior API availability gave researchers low-cost access to real-time data on 10% of all tweets while even the most expensive tier under Musk’s new plan would cut access by 80% and cost 400 times more.
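The scale of the cost increase the coalition describes can be sanity-checked with a quick back-of-the-envelope calculation. This is an illustrative sketch using only the figures reported above; the "implied prior cost" is a derived estimate, not a published price.

```python
# Back-of-the-envelope check of the Coalition for Independent Technology
# Research's comparison of old vs. new Twitter/X API terms.
# All inputs come from the reporting above; derived values are estimates.

new_top_tier_monthly = 210_000   # most expensive new tier, dollars per month
cost_multiple = 400              # coalition: new plan costs 400 times more
access_cut = 0.80                # coalition: access cut by 80%
old_tweet_share = 0.10           # prior plan: real-time data on 10% of tweets

# What the 400x multiple implies researchers previously paid per month.
implied_old_monthly = new_top_tier_monthly / cost_multiple
print(f"Implied prior monthly cost: ${implied_old_monthly:,.0f}")  # $525

# What an 80% reduction leaves of the former 10% tweet sample.
new_tweet_share = old_tweet_share * (1 - access_cut)
print(f"Tweet sample under new top tier: {new_tweet_share:.0%}")   # 2%
```

The two printed figures show why researchers call the change prohibitive: roughly 400 times the price for a fifth of the data.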

“Twitter’s new system to monetize and dramatically restrict access to its API will render this research and development impossible,” the coalition wrote in an open letter April 3. “Unless they can pay, researchers will not be able to collect any tweets at all.”

Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public, said, “The tools that researchers would generally use to answer a question like ‘is there more or less misinformation’ have been taken away.”

As European Union regulators crack down on social media misinformation and hate speech, Musk has signaled little interest in cooperating. Under his leadership, Twitter withdrew from the EU’s Code of Practice on Disinformation, a unique set of voluntary commitments social media platforms made to research and fight disinformation.

Before Musk pulled out, however, TrustLab, a company the social media platforms commissioned in response to that agreement, collected data on the platforms in three EU countries.

TrustLab’s September report analyzing the prevalence and sources of disinformation on social media found that mis- and disinformation discoverability — a measure of how easily a platform surfaces mis- and disinformation for users searching certain keywords — is highest on X compared with Facebook, TikTok, Instagram, YouTube and LinkedIn. Mis- and disinformation content on X received more engagement than other content, and X had the largest “ratio of disinformation actors,” which refers “to the proportion of disinformation actors relative to the total accounts sampled on a platform.” The analysis included samples from May to June 2023.

TrustLab co-founder and CEO Tom Siegel said that compared with other platforms, this data snapshot showed misinformation is the worst on Musk’s platform. “Has it always been that way? Has it recently spiked particularly when the governance change started happening?” Siegel said, reflecting on this data set. “I can’t say that with certainty.”

Musk has also appeared to resist other efforts in the EU to discourage bad actors from sharing false information.

When EU Commissioner Thierry Breton posted on X an Oct. 10 open letter, calling on Musk to enforce rules of the new Digital Services Act and moderate and remove violent and terrorist content, Musk responded by asking him to list the violations on X so “the public can see them.”

Breton was unmoved. “You are well aware of your users’ — and authorities’— reports on fake content and glorification of violence,” he responded.

The transformation of the blue check mark: How an $8 subscription fee buys favor with the platform’s algorithm

Twitter’s blue check mark was a once-coveted indicator that an account holder’s identity was authentic, “verified.”

On X, paid users reign supreme.

Under the subscription program Musk introduced — initially called “Twitter Blue” and now called “X Premium” — anyone can buy a blue check mark for $8 a month or $84 a year, guaranteeing that their posts, no matter the content, will be prioritized by X’s algorithm.

Rampant impersonation followed Twitter Blue’s Nov. 9, 2022, launch. An account impersonating the pharmaceutical giant Eli Lilly and Co. and carrying a blue check mark falsely tweeted, “We are excited to announce insulin is free now,” triggering a drop in Eli Lilly’s stock price.

A day later, Twitter paused the program. When it relaunched in December, it included some safeguards to prevent impersonation. Then, in late March 2023, Musk announced that only subscribers’ tweets would be recommended on the “For You” page — the default feed users see when opening the platform.

The result is that people are exposed to mis- and disinformation shared by accounts they don’t follow and previously might not have seen, said Nick Reiners, senior analyst at Eurasia Group, a political risk consultancy.

These changes were among the most influential to the proliferation of misinformation on X, experts told us.

News organizations, including PolitiFact, were also stripped of the verified blue checks that afforded credibility and warded off impersonators — unless they paid a monthly business subscription fee of $1,000, which would secure them a gold check mark.

So, verifiable news became harder to find as less-trusted sources were empowered to thrive. “The easiest way to get distribution is to buy it,” Siegel said. That means people who want to spread scams or low-quality information online have a cheap and convenient way to do so.

McKenzie Sadeghi, a NewsGuard senior analyst, said the change gave misinformers willing to pay “an air of legitimacy.”

A person signs into X, the platform formerly known as Twitter, in an office in central London, Monday July 24, 2023.

Viral posts have become profitable

Bad information has click-appeal; it is often designed to trigger an emotional response, said Kolina Koltai, a former Twitter contractor now with Bellingcat, a Netherlands-based digital investigative journalism group.

So, in mid-July, when X began sharing ad revenue with its largest content creators who also pay for blue check marks, it compounded the platform’s misinformation problem, experts said.

Now, blue check mark subscribers can earn a profit when people interact with their content. It is unclear how an account becomes eligible for payouts, or how the payments are calculated.

The first round of payments to creators totaled $5 million, according to Musk. Those payments went largely to right-wing influencers, with people such as Andrew Tate, Ian Miles Cheong and Benny Johnson each tweeting that they’d received payments of about $10,000 or more.

X’s revenue sharing policy introduced “an additional incentive for posting viral low-quality content,” said Boston University professor Gianluca Stringhini, who researches malicious activity on the internet.

Researchers have already started to document this incentive’s impact.

From Oct. 7 to Oct. 14, the Israel-Hamas war’s first week, NewsGuard analyzed “the 250 most-engaged posts” that promoted one of 10 prominent false or unsubstantiated war narratives identified by NewsGuard and found that 186 of the posts were shared by X subscribers with blue checks. Such narratives included the false claims that CNN staged an attack in Israel and that a White House memo showed the U.S. approved $8 billion in aid for Israel.

This means 74% “of the most viral posts on X advancing misinformation about the Israel-Hamas War” were pushed by paid X users, NewsGuard found.
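The 74% figure follows directly from NewsGuard's counts; a trivial check, using only the numbers cited above:

```python
# Share of the 250 most-engaged Israel-Hamas misinformation posts that
# came from paid blue-check accounts, per NewsGuard's figures above.
blue_check_posts = 186
total_posts = 250

share = blue_check_posts / total_posts
print(f"{share:.0%}")  # 74% (74.4% before rounding)
```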

While amplifying falsehoods, Musk has replatformed misinformers

Musk himself shares false and misleading narratives and has used his position to promote accounts of known misinformers.

On Oct. 30, 2022, three days after closing his Twitter purchase, Musk tweeted a link from a site known to spread misinformation that fueled an unsubstantiated narrative about the attack on then-House Speaker Nancy Pelosi’s husband, Paul Pelosi. Musk later deleted the tweet.

Musk restored accounts for thousands of users who were once banned from Twitter for misconduct such as posting violent threats, harassment or spreading misinformation.

On Nov. 19, less than a month after taking over the platform, Musk restored former President Donald Trump’s account after holding a poll that asked users whether he should. Trump had been banned since Jan. 6, 2021, when a pro-Trump mob attacked the U.S. Capitol.

On Nov. 23, Musk asked users whether the platform should “offer a general amnesty to suspended accounts, provided that they have not broken the law or engaged in egregious spam.” After 72% of respondents voted “yes,” Musk said he would implement such a policy.

It appears these reinstated accounts drive profit for X.

The Center for Countering Digital Hate in February analyzed publicly available data on tweet impressions for 10 reinstated accounts it described as “renowned for publishing hateful content and dangerous conspiracies.” By its estimates, the platform stood to make more than $19 million per year in ad revenue from the 10 accounts, most of which appeared to have paid for blue check marks.

X’s Help Center says it addresses misinformation, but its policies are not clear

The Help Center says the actions it takes on misinformation are “meant to be proportionate to the level of potential harm from that situation” and warns that people who repeatedly violate the platform’s policies “may be subject to temporary suspensions.”

It also says it limits amplification of misleading content or removes it “if offline consequences could be immediate and severe.” But it does not explain what content would qualify.

Before Musk, repeated violations of Twitter’s policies prohibiting the spread of COVID-19 and election misinformation could result in permanent bans, CNN reported.

About a month after Musk’s takeover, the platform said it would no longer enforce its COVID-19 misinformation policy. Its written policy on election misinformation says the platform can label or deamplify posts that confuse users about their ability to vote, including incorrect information about polling times or locations, PolitiFact reported. But PolitiFact has also reported that the policy’s enforcement is inconsistent.

Siegel said that judging by the X content that freely spreads unchecked, it appears X has become more permissive about what people can say on the platform before facing penalties. That reflects a shift in values within X’s leadership.

“It’s just really deciding what, as a platform, do you allow and not allow, according to your own values?” Siegel said. “They’re just extremely biased toward freedom of speech and not interfering with people’s rights to post content.”

Musk, a self-described “free speech absolutist,” has repeatedly said that he promotes free speech to the extent that it is legal.

“I don’t know what’s going on with every part of this platform all the time, but our policy worldwide is to fight for maximum freedom of speech under the law,” Musk wrote Sept. 17. “Anyone working for X Corp who does not operate according to this principle will be invited to further their career at any one of the other social media companies who sell their soul for a buck.”

At times, however, Musk has flip-flopped on his free speech stance. He said he would not ban an account that tracked his plane, and then he banned it.

Elon Musk speaks to reporters after leaving lunch at the Russell Senate Office Building in Washington, D.C. on September 13, 2023 between public hearings with members of Congress on concerns over artificial intelligence.

Musk’s changes made vetted, independent news harder to find

Gone are many features people once used to more successfully navigate the platform’s information environment.

In April, Musk’s platform removed labels that told users when accounts were state-affiliated or government-funded and stopped reducing their reach. Users no longer know of an account’s government ties, unless they have previous knowledge of that entity or conduct their own research, Sadeghi said.

As a result of Musk’s 2022 layoffs, experts say the platform has little to no staff dedicated to content moderation or fostering trust and safety.

Reiners said the changes to how accounts obtain check marks undermined news organizations by making their authentic accounts harder to identify.

The Russian state-sponsored media organization RT, for example, now shows the paid gold check mark badge but no longer includes a label alerting readers that it is state-sponsored. Some local news organizations have check mark accounts; others do not. In the case of The New York Times, it appears X freely gave the gold badge and then took it away.

In a recent update, X also stopped displaying headline text when news outlets share links to stories, making news content harder to recognize — a change Reiners called “baffling.”

Musk’s displeasure with the press has seemed apparent in other decisions, too.

On Dec. 15, Musk abruptly banned from Twitter several journalists, including from The Washington Post, The New York Times and CNN, who had reported on a platform rule change that led to the suspension of @ElonJet, an account that tracked the location of Musk’s private jet. Musk claimed without evidence that the journalists had violated an anti-doxxing policy by sharing his precise, real-time location.

Facing backlash, Musk again polled platform users. A majority of voters supported reinstating the journalists, and Musk reinstated most of the accounts by Dec. 18.

In August, The Washington Post reported that X deliberately slowed users’ access to links directing people to news organizations and other social media platforms, including The New York Times, Reuters and Facebook.

Crowd-sourced fact-checking via Community Notes is not enough, experts say

X touted its Community Notes program — the platform’s crowdsourced approach to addressing misinformation — as one way to combat Israel-Hamas war misinformation. The program allows certain users to submit context to tweets that might otherwise be misleading.

“In one week we’ve added 10,000 new authors and simultaneously rolled out new enhancements to help people see more notes, faster,” X’s CEO Linda Yaccarino posted Oct. 16.

On Oct. 17, the Community Notes account said it would require people to include sources for proposed notes. Musk responded: “Links to actual source data, not some bs press article, are what matter. Many legacy media organizations have no business model or meaningful circulation anymore — they just exist as propaganda tools for their owners.”

Experts described Community Notes as innovative, but they cautioned that the feature is imperfect and not enough to single-handedly combat misinformation on X.

Using “the wisdom of crowds” to inform rather than having platform moderators decide what people can or cannot see is one way to promote freedom of speech while providing context, said Siegel.

For example: In June, Florida Gov. Ron DeSantis falsely claimed that the Los Angeles Dodgers’ decision to recognize an LGBTQ+ group during a Pride Night event resulted in photos of “a virtually empty stadium” for the game, and his post received a Community Note. “The photo was taken an hour before the opening pitch,” it read. “The Dodgers reported attendance for June 16 was 49,074.”

The downside is that submissions can be spammed, and Community Notes can contain inaccurate information or lack crucial context.

Community Notes volunteers have expressed frustration at how long it has taken for notes to appear on posts containing misinformation since the Israel-Hamas war started. Community Notes must be accepted by a consensus of people from across the political spectrum, so they can be slow to appear publicly. And many notes on polarizing subjects will never become public at all; sometimes, they disappear.

NewsGuard found that Community Notes appeared on 79 of the 250 posts the group analyzed that shared misinformation about the Israel-Hamas war — or 31.6% of the time.
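That 31.6% rate can be verified from the counts NewsGuard reported, with no assumptions beyond the figures above:

```python
# Share of the 250 analyzed misinformation posts that carried a Community Note.
noted_posts = 79
total_posts = 250

print(f"{noted_posts / total_posts:.1%}")  # 31.6%
```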

Koltai, who researched Community Notes for Twitter, said the initiative was not meant to be the platform’s only approach to addressing misinformation.

Siegel said, “Anytime you have a free-form feature that allows for community action and interaction, it just has a lot of potential to introduce a lot of noise.”

PolitiFact Researcher Caryn Baird contributed to this report.