As the Israel-Hamas war has flooded social media with violent content, false information and a seemingly limitless swell of opinions, lawmakers and users have accused platforms like TikTok and Facebook of promoting biased posts.
Tech giants have denied the charges. TikTok, accused of elevating pro-Palestinian content, blamed “unsound analysis” of hashtag data. Some Instagram and Facebook users circulated a petition accusing the platforms’ parent company, Meta, of censoring pro-Palestinian posts, which Meta attributed to a technical bug.
Antisemitic content swarmed onto X, the platform formerly known as Twitter and run by the billionaire Elon Musk. X’s chief executive, Linda Yaccarino, said in a post on Thursday about antisemitism that “there’s no place for it anywhere in the world.”
Where the truth lies, however, is hard to glean, according to academic researchers and advocacy groups. They said the debates over content related to the Israel-Hamas war have highlighted the roadblocks complicating independent analysis of what appears on the major online services. Instead of being able to conduct methodical studies of online discourse, they must try to grasp its scope and effects using inefficient and incomplete methods.
The murkiness enables people to make dubious claims about what is dominant or popular online and allows the platforms to retort with similarly flimsy or warped evidence, limiting accountability on all sides, the researchers said.
“We’re in desperate need of rigorous, informed research on what the actual impact of platforms are on society, and we can’t do that if we don’t have access to data,” said Megan A. Brown, a doctoral student at the University of Michigan who researches the online information ecosystem.
Inflammatory content — and what to do about it — remained top of mind at social media platforms this week. More than a dozen Jewish TikTok creators and celebrities, including the actors Sacha Baron Cohen and Debra Messing, confronted TikTok executives and employees in a private meeting about the platform’s handling of antisemitism and harassment. After Mr. Musk endorsed an antisemitic post on X, internal messages showed that IBM cut off $1 million in planned advertising spending.
Researchers also tried to understand a surge of interest in a decades-old letter from Osama bin Laden. The so-called “Letter to America” criticized the United States and its support of Israel, repeating antisemitic tropes and condemning the destruction of Palestinian homes.
After reviewing public social media posts from Tuesday to Thursday, researchers from the Institute for Strategic Dialogue concluded that references to the letter jumped more than 1,800 percent on X. They found 41 “Letter to America” videos with more than 6.9 million views on TikTok.
The researchers, Isabelle Frances-Wright and Moustafa Ayad, said in an interview that they wanted to do much more sophisticated analysis. Instead, they had to run searches by hand using basic terms, unable to analyze the letter’s spread by region or language.
“Much of this content, particularly video content, is not tagged with the type of text we can manually search, so anything we’re finding is really just the tip of the iceberg,” Ms. Frances-Wright said.
Jamie Favazza, a spokeswoman for TikTok, said that the company supported independent research and had granted more than 130 academic research teams access to study the site. “We’re working diligently to expand eligibility to civil society researchers in the U.S. soon,” she said.
Meta declined to comment. X did not respond to a request for comment.
Background data about engagement, volume and other metrics is usually retrieved through a platform’s application programming interface, or A.P.I. The major tech companies have long offered some degree of access, but researchers said that access now appears to be shrinking.
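The kind of analysis an A.P.I. enables can be sketched in a few lines. The response shape below is invented for illustration and does not match any specific platform’s schema; it shows only the general pattern of tallying engagement per hashtag across paginated results.

```python
# Hypothetical sketch of the aggregation researchers run on data returned
# by a platform A.P.I. The "pages" structure is an invented stand-in for
# paginated JSON responses, not any real platform's format.
from collections import Counter

def tally_hashtag_views(pages):
    """Sum view counts per hashtag across paginated A.P.I. responses."""
    totals = Counter()
    for page in pages:
        for post in page["data"]:
            for tag in post.get("hashtags", []):
                totals[tag] += post.get("view_count", 0)
    return totals

# Two mock "pages" standing in for paginated A.P.I. results.
pages = [
    {"data": [{"hashtags": ["freepalestine"], "view_count": 1200},
              {"hashtags": ["standwithisrael"], "view_count": 900}]},
    {"data": [{"hashtags": ["freepalestine", "news"], "view_count": 300}]},
]

print(tally_hashtag_views(pages)["freepalestine"])  # prints 1500
```

Without A.P.I. access, researchers cannot collect the underlying posts in the first place, which is why the pricing and access changes described below matter.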
This year, as Mr. Musk sought new ways to monetize X, the company started charging thousands of dollars for monthly access to its A.P.I., effectively shutting out many researchers. Meta’s support for the data analysis tool CrowdTangle has dwindled amid internal concerns about damaging the company’s reputation.
These days, researchers said, the data they can study is often dictated by what platforms choose to release (“research by permission,” as some put it) and is frequently unreliable or delayed long past the point of relevance.
“With data, you can always paint the picture that you want when you are the only one who has access to that data,” said Sukrit Venkatagiri, an assistant professor of computer science and misinformation expert at Swarthmore College. “If we have no lens into what is happening in these spaces that have billions of users, that is a little scary.”
TikTok has been at the center of the recent firestorm, partly because of its ownership by the Chinese company ByteDance, with some critics claiming that it is pushing pro-Palestinian content to align with the government in Beijing. TikTok has been accused of amplifying pro-Palestinian videos through its powerful algorithmic feed and of failing to address antisemitic content.
TikTok has issued multiple statements pushing back on accusations of bias, pointing to polls showing that young Americans supported the Palestinian cause before the company existed. The company has also tried to poke holes in data about popular hashtags that critics said revealed a pro-Palestinian bent on the service.
This week, TikTok said that the hashtag #standwithIsrael had fewer videos than #FreePalestine, but “68 percent more views per video in the U.S., which means more people are seeing the content.” It also pointed to public data on Instagram and Facebook, which showed millions of #FreePalestine posts and fewer than 300,000 #standwithIsrael posts.
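The arithmetic behind a “views per video” comparison is straightforward: a hashtag with fewer videos can still average more views per video. The counts below are hypothetical placeholders chosen to produce a 68 percent gap, not TikTok’s actual figures.

```python
# Illustrative arithmetic behind "views per video" comparisons.
# All counts below are hypothetical, not TikTok's actual data.

def views_per_video(total_views, video_count):
    """Average views per video for a hashtag."""
    return total_views / video_count

def percent_more(a, b):
    """How much larger a is than b, as a percentage."""
    return (a - b) / b * 100

# Hypothetical example: fewer videos, but a higher per-video average.
israel_vpv = views_per_video(total_views=8_400_000, video_count=2_000)       # 4200.0
palestine_vpv = views_per_video(total_views=50_000_000, video_count=20_000)  # 2500.0

print(round(percent_more(israel_vpv, palestine_vpv)))  # prints 68
```

Which metric better captures “what people are seeing” — raw post counts or views per video — is exactly the kind of question independent researchers say they cannot settle without access to the underlying data.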