X's Struggle to Contain Israel-Gaza Disinformation Bodes Ill for Its Election Preparedness
Oct 13, 2023 | 19:43 GMT
A week into the Israel-Gaza war, the proliferation of false or misleading information on X (formerly known as Twitter) illustrates the platform's content moderation challenges, suggesting that threat actors will have more opportunities to use X in their efforts to sway public opinion ahead of major upcoming elections. Since the conflict erupted on Oct. 7, media reports have revealed significant shortcomings in the platform's ability to prevent and remove misinformation and disinformation. Over the past week, numerous widely shared posts on the Israel-Gaza war have been found to contain content that is either partially false or entirely fabricated, making it difficult for users, as well as researchers and open-source intelligence professionals, to discern fact from fiction. The situation reached a critical point on Oct. 8, when X owner Elon Musk personally recommended that users follow accounts known for spreading misinformation, such as @WarMonitors and @sentdefender, both of which helped spread fake images in May of a purported explosion near the U.S. Pentagon. Though Musk ultimately deleted the post, it had garnered over 11 million views. Musk has also left up accompanying posts (which X's updated algorithm surfaces in all users' feeds) encouraging users to trust X for the truth rather than traditional media outlets, claiming that the mainstream media is to blame for misinformation.
Some of the false posts that have spread on X in recent days include videos of explosions and collapsing buildings presented as footage of rockets fired by Hamas but actually taken years earlier during the Syrian civil war. Users have also shared a violent video appearing to show an Israeli woman being tortured by Hamas that was, in reality, 2015 footage of a 16-year-old being burned to death in Guatemala. Other X accounts have spread false claims that Iran had entered the Israel-Gaza conflict and that the U.S. embassy in the Lebanese capital of Beirut had been evacuated. A fabricated document purporting to show White House plans to give Israel $8 billion in aid, along with a fabricated BBC report claiming to show evidence that Ukraine sold NATO weapons to Hamas, have also circulated on the platform over the past week.
Disinformation actors also created a fake account impersonating the Israeli English-language newspaper The Jerusalem Post in hopes of spreading false information about the war; the paper's legitimate website had been taken down for several days following the initial outbreak of the conflict by Anonymous Sudan, a hacktivist group with alleged ties to the Russian state. In one post that received over 700,000 views, the impersonation account falsely claimed that Israeli Prime Minister Benjamin Netanyahu had been hospitalized.
The seemingly significant uptick in false information on X follows a number of other changes to the platform that have undermined content moderation efforts, including the removal of headlines from links, new subscription plans and reported cuts to its election moderation team. Most recently, on Oct. 4, X announced that it would no longer display headlines alongside article links, with links instead appearing only as an article's primary image. This forces users to guess what an article may be about from often vague or arbitrary images, at times leaving them misled about a link's contents and creating confusion for those searching for information on a particular subject. On top of this, one benefit for users who pay for X's Twitter Blue subscription service is having their posts reach more people by being prioritized in other users' feeds. The consequent deluge of paid content has, in turn, helped fuel the proliferation of false information on the platform, as posts from more authoritative primary sources are now often buried beneath promoted posts from paying subscribers who, despite bearing official-looking checkmarks next to their names, could be anyone.
In late September, X reportedly cut its election moderation teams and removed its election misinformation reporting tool for at least U.S. and Australian users, though a similar feature remains available to EU users, enabling them to report content that may have ''negative effects on civic discourse or elections.''
In December 2022, Musk dissolved then-Twitter's Trust and Safety Council, a group of roughly 100 independent academics, civil society leaders and activists who sought to combat hate speech and other harmful content on the platform. Scrapping the council kicked off a series of further changes to the platform's moderation efforts.
In November 2022, the platform (then still Twitter) rolled out its ''Twitter Blue'' paid subscription service, which gives paying users the blue checkmark previously reserved for verified high-profile accounts like celebrities and politicians. Content from these paying users is also promoted to the top of user timelines, above content from non-paying users.
The proliferation of false content on X since the outbreak of the Israel-Gaza conflict suggests that nation-state actors, along with domestic political groups, will have an easier time exploiting the platform for influence operations seeking to sway public perceptions in the leadup to numerous major elections in the coming year. Content on X in the early days of the Israel-Gaza conflict has been characterized by a highly accelerated rate of disinformation, with various actors attempting to control the narrative, offering a preview of the wave of fake content likely to surround the prominent elections scheduled around the world through the end of 2023 and throughout 2024. An early test may come as soon as Oct. 15, when Poland holds general elections in which pro-Russian threat actors have strong incentives to deploy disinformation to weaken Polish support for Ukraine and deepen fissures with the European Union.
Looking ahead, threat actors will not only be able to take advantage of X's content moderation challenges, but many will also use generative AI tools, whose output is harder to detect, to fabricate video, audio, text and other content that makes false information appear more convincing. This will provide additional opportunities to spread narratives serving a wide range of ends ahead of a series of major elections in 2024, including the Taiwanese presidential election in January, Indian general elections in April and May, European Parliament elections in June, and U.S. general elections in November.
Russian, Chinese and Iranian threat actors in particular will be active on the platform throughout these election cycles to push public perceptions toward their interests, exploit divisions among social groups and undermine trust in democratic processes. The U.S. Office of the Director of National Intelligence, in its 2023 Annual Threat Assessment, specifically cited Russia and China as states pushing disinformation to promote authoritarianism in a broader conflict with democratic governments. Iran, too, has leveraged social media to spread disinformation and attempt to influence public opinion and will continue to do so, though its efforts have tended to be less sophisticated and have failed to gain much traction. In at least some countries holding upcoming elections, including the United States, domestic political groups will also take advantage of X to spread fake content, particularly deepfakes and other generative AI-fueled material, focusing on politically salient issues and depicting opposition parties and politicians in unfavorable, fictitious circumstances.
Earlier this year, the European Union launched a pilot program assessing how effectively various tech companies fight disinformation. On Sept. 26, European Commission Vice President Vera Jourova noted that the program found X to be ''the platform with the largest ratio of misinformation or disinformation posts,'' and that disinformation accounts on X tend to have joined the platform more recently and have more followers than legitimate accounts.
Slovakia's Sept. 30 parliamentary elections were preceded by a flurry of false posts on social media, including AI-generated content. U.K.-based nonprofit Reset detected more than 365,000 election-related disinformation posts on Slovak social media in the first two weeks of September, and its analysis found that disinformation posts generated five times more exposure than average posts.