Viktor Dimas

Evidence of Online Manipulation in UK Public Debate


Valent investigation shows hundreds of thousands spent on undermining London’s key clean-air policy – ULEZ

Over the past three years, Valent has investigated disinformation and online manipulation in almost a dozen different contexts around the world. In the UK, although we saw indications that manipulation was taking place, we struggled to find concrete evidence. We now know why: the manipulation taking place in the UK is far more sophisticated than the hashtag spamming you might find in Africa or the Middle East. The effort to undermine ULEZ that we examine below is just the tip of the iceberg.

– Amil Khan

ULEZ and Online Manipulation

The Labour party’s loss in the recent Uxbridge by-election in London was widely attributed to the clean-air policy championed by Mayor Sadiq Khan. A new investigation by Valent, however, has unearthed an extensive online campaign aimed at undermining support for the Ultra Low Emission Zone (ULEZ) ahead of the election.

This study was intended as a rapid, light-touch assessment to establish whether online manipulation was taking place (spoiler: it is). We did not seek to identify the actors behind the manipulation, or to determine whether it was affecting voting intention; those questions could be answered through more in-depth research. The investigation was conducted independently by Valent, without funding or commission from any external entity.

A total of 13k tweets and 8k retweets from 8,583 Twitter accounts were analyzed. However, as the investigation was primarily focused on Twitter – not the most significant platform for disinformation in the UK – the findings may only represent the tip of the iceberg. Larger manipulative activities might be happening on platforms like Facebook, Instagram, Telegram, and TikTok.

Dissecting the Methodology

The methodology observed uses a multi-tier digital architecture, where different types of accounts undertake specific roles in a coordinated manner. Instead of simple hashtag spamming, this approach manipulates Twitter’s algorithm to prioritize content from anti-ULEZ influencers.

Such manipulation results in thousands of accounts pushing anti-ULEZ content onto users’ timelines. By design, the lowest-quality accounts, which run the highest risk of detection, are also the easiest to replace.

It’s important to note that there is a significant amount of genuine anti-ULEZ sentiment, and influencers leading the charge may not even be aware that their content is being amplified through algorithmic manipulation.

The Timeline of Manipulation

ULEZ-related account registration tracked overall Twitter trends until ULEZ was introduced in central London in 2019. After that date, registrations of ULEZ-related accounts surged, suggesting the start of a coordinated campaign.

After Mayor Khan expanded ULEZ coverage in October 2021, account generation increased significantly. Of the 8,583 Twitter accounts mentioning ULEZ, about 48% (4,114 accounts) were created after November 2022.
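The creation-date analysis above boils down to a simple cutoff count: how many accounts in the sample were registered after a given date. The sketch below illustrates the calculation with a handful of made-up dates; the field names, cutoff, and sample are assumptions for illustration, not Valent’s actual data or pipeline:

```python
from datetime import date

# Hypothetical sample of account-creation dates (the real study
# covered 8,583 accounts).
created_dates = [
    date(2019, 5, 2),
    date(2021, 12, 9),
    date(2023, 1, 15),
    date(2023, 4, 3),
]

CUTOFF = date(2022, 11, 1)  # "created after November 2022"

recent = sum(d >= CUTOFF for d in created_dates)
share = recent / len(created_dates)
print(f"{recent} of {len(created_dates)} accounts ({share:.0%}) created after {CUTOFF}")
```

Applied to the real figures, the same calculation gives 4,114 / 8,583 ≈ 48%.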

Identifying Seeders and Spreaders

On closer examination, the accounts with the largest follower numbers (the top 10%) were found to be authentic anti-ULEZ voices, which we term “seeders”. These accounts seemed genuine, with clear attribution to real people or groups, long-standing registration dates, unique posts, and a low retweet ratio.

After removing these seeder accounts, 3,702 accounts remained that exhibited signs of inauthenticity: recent registration, fake or generic names, and high retweet and duplication ratios focused solely on attacking ULEZ. We refer to these accounts as “spreaders”.

Spreader accounts are primarily low-quality and often retweet or like content posted by seeders, rarely posting original content. It’s suspected that these accounts are controlled by automated software that can direct multiple accounts simultaneously.
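The seeder/spreader distinction described above can be expressed as a simple rule-based classifier over per-account signals. The `Account` fields and the thresholds below are illustrative assumptions, not the exact criteria used in the study:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    handle: str
    created: date
    followers: int
    tweets: int      # original posts
    retweets: int

# Illustrative thresholds (assumptions, not Valent's published criteria)
NEW_ACCOUNT_CUTOFF = date(2022, 11, 1)
HIGH_RETWEET_RATIO = 0.8

def classify(acct: Account) -> str:
    """Rough seeder/spreader split based on the signals described above."""
    total = acct.tweets + acct.retweets
    retweet_ratio = acct.retweets / total if total else 0.0
    if acct.created >= NEW_ACCOUNT_CUTOFF and retweet_ratio >= HIGH_RETWEET_RATIO:
        return "spreader"   # newly registered, mostly amplifies others
    if acct.created < NEW_ACCOUNT_CUTOFF and retweet_ratio < 0.5:
        return "seeder"     # long-standing, mostly original content
    return "unclassified"
```

In practice, accounts falling between the two patterns would need manual review, which is why the study also checked names, profile pictures, and content duplication.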

The Tools of Manipulation

The Guardian’s investigation in February 2023 revealed an Israeli company marketing a software tool called AIMS that can control up to 30,000 accounts across various platforms. However, this company is not alone; dozens of companies worldwide advertise similar tools, often using euphemisms like “multi-accounting”.

On average, the 3,702 spreader accounts examined have 380 followers each. These follower accounts are themselves inauthentic, with large following-to-follower ratios, stolen or fake profile pictures, and recent creation dates. We call these accounts “validators”: they are generated and maintained by dubious online vendors so that customers can quickly add followers to their accounts, tricking Twitter’s algorithms into treating those accounts as more important.

Our analysis shows it would cost approximately £44,000 to buy an average of 380 followers for each of the 3,702 spreader accounts, based on a bulk price of £31 (converted from $40) per 1,000 followers.
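The £44,000 figure is straightforward to reproduce as back-of-envelope arithmetic, using only the numbers stated above:

```python
# Reconstruction of the follower-cost estimate described above.
SPREADER_ACCOUNTS = 3_702
FOLLOWERS_PER_ACCOUNT = 380    # average observed per spreader
PRICE_PER_1000_GBP = 31        # bulk price, converted from $40

followers_needed = SPREADER_ACCOUNTS * FOLLOWERS_PER_ACCOUNT
cost_gbp = followers_needed / 1_000 * PRICE_PER_1000_GBP
print(f"{followers_needed:,} followers at £{PRICE_PER_1000_GBP}/1,000 ≈ £{cost_gbp:,.0f}")
```

This comes to roughly £43,600, which rounds to the £44,000 quoted.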

Cost of Manipulation

The estimated cost of the online manipulation observed during this study includes £124,000 for AIMS-style software (based on 2015 prices quoted by the Guardian) and £44,000 for false followers for the spreader accounts, totalling £168,000. However, this indicative figure might not capture the full cost, as it doesn’t include all potential associated services.

Even though seeders may express genuine anti-ULEZ sentiment, their voices are being amplified through Twitter manipulation, giving their talking points greater traction and attention. Some of the biggest anti-ULEZ voices, such as @ronin19217435, @sophielouisecc, and @GarethBaconMP, are among those benefitting from inauthentic amplification, possibly unbeknownst to them.

Further Investigation

Online manipulation techniques are evolving rapidly. The anti-ULEZ campaign investigated here represents one of the most advanced strategies seen in nearly four years of studying similar activities in Africa, the Middle East, and Europe. The resources allocated for such manipulations are significant and often overshadow the targets’ communications efforts. At the same time, advancements in AI are making these operations cheaper.

Further investigation is required to identify who is behind this anti-ULEZ manipulation effort, its exact cost, and to delve into the potential manipulation on larger platforms like Facebook, TikTok, and Telegram. At this point, we are scratching the surface of understanding the role of online manipulation in shaping public opinion around UK policy debates.

Want to hear more about the work we do? Follow us on LinkedIn.