A coordinated effort to misrepresent photos from the Glastonbury festival sheds light on online right-wing networks in the UK
Welcome to the first post we have published in a while. The author is our new colleague Viktor Dimas, a graduate of Yale University and Sciences Po in Paris, who is looking at how modern information systems are used to influence politics in democratic systems – Amil
While checking out photos and videos from the recent Glastonbury festival on my social media feed (enjoy here and here), one picture of the festival’s Pyramid Stage being shared on Twitter caught my eye.
It supposedly shows the carnage left behind after climate-change activist Greta Thunberg addressed the crowds and is being used to show up the hypocrisy of “woke millennials”. However, the photo is from 2015 and so has nothing to do with Thunberg.
The photo might not tell us much about the recycling habits of festival goers, but it does show how orchestrated campaigns are using events in the public eye as “gotcha” moments to promote hard-right political narratives.
Glastonbury has become something of a platform for young, progressive voices who aren’t afraid to show their anger towards those in power. Singer Lily Allen also took to the stage this year, singing “Fuck You” in response to the US Supreme Court’s decision to overturn landmark abortion legislation. In 2019, UK grime artist Stormzy added the line “Fuck the government, and fuck Boris” to his hit Vossi Bop.
What’s happening with Glastonbury is part of a wider trend of online manipulation in the service of what is sometimes called the “culture war”: an international clash of worldviews in which the more progressive (and mostly younger) are pitted against the more traditional (and mostly older). We have seen similar online engagement around other events and issues that garner popular attention, including the Amber Heard vs Johnny Depp trial and the ongoing abuse directed at Meghan Markle, Prince Harry’s mixed-race wife.
In this post, I want to delve a little into what I found when looking into the accounts misrepresenting these images because it sheds light on how such campaigns are managed and the intent of those behind them.
I saw a total of 10 photos depicting litter at Glastonbury. Although they claimed to show the site after Thunberg’s appearance on stage, all were taken in previous years. The character of the accounts using these photos to portray festival goers as hypocrites was unexpectedly diverse and international in flavour.
When I searched for repeated use of the same photos and text, it became obvious that several different networks were involved in the effort. US far-right, pro-MAGA accounts were the most active, which is not a surprise considering their presence in other UK-related public debates. Amongst the UK-based accounts, some were supportive of UK Prime Minister Boris Johnson, while others were critical. They did, however, share a common thread of Covid scepticism and support for Brexit. More unexpectedly, I also saw accounts set up to look like supporters of Brazilian president Jair Bolsonaro and Indian Prime Minister Narendra Modi. Both men are known for their nationalist and socially conservative political stances.
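As an illustration, this kind of copy-paste coordination can be surfaced by grouping posts on their normalised text and flagging captions pushed by many distinct accounts. The sketch below uses made-up posts, account names and a hypothetical threshold; it is not the actual method or data used in this investigation.

```python
import re
from collections import defaultdict

def normalise(text):
    """Lowercase, strip URLs and collapse whitespace so near-identical
    copy-pastes reduce to the same key."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def find_copy_paste_clusters(posts, min_accounts=3):
    """Group (account, text) pairs by normalised text and return only
    the texts posted by at least `min_accounts` distinct accounts."""
    clusters = defaultdict(set)
    for account, text in posts:
        clusters[normalise(text)].add(account)
    return {t: accs for t, accs in clusters.items() if len(accs) >= min_accounts}

# Hypothetical sample: three accounts posting the same caption verbatim
posts = [
    ("@acct1", "Look at the mess left after Greta spoke! https://t.co/x1"),
    ("@acct2", "look at the mess left after greta spoke!  https://t.co/y2"),
    ("@acct3", "Look at the mess left after Greta spoke!"),
    ("@acct4", "Great weekend at Glastonbury, see you next year"),
]
flagged = find_copy_paste_clusters(posts)
```

Exact-match grouping like this only catches verbatim copying; in practice researchers also use fuzzy matching to catch the “40 variations” of a template described below.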
At this point, you might conclude that this is a case of right-leaning individuals across the world sharing what they see as an egregious example of just the sort of thing that winds them up about progressives. However, the sharing patterns suggest something a little more premeditated.
When I examined the hundreds of accounts promoting these messages, it became clear that there were two different types of account: high-quality accounts with large followings producing original content, and low-quality accounts that merely copy and paste posts.
Digging a bit further into the text and photos used, I was able to trace them back to an account on messaging app Telegram used by supporters of UK hard-right activist Tommy Robinson.
More than 40 different variations of the Glastonbury-rubbish tweet could be traced back to this one account. Content produced in hard-right Telegram channels is regularly shared on Twitter completely unchanged, while simultaneously being forwarded to multiple other Telegram channels and social media platforms. I found the posts from the Tommy Robinson fan accounts republished on Reddit, Facebook, GAB and GETTR. A similar sharing pattern could be observed for at least one more text/picture combination, written in German and originating in the German far-right Telegram channel @MeinungsFreiheit2.0.
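Tracing a caption back to its origin, as described above, essentially means finding the earliest sighting of that text across platforms. A minimal sketch, with entirely hypothetical sightings and timestamps (the channel and account names below are illustrative placeholders, not real observed accounts):

```python
from datetime import datetime, timezone

def earliest_source(sightings):
    """Return the sighting with the earliest timestamp -- a rough proxy
    for where a text/picture combination first appeared."""
    return min(sightings, key=lambda s: s["seen_at"])

# Hypothetical sightings of one caption across three platforms
sightings = [
    {"platform": "Twitter", "account": "@fan_account_1",
     "seen_at": datetime(2022, 6, 27, 9, 30, tzinfo=timezone.utc)},
    {"platform": "Telegram", "account": "example_fan_channel",
     "seen_at": datetime(2022, 6, 27, 8, 5, tzinfo=timezone.utc)},
    {"platform": "Reddit", "account": "u/example",
     "seen_at": datetime(2022, 6, 27, 10, 12, tzinfo=timezone.utc)},
]
origin = earliest_source(sightings)
```

Timestamps alone are not proof of origin (posts get deleted, clocks differ across platforms), which is why this kind of trace is combined with the content matching described above.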
Again, sharing and coordinated posting via messaging apps doesn’t necessarily show fake amplification – although platforms say they take a dim view when this method is used to promote false information or bully and intimidate.
To determine whether or not this activity is actually inauthentic – in other words, not just the result of real-life angry individuals each running their own accounts – I used two free, open-source tools: OSoMe’s BotAmp and the brilliant Bot Sentinel, which looks at behaviours that violate Twitter’s policies.
With BotAmp I was able to compare the posts about “hypocritical millennials” to more general conversation about Glastonbury. I found that at least half of the 10 most popular anti-woke conversations were being driven by inauthentic activity (fake accounts). I then examined individual accounts within these conversations using BotSentinel and found that many of those engaging with the most popular tweets were showing up as highly likely to be bots. It should be noted that such inauthentic activity is not permitted by Twitter. However, the consensus amongst researchers is that the platform does not strictly enforce its own policies.
Using OSoMe’s Botometer scores, I found that such activity accounted for 12-17% of all the tweets promoting the false claim about the rubbish left by festival goers. Some might argue that these findings show most of the activity was not due to bots, and that its impact was therefore negligible. That would be a misunderstanding of how online conversation works. The 12-17% of tweets identified represent the driving force of an effort that then attracted other, authentic participants. The inauthentic networks succeeded in taking a false claim and having it believed by authentic users, who liked, shared and endorsed it in good faith because it was pushed on to their timelines and they sympathised with the underlying premise that the woke/politically correct are actually selfish hypocrites.
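The share of inauthentic activity quoted above comes down to a simple calculation: score each tweet’s account, pick a threshold, and count what fraction crosses it. Here is a minimal sketch with invented scores and an arbitrary threshold (real bot-detection tools use their own scales and cut-offs, so both numbers below are assumptions for illustration):

```python
def inauthentic_share(bot_scores, threshold=0.8):
    """Fraction of tweets whose account bot-score meets or exceeds
    the chosen threshold."""
    flagged = sum(1 for s in bot_scores if s >= threshold)
    return flagged / len(bot_scores)

# Hypothetical scores (0 = almost certainly human, 1 = almost certainly
# automated) for 20 tweets; three cross the 0.8 threshold
scores = [0.1, 0.2, 0.9, 0.3, 0.85, 0.4, 0.15, 0.95, 0.2, 0.3,
          0.1, 0.25, 0.4, 0.35, 0.5, 0.2, 0.3, 0.45, 0.1, 0.6]
share = inauthentic_share(scores)
```

Note how sensitive the headline figure is to the threshold chosen, which is one reason such estimates are reported as a range (here, 12-17%) rather than a single number.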
We have seen enough instances of similar activity to conclude there are organised efforts taking place to frame high-profile events in a way that promotes a worldview that is characterised by its hostility to women, migrants and minorities.
Experts such as Steve Corman at Arizona State University say the weaponisation of narratives in this way can radicalise individuals. It also skews public discourse, creating political traction for views and groups that in reality represent only a minority of the wider public.
So, how can such a situation be avoided?
At Valent we try to understand how false information spreads. In doing so, we see behaviours that are not visible to most casual users. Users look at their social media feeds and assume they depict an accurate rendition of reality. Even when we know what appears on our feeds is decided by an algorithm, it’s difficult to maintain objectivity when something provocative or concerning clogs our screens hour after hour or day after day. Clearly, we need a new type of media literacy.
It is vital that social media platforms are alive to how their tools are being used to mislead. For example, why is click-to-tweet not allowed by Twitter, but en masse copying and pasting from Telegram channels is OK? Ensuring policies are aligned with the way users are interacting with platforms and existing rules are properly enforced will go some way towards ensuring platforms encourage rather than undermine public debate.
The observations in this newsletter do not reflect the views of the clients or partners of Valent Projects.