BBC's research reveals how deepfake content is dealt with on X
The BBC’s Undercover Voter Project has revealed how deepfake content is spreading misinformation ahead of the UK’s general election. Despite policies against such content, social media platforms like X are slow to act, allowing damaging misinformation to persist.
Social media site X has finally taken action against a network smearing UK politicians, after a BBC investigation – part of the broadcaster's Undercover Voter Project – revealed how groups of accounts had been creating and sharing deepfake images ahead of the general election.
So, what is the Undercover Voter Project?
Devised by the BBC's Disinformation and Social Media Correspondent, Marianna Spring, the project created 24 Undercover Voters to investigate how social media platforms influence political views and behaviours. Its key objectives are exploring the echo chambers that can form online, analysing political advertising, assessing moderation on social media platforms, and – the one that applies to today's topic – tracking misinformation.
For example, a doctored video of Labour's Wes Streeting has been pushed to X users, claiming to show him calling fellow politician Diane Abbott a "silly woman". Not only are accounts posting these videos, but they're also leaving misleading comments to suggest the videos are genuine. Although the posters, tracked down by Marianna Spring, claim they are simply "s##tposting", the effect is very real.
While the people behind some of the clips claim to be simply "trolling", the long-term effects can be very damaging for any political candidate. Beyond that, the spread of misinformation muddies the waters around elections, leaving people unsure of what is real and what is fake.
All of this was raised with X several times over more than 12 months before the company finally responded to Spring's requests, with an X spokesperson telling the BBC: "X has in place a range of policies and features to protect the conversation surrounding elections." The statement also pointed to the labelling of content that violates the platform's synthetic and manipulated media policy, and claimed that accounts engaging in violations are removed.
This all raises an important question about social media sites and the need for robust moderation and checks for doctored content. Although all of this violates X's own policies, the time it takes to remove these accounts and their content is far too long compared with how quickly such material can convince someone that a politician said something they never actually said.
It also points to the lack of control around AI, whose constant development is not matched by the necessary regulation and laws. Creating doctored content and deepfakes of people is easier than ever, and without heavy penalties for misuse, or laws even dictating what is and isn't allowed, misinformation is free to run rampant.
There are very few repercussions for actions that could have severe consequences. Posting thousands of videos and pictures to sway elections using misinformation will, at worst, simply get your account suspended – and even that can take a long time, leaving more than enough opportunity to confuse and anger people before the account is taken down.
What is your opinion on this? Are lawmakers and policymakers moving quickly enough to keep up with AI development and prevent things like this from happening, or could more be done, and faster, to make sure people are protected and aware?