Why mandatory identification on social media is mostly a terrible idea


I’m old and a social media junkie. I ran a BBS in the late 80s, and I’ve built and run dozens of community forums and websites since the 90s (some of which are still going strong today). I blogged on Mono before the term “blog” was coined and LiveJournal existed, I was on Facebook back when you needed a university email address to access it, and my Twitter account dates back to 2008. I even spent six years working for one of the leaders in online content moderation, working with many world-renowned brands with colossal global userbases. I’ve seen virtually every type of online community and witnessed the widest possible range of attempts by those communities to tackle toxic users of every kind.

Do you know what I’ve never seen? An entirely effective way to manage toxic behaviour, other than running what is essentially a very small online gated community of very similarly minded people. I’ve not seen a single solution that works at scale.

The oft-touted solution, especially for global platforms like Facebook and Twitter, is mandatory verified ID. On the face of it this seems like a great idea: surely the world would be a better place if people couldn’t hide behind anonymity and had to publicly own their opinions. There are just two problems with it: it doesn’t fix the problem, and it puts a whole bunch of vulnerable people at risk.

Why doesn’t it fix it? Well, whilst I’m not disputing Penny Arcade’s G.I.F.T. theory (warning: possibly NSFW language), that if you take a normal person and give them anonymity and an audience you end up with something altogether different, what it doesn’t cover is how much the anonymity is actually needed when the person isn’t absolutely “normal” to begin with.

I’m writing this blog the day England lost the Euro 2020 final to Italy, and rather than blame the defeat on our lacklustre second-half performance, some people took to Twitter and blamed it on the three young, black players who failed to score in the penalty shoot-out that eventually decided the competition.

But the tweets that hurled racial abuse at the players weren’t coming from an army of hard-to-trace anonymous sock-puppet accounts; they were coming from easily identifiable real people. People who listed their employers in their bios, who mentioned their wives and kids in their previous tweets. Anonymity made zero difference.

And this isn’t a one-off. Generally these attacks aren’t well pre-planned campaigns where users are attempting to cover their tracks; they are angry, spur-of-the-moment things done in the open. Tracing the perpetrators is rarely hard (even without the help of the platform or the police); it’s convincing the CPS (Crown Prosecution Service) that it’s worth pursuing a conviction that’s the problem.

I’ve moderated public forums where people not only use their real names and have disclosed their real home addresses, but regularly meet up in person, and this toxic behaviour is still rife. The people responsible generally don’t feel they did anything wrong, and even if they did, nothing will actually happen to them.

There is an exception to “anonymity solves nothing”, and that’s organised, pre-planned trolling and cyberbullying. In these cases the perpetrators all too often rely on anonymity, and yes, a platform with strong verification processes may be less of a target. So why wouldn’t we do it?

Simple. The cost. Not the financial cost, that’s easily absorbed, but the cost to other users.

Social media has been a massive tool for self-help and peer support, especially in situations where people may not feel comfortable discussing their issues with their own peer groups. From the LGBTQ+ community to people questioning their religious upbringing, from people in abusive relationships to those scared about not understanding parts of their job that others assume they know, to those seeking help with an addiction or simply unable to find better mental health support: anonymous communities have been a haven for people who feel it would have a detrimental impact on their real lives if their online persona were associated with their real-life one. Currently they can escape to a whole new world online, knowing that those who would look to hurt or ridicule them could obtain little more “real world” info on them beyond burner email addresses and phone numbers.

In the UK, since 2014, online services that allow user-generated content have been legally obliged to help anyone who claims they have been defamed get in contact with their alleged defamer. I’ve been involved with multiple cases where knowledge of this law has been used as a mechanism to obtain the contact details of potential victims (generally of harassment), and I’ve been very glad that all I could provide was a Gmail address.

It’s obvious why storing additional personally identifiable information (linking back to real-world details) would be very dangerous in this situation, not to mention what would happen in the case of a data breach (please don’t come at me with “technical controls” for this; that’s my day job, and I stand by the rule that the more data you hold, the more data you can lose).

Where anonymity isn’t possible, terrible things can and do happen. I’ve had a project colleague forced to move out of their home and into a hotel because racially motivated death threats against them were deemed credible. I’ve had a friend physically assaulted in front of his family because he was (wrongly, it turned out) identified as a social services worker involved in a child being placed into care. And I myself have had a vexatious third party instruct his lawyers to send legal papers to my employers (claiming he couldn’t find my address, despite it being clearly listed in multiple places). Why on earth would we make this EASIER to achieve, especially when the targets are particularly vulnerable?

So, is pushing a few trolls off a few platforms (it won’t stop them, it’ll just limit where they ply their trade a little) worth placing all those vulnerable groups who rely on anonymity to seek help at significant risk?

I’m not advocating that every platform should support anonymity; there is a place in the world for both anonymous and verified-ID platforms. What I’m against is the removal of the ability to choose. Both already exist in abundance, covering every possible niche: thanks to COPPA, Verified Parental Consent (and so, by extension, ID) is common on platforms permitting under-13s, while at the other end many services simply require an easily obtained free email address. So if one or the other is the obvious solution, why do the masses continue to use this middle ground, where a level of anonymity is still possible but the platforms discourage it, not for online safety reasons, but because your online personal data is worth so much more when it’s associated with a real-life persona?

So when the noise has died down, when the likes of the BCS have stopped jerking knees and started talking to the experts they have amongst their own members, and when the pros and cons are being discussed in parliament (which I feel is inevitable), I expect there will suddenly be groups of influence and power pushing for mandatory verified ID. When they do, ask yourself: what’s in it for them, and are they REALLY doing this for online safety?
