Will 2022 be a landmark year for social media platform regulation?
It’s still difficult to determine how the various proposed approaches to social media legislation will actually work, and what impact they’ll have. But with the UK outlining its latest push to hold social platforms more accountable for the content they host, the movement for significant regulatory change is growing, and it’s likely to be a key point of debate over the next year.
The UK’s proposed ‘Online Safety Bill’, announced earlier today, outlines new protections for young users, and tougher regulations on fake ads and scams to better protect online consumers.
As explained by BBC:
“The report also recommends that a wide range of new criminal offences should be created, based on proposals from the Law Commission, and carried in the bill, including promoting or ‘stirring up’ violence against women, or based on gender or disability, and knowingly distributing seriously harmful misinformation.”
Essentially, the bill aims to implement tougher penalties for social platforms, to ensure that they’re held more accountable for enforcement, in order to address rising concerns about the influence of digital communication and connection. But questions remain as to how, exactly, such regulations can be effectively enforced, with a lot coming down to what’s considered ‘reasonable’ with regard to response times when addressing such complaints.
Various regulatory groups have sought to implement similar rules and enforcement penalties, by imposing clearer parameters around what social platforms are expected to do in response to official complaints. But Meta has generally been able to argue that it can’t reasonably be expected to remove content within, say, 24 hours unless it’s been made aware of it. Once an official complaint is issued, such a response can be enacted, but often, the damage is done by content that hasn’t sparked initial concern, which makes truly effective enforcement difficult.
For its part, Meta has repeatedly outlined its ongoing improvement efforts via its regular Community Standards Enforcement Reports, but gaps remain between community and government expectations and the platforms’ realistic capacity to act, given that all users can post whatever they want, in real time, and automated systems, while improving, can’t catch everything before anyone sees it.
The arguments then come down to what’s reasonable, what’s possible in enforcement and action, and again, the remaining disconnect between what regulators expect and what social platforms, given their real-time nature, can provide.
Is it possible to ever bridge such perspectives – and more importantly, will tougher penalties actually improve that situation in any way?
It’s hard to say on a general basis, but there are other elements where Meta can be held accountable, and where it does look set to face even more pressure over the next year, as Governments seek more ways to take matters into their own hands, and enact control where they can.
A key element on this front is the sharing of user data, and the accessibility of such to law enforcement. Right now, Meta is in the midst of a move towards implementing end-to-end encryption as standard across all of its messaging apps (Messenger, WhatsApp and Instagram Direct), which various authorities claim will offer protection for criminal activity by blocking potential detection and interception measures.
Meta claims that it’s working to align with rising expectations around data privacy, but various governments are now scrambling to implement new measures to either block its encryption plans, or establish new methods to extract user data from social platforms.
For example, the Australian Government recently announced new legislation that would essentially force social media companies to reveal the identities of anonymous troll accounts, offering a pathway for legal action against these users.
As per The Guardian:
“Under the legislation, the laws would require social media companies to collect personal details of current and new users, and allow courts to access the identity of users to launch defamation cases.”
Which is flawed in itself, as social platforms don’t currently verify user identities, nor attach real-world contact information to accounts. If enacted, the law would essentially force the platforms to confirm the real-world details of millions of users, which would be a major undertaking in itself, and that’s before you even consider the implications for free speech and legal enforcement.
Australia’s High Court has also approved a legal interpretation that puts more onus on media companies with regard to inciting defamatory comments on their Facebook Pages. Some have suggested that this will see media outlets held legally accountable for all comments on their social media profiles, but the actual detail of the case is a lot more nuanced, with a direct connection required between incitement and action in order to seek legal recourse.
Which, really, is where all of these legislative and regulatory approaches get tangled – the interpretation of actual cause and effect, and how that works in a legal sense when considering online speech. Social platforms have changed the paradigms of communication by providing everybody with a platform to be heard, with the immediacy of the format essentially making pre-emptive enforcement impossible, as there’s no moderation between user and output.
And with billions of users, it’s not possible for any platform to moderate all comments at scale, which means that time-based penalties for response to official complaints are really the only mechanism to enforce such rules, and the technical interpretations around such also leave a lot of room for debate.
So while it seems like the regulatory walls are closing in around social platforms, really, a lot of gray area remains within each approach. And while Governments are keen to put forward their ‘solutions’, especially in the lead-up to their respective elections, given the broader focus on social media misinformation and abuse, it still feels like we’re a long way from actual, solid progress.
Various approaches are generating some results, but a more uniform, international regulatory approach to digital speech and enforcement needs to be established, setting clear parameters and expectations across the board, in all regions. Ideally, that would also include parameters relating to algorithmic amplification, and the role it plays in boosting certain content.
The gap between grandstanding for political gain and actual, effective action is clouding true progress on these key elements.