This could throw a spanner in the works for the rising trend of generative AI elements within social apps.
Today, Republican Senator Josh Hawley and Democratic Senator Richard Blumenthal introduced legislation that would effectively side-step Section 230 protections for social media companies with regard to AI-generated content, which would mean that the platforms could be held liable for spreading harmful material created via AI tools.
As per Hawley’s website:
“This new bipartisan legislation would clarify that Section 230 immunity will not apply to claims based on generative AI, ensuring consumers have the tools they need to protect themselves from harmful content produced by the latest advancements in AI technology. For example, AI-generated ‘deepfakes’ – lifelike false images of real individuals – are exploding in popularity. Ordinary people can now suffer life-destroying consequences for saying things they never said, or doing things they never would. Companies complicit in this process should be held accountable in court.”
Section 230 provides protection for social media providers against legal liability over the content that users share on their platforms, by clarifying that the platforms themselves are not the publisher or creator of information provided by users. That ensures that social media companies are able to facilitate more free and open speech – though many have argued, for many years now, that this is no longer applicable based on the way that social platforms selectively amplify and distribute user content.
Thus far, none of the challenges to Section 230 protections, based on updated interpretation, have held up in court. But with this new push, US senators are looking to get ahead of the generative AI wave before it becomes an even bigger trend, which could lead to widespread misinformation and fakes across social apps.
What’s less clear in the current wording of the bill is what exactly this means in terms of liability. For example, if a user were to create an image in DALL-E or Midjourney, then share it on Twitter, would Twitter be liable for that, or would liability fall on the creators of the generative AI apps where the image originated?
The specifics here could have a significant bearing on the types of tools social platforms look to create, with Snapchat, TikTok, LinkedIn, Instagram, and Facebook already experimenting with integrated generative AI options that enable users to create and distribute such content within each app.
If the law relates to distribution, then each social app will need to update its detection and transparency processes accordingly, while if it relates to creation, that could halt their development efforts on the AI front altogether.
It seems like it’ll be difficult for the senators to get such a bill approved, based on the various considerations, and the rapid evolution of generative AI tools. But either way, the push highlights rising concern among government and regulatory groups around the potential impact of generative AI, and how they’ll be able to police such content moving forward.
In this sense, you can likely expect a lot more legal wrangling over AI regulation moving forward, as we grapple with new approaches to managing how this content is used.
That’ll also relate to copyright, ownership, and the various other considerations around AI content that are not covered by current laws.
There are inherent risks in not updating the laws in time to meet these evolving requirements – yet, at the same time, reactive regulations could impede development and slow progress.