The Threat to Free Speech: Newsom’s Deepfake Laws and Clinton’s Call for Civil and Criminal Charges

In an era marked by the rapid rise of AI and digital manipulation, the battle for truth in media has become one of the most pressing issues of our time. California Governor Gavin Newsom and former Secretary of State Hillary Clinton have both taken bold steps to address what they perceive as dangerous disinformation and manipulation in the public sphere. Yet, in doing so, they risk encroaching on one of the fundamental pillars of American democracy: free speech.

In September 2024, Newsom signed into law a series of bills designed to curb the spread of AI-generated deepfakes in elections, including Assembly Bills 2839, 2655, and 2355. Meanwhile, Clinton has made waves by calling for Americans who spread disinformation to face civil and even criminal charges. While these actions are framed as necessary steps to protect democracy and electoral integrity, critics argue they invite government overreach and could suppress legitimate discourse.

Newsom’s Deepfake Legislation: A Double-Edged Sword

Governor Newsom’s deepfake legislation seeks to address the growing concern around the use of AI to manipulate political messaging. Assembly Bill 2839, which took effect immediately as an urgency statute, makes it illegal to distribute “materially deceptive audio or visual media of a candidate” in the 120 days before an election and, for certain content, the 60 days after. Assembly Bill 2655 requires large online platforms like Facebook and X (formerly Twitter) to remove or label such content, while Assembly Bill 2355 mandates that political campaigns disclose when their advertising uses AI-generated or substantially altered content.

On the surface, these bills appear to be common-sense protections against the manipulation of voters through sophisticated technology. Deepfakes, AI-generated audio, images, or video that can make it appear as though a politician said or did something they never did, pose a significant threat to the integrity of elections. However, the vague language in these bills, particularly regarding what constitutes “materially deceptive” media, leaves them open to interpretation and potentially makes them a dangerous tool for censorship.

For example, a parody video mocking a political candidate could be swept up under this legislation if those in power deem it “deceptive.” The laws do carve out satire and parody, but only when the content carries a prominent disclaimer acknowledging the manipulation, a condition that itself burdens the speech it claims to exempt. Political satire has long been a critical form of expression in the United States, protected under the First Amendment. The concern is that these bills, while targeting deepfakes, may also ensnare other forms of political commentary that are integral to public discourse.

The Free Speech Dilemma

The heart of the problem lies in the ambiguity of the laws. What exactly qualifies as “materially deceptive”? If the definition is too broad, platforms could feel pressured to take down any content that even remotely resembles a deepfake to avoid legal ramifications. In this scenario, tech companies like Facebook or X would become arbiters of political speech, potentially removing legitimate content that they perceive as too risky.
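To see why, consider some back-of-the-envelope arithmetic on automated screening. The sketch below is purely illustrative; every number in it is an assumption chosen for the example, not a measurement of any real platform or detector.

```python
# Illustrative base-rate arithmetic: even an accurate detector flags
# mostly legitimate posts when true deepfakes are rare. All numbers
# here are assumptions for the example, not measured values.
uploads = 1_000_000      # political posts screened in some period
prevalence = 0.0001      # assume 1 in 10,000 is actually a deepfake
sensitivity = 0.99       # detector catches 99% of real deepfakes
specificity = 0.99       # detector clears 99% of legitimate posts

fakes = uploads * prevalence                   # 100 actual deepfakes
genuine = uploads - fakes                      # 999,900 legitimate posts
true_positives = fakes * sensitivity           # 99 deepfakes caught
false_positives = genuine * (1 - specificity)  # 9,999 legitimate posts flagged

precision = true_positives / (true_positives + false_positives)
print(f"Share of flags that are actually deepfakes: {precision:.1%}")  # ~1.0%
```

Under these assumptions, roughly 99 of every 100 automated flags land on legitimate content. A platform facing liability for whatever slips through has every incentive to flag even more aggressively, which makes the false-positive problem worse, not better.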

Furthermore, this legislation could have a chilling effect on content creators. Those producing political commentary, satire, or even investigative journalism may self-censor out of fear of being targeted under these laws. Smaller creators, in particular, may lack the legal resources to fight back if their content is taken down or challenged, further consolidating control of political speech into the hands of a few powerful platforms.

The risk of overreach is exacerbated by Assembly Bill 2355’s requirement that political campaigns disclose when they use AI in their advertisements. Transparency in political ads is generally desirable, but the requirement could be weaponized against campaigns already under scrutiny, forcing them to justify even minor AI enhancements such as routine photo retouching. In a political landscape where public trust is already low, this could further erode confidence in the democratic process.

Clinton’s Call for Civil and Criminal Charges Against Disinformation

Hillary Clinton’s recent comments on disinformation add another layer to the growing concern over free speech. During a conversation about Russian interference in U.S. elections, Clinton suggested that Americans who engage in spreading disinformation could be held civilly or even criminally liable. This comment has sparked intense debate, with many questioning what exactly Clinton means by “disinformation.”

Clinton’s statement reads: “But I also think there are Americans who are engaged in this kind of propaganda. And whether they should be civilly or even in some cases criminally charged is something that would be a better deterrence, because the Russians are unlikely, except in a very few cases, to ever stand trial in the United States.”

While it is understandable that Clinton and others are concerned about foreign interference in elections, the idea of prosecuting Americans for spreading disinformation raises significant red flags. The term “disinformation” is notoriously difficult to define, and without clear guidelines, this proposal could quickly become a tool for political suppression.

For example, dissenting voices, investigative journalists, or even ordinary citizens sharing controversial opinions could be labeled as spreading disinformation. What happens when someone challenges the mainstream narrative on a sensitive issue? Could they be prosecuted simply for expressing an alternative viewpoint? The vagueness of Clinton’s proposal opens the door to a future where the government, or those in power, decide what constitutes acceptable speech.

The Slippery Slope of Government Overreach

Both Newsom’s deepfake laws and Clinton’s call for civil and criminal penalties reflect a growing trend: the use of legislation to control the flow of information in the name of protecting democracy. However well-intentioned, these efforts could end up doing more harm than good in at least three ways:

  1. Government Overreach: The ability of the state to determine what constitutes disinformation or deceptive media is a slippery slope. Today’s efforts to target deepfakes and foreign propaganda could easily expand into a broader censorship regime, where political dissent and alternative viewpoints are silenced.
  2. Censorship by Proxy: By forcing platforms to remove content or face legal consequences, these laws effectively turn tech companies into de facto censors. In an attempt to avoid liability, platforms may err on the side of caution, removing any content that could remotely be considered deceptive. This could disproportionately impact smaller creators and independent media, who lack the resources to fight back against takedowns.
  3. Erosion of Trust: As these laws are implemented, the public may begin to question whether the information they receive is censored or altered. If voters feel that their access to information is being controlled by the government or powerful platforms, trust in democratic institutions and media could erode further.

Enforcement Challenges

Aside from the free speech concerns, there are significant technological and legal hurdles to enforcing these deepfake laws. Reliably detecting, let alone proving, that a piece of media is a deepfake requires advanced AI tooling, and even state-of-the-art detectors produce both false negatives and false positives. Such tools may be out of reach for smaller platforms and independent content creators, and enforcing the laws equitably across platforms, from large tech companies to small independent websites, will be a monumental task.
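To make the tooling burden concrete, here is a minimal sketch of what automated screening might look like, assuming a platform has access to a pretrained real-versus-fake image classifier served through the Hugging Face transformers library. The model identifier is a placeholder, not a real checkpoint, and label names vary from model to model.

```python
# Minimal sketch of screening a single video frame for manipulation.
# Assumes a pretrained real-vs-fake image classifier exists; the model
# id below is a hypothetical placeholder, not a real checkpoint.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/deepfake-frame-detector",  # placeholder (assumption)
)

def flag_frame(image_path: str, threshold: float = 0.9) -> bool:
    """Return True if the frame's 'fake' score clears the flagging threshold."""
    results = detector(image_path)  # list of {"label": ..., "score": ...} dicts
    fake_score = max(
        (r["score"] for r in results if "fake" in r["label"].lower()),
        default=0.0,
    )
    return fake_score >= threshold
```

Even this toy version exposes the policy problem: someone has to pick the threshold. Set it high and deceptive content slips through; set it low and parody, satire, and ordinary edited footage get flagged, which is exactly the over-removal dynamic described above.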

Moreover, the costs associated with compliance could be prohibitive for smaller media outlets or platforms, potentially forcing them to shut down or severely limit the scope of their operations. This could lead to a consolidation of information sources, where only the most well-funded organizations are able to navigate the legal and technological minefield of deepfake detection.

Public Perception and Trust

Laws like these, while intended to protect the public, could inadvertently sow distrust. As they are enforced, voters may begin to wonder whether the information they see is authentic or has been sanitized by platforms and the government. The feeling that access to information is being curated and controlled breeds skepticism of the electoral process and of media at large.

The broader implications are clear: if voters no longer trust the information they receive, the foundation of democratic decision-making is weakened. Elections, after all, rely on an informed electorate. Without confidence in the authenticity of the information they consume, voters may become apathetic or disengaged.

Conclusion: Safeguarding Free Speech in the Age of Disinformation

The fight against deepfakes and disinformation is undeniably important. However, legislation that seeks to curb these issues must be crafted with the utmost care. Newsom’s deepfake laws and Clinton’s proposal to prosecute Americans for spreading disinformation raise serious concerns about the future of free speech in America.

In the quest to protect democracy, we must avoid overreaching and undermining the very freedoms that make democracy possible. Laws targeting disinformation and deepfakes must be clear, precise, and transparent, ensuring that they do not become tools for censorship or government overreach. Otherwise, we risk creating a society where only state-sanctioned narratives are allowed, and the free exchange of ideas becomes a relic of the past.

The path forward requires a delicate balance. While we must address the real dangers posed by AI-manipulated media and disinformation campaigns, we cannot afford to do so at the expense of free speech. The ability to challenge, question, and dissent is what keeps our democracy alive. Let’s ensure that in our fight against deepfakes, we don’t sacrifice the very freedoms we seek to protect.