US State Department Plans to Use AI to Monitor Social Media and Tamp Down on Dissent

Credit: Chris Stokel-Walker, Fast Company

A New Era of Digital Repression?

As protests against the Trump administration and in favor of Palestine continue to grow across the country, the U.S. State Department is reportedly planning to use technology to tamp down on dissent. The mooted use of AI comes as former Columbia University grad student Mahmoud Khalil has become the face of the Trump administration’s tougher line on protest; Khalil is currently detained and threatened with the revocation of his green card for his participation in on-campus protests.

AI-Powered Surveillance: A Recipe for Disaster?

Using AI to analyze the contents of people’s social media posts for actions the Trump administration deems unacceptable is a risky move, one likely to generate huge numbers of false positives. And it worries privacy and AI experts in equal measure. Carissa Véliz, an AI ethicist at the University of Oxford, warns that AI-powered surveillance could have serious consequences for democracy. "Precisely the point of privacy is to keep democracy," she says. "When you don’t have privacy, the abuse of power is too tempting, and it’s just a matter of time for it to be abused."

The Dangers of Digital Witch-Hunts

The risk Véliz and others worry about is that digital privacy is being eroded in favor of a witch hunt driven by a technology in whose accuracy people often place more faith than it deserves. Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, Germany, echoes these concerns. "Disappearing political enemies, or indeed just random citizens, has been a means of repression for a long time," she says. "I don’t need to point out the irony of Trump choosing a mechanism so similar to the South and Central American dictators in the countries he denigrates."

The Israeli Example: A Warning Sign?

Bryson also points out that there are parallels with how Israel used AI to identify tens of thousands of Hamas targets, many of whom were then targeted in bombing attacks in Gaza by the Israeli military. The controversial program, nicknamed Lavender, has been questioned as a military use of AI that could produce false positives and has not been vetted. "Unless the AI systems are transparent and audited," Bryson warns, "we have no way of knowing whether there’s any justification for which 35,000 people were targeted. Without appropriate regulation and enforcement of AI and digital systems, we can’t tell whether there was any justification for the targets, or if they just chose enough people that any particular building they wanted to get rid of, they’d have some justification for blowing it up."

Accountability in the Digital Age

The use of AI is also something of a smokescreen: it lets those who have to make serious decisions deflect responsibility by claiming they were guided by supposedly "impartial" algorithms. "This is the kind of thing Musk is trying to do now with DOGE, and already did with Twitter," Bryson says. "Eliminating humans and reducing accountability. Well, obscuring accountability."

The Risks of Hallucination and Bias

And accountability matters for AI classification of social media content, because it is a case of when, not if, the technology misfires. Hallucination and bias are major problems in AI systems. Hallucinations occur when AI systems make up answers to questions or, if users’ social media content is being parsed by artificial intelligence, invent what could be read as damning posts. Inherent bias, stemming from how systems are designed and by whom, is another major source of errors in AI systems.

A High-Stakes Situation

It’s bad enough for such errors to affect whether or not someone gets invited to a job interview. But when the stakes are detention and deportation from the United States – and the risk of not being allowed back into the country in the future – the situation is far more serious. The U.S. State Department’s plan to use AI-powered surveillance raises serious concerns about digital rights and privacy. As we move further into an era of increasing digitalization, it is essential that we prioritize accountability and transparency in our systems. Anything less would be a recipe for disaster.
