The SAFE Act is an Imperfect Vehicle for Real Section 702 Reform

9 March 2026 at 21:27

The SAFE Act, introduced by Senators Mike Lee (R-UT) and Dick Durbin (D-IL), is likely the first of many proposals we will see to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA) Amendments Act of 2008. While imperfect, it proposes a litany of real and much-needed reforms to Big Brother’s favorite surveillance authority.

The irresponsible 2024 reauthorization of the secretive mass surveillance authority Section 702 not only gave the government two more years of unconstitutional surveillance powers, it also made the policy much worse. But now people who value privacy and the rule of law get another bite at the apple. With Section 702’s expiration looming in April 2026, we are starting to see proposals emerge for how to reauthorize the surveillance authority, including calls from inside the White House for a clean reauthorization that would keep the policy unchanged. EFF’s policy has always been consistent: Section 702 should not be reauthorized absent major reforms that will keep this tactic of foreign surveillance from being used as a tool of mass domestic espionage.

What is Section 702?

Section 702 was intended to modernize foreign surveillance of the internet for national security purposes. It allows collection of foreign intelligence from non-Americans located outside the United States by requiring U.S.-based companies that handle online communications to hand over data to the government. As the law is written, the intelligence community (IC) cannot use Section 702 programs to target Americans, who are protected by the Fourth Amendment’s prohibition on unreasonable searches and seizures. But the law gives the intelligence community space to target foreign intelligence in ways that inherently and intentionally sweep in Americans’ communications.

We live in an increasingly globalized world where people are constantly in communication with people overseas. That means that while targeting foreigners outside the U.S. for “foreign intelligence information,” the IC routinely acquires the American side of those communications without a probable cause warrant. The collection of all that data from U.S. telecommunications and internet providers results in the “incidental” capture of conversations involving a huge number of people in the United States.

But this backdoor access to U.S. persons’ data isn’t “incidental.” Section 702 has become a routine part of the FBI’s law enforcement mission. In fact, the IC’s latest Annual Statistical Transparency Report documents the many ways the Federal Bureau of Investigation (FBI) uses Section 702 to spy on Americans without a warrant. The IC lobbied for Section 702 as a tool for national security outside the borders of the U.S., but it is apparent that the FBI uses it to conduct domestic, warrantless surveillance on Americans. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ Section 702 data.

The Good

Let’s start with the good things that this bill does. These are reforms EFF has been seeking for a long time, and their implementation would mean a big improvement over the status quo of national security law.

First, the bill would partially close the loophole that allows the FBI and domestic law enforcement to dig through the U.S. side of communications “incidentally” collected under Section 702. The FBI currently operates with a “finders keepers” mentality: because the data was pre-collected by another agency, the FBI believes it can operate with almost no constraints on using it for other purposes. The SAFE Act would require a warrant before the FBI looks at the content of these collected communications. As we will get to later, this reform does not go nearly far enough, because agents can still query the database to see what data exists on a person before getting a warrant, but it is certainly an improvement on the current system.

Second, the bill addresses the age-old problem of parallel construction. If you’re unfamiliar with this term, parallel construction is a method by which intelligence agencies or domestic law enforcement find out a piece of information about a subject through secret, even illegal or unconstitutional, methods. Uninterested in revealing those methods, officers hide what actually happened by publicly offering an alternative route they could have used to find that information. So, for instance, if police want to hide the fact that they knew about a specific email because it was intercepted under the authority of Section 702, they might use another method, like a warranted request to a service provider, to create a more publicly acceptable path to that information. To deal with this problem, the SAFE Act mandates that when the government seeks to use Section 702 evidence in court, it must disclose the source of this evidence “without regard to any claim that the information or evidence…would inevitably have been discovered, or was subsequently reobtained through other means.”

Next, the bill proposes a policy that EFF and other groups have been trying to get through Congress for over five years: ending the data broker loophole. As the system currently stands, data brokers who buy and sell your personal data, collected from smartphone applications among other sources, are able to sell that sensitive information, including a phone’s geolocation, to law enforcement and intelligence agencies. That means that with a bit of money, police can buy the data (or buy access to services that purchase and map the data) that they would otherwise need a warrant to get. A bill that would close this loophole, the Fourth Amendment Is Not For Sale Act, passed the House in 2024 but has yet to be voted on by the Senate. In the meantime, states have taken it upon themselves to close the loophole, with Montana becoming the first state to pass similar legislation in May 2025. The SAFE Act proposes to partially fix the loophole, at least as far as intelligence agencies are concerned. This fix could not come soon enough, especially since the Office of the Director of National Intelligence has signaled its willingness to create one big, streamlined, digital marketplace where the government can buy data from data brokers.

Another positive thing about the SAFE Act is that it puts an official statutory end to a surveillance power that the government allowed to expire in 2020. In its heyday, the intelligence community used Section 215 of the Patriot Act to justify the mass collection of communication records like metadata from phone calls. Although this legal authority has lapsed, it has always been our fear that it would not sit dormant forever and could be reauthorized at any time. The new bill says that these dormant powers shall “cease to be in effect” within 180 days of the SAFE Act being enacted.

What Needs to Change 

The SAFE Act also attempts to clarify very important language that gauges the scope of the surveillance authority: who is obligated to turn over digital information to the U.S. government. Under Section 702, “electronic communication service providers” (ECSPs) are on the hook for providing information, but the definition of that term has been in dispute and has changed over time, most recently when a FISA court opinion expanded the definition to include a category of “secret” ECSPs that have not been publicly disclosed. Unfortunately, this bill still leaves the term open to interpretation, and it creates an audit system without a clear directive for enforcing limits on who counts as an ECSP or for guaranteeing transparency.

As mentioned earlier, the SAFE Act introduces a warrant requirement for the FBI to read the contents of Americans’ communications that have been warrantlessly collected under Section 702. However, the bill in its current form does not require the FBI to get a warrant before running searches to identify whether Americans’ communications are present in the database in the first place. Knowing that information is itself very revealing, and the government should not be able to profit from circumventing the Fourth Amendment.

When Congress reauthorized Section 702 in 2024, it did so through a piece of legislation called the Reforming Intelligence and Securing America Act (RISAA). This bill made Section 702 worse in several ways, one of the most severe being that it expanded the legal uses of the surveillance authority to include vetting immigrants. In an era when the United States government is rounding up immigrants, including people awaiting asylum hearings, and U.S. officials continuously threaten to withhold admission to the United States from people whose politics do not align with the current administration, RISAA sets a dangerous precedent. Although RISAA is officially expiring in April, it would be helpful for any Section 702 reauthorization bill to explicitly prohibit the use of this authority for that purpose.

Finally, in the same way that the SAFE Act statutorily ends the expired Section 215 of the Patriot Act, it should also impose an explicit end to “abouts collection,” a practice of collecting digital communications not because they are to or from targeted people, but because they are “about” specific topics. This practice has been discontinued, but it still sits on the books, just waiting to be revived.

Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance

6 March 2026 at 17:03

OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance (early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal), Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 

"After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time."

“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people, whether CEOs or Pentagon officials, to protect our civil liberties.

The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People

3 March 2026 at 22:35

The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of Anthropic’s products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give it unrestricted use of the technology. Anthropic refused, and the DoD retaliated.

There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government, two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing, but that’s not a sustainable or reliable foundation for our rights. Given the government’s loose interpretations of the law, its ability to find loopholes to surveil you, and its willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Imposing and enforcing such restrictions is properly a role for Congress and the courts, not the private sector.

The companies know this. When speaking about the specific risk that AI poses to privacy, the CEO of Anthropic Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s not possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.” 

The example he cites here is a scarily realistic one, because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of people’s devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies like Palantir, which does AI-enabled analysis of huge amounts of data, these concerns are incredibly well founded.

But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy, or at least to refuse to help the government violate it.

Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government’s use of their data, and among adults who have heard of AI, 70% have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation, and companies would be promising us the most high-end privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.

EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state.

Tech Companies Shouldn’t Be Bullied Into Doing Surveillance

25 February 2026 at 00:42

The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”

In 2025, Anthropic reportedly became the first AI company cleared for use in classified operations and to handle classified information. The current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. That month, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against U.S. persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as the constitution of their LLM, Claude, here.

Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.  

Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons. 

Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.

AI Police Reports: Year In Review

23 December 2025 at 18:00

In 2024, EFF wrote our initial blog post about what could go wrong when police let AI write police reports. Since then, the technology has proliferated at a disturbing rate. Why? The most popular generative AI tool for writing police reports is Axon’s Draft One, and Axon also happens to be the largest provider of body-worn cameras to police departments in the United States. As we’ve written, companies are increasingly bundling their products to make it easier for police to buy more technology than they may need or than the public feels comfortable with.

We have good news and bad news. 

Here’s the bad news: AI-written police reports are still unproven, opaque, and downright irresponsible, especially when the criminal justice system, informed by police reports, is deciding people’s freedom. The King County prosecuting attorney’s office in Washington state barred police from using AI to write police reports. As their memo read, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”

In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that. 

In this video of a roundtable discussion about Draft One, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after the fact which parts of the report were suggested by the AI and which were edited by the officer. His response (definition of RMS added in brackets):

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

Yikes! 

All of this obfuscation also makes it incredibly hard for people outside police departments to figure out if their city’s officers are using AI to write reports–and even harder to use public records requests to audit just those reports. That’s why this year EFF also put out a comprehensive guide to help the public make their records requests as tailored as possible to learn about AI-generated reports. 

Ok, now here’s the good news: People who believe AI-written police reports are irresponsible and potentially harmful to the public are fighting back. 

This year, two states passed bills that are an important first step in reining in AI police reports. Utah’s SB 180 mandates that police reports created in whole or in part by generative AI carry a disclaimer stating that the report contains content generated by AI. It also requires officers to certify that they checked the report for accuracy. California’s SB 524 went even further. It requires police to disclose, on the report itself, if generative AI was used to author the report in full or in part. Further, it bans vendors from selling or sharing the information a police agency provided to the AI. The bill also requires departments to retain the first draft of the report so that judges, defense attorneys, or auditors can readily see which portions of the final report were written by the officer and which portions were written by the computer.

In the coming year, anticipate many more states joining California and Utah in regulating, or perhaps even banning, police from using AI to write their reports. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
