
Open Letter to Tech Companies: Protect Your Users From Lawless DHS Subpoenas

10 February 2026 at 23:52

We are calling on technology companies like Meta and Google to stand up for their users by resisting the Department of Homeland Security's (DHS) lawless administrative subpoenas for user data. 

In the past year, DHS has consistently targeted people engaged in First Amendment activity. Among other things, the agency has issued subpoenas to technology companies to unmask or locate people who have documented ICE's activities in their community, criticized the government, or attended protests.   

These subpoenas are unlawful, and the government knows it. When a handful of users challenged a few of them in court with the help of ACLU affiliates in Northern California and Pennsylvania, DHS withdrew them rather than waiting for a decision. 


But it is difficult for the average user to fight back on their own. Quashing a subpoena is a fast-moving process that requires lawyers and resources. Not everyone can afford a lawyer on a moment’s notice, and non-profits and pro-bono attorneys have already been stretched to near capacity during the Trump administration.  

 That is why we, joined by the ACLU of Northern California, have asked several large tech platforms to do more to protect their users, including: 

  1. Insist on court intervention and an order before complying with a DHS subpoena, because the agency has already proved that its legal process is often unlawful and unconstitutional;
  2. Give users as much notice as possible when they are the target of a subpoena, so the user can seek help. While many companies have already made this promise, there are high-profile examples of it not happening—ultimately stripping users of their day in court;
  3. Resist gag orders that would prevent companies from notifying their users that they are a target of a subpoena.

We sent the letter to Amazon, Apple, Discord, Google, Meta, Microsoft, Reddit, Snap, TikTok, and X.

Recipients are not legally compelled to comply with administrative subpoenas absent a court order 

An administrative subpoena is an investigative tool available to federal agencies like DHS. These are often sent to technology companies to obtain user data. Administrative subpoenas cannot be used to obtain the content of communications, but they have been used to try to obtain basic subscriber information like name, address, IP address, length of service, and session times.

Unlike a search warrant, an administrative subpoena is not approved by a judge. If a technology company refuses to comply, an agency’s only recourse is to drop it or go to court and try to convince a judge that the request is lawful. That is what we are asking companies to do—simply require court intervention and not obey in advance. 

It is unclear how many administrative subpoenas DHS has issued in the past year. Subpoenas can come from many places—including civil courts, grand juries, criminal trials, and administrative agencies like DHS. Altogether, Google received 28,622 subpoenas and Meta received 14,520 in the first half of 2025, according to their transparency reports. The numbers are not broken out by type.

DHS is abusing its authority to issue subpoenas 

In the past year, DHS has used these subpoenas to target protected speech. The following are just a few of the known examples. 

On April 1, 2025, DHS sent a subpoena to Google in an attempt to locate a Cornell PhD student in the United States on a student visa. The student was likely targeted because of his brief attendance at a protest the year before. Google complied with the subpoena without giving the student an opportunity to challenge it. While Google promises to give users prior notice, it sometimes breaks that promise to avoid delay. This must stop.   

In September 2025, DHS sent a subpoena and summons to Meta to try to unmask anonymous users behind Instagram accounts that tracked ICE activity in communities in California and Pennsylvania. The users—with the help of the ACLU and its state affiliates—challenged the subpoenas in court, and DHS withdrew the subpoenas before a court could make a ruling. In the Pennsylvania case, DHS tried to use legal authority that its own inspector general had already criticized in a lengthy report.

In October 2025, DHS sent Google a subpoena demanding information about a retiree who criticized the agency’s policies. The retiree had sent an email asking the agency to use common sense and decency in a high-profile asylum case. In a shocking turn, federal agents later appeared on that person’s doorstep. The ACLU is currently challenging the subpoena.  

Read the full letter here

Speaking Freely: Yazan Badran

10 February 2026 at 18:39

Interviewer: Jillian York

Yazan Badran is an assistant professor in international media and communication studies at the Vrije Universiteit Brussel, and a researcher at the Echo research group. His research focuses on the intersection between media, journalism, and politics, particularly in the MENA region and within its exilic and diasporic communities.

*This interview has been edited for length and clarity. 

Jillian York: What does free speech or free expression mean to you?

Yazan Badran: So I think there are a couple of layers to that question. There's a narrow conception of free speech that is related to, of course, your ability to think about the world.

And that also depends on having the resources to be able to think about the world, to having resources of understanding about the world, having resources to translate that understanding into thoughts and analysis yourself, and then being able to express that in narratives about yourself with others in the world. And again, that also requires resources of expression, right?

So there's that layer, which means that it's not simply the absence of constraints around your expression and around your thinking, but actually having frameworks that activate you expressing yourself in the world. So that's one element of free expression or free speech, or however you want to call it. 

But I feel that remains too narrow if we don't account also for the counterpart, which is having frameworks that listen to you as you express yourself into the world, right? Having people, institutions, frameworks that are actively also listening, engaging, recognizing you as a legitimate voice in the world. And I think these two have to come together in any kind of broad conception of free speech, which entangles you then in a kind of ethical relationship that you have to listen to others as well, right? It becomes a mutual responsibility from you towards the other, towards the world, and for the world towards you, which also requires access to resources and access to platforms and people listening to you.

So if I want to think of free speech and free expression, I would have to think about these two together. And most of the time there is a much narrower focus on the first, and the second is somewhat neglected, I think.

JY: Yeah, absolutely. Okay, now I have to ask, what is an experience that shaped these views for you?

YB: I think two broad experiences. One is the…let's say, the 2000s, the late 2000s, so early 2010 and 2011, where we were all part of this community that was very much focused on expression and on limiting the kind of constraints around expression and thinking of tools and how resources can be brought towards that. And there were limits to where that allowed us to go at a certain point.

And I think the kind of experiences of the Arab uprisings and what happened afterwards and the kind of degeneration across the worlds in which we lived kind of became a generative ground to think of how that experience went wrong or how that experience fell short.

And then, building on that, I think, when I started doing research on journalism and particularly on exiled journalists, thinking about their practice and their place in the world and the fact that in many ways there were very few constraints on what they could do and what they could voice and what they could express, et cetera.

Not that there are no constraints, there are always constraints, but the nature of the constraints was different: they were of the order of listening. Who is listening to this? Who is on the other side? Who are you engaged in a conversation with? And that was, from speaking to them, a real kind of anxiety that came through to me.

JY: I think you're sort of alluding to theory of change…

YB: Yes, to some extent, but also to…when we think about our contribution into the world, to what kind of normative framework we imagine. As people who think about all of these structures that circulate information and opinion and expression, there is often a normative focus, and there should be, on opening up constraints around expression and bringing resources to bear for expression, and we don't think enough about how these structures also need to foster listening and to foster recognition of these expressions.

And that is the same with, when we think about platforms on the internet and when we think about journalism, when we think about teaching… For example, in my field, when we think about academic research, I think you can bring that framework in different places where expression is needed and where expression is part of who we are. Does that make sense?

JY:  Absolutely. It absolutely makes sense. I think about this all the time. I'm teaching now too, and so it's very, very valuable. Okay, so let's shift a little bit. You're from Syria. You've been in Brussels for a long time. You were in Japan in between. You have a broad worldview, a broad perspective. Let’s talk about press freedom.

YB: Yeah, I've been thinking about this because, I mean, I work on journalism and I'm trying to do some work on Syria and what is happening in Syria now. And I feel there are times where people ask me about the context for journalistic work in Syria. And the narrow answer and the clear answer is that we've never had more freedom to do journalism in the country, right? And there are many reasons. Part of it is that this is a new regime that perhaps doesn't still have complete control over the ground. There are differentiated contexts where in some places it's very easy to go out and to access information and to speak to people. In other places, it's less easy, it's more dangerous, etc. So it's differentiated and it's not the same everywhere.

But it's clear that journalists come in and out of Syria. They can do their job relatively unmolested, which is a massive change in contrast to the last thirteen or fourteen years, when Syria was an information black hole. You couldn't do anything.

But that remains somewhat narrow in thinking about journalism in Syria. What is journalism about Syria in this context? What kind of journalism do we need to be thinking about? In a place that is in, you know, ruins, if not material destruction, then economic and societal disintegration, et cetera. So there are, I think, two elements. Sure, you can do journalism, but what kind of journalism is being done in Syria? I feel that we have to be asking a broader question about what is the role of information now more broadly in Syria? 

And that is a more difficult question to answer, I feel. Or a more difficult question to answer positively. Because it highlights questions about who has access to the means of journalism now in Syria. What are they doing with it? Who has access to the sources, and who can provide actual understanding of the political or economic developments that are happening in the country? Very few people have genuine understanding of the processes that go into building a new regime, a new state. In general, we have very, very little access. There are few avenues to participate and gain access to what is happening there.

So sure, you can go on the ground, you can take photos, you can speak to people, but in terms of participating in that broader nation-building exercise that is happening; this is happening at a completely different level to the places that we have access to. And with few exceptions, journalism as practiced now is not bringing us closer to these spaces. 

In a narrow sense, it's a very exciting time to be looking at experiments in doing journalism in Syria, to also be seeing the interaction between international journalists and local journalists, and the kind of tensions and collaborations and discussion around structural inequalities between them; especially from a researcher’s perspective. But it remains very, very narrow in terms of the massive story, which is a complete revolution in the identity of the country, in its geopolitical arrangement, in its positioning in the world, and to which we have no access whatsoever. This is happening well over our heads—we are almost bystanders.

JY: That makes sense. I mean, it doesn't make sense, but it makes sense. What role does the internet, and maybe even specifically platforms or internet companies, play in Syria? Because with sanctions lifted, we now have access to things that were not previously available. I know that the app stores are back, although I'm getting varied reports from people on the ground about how much they can actually access. People can download Signal now, though, which is good. How would you say things have changed online in the past year?

YB:  In the beginning, platforms, particularly Facebook, and it's still really Facebook, were the main sphere of information in the country. And to a large extent, it remains the main sphere where some discussions happen within the country.

These are old networks that were reactivated in some ways, but also public spheres that had been so completely removed from each other that they opened up onto each other after December. So you had almost independent spheres of activity and discussion between areas that were controlled by the regime and areas that were controlled by the opposition, which expanded to areas of Syrian refugees and the diaspora outside.

And these just collapsed on each other after 8th of December with massive chaos, massive and costly chaos in some ways. The spread of disinformation, organic disinformation, in the first few months was mind-boggling. I think by now there's a bit of self-regulation, but also another movement of siloing, where you see different clusters hardening as well. So that kind of collapse over the first few months didn't last very long.

You start having conversations in isolation from each other now. And I'm talking mainly about Facebook, because that is the main network, the main platform where public discussions are happening. Telegram was the public infrastructure of the state for a very long time, for the first six months. Basically, state communication happened through Telegram, through Telegram channels, also causing a lot of chaos. But now you have a bit more stability in terms of having a news agency. You have the television, the state television. So the importance of Telegram has waned, but it's still a kind of parastructure of state communication, and it remains important.

I think more structurally, these platforms are basically the infrastructure of information circulation because of the fact that people don't have access to electricity, for example, or for much of the time they have very low access to bandwidth. So having Facebook on their phone is the main way to keep in touch with things. They can't turn on the television, they can't really access internet websites very easily. So Facebook becomes materially their access to the world. Which comes with all of the baggage that these platforms bring with them, right? The kind of siloing, the competition over attention, the sensationalism, these clustering dynamics of these networks and their algorithms.

JY: Okay, so the infrastructural and resource challenges are real, but then you also have the opening up of the internet for the first time in many, many years, or ever, really. And as far as I understand from what friends who’ve been there have reported, nothing is being blocked yet. So what impact do you see or foresee that having on society as people get more and more online? I know a lot of people were savvy, of course, and got around censorship, but not everyone, right?

YB: No, absolutely, absolutely not everyone. Not everyone has the kind of digital literacy to understand what going online means, right? Which accounts for one thing, the avalanche of fake information and disinformation that is now Syria, basically.

JY: It's only the second time this has happened. I mean, Tunisia is the only other example I can think of where the internet just opened right up.

YB: Without having gateways and infrastructure that can circulate and manage and curate this avalanche of information. While at the same time, you have a real disintegration in the kind of social institutions that could ground a community. So you have really a perfect storm: a thin layer of digital connectivity, for a lot of people who didn't have access to even that thin layer, but it's still a very thin layer, right? You're connecting from your old smartphone to Facebook. You're getting texts, et cetera, and perhaps you're texting with the family over WhatsApp. And a real collapse of the different societal institutions that also grounded you with others, right? The education system, different clubs and neighborhoods, small institutions that brought different communities together, the army, universities, for example—all of these have been disrupted over the past year in profound ways, and along really communitarian lines as well. I don't know what kind of conditions the combination of these two creates. But it doesn't seem like a positive situation or a positive dynamic.

JY:  Yeah, I mean, it makes me think of, for example, Albania or other countries that opened up after a long time and then all of a sudden just had this freedom.

YB: But still combined, I mean, that is one thing, the opening up and the avalanche, and that is a challenge. But it is a challenge that could perhaps be managed within a settled society with some institutions you can turn to, through which you can regulate this, through which you can have countervailing forces and countervailing forums for… that’s one thing. But with the collapse of the material institutions that you might have had, it's really creating a bewildering world for people, where you turn back and you have your family that maybe lives two streets away, and this is the circle in which you move, or feel safe to move.

Of course, for certain communities, right? That is not the condition everywhere. But that is part of what is happening. There's a real sense of bewilderment in the kind of world that you live in. Especially in areas that used to be controlled by the regime, where everything that you've known in terms of state authority, from the smallest, the lowliest police officer in your neighborhood, to the bureaucrats that you would talk to, has changed, or your relationship to them has fundamentally changed. There's a real upheaval in your world at different levels. And, you know, you're faced with a swirling world of information that you can't make sense of.

JY: I do want to put you on the spot with a question that popped into my head, which is, I often ask people about regulation and depending on where they're working in the world, especially like when I'm talking to folks in Africa and elsewhere. In this case, though, it's a nation-building challenge, right? And so—you're looking at all of these issues and all of these problems—if you were in a position to create press or internet regulation from the ground up in Syria, what do you feel like that should look like? Are there models that you would look to? Are there existing structures or is there something new or?

YB: I think maybe I don't have a model, but a couple of entry points that you would use to think about what model of regulation you want. The first is to understand that the first challenge is at the level of nation building: really recreating a national identity or reimagining a national identity, both in terms of a kind of shared imaginary of what these people are to each other and collectively represent, and at the very hyper-local level of how these communities can go back to living together.

And I think that would have to shape how you would approach, say, regulation. I mean, around the Internet, that's a more difficult challenge. But at least in terms of your national media, for example, what is the contribution of the state through its media arm? What kind of national media do you want to put into place? What kind of structures allow for really organic participation in this project or not, right? But also at the level of how do you regulate the market for information in a new state with that level of infrastructural destruction, right? Of the economic circuit in which these networks are in place. How do you want to reconnect Syria to the world? In what ways? For what purposes?

And how do you connect all of these steps to open questions around identity and around that process of national rebuilding, and activating participation in that project, right? Rather than use them to foreclose these questions.

There are also certain challenges that you have in Syria that are endogenous, that are related to the last 14 years, to the societal disintegration and geographic disintegration and economic disintegration, et cetera. But on top of that, of course, we live in an information environment that is, at the level of the global information environment, also structurally breaking down in terms of how we engage with information, how we deal with journalism, how we deal with questions of difference. These are problems that go well beyond Syria, right? These are difficult issues that we don't know how to tackle here in Brussels or in the US, right? And so there's also an interplay between these two. There's an interplay between the fact that even here, we are having to come to terms with some of the myths around liberalism, around journalism, the normative model of journalism, of how to do journalism, right? I mean, we have to come to terms with it. The last two years—of the Gaza genocide—didn't happen in a vacuum. It was earth-shattering for a lot of these pretensions around the world that we live in. Which I think is a bigger challenge, but of course it interacts with the kind of challenges that you have in a place like Syria.

JY: To what degree do you feel that the sort of rapid opening up and disinformation and provocations online and offline are contributing to violence?

YB: I think they're at the very least exacerbating the impact of that violence. I can't make claims about how much they're contributing, though I think they are contributing. There are clear episodes in which the circulation of misinformation online can be directly linked to certain episodes of violence, like what happened in Jaramana before the massacre of the Druze. A couple of weeks before the massacre, there was a piece of disinformation that led to actual violence and that set the stage for the massive violence later on. During the massacres on the coast, you could also link the panic and the disinformation around the attacks by former regime officers, and the effects of that, to the mobilization that happened. The scale of the violence is linked to the circulation of panic and disinformation. So there is a clear contribution. But I think the greater influence is how it exacerbates what happens after that violence—how it exacerbates the depth, for example, of the divorce between the Druze population of Sweida and the rest of Syria after the massacre. That is tangible. And that is embedded in the kind of information environment that we have. There are different kinds of material causes for it as well. There is real structural conflict there. But the ideological, discursive, and affective divorce that has happened over the past six months—that is a product of the information environment that we have.

JY: You are very much a third-country, fourth-country kid at this point. Like me, you connected to this global community through Global Voices at a relatively young age. In what ways do you feel that global experience has influenced your thinking and your work around these topics, around freedom of expression? How has it shaped you?

YB: I think in a profound way. What it does is make you to some extent immune from certain nationalist logics in thinking about the world, right? You have stakes in so many different places. You've built friendships, you've built connections, you've left parts of you in different places. And that is also certainly related to certain privileges, but it also means that you care about different places, that you care about people in many different places. And that shapes the way that you think about the world: it produces commitments that are diffuse, complex, and at times even contradictory, and it forces you to confront these contradictions. You also have real experience of how much richer the world is if you move outside of these narrow, more nationalist, more chauvinistic ways of thinking about the world. And you have direct lived experience of the complexity of global circulation in the world and the fact that, at a high level, it doesn't produce a homogenized culture; it produces many different things, and they're not all equal and they're not all good, but it also leaves spaces for you to contribute to it, to engage with it, to actively try to play within the little spaces that you have.

JY: Okay, here’s my final question that I ask everyone. Do you have a free speech hero? Or someone who's inspired you?

YB: I mean, there are people whose sacrifices humble you. Many of them we don't know by name. Some of them we do know by name. Some of them are friends of ours. I keep thinking of Alaa [Abd El Fattah], who was just released from prison—I was listening to his long interview with Mada Masr (in Arabic) yesterday, and it’s…I mean…is he a hero? I don’t know but he is certainly one of the people I love at a distance and who continues to inspire us.

JY: I think he’d hate to be called a hero.

YB: Of course he would. But in some ways, his story is a tragedy that is inextricable from the drama of the last fifteen years, right? It’s not about turning him into a symbol. He's also a person and a complex person and someone of flesh and blood, etc. But he's also someone who can articulate in a very clear, very simple way, the kind of sense of hope and defeat that we all feel at some level and who continues to insist on confronting both these senses critically and analytically.

JY: I’m glad you said Alaa. He’s someone I learned a lot from early on, and a lot of his words and thinking have guided me in my practice.

YB: Yeah, and his story is tragic in the sense that it kind of highlights that in the absence of any credible road towards collective salvation, we're left with little moments of joy when there is a small individual salvation of someone like him. And that these are the only little moments of genuine joy that we get to exercise together. But in terms of a broader sense of collective salvation, I think in some ways our generation has been profoundly and decisively defeated.

JY: And yet the title of his book: “You Have Not Yet Been Defeated.”

YB: Yeah, it's true. It's true.

JY: Thank you Yazan for speaking with me.

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

9 February 2026 at 19:53

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of the social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That would inevitably lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like the one that exists under the Digital Millennium Copyright Act. That, too, would result in takedowns of legitimate speech. And there’s no doubt such a system would be abused. EFF has documented (https://www.eff.org/takedowns) how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system would invite similar behavior, and powerful figures and government officials would use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

Op-ed: Weakening Section 230 Would Chill Online Speech

9 February 2026 at 17:19

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity—both for internet users who rely on intermediaries, from ISPs to email providers to social media platforms, and for internet users who are themselves intermediaries—are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such that an intermediary would be liable if it fails to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages.

Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.

Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code it ourselves, and to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

EFF Statement on ICE and CBP Violence

27 January 2026 at 02:46

Dangerously unchecked surveillance and rights violations have been a throughline of the Department of Homeland Security since the agency’s creation in the wake of the September 11th attacks. In particular, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been responsible for countless civil liberties and digital rights violations since that time. In the past year, however, ICE and CBP have descended into utter lawlessness, repeatedly refusing to exercise or submit to the democratic accountability required by the Constitution and our system of laws.  

The Trump Administration has made indiscriminate immigration enforcement and mass deportation a key feature of its agenda, with little to no accountability for illegal actions by agents and agency officials. Over the past year, we’ve seen massive ICE raids in cities from Los Angeles to Chicago to Minneapolis. Supercharged by an unprecedented funding increase, immigration enforcement agents haven’t been limited to boots on the ground: they’ve been scanning faces, tracking neighborhood cell phone activity, and amassing surveillance tools to monitor immigrants and U.S. citizens alike. 

Congress must vote to reject any further funding of ICE and CBP

The latest enforcement actions in Minnesota have led to federal immigration agents killing Renee Good and Alex Pretti. Both were exercising their First Amendment right to observe and record law enforcement when they were killed. And it’s only because others similarly exercised their right to record that these killings were documented and widely exposed, countering false narratives the Trump Administration promoted in an attempt to justify the unjustifiable.

These constitutional violations are systemic, not one-offs. Just last week, the Associated Press reported on a leaked ICE memo that authorizes agents to enter homes based solely on “administrative” warrants—lacking any judicial involvement. This government policy is contrary to the “very core” of the Fourth Amendment, which protects us against unreasonable search and seizure, especially in our own homes.

These violations must stop now. ICE and CBP have grown so disdainful of the rule of law that reforms or guardrails cannot suffice. We join with many others in saying that Congress must vote to reject any further funding of ICE and CBP this week. But that is not enough. It’s time for Congress to do the real work of rebuilding our immigration enforcement system from the ground up, so that it respects human rights (including digital rights) and human dignity, with real accountability for individual officers, their leadership, and the agency as a whole.

EFF Joins Internet Advocates Calling on the Iranian Government to Restore Full Internet Connectivity

20 January 2026 at 19:00

Earlier this month, Iran’s internet connectivity faced one of its most severe disruptions in recent years with a near-total shutdown from the global internet and major restrictions on mobile access.

EFF joined architects, operators, and stewards of the global internet infrastructure in calling upon authorities in Iran to immediately restore full and unfiltered internet access. We further call upon the international technical community to remain vigilant in monitoring connectivity and to support efforts that ensure the internet remains open, interoperable, and accessible to all.

This is not the first time people in Iran have been forced to endure such disruptions; the government has suppressed internet access in the country for many years. In the past three years in particular, the people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.

EFF has long maintained that governments and occupying powers must not disrupt internet or telecommunication access. Cutting off telecommunications and internet access is a violation of basic human rights and a direct attack on people's ability to access information and communicate with one another. 

Our joint statement continues:

“We assert the following principles:

  1. Connectivity is a Fundamental Enabler of Human Rights: In the 21st century, the right to assemble, the right to speak, and the right to access information are inextricably linked to internet access.
  2. Protecting the Global Internet Commons: National-scale shutdowns fragment the global network, undermining the stability and trust required for the internet to function as a global commons.
  3. Transparency: The technical community condemns the use of BGP manipulation and infrastructure filtering to obscure events on the ground.”

Read the letter in full here

EFF Condemns FBI Search of Washington Post Reporter’s Home

17 January 2026 at 00:19

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation. 

Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch. 

The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.

In the statement published yesterday, we call on Congress: 

  1. To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions;
  2. To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources;
  3. To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists;
  4. To pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment.

We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.

Congress Wants To Hand Your Parenting to Big Tech

16 January 2026 at 19:43

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”

That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control. 

Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.  

Kids Under 13 Are Already Banned From Social Media

One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of the bill’s sponsors, one might think that’s a big change, and that today’s rules let kids wander freely onto social media sites. But that’s not the case.

Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.

Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.

Most Social Media Use By Younger Kids Is Family-Mediated 

If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help. 

A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.) 

Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.

A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform. 

The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together. 

KOSMA Forces Platforms To Override Families 

This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.

Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13. 

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—they will be minors following their parents’ guidance, and the parents themselves.

Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.

Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.

Your Family, Their Algorithms

KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t. 

These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems. 

What Families Lose 

This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.

These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and usually decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.

What Can Be Done Instead

KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.

If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.

But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.

Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.

The Year States Chose Surveillance Over Safety: 2025 in Review

2 January 2026 at 07:48

2025 was the year age verification went from a fringe policy experiment to a sweeping reality across the United States. Half of the U.S. now mandates age verification for accessing adult content or social media platforms. Nine states saw their laws take effect this year alone, with more coming in 2026.

The good news is that courts have blocked many of the laws seeking to impose age-verification gates on social media, largely for the same reasons that EFF opposes these efforts. Age-verification measures censor the internet and burden access to online speech. Though age-verification mandates are often touted as "online safety" measures for young people, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users' privacy, anonymity, and security.

If you're feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you're not alone. That's why we've launched EFF's Age Verification Resource Hub at EFF.org/Age—a one-stop shop to understand what these laws actually do, what's at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and safe internet. Moreover, there is hope. Although the Supreme Court ruled that imposing age-verification gates to access adult content does not violate the First Amendment on its face, the legal fight continues regarding whether those laws are constitutional. 

As we built the hub throughout 2025, we also fought state mandates in legislatures, courts, and regulatory hearings. Here's a summary of what happened this year.

The Laws That Took Effect (And Immediately Backfired)

Nine states’ age verification laws for accessing adult content went into effect in 2025:

Predictably, users didn’t stop accessing adult content after the laws went into effect; they just changed how they got to it. As we’ve said elsewhere: the internet always routes around censorship.

In fact, research from the New York Center for Social Media and Politics and the public policy nonprofit the Phoenix Center confirms what we’ve warned about from the beginning: age verification laws don’t work. Their research found:

  • Searches for platforms that have blocked access to residents in states with these laws dropped significantly, while searches for offshore sites surged.
  • Researchers saw a predictable surge in VPN usage following the enactment of age verification laws; Florida, for example, saw a 1,150% increase in VPN demand after its law took effect.

As foretold, when platforms block access or require invasive verification, it drives people to sites that operate outside the law—platforms that often pose greater safety risks. Instead of protecting young people, these laws push them toward less secure, less regulated spaces.

Legislation Watch: Expanding Beyond “Adult Content”

Lawmakers Take Aim at Social Media Platforms

Earlier this year, we raised the alarm that state legislatures wouldn’t stop at adult content. Sure enough, throughout 2025, lawmakers set their sights on young people’s social media usage, passing laws that require platforms to verify users’ ages and obtain parental consent for accounts belonging to anyone under 18. Four states had already passed similar laws in previous years. These laws were swiftly blocked in courts because they violate the First Amendment and subject every user to surveillance as a condition of participation in online speech.

Warning Labels and Time Limits

And it doesn’t stop with age verification. California and Minnesota passed new laws this year requiring social media platforms to display warning labels to users. Virginia’s SB 854, which also passed this year, took a different approach. It requires social media platforms to use “commercially reasonable efforts” to determine a user's age and, if that user is under 16, limits them to one hour per day per application by default unless a parent changes the time allowance.

EFF opposes these laws because they raise serious First Amendment concerns. And courts have agreed: in November 2025, the U.S. District Court for the District of Colorado temporarily halted Colorado's warning label law, which would have required platforms to display warnings to users under 18 about the negative impacts of social media. We expect courts to similarly halt California and Minnesota’s laws.

App Store and Device-Level Age Verification

2025 also saw the rise of device-level and app-store age verification laws, which shift the obligation to verify users onto app stores and operating system providers. These laws seriously impede users (adults and young people alike) from accessing information, particularly since they block a much broader swath of content: not only adult or sexual content, but every bit of content provided by every application. In October, California Governor Gavin Newsom signed the Digital Age Assurance Act (AB 1043), which takes a slightly different approach to age verification in that it requires “operating system providers”—not just app stores—to offer an interface at device/account setup that prompts the account holder to indicate the user’s birth date or age. Developers must request an age signal when applications are downloaded and launched. These laws expand beyond earlier legislation passed in other states, which made individual websites responsible for compliance, and instead place the responsibility on app stores, operating systems, or device makers at a more fundamental level.

Again, these laws have drawn legal challenges. In October, the Computer & Communications Industry Association (CCIA) filed a lawsuit arguing that Texas’s SB 2420 is unconstitutional. A separate suit, Students Engaged in Advancing Texas (SEAT) v. Paxton, challenges the same law on First Amendment grounds, arguing it violates the free speech rights of young people and adults alike. Both lawsuits argue that the burdens placed on platforms, developers, and users outweigh any proposed benefits.

From Legislation to Regulation: Rulemaking Processes Begin

States with existing laws have also begun the process of rulemaking—translating broad statutory language into specific regulatory requirements. These rulemaking processes matter, because the specific technical requirements, data-handling procedures, and enforcement mechanisms will determine just how invasive these laws become in practice.

California’s Attorney General held a hearing in November to solicit public comment on methods and standards for age assurance under SB 976, the “Protecting Our Kids from Social Media Addiction Act,” which will require age verification by the end of 2026. EFF has supported the legal challenge to SB 976 since its passage, and federal courts have blocked portions of the law from taking effect. Now in the rulemaking process, EFF submitted comments raising concerns about the discriminatory impacts of any proposed regulations.

New York's Attorney General also released proposed rules for the state’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, describing which companies must comply and the standards for determining users’ age and obtaining parental consent. EFF submitted comments opposing the age verification requirements in September of 2024, and again in December 2025.

Our comments in both states warn that these rules risk entrenching invasive age verification systems and normalizing surveillance as a prerequisite for online participation.

The Boundaries Keep Shifting

As we’ve said, age verification will not stop at adult content and social media. Lawmakers are already proposing bills to require ID checks for everything from skincare products in California to diet supplements in Washington. Lawmakers in Wisconsin and Michigan have set their sights on virtual private networks, or VPNs—proposing legislation that would ban VPN use so that people cannot bypass age verification laws. AI chatbots are next on the list, with several states considering legislation that would require age verification for all users. Behind the reasonable-sounding talking points lies a sprawling surveillance regime that would reshape how people of all ages use the internet. EFF remains ready to push back against these efforts in legislatures, regulatory hearings, and courtrooms.

2025 showed us that age verification mandates are spreading rapidly, despite clear evidence that they don't work and actively harm the people they claim to protect. 2026 will be the year we push back harder—like the future of a free, open, private, and safe internet depends on it.

This is why we must fight back to protect the internet that we know and love. If you want to learn more about these bills, visit EFF.org/Age.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

States Tried to Censor Kids Online. Courts, and EFF, Mostly Stopped Them: 2025 in Review

31 December 2025 at 18:03

Lawmakers in at least a dozen states believe that they can pass laws blocking young people from social media or requiring them to get their parents’ permission before logging on. Fortunately, nearly every trial court to review these laws has ruled that they are unconstitutional.

It’s not just courts telling these lawmakers they are wrong. EFF has spent the past year filing friend-of-the-court briefs in courts across the country explaining how these laws violate young people’s First Amendment rights to speak and get information online. In the process, these laws also burden adults’ rights, and jeopardize everyone’s privacy and data security.

Minors have long had the same First Amendment rights as adults: to talk about politics, create art, comment on the news, discuss or practice religion, and more. The internet simply amplified their ability to speak, organize, and find community.

Although these state laws vary in scope, most have two core features. First, they require social media services to estimate or verify the ages of all users. Second, they either ban minor access to social media, or require parental permission. 

In 2025, EFF filed briefs challenging age-gating laws in California (twice), Florida, Georgia, Mississippi, Ohio, Utah, Texas, and Tennessee. Across these cases we argued the same point: these laws burden the First Amendment rights of both young people and adults. The ACLU, Center for Democracy & Technology, Freedom to Read Foundation, LGBT Technology Institute, TechFreedom, and Woodhull Freedom Foundation joined many of these briefs.

There is no “kid exception” to the First Amendment. The Supreme Court has repeatedly struck down laws that restrict minors’ speech or impose parental-permission requirements. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks. As EFF has urged, lawmakers should pursue strong privacy laws, not censorship, to address online harms.

These laws also burden everyone’s speech by requiring users to prove their age. ID-based systems of access can lock people out if they don’t have the right form of ID, and biometric systems are often discriminatory or inaccurate. Requiring users to identify themselves before speaking also chills anonymous speech—protected by the First Amendment, and essential for those who risk retaliation. 

Finally, requiring users to provide sensitive personal information increases their risk of future privacy and security invasions. Most of these laws perversely require social media companies to collect even more personal information from everyone, especially children, who can be more vulnerable to identity theft.

EFF will continue to fight for the rights of minors and adults to access the internet, speak freely, and organize online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

EFFector Audio Speaks Up for Our Rights: 2025 Year in Review

28 December 2025 at 23:57

This year, you may have heard EFF sounding off about our civil liberties on NPR, BBC Radio, or any number of podcasts. But we also started sharing our voices directly with listeners in 2025. In June, we revamped EFFector, our long-running electronic newsletter, and launched a new audio edition to accompany it.

Providing a recap of the week's most important digital rights news, EFFector's audio companion features exclusive interviews where EFF's lawyers, activists, and technologists can dig deeper into the biggest stories in privacy, free speech, and innovation. Here are just some of the best interviews from EFFector Audio in 2025.

Unpacking a Social Media Spying Scheme

Earlier this year, the Trump administration launched a sprawling surveillance program to spy on the social media activity of millions of noncitizens—and punish those who express views it doesn't like. This fall, EFF's Lisa Femia came onto EFFector Audio to explain how this scheme works, its impact on free speech, and, importantly, why EFF is suing to stop it.

"We think all of this is coming together as a way to chill people's speech and make it so they do not feel comfortable expressing core political viewpoints protected by the First Amendment," Femia said.


Challenging the Mass Surveillance of Drivers

But Lisa was hardly the only guest talking about surveillance. In November, EFF's Andrew Crocker spoke to EFFector about Automated License Plate Readers (ALPRs), a particularly invasive and widespread form of surveillance. ALPR camera networks take pictures of every passing vehicle and upload the location information of millions of drivers into central databases. Police can then search these databases—typically without any judicial approval—to instantly reconstruct driver movements over weeks, months, or even years at a time.

"It really is going to be a very detailed picture of your habits over the course of a long period of time," said Crocker, explaining how ALPR location data can reveal where you work, worship, and many other intimate details about your life. Crocker also talked about a new lawsuit, filed by two nonprofits represented by EFF and the ACLU of Northern California, challenging the city of San Jose's use of ALPR searches without a warrant.

Similarly, EFF's Mario Trujillo joined EFFector in early November to discuss the legal issues and mass surveillance risks around face recognition in consumer devices.

Simple Tips to Take Control of Your Privacy

Online privacy isn’t dead. But tech giants have tried to make protecting it as annoying as possible. To help users take back control, we celebrated Opt Out October, sharing daily privacy tips all month long on our blog. In addition to laying down some privacy basics, EFF's Thorin Klosowski talked to EFFector about how small steps to protect your data can build up into big differences.

"This is a way to kind of break it down into small tasks that you can do every day and accomplish a lot," said Klosowski. "By the end of it, you will have taken back a considerable amount of your privacy."

User privacy was the focus of a number of EFFector interviews. In July, EFF's Lena Cohen spoke about what lawmakers, tech companies, and individuals can do to fight online tracking. That same month, Matthew Guariglia talked about precautions consumers can take before bringing surveillance devices like smart doorbells into their homes.

Digging Into the Next Wave of Internet Censorship

One of the most troubling trends of 2025 was the proliferation of age verification laws, which require online services to check, estimate, or verify users’ ages. Though these mandates claim to protect children, they ultimately create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

This summer, EFF's Rin Alajaji came onto EFFector Audio to explain how these laws work and why we need to speak out against them.

"Every person listening here can push back against these laws that expand censorship," she said. "We like to say that if you care about internet freedom, this fight is yours."

This was just one of several interviews about free speech online. This year, EFFector also hosted Paige Collings to talk about the chaotic rollout of the UK's Online Safety Act and Lisa Femia (again!) to discuss the abortion censorship crisis on social media.

You can hear all these episodes and future installments of EFFector's audio companion on YouTube or the Internet Archive. Or check out our revamped EFFector newsletter by subscribing at eff.org/effector!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review

27 December 2025 at 23:25

State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim. 

Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online. 

These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other digestible, organized formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression. 

On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices. 

Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.

EFF has lobbied against these bills at both the state and federal levels, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online – including minors – in the future.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Age Verification Threats Across the Globe: 2025 in Review

25 December 2025 at 19:17

Age verification mandates won't magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing or attempting to introduce legislation requiring all online users to verify their ages before accessing the digital space. 

The UK’s misguided approach to protecting young people online grabbed many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but the UK was not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented. 

Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.

In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:

Social Media Bans for Young People

Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And because young people will always find ways to circumvent age restrictions, those who do will be left without any protections or age-appropriate experiences.

Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts from being created—or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch, and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.

Similarly, through its guidelines under Article 28 of the Digital Services Act, the European Commission this year took a first step towards mandatory age verification that could undermine privacy, expression, and participation rights for young people—rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament has already gone a step further, proposing an EU digital minimum age of 16 for access to social media, a move that aligns with European Commission President Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.

Push for Age Assurance on All Users 

This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.

The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.

We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.

Rushed Age Assurance through the EU Digital Wallet

There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.

These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. This has come with its own set of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed out a “mini AV” app this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online. 

However, this proposed solution directly ties national ID to an age verification method. It also invites mission creep in EU member states: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes is already in place, should governments mandate that expansion in the future.  

Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.

Ways Forward

The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. 

Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians to pursue what is best, not what is easy; in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

Politicians Rushed Through An Online Speech “Solution.” Victims Deserve Better.

24 December 2025 at 17:44

Earlier this year, both chambers of Congress passed the TAKE IT DOWN Act. This bill, while well-intentioned, gives powerful people a new legal tool to force online platforms to remove lawful speech that they simply don't like. 

The bill, sponsored by Senate Commerce Chair Ted Cruz (R-TX) and Rep. Maria Salazar (R-FL), sought to speed up the removal of troubling online content: non-consensual intimate imagery (NCII). The spread of NCII is a serious problem, as is digitally altered NCII, sometimes called “deepfakes.” That’s why 48 states have specific laws criminalizing the distribution of NCII, in addition to the long-existing defamation, harassment, and extortion statutes—all of which can be brought to bear against those who abuse NCII. Congress can and should protect victims of NCII by enforcing and improving these laws. 

Unfortunately, TAKE IT DOWN takes another approach: it creates an unneeded notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it seeks to solve. 

While Congress was still debating the bill, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate outlining our concerns with the proposal. 

First, TAKE IT DOWN’s removal provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the law. We worry that bad-faith actors will use the law’s expansive definition to remove lawful speech that is not NCII and may not even contain sexual content. 

Worse, the law contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The law requires that apps and websites remove content within 48 hours or face significant legal risks. That ultra-tight deadline means small apps and websites will have to comply so quickly to avoid legal risk that they won’t be able to investigate or verify claims. 

Finally, there are no legal protections for providers when they believe a takedown request was sent in bad faith to target lawful speech. TAKE IT DOWN is a one-way censorship ratchet, and its fast timeline discourages providers from standing up for their users’ free speech rights. 

This new law could lead to the use of automated filters that tend to flag legal content, from commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers can’t view the contents of messages on their platforms. Platforms could respond by abandoning encryption entirely in order to be able to monitor content, turning private conversations into surveilled spaces.

We asked for several changes to protect legitimate speech that is not NCII, and to include common-sense safeguards for encryption. Thousands of EFF members joined us by writing similar messages to their Senators and Representatives. That resulted in several attempts to offer common-sense amendments during the Committee process. 

However, Congress passed the bill without those needed changes, and it was signed into law in May 2025. The main takedown provisions of the bill will take effect in 2026. We’ll be pushing online platforms to be transparent about the content they take down because of this law, and will be on the watch for takedowns that overreach and censor lawful speech. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

EFF, Open Rights Group, Big Brother Watch, and Index on Censorship Call on UK Government to Reform or Repeal Online Safety Act

15 December 2025 at 13:20

Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures. 

In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously. 

Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and reform or repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.

The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:

  1. Make it harder for not-for-profits and community groups to run their own websites. 
  2. Result in the wrong types of content being taken down.
  3. Lead to age-assurance being applied widely to all sorts of content.

Our briefing continues:

“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”

The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.

If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Read the briefing in full here.

Update, 17 Dec 2025: This article was edited to include the word “reform” alongside “repeal.” 
