The rapidly escalating conflict between Anthropic and the Pentagon, which started when the company refused to let the government use its technology to spy on Americans, has now gone to court. The Department of Defense retaliated by designating the company a “supply chain risk” (SCR). Now, Anthropic is asking courts to block the designation, arguing that the First Amendment does not permit the government to coerce a private actor to rewrite its code to serve government ends.
We agree.
As EFF, the Foundation for Individual Rights and Expression, and multiple other public interest organizations explained in a brief filed in support of Anthropic’s motion, the development and operation of large language models involve multiple expressive choices protected by the First Amendment. Requiring a company to rewrite its code to remove guardrails means compelling different expression, a clear constitutional violation. Further, the public record shows that the SCR designation is intended to punish the company both for pushing back and for its CEO’s public statements explaining that AI may supercharge surveillance practices that current law has proven ill-equipped to address.
As we also explain, the company’s concerns about how the government will use its technology are well-founded. The U.S. government has a long history of illegally surveilling its citizens without adequate judicial oversight based on questionable interpretations of its Constitutional and statutory obligations. The Department of Defense acquires vast troves of personal information from commercial entities, including individuals’ physical location, social media, and web browsing data. Other government agencies continue to collect and query vast quantities of Americans’ information, including by acquiring information from third party data brokers.
A growing body of social science research illustrates the chilling effects of these pervasive activities. Fearing retribution for unpopular views, dissenters stay silent. And AI only exacerbates the problem. AI can quickly analyze the government’s massive datasets, or combine that information with data scraped from the internet, purchased on the commercial data broker market, or gathered by local police surveillance devices, to construct a comprehensive picture of a person’s life and infer sensitive details like their religious beliefs, medical conditions, political opinions, or even sex partners. For example, an agency could use AI to infer an individual’s association with a particular mosque based on data showing that they visited its website, followed its social media accounts, and were located near the mosque during religious services. AI can also deanonymize online speech by using public information to unmask anonymous users.
It is easy to conceive how an agency, a government employee with improper intent, or a malicious hacker could exploit these capabilities to monitor public discourse, preemptively squelch dissent, or persecute people from marginalized communities. Against this background and absent meaningful changes to the governing national security laws and judicial oversight structure, it is entirely reasonable for Anthropic—or any other company—to insist on its own guardrails.
Without action from Congress, the task of protecting your privacy has fallen in large part to Big Tech—something no one wants, including Big Tech. But if Congress won’t do it, companies like Anthropic must be allowed to step in, without facing retribution.
OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance—early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal—Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in.
"After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time."
“Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States.
The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.
Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom?
Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.
OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.
OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.
Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind standing by their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.
OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that they aren’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies just cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.
If you don’t go searching for AI services, they’ll find you all the same. Every major tech company feels a moral obligation not just to develop an AI assistant, integrated chatbot, or autonomous agent, but to bake it into their existing mainstream products and forcibly activate it for tens of millions of users. Here are just a few examples from the last six months:
Google activated Gemini for all U.S. Chrome users, cranked its browser functionality to the max, aggressively expanded the reach of AI Overviews in search results, and baked a whole suite of AI features into its online services (Gmail, Google Docs, and others).
Apple integrated its own Apple Intelligence (conveniently sharing the AI acronym) into the latest OS versions across all device types and most of its native apps.
On the flip side, geeks have rushed to build their own “personal Jarvises” by renting VPS instances or hoarding Mac minis to run the OpenClaw AI agent. Unfortunately, OpenClaw’s security issues with default settings turned out to be so massive that it’s already been dubbed the biggest cybersecurity threat of 2026.
Beyond the sheer annoyance of having something shoved down your throat, this AI epidemic brings some very real practical risks and headaches. AI assistants hoover up every bit of data they can get their hands on, parsing the context of the websites you visit, analyzing your saved documents, reading through your chats, and so on. This gives AI companies an unprecedentedly intimate look into every user’s life.
A leak of this data during a cyberattack — whether from the AI provider’s servers or from the cache on your own machine — could be catastrophic. These assistants can see and cache everything you can, including data usually tucked behind multiple layers of security: banking info, medical diagnoses, private messages, and other sensitive intel. We took a deep dive into how this plays out when we broke down the issues with the AI-powered Copilot+ Recall system, which Microsoft also planned to force-feed to everyone. On top of that, AI can be a total resource hog, eating up RAM, GPU cycles, and storage, which often leads to a noticeable hit to system performance.
For those who want to sit out the AI storm and avoid these half-baked, rushed-to-market neural network assistants, we’ve put together a quick guide on how to kill the AI in popular apps and services.
How to disable AI in Google Docs, Gmail, and Google Workspace
Google’s AI assistant features in Mail and Docs are lumped together under the umbrella of “smart features”. In addition to the large language model, this includes various minor conveniences, like automatically adding meetings to your calendar when you receive an invite in Gmail. Unfortunately, it’s an all-or-nothing deal: you have to disable all of the “smart features” to get rid of the AI.
To do this, open Gmail, click the Settings (gear) icon, and then select See all settings. On the General tab, scroll down to Google Workspace smart features. Click Manage Workspace smart feature settings and toggle off two options: Smart features in Google Workspace and Smart features in other Google products. We also recommend unchecking the box next to Turn on smart features in Gmail, Chat, and Meet on the same general settings tab. You’ll need to restart your Google apps afterward (which usually happens automatically).
How to disable AI Overviews in Google Search
You can kill off AI Overviews in search results on both desktops and smartphones (including iPhones), and the fix is the same across the board. The simplest way to bypass the AI overview on a case-by-case basis is to append -ai to your search query — for example, how to make pizza -ai. Unfortunately, this method occasionally glitches, causing Google to abruptly claim it found absolutely nothing for your request.
If that happens, you can achieve the same result by switching the search results page to Web mode. To do this, select the Web filter immediately below the search bar — you’ll often find it tucked away under the More button.
A more radical solution is to jump ship to a different search engine entirely. For instance, DuckDuckGo not only tracks users less and shows fewer ads, but it also offers a dedicated AI-free search — just bookmark the search page at noai.duckduckgo.com.
How to disable AI features in Chrome
Chrome currently has two types of AI features baked in. The first communicates with Google’s servers and handles things like the smart assistant, an autonomous browsing AI agent, and smart search. The second runs locally and handles more utilitarian tasks, such as identifying phishing pages or grouping browser tabs. The first group of settings is labeled AI mode, while the second contains the term Gemini Nano.
To disable them, type chrome://flags into the address bar and hit Enter. You’ll see a list of system flags and a search bar; type “AI” into that search bar. This will filter the massive list down to about a dozen AI features (and a few other settings where those letters just happen to appear in a longer word). The second search term you’ll need in this window is “Gemini”.
After reviewing the options, you can disable the unwanted AI features — or just turn them all off — but the bare minimum should include:
AI Mode Omnibox entrypoint
AI Entrypoint Disabled on User Input
Omnibox Allow AI Mode Matches
Prompt API for Gemini Nano
Prompt API for Gemini Nano with Multimodal Input
Set all of these to Disabled.
How to disable AI features in Firefox
While Firefox doesn’t have its own built-in chatbots and hasn’t (yet) tried to force agent-based features on users, the browser does come equipped with smart tab grouping, a sidebar for chatbots, and a few other perks. Generally, AI in Firefox is much less “in your face” than in Chrome or Edge. But if you still want to pull the plug, you have two ways to do it.
The first method is available in recent Firefox releases — starting with version 148, a dedicated AI Controls section appeared in the browser settings, though the controls are currently a bit sparse. You can use a single toggle to completely Block AI enhancements, shutting down AI features entirely. You can also specify whether you want to use On-device AI by downloading small local models (currently just for translations) and configure AI chatbot providers in sidebar, choosing between Anthropic Claude, ChatGPT, Copilot, Google Gemini, and Le Chat Mistral.
The second path — for older versions of Firefox — requires a trip into the hidden system settings. Type about:config into the address bar, hit Enter, and click the button to confirm that you accept the risk of poking around under the hood.
A massive list of settings will appear along with a search bar. Type “ML” to filter for settings related to machine learning.
To disable AI in Firefox, toggle the browser.ml.enabled setting to false. This should disable all AI features across the board, but community forums suggest this isn’t always enough to do the trick. For a scorched-earth approach, set the following parameters to false (or selectively keep only what you need):
ml.chat.enabled
ml.linkPreview.enabled
ml.pageAssist.enabled
ml.smartAssist.enabled
ml.enabled
ai.control.translations
tabs.groups.smart.enabled
urlbar.quicksuggest.mlEnabled
This will kill off chatbot integrations, AI-generated link descriptions, assistants and extensions, local translation of websites, tab grouping, and other AI-driven features.
How to disable AI features in Microsoft apps
Microsoft has managed to bake AI into almost every single one of its products, and turning it off is often no easy task — especially since the AI sometimes has a habit of resurrecting itself without your involvement.
How to disable AI features in Edge
Microsoft’s browser is packed with AI features, ranging from Copilot to automated search. To shut them down, follow the same logic as with Chrome: type edge://flags into the Edge address bar, hit Enter, then type “AI” or “Copilot” into the search box. From there, you can toggle off the unwanted AI features, such as:
Enable Compose (AI-writing) on the web
Edge Copilot Mode
Edge History AI
Another way to ditch Copilot is to enter edge://settings/appearance/copilotAndSidebar into the address bar. Here, you can customize the look of the Copilot sidebar and tweak personalization options for results and notifications. Don’t forget to peek into the Copilot section under App-specific settings — you’ll find some additional controls tucked away there.
How to disable Microsoft Copilot
Microsoft Copilot comes in two flavors: as a component of Windows (Microsoft Copilot), and as part of the Office suite (Microsoft 365 Copilot). Their functions are similar, but you’ll have to disable one or both depending on exactly what the Redmond engineers decided to shove onto your machine.
The simplest thing you can do is just uninstall the app entirely. Right-click the Copilot entry in the Start menu and select Uninstall. If that option isn’t there, head over to your installed apps list (Start → Settings → Apps) and uninstall Copilot from there.
In certain builds of Windows 11, Copilot is baked directly into the OS, so a simple uninstall might not work. In that case, you can toggle it off via the settings: Start → Settings → Personalization → Taskbar → turn off Copilot.
If you ever have a change of heart, you can always reinstall Copilot from the Microsoft Store.
It’s worth noting that many users have complained about Copilot automatically reinstalling itself, so you might want to do a weekly check for a couple of months to make sure it hasn’t staged a comeback. For those who are comfortable tinkering with the System Registry (and understand the consequences), you can follow this detailed guide to prevent Copilot’s silent resurrection by disabling the SilentInstalledAppsEnabled flag and adding/enabling the TurnOffWindowsCopilot parameter.
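If you’d rather script those registry changes than click through the Registry Editor, here is a minimal Python sketch using the standard winreg module. It assumes the commonly documented locations for these two values (ContentDeliveryManager for SilentInstalledAppsEnabled, and the WindowsCopilot policy key for TurnOffWindowsCopilot); registry layouts differ between Windows builds, so verify the paths against the guide before running, and sign out or reboot afterwards.

import winreg

def set_dword(root, path, name, value):
    # Create the key if it doesn't exist yet, then write a DWORD value.
    with winreg.CreateKeyEx(root, path, 0, winreg.KEY_WRITE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

# Stop Windows from silently (re)installing suggested apps in the background.
set_dword(winreg.HKEY_CURRENT_USER,
          r"Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager",
          "SilentInstalledAppsEnabled", 0)

# Policy-style switch that turns off Windows Copilot for the current user.
set_dword(winreg.HKEY_CURRENT_USER,
          r"Software\Policies\Microsoft\Windows\WindowsCopilot",
          "TurnOffWindowsCopilot", 1)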
How to disable Microsoft Recall
The Microsoft Recall feature, first introduced in 2024, works by constantly taking screenshots of your computer screen and having a neural network analyze them. All that extracted information is dumped into a database, which you can then search using an AI assistant. We’ve previously written in detail about the massive security risks Microsoft Recall poses.
Under pressure from cybersecurity experts, Microsoft was forced to push the launch of this feature from 2024 to 2025, significantly beefing up the protection of the stored data. However, the core of Recall remains the same: your computer still remembers your every move by constantly snapping screenshots and OCR-ing the content. And while the feature is no longer enabled by default, it’s absolutely worth checking to make sure it hasn’t been activated on your machine.
To check, head to the settings: Start → Settings → Privacy & Security → Recall & snapshots. Ensure the Save snapshots toggle is turned off, and click Delete snapshots to wipe any previously collected data, just in case.
How to disable AI in Notepad and Windows context actions
AI has seeped into every corner of Windows, even into File Explorer and Notepad. You might even trigger AI features just by accidentally highlighting text in an app — a feature Microsoft calls “AI Actions”. To shut this down, head to Start → Settings → Privacy & Security → Click to Do.
Notepad has received its own special Copilot treatment, so you’ll need to disable AI there separately. Open the Notepad settings, find the AI features section, and toggle Copilot off.
Finally, Microsoft has even managed to bake Copilot into Paint. Unfortunately, as of right now, there is no official way to disable the AI features within the Paint app itself.
How to disable AI in WhatsApp
In several regions, WhatsApp users have started seeing typical AI additions like suggested replies, AI message summaries, and a brand-new Chat with Meta AI button. While Meta claims the first two features process data locally on your device and don’t ship your chats off to their servers, verifying that is no small feat. Luckily, turning them off is straightforward.
To disable Suggested Replies, go to Settings → Chats → Suggestions & smart replies and toggle off Suggested replies. You can also kill off AI Sticker suggestions in that same menu. As for the AI message summaries, those are managed in a different location: Settings → Notifications → AI message summaries.
How to disable AI on Android
Given the sheer variety of manufacturers and Android flavors, there’s no one-size-fits-all instruction manual for every single phone. Today, we’ll focus on killing off Google’s AI services — but if you’re using a device from Samsung, Xiaomi, or others, don’t forget to check your specific manufacturer’s AI settings. Just a heads-up: fully scrubbing every trace of AI might be a tall order — if it’s even possible at all.
In Google Messages, the AI features are tucked away in the settings: tap your account picture, select Messages settings, then Gemini in Messages, and toggle the assistant off.
Broadly speaking, the Gemini chatbot is a standalone app that you can uninstall by heading to your phone’s settings and selecting Apps. However, given Google’s master plan to replace the long-standing Google Assistant with Gemini, uninstalling it might become difficult — or even impossible — down the road.
If you can’t completely uninstall Gemini, head into the app to kill its features manually. Tap your profile icon, select Gemini Apps activity, and then choose Turn off or Turn off and delete activity. Next, tap the profile icon again and go to the Connected Apps setting (it may be hiding under the Personal Intelligence setting). From here, you should disable all the apps where you don’t want Gemini poking its nose in.
How to disable Apple Intelligence
Apple’s platform-level AI features, collectively known as Apple Intelligence, are refreshingly straightforward to disable. In your settings — on desktops, smartphones, and tablets alike — simply look for the section labeled Apple Intelligence & Siri. By the way, depending on your region and the language you’ve selected for your OS and Siri, Apple Intelligence might not even be available to you yet.
The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of Anthropic’s products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made it clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give it unrestricted use of the technology. Anthropic refused, and the DoD retaliated.
There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but it’s not a sustainable or reliable solution to build our rights on. Given the government’s loose interpretations of the law, its ability to find loopholes to surveil you, and its willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.
Imposing and enforcing such restrictions is properly a role for Congress and the courts, not the private sector.
The companies know this. Speaking about the specific risk that AI poses to privacy, Anthropic CEO Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data that has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s now possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.”
The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of peoples’ devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that can do this kind of analysis, including Palantir, which performs AI-enabled analysis of huge amounts of data, the concerns are incredibly well founded.
But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.
EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state.
Modern software development relies on containers and the use of third-party software modules. On the one hand, this greatly facilitates the creation of new software, but on the other, it gives attackers additional opportunities to compromise the development environment. News about attacks on the supply chain through the distribution of malware via various repositories appears with alarming regularity. Therefore, tools that allow the scanning of images have long been an essential part of secure software development.
Our portfolio has long included a solution for protecting container environments. It allows the scanning of images at different stages of development for malware, known vulnerabilities, configuration errors, the presence of confidential data in the code, and so on. However, in order to make an informed decision about the state of security of a particular image, the operator of the cybersecurity solution may need some more context. Of course, it’s possible to gather this context independently, but if a thorough investigation is conducted manually each time, development may be delayed for an unpredictable period of time. Therefore, our experts decided to add the ability to look at the image from a fresh perspective; of course, not with a human eye — AI is indispensable nowadays.
OpenAI API
Our Kaspersky Container Security solution (a key component of Kaspersky Cloud Workload Security) now supports an application programming interface for connecting external large language models. So, if a company has deployed a local LLM (or has a subscription to connect a third-party model) that supports the OpenAI API, it’s possible to connect the LLM to our solution. This gives a cybersecurity expert the opportunity to get both additional context about uploaded images and an independent risk assessment by means of a full-fledged AI assistant capable of quickly gathering the necessary information.
The AI provides a description that clearly explains what the image is for, what application it contains, what it does specifically, and so on. Additionally, the assistant conducts its own independent analysis of the risks of using this image and highlights measures to minimize these risks (if any are found). We’re confident that this will speed up decision-making and incident investigations and, overall, increase the security of the development process.
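To illustrate what “supports the OpenAI API” means in practice: the connected model simply needs to answer standard chat-completions requests. Below is a minimal Python sketch of that interface; the endpoint URL, token, and model name are placeholders for your own local LLM deployment, not Kaspersky Container Security settings.

import requests

BASE_URL = "http://localhost:8000/v1"  # placeholder address of a local OpenAI-compatible LLM server
API_KEY = "local-placeholder-token"    # many self-hosted servers accept any non-empty token

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "local-model",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize the purpose and risks of this container image."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Any model reachable through this kind of endpoint, whether deployed locally or via a subscription, can be connected to the solution.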
What else is new in Cloud Workload Security?
In addition to adding the API for connecting an AI assistant, our developers have made a number of other changes to the products included in the Kaspersky Cloud Workload Security offering. First, they now support single sign-on (SSO) and multi-domain Active Directory, which makes it easier to deploy the solutions in cloud and hybrid environments. In addition, Kaspersky Cloud Workload Security now scans images more efficiently and supports advanced security policy capabilities. You can learn more about the product on its official page.
The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be, “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”
In 2025, Anthropic reportedly became the first AI company cleared for use in classified operations and for handling classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.
Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.
Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons.
Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.
We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.
LLMs excel at producing code that looks mostly human-generated, but that code can often contain underlying bugs replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs make it easy for well-intentioned people to submit code that may suffer from hallucination, omission, exaggeration, or misrepresentation.
It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not outright ban LLMs, as their use has become so pervasive that a blanket ban would be impractical to enforce.
Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems. These include code reviews that turn into code refactors for our maintainers when the contributor doesn’t understand the code they submitted, and the sheer scale of AI-generated contributions that are only marginally useful or potentially unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.
EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of tech companies’ harmful practices that led us to this point. LLM-generated code isn’t written on a clean slate, but born out of a climate of companies speedrunning their profits over people. We are once again in “just trust us” territory, with Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.