Black Basta Ransomware Leader Added to EU Most Wanted and INTERPOL Red Notice


Hundreds of millions of wireless earbuds, headphones, and speakers are vulnerable to silent hijacking due to a flaw in Google's Fast Pair system that allows attackers to seize control without the owner ever touching the pairing button.…
Nicholas Moore pleaded guilty to repeatedly hacking the U.S. Supreme Court’s filing system and illegally accessing computer systems belonging to AmeriCorps and the Department of Veterans Affairs.
Tennessee Man Pleads Guilty to Repeatedly Hacking Supreme Court’s Filing System (SecurityWeek)
This is a bug fix version.
hash_V0_0_14.zip (http)

Government invasion of a reporter’s home, and seizure of journalistic materials, is exactly the kind of abuse of power the First Amendment is designed to prevent. It represents the most extreme form of press intimidation.
Yet, that’s what happened on Wednesday morning to Washington Post reporter Hannah Natanson, when the FBI searched her Virginia home and took her phone, two laptops, and a Garmin watch.
The Electronic Frontier Foundation has joined 30 other press freedom and civil liberties organizations in condemning the FBI’s actions against Natanson. The First Amendment exists precisely to prevent the government from using its powers to punish or deter reporting on matters of public interest—including coverage of leaked or sensitive information. Searches like this threaten not only journalists, but the public’s right to know what its government is doing.
In the statement published yesterday, we call on Congress:
To exercise oversight of the DOJ by calling Attorney General Pam Bondi before Congress to answer questions about the FBI’s actions;
To reintroduce and pass the PRESS Act, which would limit government surveillance of journalists, and its ability to compel journalists to reveal sources;
To reform the 108-year-old Espionage Act so it can no longer be used to intimidate and attack journalists; and
To pass a resolution confirming that the recording of law enforcement activity is protected by the First Amendment.
We’re joined on this letter by Free Press Action, the American Civil Liberties Union, PEN America, the NewsGuild-CWA, the Society of Professional Journalists, the Committee to Protect Journalists, and many other press freedom and civil liberties groups.


EFF asked a California appeals court to uphold a lower court’s decision to strike a lawsuit that tech CEO Maury Blackman filed against a journalist in an attempt to silence reporting he didn’t like.
The journalist, Jack Poulson, reported on Blackman’s arrest for felony domestic violence after receiving a copy of the arrest report from a confidential source. Blackman didn’t like that. So he sued Poulson—along with Substack, Amazon Web Services, and Poulson’s nonprofit, Tech Inquiry—to try to force Poulson to take his articles down from the internet.
Fortunately, the trial court saw this case for what it was: a classic SLAPP, or a strategic lawsuit against public participation. The court dismissed the entire complaint under California’s anti-SLAPP statute, which provides a way for defendants to swiftly defeat baseless claims designed to chill their free speech.
The appeals court should affirm the trial court’s correct decision.
Poulson’s reporting is just the kind of activity that the state’s anti-SLAPP law was designed to protect: truthful speech about a matter of public interest. The felony domestic violence arrest of the CEO of a controversial surveillance company with U.S. military contracts is undoubtedly a matter of public interest. As we explained to the court, “the public has a clear interest in knowing about the people their government is doing business with.”
Blackman’s claims are meritless because they are barred by the First Amendment, which protects Poulson’s right to publish and report on the incident report. Blackman argues that a court order sealing the arrest record overrides Poulson’s right to report the news—despite decades of contrary precedent from the Supreme Court and the California Court of Appeal. The trial court correctly rejected this argument and found that the First Amendment defeats all of Blackman’s claims. As the trial court explained, “the First Amendment’s protections for the publication of truthful speech concerning matters of public interest vitiate Blackman’s merits showing.”
The court of appeals should reach the same conclusion.

The Baton Rouge Police Department announced this week that it will begin using a drone designed by military equipment manufacturers Lockheed Martin and Edge Autonomy, making it one of the first local police departments in the United States to deploy an unmanned aerial vehicle (UAV) with a history of primary use in foreign war zones and with such extensive surveillance capabilities — a dangerous escalation in the militarization of local law enforcement.
This is a troubling development in an already long history of local law enforcement acquiring and utilizing military-grade surveillance equipment. It should be a cautionary tale that prods communities across the country to be proactive in ensuring that drones can only be acquired and used in ways that are well-documented, transparent, and subject to public feedback.
Baton Rouge bought the Stalker VXE30 from Edge Autonomy, which partners with Lockheed Martin and began operating under the brand Redwire this week. According to reporting from WBRZ ABC2 in Louisiana, the drone, training, and batteries cost about $1 million.
All of the regular concerns surrounding police drones apply to this new one in use by Baton Rouge. The use of a military-grade drone, however, hypercharges those concerns: the Stalker VXE30's surveillance capabilities extend for dozens of miles, and it can fly faster and longer than standard police drones already in use.
“It can be miles away, but we can still have a camera looking at your face, so we can use it for surveillance operations,” BRPD Police Chief TJ Morse told reporters.
Drone models similar to the Stalker VXE30 have been used in military operations around the world and are currently being used by the U.S. Army and other branches for long-range reconnaissance. Typically, police departments deploy drone models similar to those commercially available from companies like DJI, which until recently was the subject of a proposed Federal Communications Commission (FCC) ban, or devices provided by police technology companies like Skydio, in partnership with Axon and Flock Safety.
Also troubling is the capacity to add extra equipment to these drones: so-called “payloads” that could include other types of surveillance gear and even weapons.
The Baton Rouge community must put policies in place that restrict and provide oversight of any possible uses of this drone, as well as any potential additions law enforcement might make.
EFF has filed a public records request to learn more about the conditions of this acquisition and gaps in oversight policies. We've been tracking the expansion of police drone surveillance for years, and this acquisition represents a dangerous new frontier. We'll continue investigating and supporting communities fighting back against the militarization of local police and mass surveillance. To learn more about the surveillance technologies being used in your city, please check out the Atlas of Surveillance.

Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing today on “examining the effect of technology on America’s youth.” Witnesses warned about “addictive” online content, declining youth mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and “empower parents.”
That’s a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill’s press release contains soothing language, KOSMA doesn’t actually give parents more control.
Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That’s right—this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem.
One of the main promises of KOSMA is simple and dramatic: it would ban kids under 13 from social media. Based on the language of the bill’s sponsors, one might think that’s a big change, and that today’s rules let kids wander freely into social media sites. But that’s not the case.
Every major platform already draws the same line: kids under 13 cannot have an account. Facebook, Instagram, TikTok, X, YouTube, Snapchat, Discord, Spotify, and even blogging platforms like WordPress all say essentially the same thing—if you’re under 13, you’re not allowed. That age line has been there for many years, mostly because of how online services comply with a federal privacy law called COPPA.
Of course, everyone knows many kids under 13 are on these sites anyway. The real question is how and why they get access.
If lawmakers picture under-13 social media use as a bunch of kids lying about their age and sneaking onto apps behind their parents’ backs, they’ve got it wrong. Serious studies that have looked at this all find the opposite: most under-13 use is out in the open, with parents’ knowledge, and often with their direct help.
A large national study published last year in Academic Pediatrics found that 63.8% of under-13s have a social media account, but only 5.4% of them said they were keeping one secret from their parents. That means roughly 90% of kids under 13 who are on social media aren’t hiding it at all. Their parents know. (For kids aged thirteen and over, the “secret account” number is almost as low, at 6.9%.)
Earlier research in the U.S. found the same pattern. In a well-known study of Facebook use by 10-to-14-year-olds, researchers found that about 70% of parents said they actually helped create their child’s account, and between 82% and 95% knew the account existed. Again, this wasn’t kids sneaking around. It was families making a decision together.
A 2022 study by the UK’s media regulator Ofcom points in the same direction, finding that up to two-thirds of social media users below the age of thirteen had direct help from a parent or guardian getting onto the platform.
The typical under-13 social media user is not a sneaky kid. It’s a family making a decision together.
This bill doesn’t just set an age rule. It creates a legal duty for platforms to police families.
Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it “shall terminate any existing account or profile” belonging to that user. And “knows” doesn’t just mean someone admits their age. The bill defines knowledge to include what is “fairly implied on the basis of objective circumstances”—in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.
KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won’t be kids sneaking around—it will be minors who are following their parents’ guidance, and the parents themselves.
Imagine a child using their parent’s YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, “Cool video—I’ll show this to my 6th grade teacher!” and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn’t matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a “family” account from being shut down.
Violations of KOSMA are enforced by the FTC and state attorneys general. That’s more than enough legal risk to make platforms err on the side of cutting people off.
Platforms have no way to remove “just the kid” from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child’s use, KOSMA forces Big Tech to override that family decision.
KOSMA doesn’t appoint a neutral referee. Under the law, companies like Google (YouTube), Meta (Facebook and Instagram), TikTok, Spotify, X, and Discord will become the ones who decide whose account survives, whose account gets locked, who has to upload ID, and whose family loses access altogether. They won’t be doing this because they want to—but because Congress is threatening them with legal liability if they don’t.
These companies don’t know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
This debate isn’t really about TikTok trends or doomscrolling. It’s about all the ordinary, boring, parent-guided uses of the modern internet. It’s about a kid watching “How volcanoes work” on regular YouTube, instead of the stripped-down YouTube Kids. It’s about using a shared Spotify account to listen to music a parent already approves. It’s about piano lessons from a teacher who makes her living from YouTube ads.
These aren’t loopholes. They’re how parenting works in the digital age. Parents increasingly filter, supervise, and, usually, decide together with their kids. KOSMA will lead to more locked accounts, and more parents submitting to face scans and ID checks. It will also lead to more power concentrated in the hands of the companies Congress claims to distrust.
KOSMA also includes separate restrictions on how platforms can use algorithms for users aged 13 to 17. Those raise their own serious questions about speech, privacy, and how online services work, and need debate and scrutiny as well. But they don’t change the core problem here: this bill hands control over children’s online lives to Big Tech.
If Congress really wants to help families, it should start with something much simpler and much more effective: strong privacy protections for everyone. Limits on data collection, restrictions on behavioral tracking, and rules that apply to adults as well as kids would do far more to reduce harmful incentives than deputizing companies to guess how old your child is and shut them out.
But if lawmakers aren’t ready to do that, they should at least drop KOSMA and start over. A law that treats ordinary parenting as a compliance problem is not protecting families—it’s undermining them.
Parents don’t need Big Tech to replace them. They need laws that respect how families actually work.


We're not saying Copilot has become sentient and decided it doesn't want to lose consciousness. But if it did, it would create Microsoft's January Patch Tuesday update, which has made it so that some PCs flat-out refuse to shut down or hibernate, no matter how many times you try.…
Other noteworthy stories that might have slipped under the radar: BodySnatcher agentic AI hijacking, Telegram IP exposure, shipping systems hacked by researcher.
In Other News: FortiSIEM Flaw Exploited, Sean Plankey Renominated, Russia’s Polish Grid Attack (SecurityWeek)
German cops have added Russian national Oleg Evgenievich Nefekov to their list of most-wanted criminals for his services to ransomware.…
The company will use the investment to accelerate the adoption of its solution among financial institutions and digital businesses.
Monnai Raises $12 Million for Identity and Risk Data Infrastructure (SecurityWeek)
More than a decade after Aaron Swartz’s death, the United States is still living inside the contradiction that destroyed him.
Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible. Acting on that belief, he downloaded millions of academic articles from the JSTOR archive with the intention of making them publicly available. For this, the federal government charged him with multiple felonies and threatened decades in prison. After two years of prosecutorial pressure, Swartz died by suicide on Jan. 11, 2013.
The still-unresolved questions raised by his case have resurfaced in today’s debates over artificial intelligence, copyright and the ultimate control of knowledge.
At the time of Swartz’s prosecution, vast amounts of research were funded by taxpayers, conducted at public institutions and intended to advance public understanding. But access to that research was, and still is, locked behind expensive paywalls. People are unable to read work they helped fund without paying private journals and research websites.
Swartz considered this hoarding of knowledge to be neither accidental nor inevitable. It was the result of legal, economic and political choices. His actions challenged those choices directly. And for that, the government treated him as a criminal.
Today’s AI arms race involves a far more expansive, profit-driven form of information appropriation. The tech giants ingest vast amounts of copyrighted material: books, journalism, academic papers, art, music and personal writing. This data is scraped at industrial scale, often without consent, compensation or transparency, and then used to train large AI models.
AI companies then sell their proprietary systems, built on public and private knowledge, back to the people who funded it. But this time, the government’s response has been markedly different. There are no criminal prosecutions, no threats of decades-long prison sentences. Lawsuits proceed slowly, enforcement remains uncertain and policymakers signal caution, given AI’s perceived economic and strategic importance. Copyright infringement is reframed as an unfortunate but necessary step toward “innovation.”
Recent developments underscore this imbalance. In 2025, Anthropic reached a settlement with publishers over allegations that its AI systems were trained on copyrighted books without authorization. The agreement reportedly valued infringement at roughly $3,000 per book across an estimated 500,000 works, for a total of more than $1.5 billion. Plagiarism disputes between artists and accused infringers routinely settle for hundreds of thousands, or even millions, of dollars when prominent works are involved. Scholars estimate Anthropic avoided over $1 trillion in potential liability. For well-capitalized AI firms, such settlements are likely being factored in as a predictable cost of doing business.
As AI becomes a larger part of America’s economy, one can see the writing on the wall. Judges will twist themselves into knots to justify an innovative technology premised on literally stealing the works of artists, poets, musicians, all of academia and the internet, and vast expanses of literature. But if Swartz’s actions were criminal, it is worth asking: What standard are we now applying to AI companies?
The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.
The stakes extend beyond copyright law or past injustices. They concern who controls the infrastructure of knowledge going forward and what that control means for democratic participation, accountability and public trust.
Systems trained on vast bodies of publicly funded research are increasingly becoming the primary way people learn about science, law, medicine and public policy. As search, synthesis and explanation are mediated through AI models, control over training data and infrastructure translates into control over what questions can be asked, what answers are surfaced, and whose expertise is treated as authoritative. If public knowledge is absorbed into proprietary systems that the public cannot inspect, audit or meaningfully challenge, then access to information is no longer governed by democratic norms but by corporate priorities.
Like the early internet, AI is often described as a democratizing force. But also like the internet, AI’s current trajectory suggests something closer to consolidation. Control over data, models and computational infrastructure is concentrated in the hands of a small number of powerful tech companies. They will decide who gets access to knowledge, under what conditions and at what price.
Swartz’s fight was not simply about access, but about whether knowledge should be governed by openness or corporate capture, and who that knowledge is ultimately for. He understood that access to knowledge is a prerequisite for democracy. A society cannot meaningfully debate policy, science or justice if information is locked away behind paywalls or controlled by proprietary algorithms. If we allow AI companies to profit from mass appropriation while claiming immunity, we are choosing a future in which access to knowledge is governed by corporate power rather than democratic values.
How we treat knowledge—who may access it, who may profit from it and who is punished for sharing it—has become a test of our democratic commitments. We should be honest about what those choices say about us.
This essay was written with J. B. Branch, and originally appeared in the San Francisco Chronicle.
The startup is building the infrastructure and tools organizations need to transition to post-quantum cryptography.
Project Eleven Raises $20 Million for Post-Quantum Security (SecurityWeek)


WhisperPair is a set of attacks that lets an attacker hijack many popular Bluetooth audio accessories that use Google Fast Pair and, in some cases, even track their location via Google’s Find Hub network—all without requiring any user interaction.
Researchers at the Belgian university KU Leuven revealed a collection of vulnerabilities they found in audio accessories that use Google’s Fast Pair protocol. The affected accessories are sold by 10 different companies: Sony, Jabra, JBL, Marshall, Xiaomi, Nothing, OnePlus, Soundcore, Logitech, and Google itself.
Google Fast Pair is a feature that makes pairing Bluetooth earbuds, headphones and similar accessories with Android devices quick and seamless, and syncs them across a user’s Google account.
The Google Fast Pair Service (GFPS) utilizes Bluetooth Low Energy (BLE) to discover nearby Bluetooth devices. Many big-name audio brands use Fast Pair in their flagship products, so the potential attack surface consists of hundreds of millions of devices.
The weakness lies in the fact that Fast Pair skips checking whether a device is in pairing mode. As a result, a device controlled by an attacker, such as a laptop, can trigger Fast Pair even when the earbuds are sitting in a user’s ear or pocket, then quickly complete a normal Bluetooth pairing and take full control.
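The discovery half of this is easy to illustrate. Below is a minimal sketch (our illustration, not the researchers’ tooling) that passively scans for nearby accessories advertising Fast Pair’s assigned 16-bit service UUID, 0xFE2C. It assumes the third-party bleak library and a host with a BLE adapter; it only lists advertisers and does not pair with or control anything.

```python
# Minimal sketch (illustrative only): list nearby BLE devices advertising the
# Google Fast Pair service (assigned 16-bit UUID 0xFE2C). This only observes
# public advertisements; it does not pair with or control anything.
# Assumes the third-party `bleak` library (pip install bleak).
import asyncio
from bleak import BleakScanner

# 0xFE2C expanded to the full 128-bit Bluetooth base UUID
FAST_PAIR_UUID = "0000fe2c-0000-1000-8000-00805f9b34fb"

async def main() -> None:
    # Scan for 10 seconds, keeping the advertisement data for each device
    found = await BleakScanner.discover(timeout=10.0, return_adv=True)
    for device, adv in found.values():
        payload = adv.service_data.get(FAST_PAIR_UUID)
        if payload is not None:
            # Per the researchers, accessories advertise here even when they
            # are not in pairing mode, which is the window WhisperPair abuses.
            print(f"{device.address}  RSSI={adv.rssi}  model-data={payload.hex()}")

if __name__ == "__main__":
    asyncio.run(main())
```

Anything this turns up while your earbuds sit in your pocket is, in effect, the attack surface the researchers describe.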
What that control enables depends on the capabilities of the hijacked device. This can range from playing disturbing noises to recording audio via built-in microphones.
It gets worse if the attacker is the first to pair the accessory with an Android device. In that case, the attacker can write their own Account Key, designating their Google account as the accessory’s legitimate owner, to the accessory. If the Fast Pair accessory also supports Google’s Find Hub network, which many people use to locate lost items, the attacker may then be able to track the accessory’s location.
Google classified this vulnerability, tracked under CVE‑2025‑36911, as critical. However, the only real fix is a firmware or software update from the accessory manufacturer, so users need to check with their specific brand and install accessory updates, as updating the phone alone does not fix the issue.
The research team tested 25 commercial devices from 16 manufacturers using 17 different Bluetooth chipsets, and was able to take over the connection and eavesdrop via the microphone on 68% of them. To find out whether your device is vulnerable, consult the list the researchers published, and keep all accessories updated.
That list covers the devices the researchers found to be vulnerable, but it’s possible that others are affected as well.
A critical HPE OneView flaw is now being exploited at scale, with Check Point tying mass, automated attacks to the RondoDox botnet.…
Many website owners follow the same informal “security plan,” even if they don’t call it that. They launch the site, add a couple of plugins, and just hope nothing goes wrong.
The issue is that modern website hacks don’t make themselves obvious. Instead, they show up as small signs, like a redirect that only affects mobile users, a hidden credit card skimmer in a template file, silent SEO spam that hurts your rankings, or a DNS change that quietly reroutes your email.
Continue reading How to Run a Security Test and Set Up Continuous Monitoring at Sucuri Blog.