Reading view

Understanding GRC: How to Navigate Risks and Compliance Standards

“GRC” isn’t all witchcraft and administrative nonsense — it’s the core that drives security initiatives, connects security spend to business outcomes, and powers a well-functioning security team.

The post Understanding GRC: How to Navigate Risks and Compliance Standards appeared first on Black Hills Information Security, Inc..

  •  

How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

  •  

The SOC Is Now Agentic — Introducing the Next Evolution of Cortex

See the agentic SOC come to life at Cortex® Symphony 2026, the ultimate SOC event.

Today, the Cortex® platform takes a massive step toward delivering the perfect union of human expertise and agentic AI across all of security operations. Our latest release embeds immersive, context-aware agentic AI across the platform, from code to cloud to SOC, delivering an agentic-first analyst experience for our customers.

With new Cortex AgentiX™ agents built to tackle more use cases and an expanded AI-ready data foundation, this release slashes response times and redefines what high-efficiency SOC operations look like.

Attack Velocity Has Fundamentally Changed

Not long ago, adversaries took days to move from initial access to impact. Today, they weaponize AI across the attack lifecycle to operate up to 4x faster than just one year ago, executing end-to-end attacks in as little as 72 minutes, according to Unit 42® research.

These attacks are making manual response obsolete. Teams need the next generation of AI technology that can analyze, decide and act in real time. Our latest innovations, fueled by unified, high-fidelity data, help give defenders the edge they need to outmaneuver modern attacks.

An AI-Ready Data Foundation for the Agentic SOC

Agentic AI depends on data that is fast, flexible and built for scale. Cortex Extended Data Lake™ (XDL) provides that data foundation for Cortex XSIAM and the broader Cortex platform, serving as a single source of truth for security operations. Built for AI and analytics, it ingests more than 15 PB of telemetry daily across 1,100+ integrations, and is designed to provide the comprehensive data required for effective detection, investigation, and response.

With the introduction of Cortex XDL 2.0, we are revolutionizing how organizations store, access and manage data, enabling new levels of flexibility and control.

Cortex XDL 2.0: The open Data Lake built for AI-driven insights.

New capabilities added with the Cortex XDL 2.0 release:

  • Cost-efficient data lake tier that can lower SOC costs with flexible long-term retention for compliance, forensics and investigations.
  • Federated search to query distributed data sources without incurring additional ingestion or storage costs.
  • Native Chronosphere Telemetry Pipeline integration to filter and route telemetry at the source
  • AI-driven parsing that automatically builds production-ready parsers from sample logs using generative AI, removing hours of manual effort and accelerating time to value.

Together, these capabilities power AI agents with critical security signals and give security teams the data they need, when and where they need it, while controlling costs.

Redefining How Analysts Work in the SOC

Cortex introduces an agentic-first analyst experience that embeds advanced AI directly into the analyst’s daily workflow. Designed to reduce investigation time, the elevated experience brings together automatically generated case summaries, visualized issue relationships, and a centralized Resolution Center within a unified case management workspace.

 

AI now spans the Cortex console, allowing context-aware agents to work in real time alongside analysts. Using the Cortex Agentic Assistant, teams can call on agents to plan and execute investigation workflows directly within their cases.

This release also doubles the number of AI agents who are purpose-built for SecOps and Cloud Security. Here are three of the newest additions.

  • The Case Investigation agent delivers context-aware assistance that analyzes case artifacts and complex signals to accelerate triage. It recommends next steps, highlights critical evidence, builds AI case summaries, and takes action with analyst oversight.
  • The Cloud Posture agent helps teams uncover, triage and resolve misconfigurations and posture risks across cloud environments. It streamlines analyst workflows by proactively prioritizing risk, enriching exposures and applying approved fixes.
  • The Automation Engineer agent tackles one of automation’s biggest pain points: Building and maintaining complex workflows. With simple natural language prompts, teams can generate working code and scripts for agents or playbooks.
Screenshot of PowerShell reverse shell activity with Mimikatz and Rubeus tools on EC2AMA...
The new Case Management Workspace provides full investigative context to streamline case analysis.

Our new agentic playbooks bring AI directly into automation workflows, embedding AI tasks that adapt in real time to help teams resolve incidents faster. They automate complex operations, analyze inputs with large language models (LLMs), and produce context-specific outputs.

Matt Bunch, Global CISO, Tyson Foods:

At Tyson Foods, protecting a complex global supply chain in an era of AI-driven threats requires us to move with the same machine speed as our adversaries. By consolidating onto the Palo Alto Networks Cortex platform, we’ve effectively closed the gap between detection and response. The impact has been transformative as we’ve increased our log visibility by 40% while reducing median time to respond by 50%. The agentic capabilities in the platform have allowed our teams to move from manual triage to high-level strategic defense, ensuring our global operations remain resilient and secure.

The Cortex Agentix Platform Has Arrived

The standalone Cortex Agentix platform brings the power of AI to everyone, delivering advanced orchestration and automation for the modern SOC. For Cortex XSOAR® customers, this marks the natural evolution of our market-leading SOAR platform, now enhanced with agentic intelligence to unlock meaningful productivity gains.

With more than 1,300 playbooks, 1,100 integrations, and built-in MCP support, Cortex Agentix combines over a decade of SOAR leadership with powerful AI capabilities to help security teams operate with greater speed, coordination and efficiency across the SOC.

Securing the Agentic Endpoint

As users increasingly run AI-powered code packages, browser extensions, plugins and more, they are opening the door to a new class of AI-driven threats at the endpoint. That is why we announced our intent to acquire Koi to help secure the emerging agentic endpoint. Once completed, the acquisition will strengthen our visibility and protection at the endpoint, extending our ironclad protection from the SOC to where AI code actually runs.

See the Agentic SOC Take Center Stage at Cortex Symphony 2026

To experience these innovations firsthand, join Lee Klarich, Chief Product and Technology Officer, and Gonen Fink, EVP of Products, alongside other industry leaders at Cortex Symphony 2026, the ultimate SOC event.


Forward-Looking Statements (unreleased feature only)

This blog contains forward-looking statements that involve risks, uncertainties and assumptions, including, without limitation, statements regarding the benefits, impact, or performance or potential benefits, impact or performance of our products and technologies or future products and technologies. Any unreleased services or features (and any services or features not generally available to customers) referenced in this or other press releases or public statements are not currently available (or are not yet generally available to customers) and may not be delivered when expected or at all. Customers who purchase Palo Alto Networks applications should make their purchase decisions based on services and features currently generally available.

The post The SOC Is Now Agentic — Introducing the Next Evolution of Cortex appeared first on Palo Alto Networks Blog.

  •  

How to 10x Your Vulnerability Management Program in the Agentic Era

The evolution of vulnerability management in the agentic era is characterized by continuous telemetry, contextual prioritization and the ultimate goal of agentic remediation.

The post How to 10x Your Vulnerability Management Program in the Agentic Era appeared first on SecurityWeek.

  •  

Introducing Unit 42 Managed XSIAM 2.0

24/7 Managed SOC Built for Tomorrow's Threats

The window for defense has collapsed, and most SOCs weren’t built for the speed of today’s attacks. According to the 2026 Unit 42® Global Incident Response Report, some end-to-end attacks now unfold in under an hour. Attacks that used to take days or weeks now happen in minutes.

Most traditional SOC models are trapped in a cycle of alert overload, fragmented tools and limited engineering capacity that slow investigations and delay response. Traditional SIEM and MDR models were designed to react to alerts. They were not designed to continuously improve detections, correlations and response with threats that move at machine speed. Over time, that gap between attacker speed and defender capability keeps widening, and it’s exactly why we built Unit 42 Managed XSIAM 2.0 (MSIAM).

Today marks the availability of the next evolution of our managed SOC offering – one that reflects how modern security operations must run in today’s threat landscape. MSIAM 2.0 is built on Cortex XSIAM®, Palo Alto Networks SOC transformation platform, and operated by Unit 42 analysts, threat hunters, responders and SOC engineers who handle the most complex incidents in the world. With this solution, Unit 42 provides organizations with a 24/7 managed SOC that delivers continuous detection, investigation and full-cycle remediation across the entire attack surface while improving operations over time.

We don’t just manage alerts. Unit 42 continuously engineers detections, correlations and response playbooks within XSIAM, refining them as attacker behavior evolves. This ongoing engineering ensures defenses improve over time, driven by real-world incidents and frontline threat intelligence, not static rules that quickly fall behind.

Why Managed XSIAM 2.0 Is Different

Elite SOC on Day One

We want SOC teams up and running as fast as possible. Experts lead onboarding, data mapping and configuration, and then your managed SOC team takes responsibility for operating and optimizing XSIAM on a day-to-day basis. The result is a SOC that improves over time without adding operational burden.

Every Threat Exposed

Unit 42 goes beyond reactive monitoring with continuous, proactive threat hunting across the entire attack surface. When a new threat is found in the wild, we produce threat impact reports that show how those techniques apply to each customer’s environment. We then translate those insights into custom detections and automated response actions, while also monitoring and investigating the correlation rules your team creates. Both the global threat intelligence and your unique use cases are backed by our 24/7 analysis, closing gaps quickly and strengthening defenses over time.

We also now support both native and third-party EDR telemetry, so organizations can benefit from Unit 42 expertise and Cortex® AI-driven analytics, regardless of the security technologies they use today. This enables customers to receive the strongest possible managed defense now, while creating a natural, low-friction path toward deeper platform consolidation as their environment evolves.

Machine-Speed Response

When incidents escalate, we don’t just hand you a ticket; we take ownership. Collaborating with your team, we establish pre-authorized workflows to execute immediate responses across your entire environment, from endpoints and firewalls to identity and cloud. We pair the platform’s native speed with expert oversight. By validating threat context and business impact, every response action is precise and safe, giving you the confidence to unleash full-cycle remediation. This allows MSIAM 2.0 to move seamlessly from detection to resolution with both velocity and precision.

And we stand behind our solution with a Breach Response Guarantee. If a complex incident strikes, you have the world’s best responders in your corner with up to 250 hours of Unit 42 Incident Response included. This built-in coverage removes the administrative hurdles of crisis response, enabling our experts to immediately transition from monitoring to deep forensic investigation and complete eradication, so you can focus on recovery. 

Proven in the Real World with the Green Bay Packers

Working with Unit 42 and the Cortex XSIAM platform, the Green Bay Packers modernized their security across a complex hybrid environment, demonstrating what Unit 42's managed services deliver in real-world operations. By consolidating telemetry and accelerating investigation and response, they reduced response times from hours to minutes, investigated 54% more alerts and saved over 120 hours of analyst time without adding headcount.

These outcomes reflect the key benefits of MSIAM: Unit 42 experts working to apply frontline intelligence as new attacker behavior emerges, translating it into reporting and tailored detections that improve response where it matters most. When a machine-speed platform is operated by experts handling real incidents every day, defenses continuously strengthen as threats evolve.

The Future of the SOC

Unit 42 MSIAM 2.0 helps your SOC operate as it should by combining AI-driven analytics and automation with expert-led operations and engineering. This combination provides teams with the confidence that their defenses are always on, always improving and ready when it matters most. That’s the SOC that security leaders need today, and the one we’re building for tomorrow.

MSIAM is now delivered through two service tiers, Pro and Premium. Organizations can start where they are and grow at their own pace. Pro provides AI-driven managed SOC operations with continuous detection, investigation and response. Premium extends into full-lifecycle SOC engineering, with designated experts and customized detections, automation and tailored response playbooks as your security maturity grows.

To learn more about Managed XSIAM 2.0, join us at Symphony 2026, a Palo Alto Networks premier virtual SOC event, where Unit 42 and Cortex® experts will share frontline threat intelligence from the new 2026 Unit 42 Incident Response Report alongside real-world SOC transformation insights from organizations operating at machine speed.

The post Introducing Unit 42 Managed XSIAM 2.0 appeared first on Palo Alto Networks Blog.

  •  

The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs

Blogs

Blog

The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs

In this post, we explore how the psychological traps of operational security can unmask even the most sophisticated actors.

SHARE THIS:
Default Author Image
February 13, 2026
Table Of Contents

The threat intelligence landscape is often dominated with talks of sophisticated TTPs (tactics, tools, and procedures), zero-day vulnerabilities, and ransomware. While these technical threats are formidable, they are still managed by human beings, and it is the human element that often provides the most critical breakthroughs in attributing these attacks and de-anonymizing the threat actors behind them.

In our latest webinar, “OPSEC Fails: The Secret Weapon for People-Centric OSINT”,  Flashpoint was joined by Joshua Richards, founder of OSINT Praxis. Josh shared an intriguing case study where an attacker’s digital breadcrumbs led to a life-saving intervention. 

Here is how OSINT techniques, leveraged by Flashpoint’s expansive data capabilities, can dismantle illegal threat actor campaigns by turning a technical investigation into a human one.

Leveraging OPSEC as a Mindset

In a technical context, OPSEC is a risk management process that identifies seemingly innocuous pieces of information that, when gathered by an adversary, could be pieced together to reveal a larger, sensitive picture.

In the webinar, we break down the OPSEC mindset into three core pillars that every practitioner, and threat actor, must navigate. When these pillars fail, the investigation begins.

  • Analyzing the Signature: Every human has a digital signature, such as the way they type (stylometry), the times they are active, and the tools they prefer.
  • Identity Masking & Persona Management: This involves ensuring that your investigative identity has zero overlap with your real life. A common failure includes using the same browser for personal use and investigative research, which allows cookies to bridge the two identities.
  • Traffic Obfuscation: Even with a VPN, certain behaviors such as posting on a dark web forum and then using that same connection to check personal banking can expose an IP address, linking it to a practitioner or threat actor.

“Effective OPSEC isn’t about the tools you use; it’s about what breadcrumbs you are leaving behind that hackers, investigation subjects, or literally anyone could find about you.”

Joshua Richards, founder of Osint Praxis

Leveraging the Mindset for CTI

Understanding the OPSEC mindset allows security teams to think like the target. When we know the psychological traps attackers fall in, we know exactly where to look for their mistakes.

AssumptionThe Mindset TrapThe Investigative Reality
Insignificant“I’m not a high-value target; no one is looking for me.”Automated Aggression: Hackers use scripts to scan millions of accounts. You aren’t “chosen”; you are “discovered” via automation.
Invisible“I don’t have a LinkedIn or X account, so I don’t have a footprint.”Shadow Data: Public birth records, property taxes, and historical data breaches create a footprint you didn’t even build yourself.
Invincible“I have 2FA and complex passwords; I’m unhackable.”Session Hijacking: Infostealer malware steals “session tokens” (cookies). This allows an actor to be you in a browser without ever needing your 2FA code.

During the webinar, Joshua shares a masterclass in how leveraging these concepts can turn a vague dark web threat into a real-world arrest. Check out the on-demand webinar to see exactly how the investigation started on Torum, a dark web forum, and ended with an arrest that saved the lives of two individuals.

Turn the Tables Using Flashpoint

The insights shared in this session powerfully illustrate that even the most dangerous threat actors are rarely as anonymous as they believe. Their downfall isn’t usually a failure of their technical prowess, but a failure of their mindset. By understanding these OSINT techniques, intelligence practitioners can transform a sea of digital noise into a clear path toward attribution.

The most effective way to dismantle threats is to bridge the gap between technical indicators and human behavior. Whether your teams are conducting high-stakes OSINT or protecting your own organization’s digital footprint, every breadcrumb counts. By leveraging Flashpoint’s expansive threat intelligence collections and real-time data, you can stay one step ahead of adversaries. Request a demo to learn more.

Request a demo today.

The post The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs appeared first on Flashpoint.

  •  

When the SOC Goes to Deadwood: A Night to Remember 

Hear a tale about the time the BHIS SOC team conducted a 14-hour overnight incident response... from the Wild West Hackin' Fest conference in Deadwood, South Dakota.

The post When the SOC Goes to Deadwood: A Night to Remember  appeared first on Black Hills Information Security, Inc..

  •  

PwC and Google Cloud Ink $400 Million Deal to Scale AI-Powered Defense

The announcement comes just weeks after Palo Alto Networks and Google Cloud announced a multibillion-dollar AI and cloud security deal.

The post PwC and Google Cloud Ink $400 Million Deal to Scale AI-Powered Defense appeared first on SecurityWeek.

  •  

Burner phones and lead-lined bags: a history of UK security tactics in China

Starmer’s team is wary of spies but such fears are not new – with Theresa May once warned to get dressed under a duvet

When prime ministers travel to China, heightened security arrangements are a given – as is the quiet game of cat and mouse that takes place behind the scenes as each country tests out each other’s tradecraft and capabilities.

Keir Starmer’s team has been issued with burner phones and fresh sim cards, and is using temporary email addresses, to prevent devices being loaded with spyware or UK government servers being hacked into.

Continue reading...

© Photograph: Simon Dawson/Simon Dawson/10 Downing Street

© Photograph: Simon Dawson/Simon Dawson/10 Downing Street

© Photograph: Simon Dawson/Simon Dawson/10 Downing Street

  •  
❌