Windows Vista moved the shared start menu from "%ALLUSERSPROFILE%\Start Menu\"
to "%ProgramData%\Microsoft\Windows\Start Menu\", with some shortcuts (*.lnk)
"reflected" from the (immutable) component store below %SystemRoot%\WinSxS\
CVE IDs *can* be assigned for SaaS or similarly "cloud only" software. For a period of time, there was a restriction
that only the provider could make or request such an assignment. But the current CVE rules remove this restriction:
4.2.3 CNAs MUST NOT consider the type of technology (e.g., cloud, on-premises, artificial intelligence, machine
learning) as the sole basis for determining assignment.
A stack-based buffer overflow vulnerability exists in the RIOT OS ethos
utility due to missing bounds checking when processing incoming serial
frame data. The vulnerability occurs in the _handle_char() function, where
incoming frame bytes are appended to a fixed-size stack buffer
(serial->frame) without verifying that the current write index
(serial->framebytes) remains within bounds. An attacker capable of sending
crafted serial or...
A stack-based buffer overflow vulnerability exists in the tapslip6 utility
distributed with RIOT OS (and derived from the legacy uIP/Contiki
networking tools). The vulnerability is caused by unsafe string
concatenation in the devopen() function, which constructs a device path
using unbounded user-controlled input.
Specifically, tapslip6 uses strcpy() and strcat() to concatenate the fixed
prefix "/dev/" with a user-supplied device name...
A stack-based buffer overflow vulnerability exists in the mcp2200gpio
utility due to unsafe use of strcpy() and strcat() when constructing device
paths during automatic device discovery. A local attacker can trigger the
vulnerability by creating a specially crafted filename under /dev/usb/,
resulting in stack memory corruption and a process crash. In non-hardened
builds, this may lead to arbitrary code execution.
A global buffer overflow vulnerability exists in the TinyOS printfUART
implementation used within the ZigBee / IEEE 802.15.4 networking stack. The
issue arises from an unsafe custom sprintf() routine that performs
unbounded string concatenation using strcat() into a fixed-size global
buffer. The global buffer debugbuf, defined with a size of 256 bytes, is
used as the destination for formatted output. When a %s format specifier is
supplied with a...
Let's go through all 5 pillars aka readiness dimensions and see what we can actually do to make your SOC AI-ready.
#1 SOC Data Foundations
As I said before, this one is my absolute favorite and is at the center of most "AI in SOC" (as you recall, I want AI in my SOC, but I dislike the "AI SOC" concept) successes (if done well) and failures (if not done at all).
Reminder: pillar #1 is "security context and data are available and can be queried by machines (API, Model Context Protocol (MCP), etc.) in a scalable and reliable manner." Put simply, for the AI to work for you, it needs your data. As our friends say here, "Context engineering focuses on what information the AI has available. [...] For security operations, this distinction is critical. Get the context wrong, and even the most sophisticated model will arrive at inaccurate conclusions."
Readiness check: Security context and data are available and can be queried by machines in a scalable and reliable manner. This is very easy to check, yet not easy to achieve for many types of data.
For example, "give AI access to past incidents" is very easy in theory ("ah, just give it old tickets") yet often very hard in reality ("what tickets?", "aren't some too sensitive?", "wait... this ticket didn't record what happened afterwards and it totally changed the outcome", "well, these tickets are in another system", etc., etc.)
Steps to get ready:
Conduct an "API or Die" data access audit to inventory critical data sources (telemetry and context) and stress-test their APIs (or other access methods) under load to ensure they can handle frequent queries from an AI agent. This is important enough to be a Part 3 blog after this one...
Establish or refine unified, intentional data pipelines for the data you need. This may be your SIEM, this may be a separate security pipeline tool, this may be magick for all I care... but it needs to exist. I met people who use AI to parse human analyst screen videos to understand how humans access legacy data sources, and this is very cool, but perhaps not what you want in prod.
Revamp case management to force structured data entry (e.g., categorized root causes, tagged MITRE ATT&CK techniques) instead of relying on garbled unstructured text descriptions, which provides clean training data for future AI learning. And, yes, if you have to ask: modern gen AI can understand your garbled stream-of-consciousness ticket description... but what it makes of it, you will never know...
Where you arrive: your AI component, AI-powered tool or AI agent can get the data it needs nearly every time. The cases where it cannot get the data become visible and obvious immediately.
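The "API or Die" stress test from the steps above can be sketched as a small harness. This is a toy sketch: the worker counts, request volume, and the `fake_query` stand-in are assumptions for illustration; in reality `query_fn` would wrap your SIEM, CMDB, or threat intel API client.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def stress_test(query_fn, workers=20, requests=200):
    """Fire `requests` concurrent calls at a data-source query function
    and report latency and error statistics (agent-like load)."""
    latencies, errors = [], 0

    def one_call(i):
        start = time.perf_counter()
        try:
            query_fn(i)
            return time.perf_counter() - start, None
        except Exception as exc:
            return time.perf_counter() - start, exc

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for elapsed, exc in pool.map(one_call, range(requests)):
            latencies.append(elapsed)
            if exc is not None:
                errors += 1

    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
        "error_rate": errors / requests,
    }

# Hypothetical stand-in for a real data-source API call:
def fake_query(i):
    time.sleep(0.001)  # simulate network latency

report = stress_test(fake_query)
print(report)
```

If the error rate climbs or the tail latency explodes under this kind of load, that data source is not ready to feed an AI agent, no matter how good it looks in a human-driven console.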
#2 SOC Process Framework and Maturity
Reminder: pillar #2 is "Common SOC workflows that do NOT rely on human-to-human communication are essential for AI success." As somebody called it, you need "machine-intelligible processes."
Readiness check: SOC workflows are defined as machine-intelligible processes that can be queried programmatically, and explicit, structured handoff criteria are established for all Human-in-the-Loop (HITL) processes, clearly delineating what is handled by the agent versus the person. Examples of handoff to a human may include high decision uncertainty, lack of context to make a call (see pillar #1), extra-sensitive systems, etc.
Common investigation and response workflows do not rely on ad-hoc, human-to-human communication or "tribal knowledge"; such knowledge is discovered and brought to the surface.
Steps to get ready:
Codify the "Tribal Knowledge" into APIs: Stop burying your detection logic in dusty PDFs or inside the heads of your senior analysts. You must document workflows in a structured, machine-readable format that an AI can actually query. If your context (like CMDB or asset inventory) isn't accessible via API (BTW, MCP is not magic!), your AI is essentially flying blind.
Draw a Hard Line Between Agent and Human: Don't let the AI "guess" its level of authority. Explicitly delegate the high-volume drudgery (log summarization, initial enrichment, IP correlation) to the agent, while keeping high-stakes "kill switches" (like shutting down production servers) firmly in human hands.
Implement a "Grading" System for Continuous Learning: AI shouldn't just execute tasks; it needs to go to school. Establish a feedback loop where humans actively "grade" the AI's triage logic based on historical resolution data. This transforms the system from a static script into a living "recipe" that refines itself over time.
Target Processes for AI-Driven Automation: Stop trying to "AI all the things." Identify specific investigation workflows that are candidates for automation and use your historical alert triage data as a training ground to ensure the agent actually learns what "good" looks like.
Where you arrive: The "tribal knowledge" that previously drove your SOC is recorded as machine-readable workflows. Explicit, structured handoff points are established for all Human-in-the-Loop processes, and the system uses human grading to continuously refine its logic and improve its "recipe" over time. This does not mean that everything is rigid; the "Visio diagram or death" SOC should stay in the 1990s. Recorded and explicit beats rigid and unchanging.
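Structured handoff criteria, as opposed to "the analyst will know," could be as simple as this. A minimal sketch: the field names, the 0.8 confidence threshold, and the sensitivity labels are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class TriageResult:
    alert_id: str
    confidence: float       # agent's self-reported confidence, 0..1
    asset_sensitivity: str  # "normal" | "sensitive"
    context_complete: bool  # pillar #1: did we get all the data we asked for?

def route(result: TriageResult) -> str:
    """Return 'agent' if the agent may act autonomously, or 'human'
    if an explicit Human-in-the-Loop handoff criterion is met."""
    if result.confidence < 0.8:                  # high decision uncertainty
        return "human"
    if not result.context_complete:              # lack of context to make a call
        return "human"
    if result.asset_sensitivity == "sensitive":  # extra-sensitive systems
        return "human"
    return "agent"

print(route(TriageResult("a1", 0.95, "normal", True)))   # agent handles it
print(route(TriageResult("a2", 0.55, "normal", True)))   # handed to a human
```

The point is not the specific thresholds but that the criteria exist as code: queryable, versionable, and auditable, instead of living in somebody's head.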
#3 SOC Human Element and Skills
Reminder: pillar #3 is "Cultivating a culture of augmentation, redefining analyst roles, providing training for human-AI collaboration, and embracing a leadership mindset that accepts probabilistic outcomes." You say "fluffy management crap"? Well, I say "ignore this and your SOC is dead."
Readiness check: Leaders have secured formal CISO sign-off on a quantified "AI Error Budget," defining an acceptable, measured, probabilistic error rate for autonomously closed alerts (that is definitely not zero, BTW). The team is evolving to actively review, grade, and edit AI-generated logic and detection output.
Steps to get ready:
Implement the "AI Error Budget": Stop pretending AI will be 100% accurate. You must secure formal CISO sign-off on a quantified "AI Error Budget": a predefined threshold for acceptable mistakes. If an agent automates 1,000 hours of labor but has a 5% error rate, the leadership needs to acknowledge that trade-off upfront. It's better to define "allowable failure" now than to explain a hallucination during an incident post-mortem.
Pivot from "Robot Work" to Agent Shepherding: The traditional L1/L2 analyst role is effectively dead; long live the "Agent Supervisor." Instead of manually sifting through logs (work that is essentially "robot work" anyway), your team must be trained to review, grade, and edit AI-generated logic. They are no longer just consumers of alerts; they are the "Editors-in-Chief" of the SOC's intelligence.
Rebuild the SOC Org Chart and RACI: Adding AI isn't a "plug and play" software update; it's an organizational redesign. You need to redefine roles: Detection Engineers become AI Logic Editors, and analysts become Supervisors. Most importantly, your RACI must clearly answer the uncomfortable question: If the AI misses a breach, is the accountability with the person who trained the model or the person who supervised the output?
Where you arrive: well, you arrive at a practical realization that you have "AI in SOC" (and not AI SOC). The tools augment people (and in some cases, do the work end to end too). No pro-AI ("AI SOC means all humans can go home") or contra-AI ("it makes mistakes and this means we cannot use it") crazies nearby.
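Once the CISO signs off on a number, the "AI Error Budget" check reduces to simple arithmetic that belongs on a dashboard. A toy sketch; the 2% budget below is an assumed example, not a recommendation:

```python
def within_error_budget(closed_alerts: int, confirmed_errors: int,
                        budget_rate: float = 0.02):
    """Compare the observed error rate of autonomously closed alerts
    against the approved budget. Returns (within_budget, observed_rate)."""
    observed = confirmed_errors / closed_alerts
    return observed <= budget_rate, observed

# 1,000 alerts closed autonomously, 15 later graded as wrong calls:
ok, rate = within_error_budget(1000, 15)
print(ok, rate)  # inside an (assumed) 2% budget, observed rate 0.015
```

The hard part is not the math; it is getting leadership to agree, in writing, on what `budget_rate` should be before the first agent goes live.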
#4 Modern SOC Technology Stack
Reminder: pillar #4 is "Modern SOC Technology Stack." If your tools lack APIs, take them and go back to the 1990s from whence you came! Destroy your time machine when you arrive, don't come back to 2026!
Readiness check: The security stack is modern, fast ("no multi-hour data queries"), and interoperable; it supports new AI capabilities that integrate seamlessly, tools can communicate without a human acting as a manual bridge, and the stack can handle agentic AI request volumes.
Steps to get ready:
Mandate "Detection-as-Code" (DaC): This is no longer optional. To make your stack machine-readable, you must implement version control (Git), CI/CD pipelines, and automated testing for all detections. If your detection logic isn't codified, your AI agent has nothing to interact with except a brittle GUI, and that is a recipe for failure.
Find Your "Interoperability Ceiling" via Stress Testing: Before you go live, simulate reality. Have an agent attempt to enrich 50 alerts simultaneously to see where the pipes burst. Does your SOAR tool hit a rate limit? Does your threat intel provider cut you off? You need to find the breaking point of your tech stack's interoperability before an actual incident does it for you.
Decouple "Native" from "Custom" Agents: Don't reinvent the wheel, but don't expect a vendor's "native" agent to understand your weird, proprietary legacy systems. Define a clear strategy: use native agents for standard tool-specific tasks, and reserve your engineering resources for custom agents designed to navigate your unique compliance requirements and internal "secret sauce."
Where you arrive: this sounds like a perfect quote from Captain Obvious, but you arrive at a SOC powered by tools that work with automation, not via a "human bridge" or "swivel chair."
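To make "Detection-as-Code" concrete: a detection becomes data plus logic that CI can test and an agent can query. A minimal sketch; the rule, field names, and threshold are invented for illustration, and a real DaC setup would keep this in Git with the tests running in a pipeline:

```python
# A detection codified as queryable data plus logic (all values invented):
RULE = {
    "id": "DET-0042",
    "title": "Possible brute force: many failed logins from one source",
    "mitre_technique": "T1110",
    "threshold": 10,
}

def matches(rule, events):
    """Fire when failed-login events meet or exceed the rule threshold."""
    failures = [e for e in events if e["action"] == "login_failure"]
    return len(failures) >= rule["threshold"]

# The automated tests that would run in CI on every change to the rule:
def test_rule_fires_on_known_bad():
    assert matches(RULE, [{"action": "login_failure"}] * 12)

def test_rule_quiet_on_benign():
    assert not matches(RULE, [{"action": "login_success"}] * 12)

test_rule_fires_on_known_bad()
test_rule_quiet_on_benign()
print("detection tests passed")
```

A GUI-only detection offers an agent nothing to read, test, or tune; this structure is what "machine-readable stack" actually buys you.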
#5 SOC Metrics and Feedback Loop
Reminder: pillar #5 is "You are ready for AI if you can, after adding AI, answer the 'what got better?' question. You need metrics and a feedback loop to get better."
Readiness check: Hard baseline metrics (MTTR, MTTD, false positive rates) are established before AI deployment, and the team has a way to quantify the value and improvements resulting from AI. When things get better, you will know it.
Steps to get ready:
Establish the "Before" Baseline and Fix the Data Slop: You cannot claim victory if you don't know where the goalposts were to begin with. Measure your current MTTR and MTTD rigorously before the first agent is deployed. Simultaneously, force your analysts to stop treating case notes like a private diary. Standardize on structured data entry (categorized root causes and MITRE tags) so the machine has "clean fuel" to learn from rather than a collection of "fixed it" or "closed" comments.
Build an "AI Gym" Using Your "Golden Set": Do not throw your agents into the deep end of live production traffic on day one. Curate a "Golden Set" of your 50-100 most exemplary past incidents: the ones with flawless notes, clean data, and correct conclusions. This serves as your benchmark; if the AI can't solve these "solved" problems correctly, it has no business touching your live environment.
Adopt Agent-Specific KPIs for Performance Management: Traditional SOC metrics like "number of alerts closed" are insufficient for an AI-augmented team. You need to track Agent Accuracy Rate, Agent Time Savings, and Agent Uptime as religiously as you track patch latency. If your agent is hallucinating 5% of its summaries, that needs to be a visible red flag on your dashboard, not a surprise you discover during an incident post-mortem.
Close the Loop with Continuous Tuning: Ensure triage results aren't just filed away to die in an archive. Establish a feedback loop where the results of both human and AI investigations are automatically routed back to tune the underlying detection rules. This transforms your SOC from a static "filter" into a learning system that evolves with every alert.
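The "Golden Set" benchmark above is easy to operationalize as a scoring gate before an agent touches live traffic. A toy sketch: the incidents, verdict labels, and the deliberately naive "agent" are all made up for illustration.

```python
def grade_agent(golden_set, agent_verdict):
    """Fraction of curated, already-solved incidents the agent re-solves
    correctly; gate promotion to live traffic on this score."""
    correct = sum(
        1 for incident in golden_set
        if agent_verdict(incident) == incident["true_verdict"]
    )
    return correct / len(golden_set)

# Toy golden set of "solved" incidents with known-correct conclusions:
golden = [
    {"alert": "impossible travel login",  "true_verdict": "true_positive"},
    {"alert": "scheduled vuln scanner",   "true_verdict": "false_positive"},
    {"alert": "beaconing to known C2",    "true_verdict": "true_positive"},
]

# A naive agent that calls everything malicious scores poorly here:
naive_agent = lambda incident: "true_positive"
score = grade_agent(golden, naive_agent)
print(score)  # 2 of 3 correct: not ready for production
```

The same harness doubles as the "grading" feedback loop: every newly resolved incident with clean notes is a candidate for the golden set, so the benchmark grows with the SOC.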
Where you arrive: you have a fact-based visual that shows your SOC becoming better in ways important to your mission after you add AI (in fact, your SOC will get better even before AI, but after you do the prep work from this document).
As a result, we can hopefully get to this instead:
Better introduction of AI into SOC
The path to an AI-ready SOC isn't paved with new tools; it's paved with better data, cleaner processes, and a fundamental shift in how we think about human-machine collaboration. If you ignore these pillars, your AI journey will be a series of expensive lessons in why "magic" isn't a strategy.
But if you get these right? You move from a SOC that is constantly drowning in alerts to a SOC that operates at truly 10X effectiveness.
Random cool visual because Nano Banana :)
P.S. Anton, you said "10X," so how does this relate to ASO and "engineering-led" D&R? I am glad you asked. The five pillars we outlined are not just steps for AI; they are also the steps on the road to ASO (see the original 2021 paper, which is still "the future" for many).
ASO is the vision for a 10X transformation of the SOC, driven by an adaptive, agile, and highly automated approach to threats. The focus on codified, machine-intelligible workflows, a modern stack supporting Detection-as-Code, and reskilling analysts as "Agent Supervisors" directly supports the core of engineering-led D&R. By focusing on these five readiness dimensions, you move from a traditional operations room (lots of "O" for operations) to a scalable, engineering-centric D&R function (where "E" for engineering dominates).
So, which pillar is your SOC's current "weakest link"? Let's discuss in the comments and on socials!
The BeeS Examination Tool (BET) portal from BeeS Software Solutions contains an SQL injection vulnerability in its website login functionality. More than 100 universities use the BET portal for test administration and other academic tasks. The vulnerability enables arbitrary SQL commands to be executed on the back-end database, allowing an attacker to manipulate the database, extract sensitive student data, and further compromise the host infrastructure. BeeS Software Solutions has since remediated the vulnerability, and no actions are necessary for customers at this time.
Description
Numerous universities implement the BET portal to unify the various tasks associated with administering examinations to students. Each university maintains its own instance of the BET portal, receiving updates from BeeS Software Solutions.
A vulnerability, tracked as CVE-2025-14598, was discovered within the login functionality of the portal. This vulnerability, facilitated by insufficient user input validation, enables arbitrary SQL injection. When exploited, an attacker can manipulate the backend database, steal student data (including credentials), and perform lateral movement, further compromising the host infrastructure.
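For illustration only: the following is generic demonstration code for this vulnerability class and its standard fix, not BeeS's actual implementation (which is not public). It uses an in-memory SQLite database to show why string-built login queries fail and parameterized queries do not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: attacker-controlled strings are pasted into the SQL text.
    q = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(q).fetchone() is not None

def login_safe(name, password):
    # FIXED: parameterized query; input is treated as data, never as SQL.
    q = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(q, (name, password)).fetchone() is not None

# Classic bypass payload: the injected OR '1'='1' makes the WHERE clause
# always true, so the unsafe version "authenticates" without a password.
payload = "' OR '1'='1"
print(login_unsafe("alice", payload))  # True  -> authentication bypassed
print(login_safe("alice", payload))    # False -> payload is just a literal
```

The patch described below (enabling input validation) addresses the same root cause: user input must never be able to change the structure of the query.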
BeeS Software Solutions issued a patch to all BET portal instances that changes code, enables input validation, and adjusts various security settings to prevent exploitation and unauthorized access. All BET clients automatically received these changes.
Impact
The vulnerability permits an unauthenticated, remote attacker to achieve various results, including unauthorized database access, credential theft, potential lateral movement into infrastructure, acquisition of sensitive student and institutional data, and system-level access to the affected server.
Solution
No actions are needed by clients, as configurations and updated dynamic link libraries (DLLs) have been automatically installed and updated through ePortal: Secure Build (October 2025). Testing indicates that the changes successfully mitigated the vulnerability.
Acknowledgements
Thanks to the reporter, Mohammed Afnaan Ahmed, for reporting these vulnerabilities. This document was written by Christopher Cullen.
Vendor Information
One or more vendors are listed for this advisory. Please reference the full report for more information.
Event Date: 10 January 2026 Venue: T-Hub, Hyderabad
AI CyberCon Summit 2026 is Indiaβs leading summit on Artificial Intelligence, Cybersecurity, Fraud Prevention, Digital Trust & Compliance, bringing together:
Topic: SigInt-Hombre v1 / dynamic Suricata detection rules from real-time threat feeds Risk: Medium Text: SigInt-Hombre generates derived Suricata detection rules from live URLhaus threat indicators at runtime and deploys them to th...
A flaw in the firmware-upload error-handling logic of the TOTOLINK EX200 extender can cause the device to unintentionally start an unauthenticated root-level telnet service. This condition may allow a remote authenticated attacker to gain full system access.
Description
In the End-of-Life (EoL) TOTOLINK EX200 firmware, the firmware-upload handler enters an abnormal error state when processing certain malformed firmware files. When this occurs, the device launches a telnet service running with root privileges and does not require authentication. Because the telnet interface is normally disabled and not intended to be exposed, this behavior creates an unintended remote administration interface.
To exploit this vulnerability, an attacker must already be authenticated to the web management interface to access the firmware-upload functionality. Once the error condition is triggered, the resulting unauthenticated telnet service provides full control of the device.
CVE-2025-65606
An authenticated attacker can trigger an error condition in the firmware-upload handler that causes the device to start an unauthenticated root telnet service, granting full system access.
Impact
A remote authenticated attacker may be able to activate a root telnet service and subsequently take complete control of the device. This may lead to configuration manipulation, arbitrary command execution, or establishing a persistent foothold on the network.
Solution
TOTOLINK has not released an update addressing this issue, and the product is no longer maintained. Users should restrict administrative access to trusted networks, prevent untrusted users from accessing the management interface, monitor for unexpected telnet activity, and plan to replace the vulnerable device.
Acknowledgements
Thanks to the reporter Leandro Kogan for bringing this to our attention. This document was written by Timur Snoke.
Vendor Information
One or more vendors are listed for this advisory. Please reference the full report for more information.
A vulnerability in the Forcepoint One DLP Client allows bypass of the vendor-implemented Python restrictions designed to prevent arbitrary code execution. By reconstructing the ctypes FFI environment and applying a version-header patch to the ctypes.pyd module, an attacker can restore ctypes functionality within the bundled Python 2.5.4 runtime, enabling direct invocation of DLLs, memory manipulation, and execution of arbitrary code.
Description
The Forcepoint One DLP Client (version 23.04.5642 and potentially subsequent versions) shipped with a constrained Python 2.5.4 runtime that omitted the ctypes foreign function interface (FFI) library. Although this limitation appeared intended to mitigate malicious use, it was demonstrated that the restriction could be bypassed by transferring compiled ctypes dependencies from another system and applying a version-header patch to the ctypes.pyd module. Once patched and correctly positioned on the search path, the previously restricted Python environment would successfully load ctypes, permitting execution of arbitrary shellcode or DLL-based payloads.
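To illustrate why a restored ctypes defeats this kind of sandboxing, here is a benign demonstration (under a modern Python 3, not the bundled 2.5.4 runtime; no shellcode is executed): ctypes allows direct memory manipulation and native calls that bypass Python's managed object model entirely.

```python
import ctypes

# A mutable C buffer allocated outside Python's managed object model:
buf = ctypes.create_string_buffer(b"hello world")

# Direct memory write through the FFI. The same mechanism lets an attacker
# load arbitrary DLLs/shared objects, map executable memory, and call
# native code, which is why stripping ctypes from the bundled runtime was
# treated as a mitigation in the first place.
ctypes.memmove(buf, b"HELLO", 5)
print(buf.value)  # b'HELLO world'
```

Once any FFI is reachable, interpreter-level restrictions no longer bound what the process can do; the effective security boundary becomes the OS process, not the Python runtime.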
Forcepoint acknowledged the issue and indicated that a fix would be included in an upcoming release. According to Forcepoint's published knowledge base article (KB 000042256), the vulnerable Python runtime has been removed from Forcepoint One Endpoint (F1E) builds after version 23.11, associated with Forcepoint DLP v10.2.
Impact
Arbitrary code execution within the DLP client may allow an attacker to interfere with or bypass data loss prevention enforcement, alter client behavior, or disable security monitoring functions. Because the client operates as a security control on enterprise endpoints, exploitation may reduce the effectiveness of DLP protections and weaken overall system security.
The complete scope of impact in enterprise environments has not been fully determined.
Solution
Forcepoint reports that the vulnerable Python runtime has been removed in Endpoint builds after version 23.11 (Forcepoint DLP v10.2).
Users should upgrade to Endpoint versions that have been validated to no longer contain python.exe.
Acknowledgements
Thanks to the reporter, Keith Lee.
This document was written by Timur Snoke.
Vendor Information
One or more vendors are listed for this advisory. Please reference the full report for more information.