Cardwell started her career at Netscape, became a VP of engineering at American Express, was CISO at UnitedHealth Group, and is now CISO in Residence at Transcend.
To implement effective cybersecurity programs and keep the security team deeply integrated into all business processes, the CISO needs to regularly demonstrate the value of this work to senior management. This requires speaking the language of business, but a dangerous trap awaits those who try. Security professionals and executives often use the same words, but to mean entirely different things. Sometimes, a number of similar terms are used interchangeably. As a result, top management may not understand which threats the security team is trying to mitigate, what the company's actual level of cyber-resilience is, or where budget and resources are being allocated. Therefore, before presenting sleek dashboards or calculating the ROI of security programs, it's worth subtly clarifying these important terminological nuances.
By clarifying these terms and building a shared vocabulary, the CISO and the Board can significantly improve communication and, ultimately, strengthen the organization's overall security posture.
Why cybersecurity vocabulary matters for management
Varying interpretations of terms are more than just an inconvenience; the consequences can be quite substantial. A lack of clarity regarding details can lead to:
Misallocated investments. Management might approve the purchase of a zero trust solution without realizing it's only one piece of a long-term, comprehensive program with a significantly larger budget. The money is spent, yet the results management expected are never achieved. Similarly, with regard to cloud migration, management may assume that moving to the cloud automatically transfers all security responsibility to the provider, and subsequently reject the cloud security budget.
Blind acceptance of risk. Business unit leaders may accept cybersecurity risks without having a full understanding of the potential impact.
Lack of governance. Without understanding the terminology, management can't ask the right – tough – questions, or assign areas of responsibility effectively. When an incident occurs, it often turns out that business owners believed security was entirely within the CISO's domain, while the CISO lacked the authority to influence business processes.
Information security risks are often lumped in with IT concerns like uptime and service availability. In reality, cyberrisk is a strategic business risk linked to business continuity, financial loss, and reputational damage.
IT risks are generally operational in nature, affecting efficiency, reliability, and cost management. Responding to IT incidents is often handled entirely by IT staff. Major cybersecurity incidents, however, have a much broader scope; they require the engagement of nearly every department, and have a long-term impact on the organization in many ways, including reputation, regulatory compliance, customer relationships, and overall financial health.
Compliance vs. security
Cybersecurity is integrated into regulatory requirements at every level – from international directives like NIS2 and GDPR, to cross-border industry guidelines like PCI DSS, plus specific departmental mandates. As a result, company management often views cybersecurity measures as compliance checkboxes, believing that once regulatory requirements are met, cybersecurity issues can be considered resolved. This mindset can stem from a conscious effort to minimize security spending ("we're not doing more than what we're required to") or from a sincere misunderstanding ("we've passed an ISO 27001 audit, so we're unhackable").
In reality, compliance is meeting the minimum requirements of auditors and government regulators at a specific point in time. Unfortunately, the history of large-scale cyberattacks on major organizations proves that "minimum" requirements have that name for a reason. For real protection against modern cyberthreats, companies must continuously improve their security strategies and measures according to the specific needs of the given industry.
Threat, vulnerability, and risk
These three terms are often used synonymously, which leads to erroneous conclusions by management: "There's a critical vulnerability on our server? That means we have a critical risk!" To avoid panic or, conversely, inaction, it's vital to use these terms precisely and understand how they relate to one another.
A vulnerability is a weakness – an "open door". This could be a flaw in software code, a misconfigured server, an unlocked server room, or an employee who opens every email attachment.
A threat is a potential cause of an incident. This could be a malicious actor, malware, or even a natural disaster. A threat is what might "walk through that open door".
Risk is the potential loss. It's the cumulative assessment of the likelihood of a successful attack, and what the organization stands to lose as a result (the impact).
The connections among these elements are best explained with a simple formula:
Risk = Threat × Vulnerability × Impact
This can be illustrated as follows. Imagine a critical vulnerability with a maximum severity rating is discovered in an outdated system. However, this system is disconnected from all networks, sits in an isolated room, and is handled by only three vetted employees. The probability of an attacker reaching it is near zero. Meanwhile, the lack of two-factor authentication in the accounting systems creates a real, high risk, resulting from both a high probability of attack and significant potential damage.
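The isolated-server example can be made concrete with a toy calculation. This is only an illustration of the formula, not a real scoring methodology; all the ratings below are invented:

```python
# Toy risk scoring: likelihood factors on a 0-1 scale, impact on a 0-10 scale.
# All numbers are invented for illustration.

def risk_score(threat: float, vulnerability: float, impact: float) -> float:
    """Risk = Threat x Vulnerability x Impact."""
    return threat * vulnerability * impact

# Air-gapped legacy system: maximum-severity flaw, near-zero chance of attack.
isolated = risk_score(threat=0.05, vulnerability=1.0, impact=9.0)

# Accounting system without two-factor authentication: likely attack, real damage.
accounting = risk_score(threat=0.8, vulnerability=0.6, impact=8.0)

# The "critical vulnerability" yields the lower overall risk.
print(isolated < accounting)  # True
```

The point of the exercise is that a severity rating alone never determines risk; likelihood and impact can invert the ranking entirely.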
Incident response, disaster recovery, and business continuity
Management's perception of security crises is often oversimplified: "If we get hit by ransomware, we'll just activate the IT Disaster Recovery plan and restore from backups". However, conflating these concepts – and processes – is extremely dangerous.
Incident Response (IR) is the responsibility of the security team or specialist contractors. Their job is to localize the threat, kick the attacker out of the network, and stop the attack from spreading.
Disaster Recovery (DR) is an IT engineering task. Itβs the process of restoring servers and data from backups after the incident response has been completed.
Business Continuity (BC) is a strategic task for top management. It's the plan for how the company continues to serve customers, ship goods, pay compensation, and talk to the press while its primary systems are still offline.
If management focuses solely on recovery, the company will lack an action plan for the most critical period of downtime.
Security awareness vs. security culture
Leaders at all levels sometimes assume that simply conducting security training guarantees results: "The employees have passed their annual test, so now they won't click on a phishing link". Unfortunately, relying solely on training organized by HR and IT won't cut it. Effectiveness requires changing the team's behavior, which is impossible without the engagement of business management.
Awareness is knowledge. An employee knows what phishing is and understands the importance of complex passwords.
Security culture refers to behavioral patterns. It's what an employee does in a stressful situation or when no one's watching. Culture isn't shaped by tests, but by an environment where it's safe to report mistakes and where it's customary to identify and prevent potentially dangerous situations. If an employee fears punishment, they'll hide an incident. In a healthy culture, they'll report a suspicious email to the SOC, or nudge a colleague who forgets to lock their computer, thereby becoming an active link in the defense chain.
Detection vs. prevention
Business leaders often think in outdated "fortress wall" categories: "We bought expensive protection systems, so there should be no way to hack us. If an incident occurs, it means the CISO failed". In practice, preventing 100% of attacks is technically impossible and economically prohibitive. Modern strategy is built on a balance between cybersecurity and business effectiveness. In a balanced system, components focused on threat detection and prevention work in tandem.
Prevention deflects automated, mass attacks.
Detection and Response help identify and neutralize more professional, targeted attacks that manage to bypass prevention tools or exploit vulnerabilities.
The key objective of the cybersecurity team today isn't to guarantee total invulnerability, but to detect an attack at an early stage and minimize the impact on the business. To measure success here, the industry typically uses metrics like Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR).
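As a rough illustration of how these two metrics are computed, the sketch below averages the gaps in hypothetical incident timelines (the records, field names, and the choice to measure "respond" from detection to resolution are all assumptions for the example):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when each incident started, when the SOC
# detected it, and when it was resolved. Field names are illustrative.
incidents = [
    {"started": datetime(2025, 3, 1, 10, 0),
     "detected": datetime(2025, 3, 1, 14, 0),
     "resolved": datetime(2025, 3, 2, 10, 0)},
    {"started": datetime(2025, 4, 5, 9, 0),
     "detected": datetime(2025, 4, 5, 11, 0),
     "resolved": datetime(2025, 4, 5, 21, 0)},
]

def mttd_hours(records):
    """Mean Time to Detect: average gap between compromise and detection."""
    return mean((r["detected"] - r["started"]).total_seconds() / 3600 for r in records)

def mttr_hours(records):
    """Mean Time to Respond: here measured from detection to resolution."""
    return mean((r["resolved"] - r["detected"]).total_seconds() / 3600 for r in records)

print(mttd_hours(incidents))  # 3.0 hours
print(mttr_hours(incidents))  # 15.0 hours
```

In practice these numbers come from the SIEM or incident-tracking system rather than hand-built records, but the arithmetic is the same.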
Zero-trust philosophy vs. zero-trust products
The zero trust concept – which implies "never trust, always verify" for all components of IT infrastructure – has long been recognized as relevant and effective in corporate security. It requires constant verification of identity (user accounts, devices, and services) and context for every access request based on the assumption that the network has already been compromised.
However, the presence of "zero trust" in the name of a security solution doesn't mean an organization can adopt this approach overnight simply by purchasing the product.
Zero trust isn't a product you can "turn on"; it's an architectural strategy and a long-term transformation journey. Implementing zero trust requires restructuring access processes and refining IT systems to ensure continuous verification of identity and devices. Buying software without changing processes won't have a significant effect.
Security of the cloud vs. security in the cloud
When migrating IT services to cloud infrastructure like AWS or Azure, there's often an illusion of a total risk transfer: "We pay the provider, so security is now their headache". This is a dangerous misconception, and a misinterpretation of what is known as the Shared Responsibility Model.
Security of the cloud is the providerβs responsibility. It protects the data centers, the physical servers, and the cabling.
Security in the cloud is the client's responsibility. This covers everything the client runs and stores on the provider's platform: data, applications, access rights, and configurations.
Discussions regarding budgets for cloud projects and their security aspects should be accompanied by real-life examples. The provider protects the database from unauthorized access according to the settings configured by the client's employees. If employees leave a database open or use weak passwords, and if two-factor authentication isn't enabled for the administrator panel, the provider can't prevent unauthorized individuals from downloading the information – an all-too-common news story. Therefore, the budget for these projects must account for cloud security tools and configuration management on the company side.
Vulnerability scanning vs. penetration testing
Leaders often confuse automated checks, which fall under cyber-hygiene, with assessing IT assets for resilience against sophisticated attacks: "Why pay hackers for a pentest when we run the scanner every week?"
Vulnerability scanning checks a specific list of IT assets for known vulnerabilities. To put it simply, it's like a security guard doing the rounds to check that the office windows and doors are locked.
Penetration testing (pentesting) is a manual assessment to evaluate the possibility of a real-world breach by exploiting vulnerabilities. To continue the analogy, it's like hiring an expert burglar to actually try and break into the office.
One doesnβt replace the other; to understand its true security posture, a business needs both tools.
Managed assets vs. attack surface
A common and dangerous misconception concerns the scope of protection and the overall visibility held by IT and Security. A common refrain at meetings is, "We have an accurate inventory list of our hardware. We're protecting everything we own".
Managed IT assets are things the IT department has purchased, configured, and can see in their reports.
An attack surface is anything accessible to attackers: any potential entry point into the company. This includes Shadow IT (cloud services, personal messaging apps, test servers, and so on) – basically anything employees launch themselves in circumvention of official protocols to speed up or simplify their work. Often, it's these "invisible" assets that become the entry point for an attack, as the security team can't protect what it doesn't know exists.
To all those who are fighting the good fight in the world of cyber, keep collaborating to ensure our world never succumbs to the chaos of the Upside Down.
Millions of IT systems – some of them industrial and IoT – may start behaving unpredictably on January 19. Potential failures include: glitches in processing card payments; false alarms from security systems; incorrect operation of medical equipment; failures in automated lighting, heating, and water supply systems; and many more or less serious types of errors. The catch is that it will happen on January 19, 2038. Not that that's a reason to relax – the time left to prepare may already be insufficient. The cause of this mass of problems will be an overflow in the integers storing date and time. While the root cause of the error is simple and clear, fixing it will require extensive and systematic efforts at every level – from governments and international bodies down to organizations and private individuals.
The unwritten standard of the Unix epoch
The Unix epoch is the timekeeping system adopted by Unix operating systems, which became popular across the entire IT industry. It counts the seconds from 00:00:00 UTC on January 1, 1970, which is considered the zero point. Any given moment in time is represented as the number of seconds that have passed since that date. For dates before 1970, negative values are used. This approach was chosen by Unix developers for its simplicity – instead of storing the year, month, day, and time separately, only a single number is needed. This facilitates operations like sorting or calculating the interval between dates. Today, the Unix epoch is used far beyond Unix systems: in databases, programming languages, network protocols, and in smartphones running iOS and Android.
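The idea is easy to demonstrate with Python's standard library (the two timestamps in the second half are arbitrary values chosen for the example):

```python
from datetime import datetime, timezone

# The zero point of the Unix epoch: 1970-01-01 00:00:00 UTC.
epoch_zero = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch_zero.isoformat())  # 1970-01-01T00:00:00+00:00

# A single integer per moment makes interval arithmetic trivial:
a, b = 1_700_000_000, 1_700_086_400  # two arbitrary timestamps
print(b - a)  # 86400 seconds -- exactly one day, no calendar logic needed
```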
The Y2K38 time bomb
Initially, when Unix was developed, a decision was made to store time as a 32-bit signed integer. This allowed for representing a date range from roughly 1901 to 2038. The problem is that on January 19, 2038, at 03:14:07 UTC, this number will reach its maximum value (2,147,483,647 seconds) and overflow, becoming negative, and causing computers to "teleport" from January 2038 back to December 13, 1901. In some cases, however, shorter "time travel" might happen – to point zero, which is the year 1970.
This event, known as the "year 2038 problem", "Epochalypse", or "Y2K38", could lead to failures in systems that still use 32-bit time representation – from POS terminals, embedded systems, and routers, to automobiles and industrial equipment. Modern systems solve this problem by using 64 bits to store time. This extends the date range to hundreds of billions of years into the future. However, millions of devices with 32-bit dates are still in operation, and will require updating or replacement before "day Y" arrives.
In this context, 32 and 64 bits refer specifically to the date storage format. Just because an operating system or processor is 32-bit or 64-bit, it doesn't automatically mean it stores the date in its "native" bit format. Furthermore, many applications store dates in completely different ways, and might be immune to the Y2K38 problem, regardless of their bitness.
In cases where there's no need to handle dates before 1970, the date is stored as an unsigned 32-bit integer. This type of number can represent dates from 1970 to 2106, so the problem will arrive in the more distant future.
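The overflow itself can be reproduced with plain arithmetic. The sketch below simulates a signed 32-bit time_t wrapping around; the helper functions are ours, not part of any standard API:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1    # 2,147,483,647
UINT32_MAX = 2**32 - 1

def as_int32(seconds: int) -> int:
    """Simulate overflow of a signed 32-bit time_t counter."""
    return (seconds + 2**31) % 2**32 - 2**31

def to_date(seconds: int) -> datetime:
    """Interpret a second count as a date relative to the Unix epoch."""
    return EPOCH + timedelta(seconds=seconds)

print(to_date(INT32_MAX))                # 2038-01-19 03:14:07+00:00 (last valid second)
print(to_date(as_int32(INT32_MAX + 1)))  # 1901-12-13 20:45:52+00:00 (the wraparound)
print(to_date(UINT32_MAX))               # 2106-02-07 06:28:15+00:00 (unsigned variant limit)
```

One extra second past the signed maximum lands the clock in December 1901, which is exactly the "teleport" described above.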
Differences from the year 2000 problem
The infamous year 2000 problem (Y2K) from the late 20th century was similar in that systems storing the year as two digits could mistake the new date for the year 1900. Both experts and the media feared a digital apocalypse, but in the end there were just numerous isolated manifestations that didnβt lead to global catastrophic failures.
The key difference between Y2K38 and Y2K is the scale of digitization in our lives. The number of systems that will need updating is way higher than the number of computers in the 20th century, and the count of daily tasks and processes managed by computers is beyond calculation. Meanwhile, the Y2K38 problem has already been, or will soon be, fixed in regular computers and operating systems with simple software updates. However, the microcomputers that manage air conditioners, elevators, pumps, door locks, and factory assembly lines could very well chug along for the next decade with outdated, Y2K38-vulnerable software versions.
Potential problems of the Epochalypse
The date's rolling over to 1901 or 1970 will impact different systems in different ways. In some cases, like a lighting system programmed to turn on every day at 7pm, it might go completely unnoticed. In other systems that rely on complete and accurate timestamps, a full failure could occur – for example, in the year 2000, payment terminals and public transport turnstiles stopped working. Comical cases are also possible, like issuing a birth certificate with a date in 1901. Far worse would be the failure of critical systems, such as a complete shutdown of a heating system, or the failure of a bone marrow analysis system in a hospital.
Cryptography holds a special place in the Epochalypse. Another crucial difference between 2038 and 2000 is the ubiquitous use of encryption and digital signatures to protect all communications. Security certificates generally fail verification if the device's date is incorrect. This means a vulnerable device would be cut off from most communications – even if its core business applications don't have any code that incorrectly handles the date.
Unfortunately, the full spectrum of consequences can only be determined through controlled testing of all systems, with separate analysis of a potential cascade of failures.
The malicious exploitation of Y2K38
IT and InfoSec teams should treat Y2K38 not as a simple software bug, but as a vulnerability that can lead to various failures, including denial of service. In some cases, it can even be exploited by malicious actors. To do this, they need the ability to manipulate the time on the targeted system. This is possible in at least two scenarios:
Interfering with NTP protocol data by feeding the attacked system a fake time server
Spoofing the GPS signal – if the system relies on satellite time
Exploitation of this error is most likely in OT and IoT systems, where vulnerabilities are traditionally slow to be patched, and the consequences of a failure can be far more substantial.
An example of an easily exploitable vulnerability related to time counting is CVE-2025-55068 (CVSSv3 8.2, CVSSv4 base 8.8) in Dover ProGauge MagLink LX4 automatic fuel-tank gauge consoles. Time manipulation can cause a denial of service at the gas station, and block access to the device's web management panel. This defect earned its own CISA advisory.
The current status of Y2K38 mitigation
The foundation for solving the Y2K38 problem has been successfully laid in major operating systems. The Linux kernel added support for 64-bit time even on 32-bit architectures starting with version 5.6 in 2020, and 64-bit Linux was always protected from this issue. The BSD family, macOS, and iOS use 64-bit time on all modern devices. All versions of Windows released in the 21st century aren't susceptible to Y2K38.
The situation at the data storage and application level is far more complex. Modern file systems like ZFS, F2FS, NTFS, and ReFS were designed with 64-bit timestamps, while older systems like ext2 and ext3 remain vulnerable. Ext4 and XFS require specific flags to be enabled (extended inode for ext4, and bigtime for XFS), and might need offline conversion of existing filesystems. In the NFSv2 and NFSv3 protocols, the outdated time storage format persists. It's a similar patchwork landscape in databases: the TIMESTAMP type in MySQL is fundamentally limited to the year 2038, and requires migration to DATETIME, while the standard timestamp types in PostgreSQL are safe. For applications written in C, pathways have been created to use 64-bit time on 32-bit architectures, but all projects require recompilation. Languages like Java, Python, and Go typically use types that avoid the overflow, but the safety of compiled projects depends on whether they interact with vulnerable libraries written in C.
A massive number of 32-bit systems, embedded devices, and applications remain vulnerable until they're rebuilt and tested, and then have updates installed by all their users.
Various organizations and enthusiasts are trying to systematize information on this, but their efforts are fragmented. Consequently, there's no "common Y2K38 vulnerability database" out there (1, 2, 3, 4, 5).
Approaches to fixing Y2K38
The methodologies created for prioritizing and fixing vulnerabilities are directly applicable to the year 2038 problem. The key challenge will be that no tool today can create an exhaustive list of vulnerable software and hardware. Therefore, it's essential to update the inventory of corporate IT assets, ensure that inventory is enriched with detailed information on firmware and installed software, and then systematically investigate the vulnerability question.
The list can be prioritized based on the criticality of business systems and the data on the technology stack each system is built on. The next steps are: studying the vendor's support portal, making direct inquiries to hardware and software manufacturers about their Y2K38 status, and, as a last resort, verification through testing.
When testing corporate systems, itβs critical to take special precautions:
Never test production systems.
Create a data backup immediately before the test.
Isolate the system being tested from communications so it can't confuse other systems in the organization.
If changing the date uses NTP or GPS, ensure the 2038 test signals cannot reach other systems.
After testing, set the systems back to the correct time, and thoroughly document all observed system behaviors.
If a system is found to be vulnerable to Y2K38, a remediation timeline should be requested from the vendor. If a fix is impossible, plan a migration; fortunately, the time we have left still allows for updating even fairly complex and expensive systems.
The most important thing in tackling Y2K38 is not to think of it as a distant-future problem whose solution can easily wait another five to eight years. It's highly likely that we already have insufficient time to completely eradicate the defect. However, within an organization and its technology fleet, careful planning and a systematic approach to solving the problem will make it possible to actually finish in time.
Here we examine the CISO Outlook for 2026, with the purpose of evaluating what is happening now and preparing leaders for what lies ahead in 2026 and beyond.
Attackers often go after outdated and unused test accounts, or stumble upon publicly accessible cloud storage containing critical data that's a bit dusty. Sometimes an attack exploits a vulnerability in an app component that was actually patched, say, two years ago. As you read these breach reports, a common theme emerges: the attacks leveraged something outdated – a service, a server, a user account… Pieces of corporate IT infrastructure that sometimes fall off the radar of IT and security teams. They become, in essence, unmanaged, useless, and simply forgotten. These IT zombies create risks for information security and regulatory compliance, and lead to unnecessary operational costs. This is generally an element of shadow IT – with one key difference: nobody wants, knows about, or benefits from these assets.
In this post, we try to identify which assets demand immediate attention, how to identify them, and what a response should look like.
Physical and virtual servers
Priority: high. Vulnerable servers are entry points for cyberattacks, and they continue consuming resources while creating regulatory compliance risks.
Prevalence: high. Physical and virtual servers are commonly orphaned in large infrastructures following migration projects, or after mergers and acquisitions. Test servers no longer used after IT projects go live, as well as web servers for outdated projects running without a domain, are also frequently forgotten. The scale of the problem is illustrated by Let's Encrypt statistics: in 2024, half of domain renewal requests came from devices no longer associated with the requested domain. And there are roughly a million of these devices in the world.
Detection: the IT department needs to implement an Automated Discovery and Reconciliation (AD&R) process that combines the results of network scanning and cloud inventory with data from the Configuration Management Database (CMDB). It enables the timely identification of outdated or conflicting information about IT assets, and helps locate the forgotten assets themselves.
This data should be supplemented by external vulnerability scans that cover all of the organizationβs public IPs.
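The reconciliation step can be sketched as simple set arithmetic over the two views of the estate (host names and both data sources below are invented for illustration; a real AD&R pipeline would pull them from scanner and CMDB exports):

```python
# Reconciliation sketch: hosts seen by a discovery scan vs. hosts registered
# in the CMDB. Host names and both data sources are invented for illustration.

scan_results = {"web-01", "web-02", "db-legacy", "test-2019"}    # live on the network
cmdb_records = {"web-01", "web-02", "db-legacy", "erp-retired"}  # registered assets

unregistered = scan_results - cmdb_records  # answering on the network, not in CMDB
stale = cmdb_records - scan_results         # in CMDB, no longer answering

print(sorted(unregistered))  # ['test-2019'] -> candidate forgotten server
print(sorted(stale))         # ['erp-retired'] -> record to verify and retire
```

Both discrepancy lists feed the response process: unregistered hosts get investigated and quarantined, stale records get verified and retired.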
Response: establish a formal, documented process for decommissioning/retiring servers. This process needs to include verification of complete data migration, and verified subsequent destruction of data on the server. Following these steps, the server can be powered down, recycled, or repurposed. Until all procedures are complete, the server needs to be moved to a quarantined, isolated subnet.
To mitigate this issue for test environments, implement an automated process for their creation and decommissioning. A test environment should be created at the start of a project, and dismantled after a set period or following a certain duration of inactivity. Strengthen the security of test environments by enforcing their strict isolation from the primary (production) environment, and by prohibiting the use of real, non-anonymized business data in testing.
Forgotten user, service, and device accounts
Priority: critical. Inactive and privileged accounts are prime targets for attackers seeking to establish network persistence or expand their access within the infrastructure.
Prevalence: very high. Technical service accounts, contractor accounts, and non-personalized accounts are among the most commonly forgotten.
Detection: conduct regular analysis of the user directory (Active Directory in most organizations) to identify all types of accounts that have seen no activity over a defined period (a month, quarter, or year). Concurrently, it's advisable to review the permissions assigned to each account, and remove any that are excessive or unnecessary.
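A simplified sketch of such a review, assuming the directory has been exported to plain records (the account names, field names, and 90-day window below are all hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical directory export; field names and accounts are illustrative.
REVIEW_WINDOW = timedelta(days=90)
NOW = datetime(2025, 6, 1)

accounts = [
    {"name": "jsmith",      "last_logon": datetime(2025, 5, 28), "privileged": False},
    {"name": "svc-backup",  "last_logon": datetime(2024, 11, 2), "privileged": True},
    {"name": "contractor7", "last_logon": datetime(2024, 1, 15), "privileged": False},
]

def inactive(records, now=NOW, window=REVIEW_WINDOW):
    """Accounts with no logon activity inside the review window."""
    return [a for a in records if now - a["last_logon"] > window]

for a in inactive(accounts):
    # Privileged stale accounts are the prime targets, so they go first.
    flag = "PRIVILEGED -- review first" if a["privileged"] else "deactivate after owner check"
    print(a["name"], flag)
```

In a real environment the same filter would run against lastLogonTimestamp values pulled from Active Directory rather than a hand-built list.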
Response: after checking with the relevant service owner on the business side or the employee's supervisor, outdated accounts should simply be deactivated or deleted. A comprehensive Identity and Access Management (IAM) system offers a scalable solution to this problem. In such a system, the creation and deletion of accounts, and the assignment of permissions, are tightly integrated with HR processes.
For service accounts, it's also essential to routinely review both the strength of passwords, and the expiration dates for access tokens – rotating them as necessary.
Forgotten data stores
Priority: critical. Poorly controlled data in externally accessible databases, cloud storage and recycle bins, and corporate file-sharing services – even "secure" ones – has been a key source of major breaches in 2024–2025. The data exposed in these leaks often includes document scans, medical records, and personal information. Consequently, these security incidents also lead to penalties for non-compliance with regulations such as HIPAA, GDPR, and other data-protection frameworks governing the handling of personal and confidential data.
Prevalence: high. Archive data, data copies held by contractors, legacy database versions from previous system migrations – all of these often remain unaccounted for and accessible for years (even decades) in many organizations.
Detection: given the vast variety of data types and storage methods, a combination of tools is essential for discovery:
Native audit subsystems within major vendor platforms, such as Amazon Macie and Microsoft Purview
Specialized Data Discovery and Data Security Posture Management solutions
Automated analysis of inventory logs, such as S3 Inventory
Unfortunately, these tools are of limited use if a contractor creates a data store within its own infrastructure. Controlling that situation requires contractual stipulations granting the organizationβs security team access to the relevant contractor storage, supplemented by threat intelligence services capable of detecting any publicly exposed or stolen datasets associated with the companyβs brand.
Response: analyze access logs and integrate the discovered storage into your DLP and CASB tools to monitor its usage – or to confirm it's truly abandoned. Use available tools to securely isolate access to the storage. If necessary, create a secure backup, then delete the data. At the organizational policy level, it's crucial to establish retention periods for different data types, mandating their automatic archiving and deletion upon expiry. Policies must also define procedures for registering new storage systems, and explicitly prohibit the existence of ownerless data that's accessible without restrictions, passwords, or encryption.
Unused applications and services on servers
Priority: medium. Vulnerabilities in these services increase the risk of successful cyberattacks, complicate patching efforts, and waste resources.
Prevalence: very high. Services are often enabled by default during server installation, remain after testing and configuration work, and continue to run long after the business process they supported has become obsolete.
Detection: through regular audits of software configurations. For effective auditing, servers should adhere to a role-based access model, with each server role having a corresponding list of required software. In addition to the CMDB, a broad spectrum of tools helps with this audit: tools like OpenSCAP and Lynis, which focus on policy compliance and system hardening; multi-purpose tools like OSQuery; vulnerability scanners such as OpenVAS; and network traffic analyzers.
Response: conduct a scheduled review of server functions with their business owners. Any unnecessary applications or services found running should be disabled. To minimize such occurrences, implement the principle of least privilege organization-wide and deploy hardened base images or server templates for standard server builds. This ensures no superfluous software is installed or enabled by default.
Outdated APIs
Priority: high. APIs are frequently exploited by attackers to exfiltrate large volumes of sensitive data, and to gain initial access into the organization. In 2024, the number of API-related attacks increased by 41%, with attackers specifically targeting outdated APIs, as these often provide data with fewer checks and restrictions. This was exemplified by the leak of 200 million records from X/Twitter.
Prevalence: high. When a service transitions to a new API version, the old one often remains operational for an extended period, particularly if it's still used by customers or partners. These deprecated versions are typically no longer maintained, so security flaws and vulnerabilities in their components go unpatched.
Detection: at the WAF or NGFW level, it's essential to monitor traffic to specific APIs. This helps detect anomalies that may indicate exploitation or data exfiltration, and also identify APIs that get minimal traffic.
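One possible sketch of the low-traffic analysis, counting hits per API version in simplified access-log lines (the paths, log format, and threshold are illustrative, not a real WAF export):

```python
from collections import Counter

# Simplified access-log lines; paths and the log format are illustrative.
log_lines = [
    "GET /api/v3/orders 200",
    "GET /api/v3/orders 200",
    "GET /api/v3/users 200",
    "GET /api/v1/orders 200",   # deprecated version still receiving traffic
]

# Extract the API version segment ("/api/<version>/...") from each request path.
hits = Counter(line.split()[1].split("/")[2] for line in log_lines)

LOW_TRAFFIC_THRESHOLD = 2
for version, count in hits.items():
    if count < LOW_TRAFFIC_THRESHOLD:
        print(f"{version}: {count} request(s) -- candidate for decommissioning")
```

The flagged versions then go to the business stakeholders for the migration and retirement plan described below.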
Response: for the identified low-activity APIs, collaborate with business stakeholders to develop a decommissioning plan, and migrate any remaining users to newer versions.
For organizations with a large pool of services, this challenge is best addressed with an API management platform in conjunction with a formally approved API lifecycle policy. This policy should include well-defined criteria for deprecating and retiring outdated software interfaces.
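The low-traffic detection step can be sketched as simple access-log analysis. The log format, the `/api/vN` path convention, and the traffic threshold below are assumptions for illustration; a WAF or API gateway would normally supply this data directly.

```python
import re
from collections import Counter

# Assumed path convention: versioned APIs live under /api/v1, /api/v2, ...
API_PATH = re.compile(r'"[A-Z]+ (/api/v\d+)/')

def traffic_by_api_version(log_lines, threshold=10):
    """Count requests per API version prefix; return totals and the
    versions whose traffic falls below the threshold."""
    counts = Counter()
    for line in log_lines:
        m = API_PATH.search(line)
        if m:
            counts[m.group(1)] += 1
    low = {v: n for v, n in counts.items() if n < threshold}
    return counts, low

logs = [
    '10.0.0.1 - - [..] "GET /api/v2/users HTTP/1.1" 200',
    '10.0.0.2 - - [..] "GET /api/v1/export HTTP/1.1" 200',
    '10.0.0.1 - - [..] "POST /api/v2/orders HTTP/1.1" 201',
]
counts, low = traffic_by_api_version(logs, threshold=2)
print(low)  # {'/api/v1': 1} -- a candidate for the decommissioning plan
```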
Software with outdated dependencies and libraries
Priority: high. This is where large-scale, critical vulnerabilities like Log4Shell hide, leading to organizational compromise and regulatory compliance issues.
Prevalence: very high, especially in large-scale enterprise management systems, industrial automation systems, and custom-built software.
Detection: use a combination of vulnerability management (VM/CTEM) systems and software composition analysis (SCA) tools. For in-house development, it's mandatory to use scanners and comprehensive security systems integrated into the CI/CD pipeline to prevent software from being built with outdated components.
Response: company policies must require IT and development teams to systematically update software dependencies. When building internal software, dependency analysis should be part of the code review process. For third-party software, it's crucial to regularly audit the status and age of dependencies.
For external software vendors, updating dependencies should be a contractual requirement affecting support timelines and project budgets. To make these requirements feasible, it's essential to maintain an up-to-date software bill of materials (SBOM).
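As a hedged sketch of what an SBOM-driven check might look like, the snippet below reads components from a CycloneDX-style SBOM document and flags any pinned below a minimum-version policy. The policy map and SBOM content are illustrative assumptions; a real SCA tool does far more than compare version tuples.

```python
import json

# Hypothetical policy: minimum acceptable versions for known-risky components
# (e.g., a post-Log4Shell baseline for log4j-core).
MIN_VERSIONS = {"log4j-core": (2, 17, 1)}

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) for tuple comparison."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def outdated_components(sbom: dict) -> list:
    """Return (name, version) pairs from the SBOM below the policy floor."""
    findings = []
    for comp in sbom.get("components", []):
        floor = MIN_VERSIONS.get(comp["name"])
        if floor and parse_version(comp["version"]) < floor:
            findings.append((comp["name"], comp["version"]))
    return findings

sbom = json.loads('{"components": [{"name": "log4j-core", "version": "2.14.1"}]}')
print(outdated_components(sbom))  # [('log4j-core', '2.14.1')]
```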
Forgotten websites and domains
Priority: medium. Forgotten web assets can be exploited by attackers for phishing, hosting malware, or running scams under the organization's brand, damaging its reputation. In more serious cases, they can lead to data breaches or serve as a launchpad for attacks against the company itself. A specific subset of this problem involves forgotten domains that were used for one-time activities, expired, and weren't renewed, making them available for purchase by anyone.
Prevalence: high, especially for sites launched for short-term campaigns or one-off internal activities.
Detection: the IT department must maintain a central registry of all public websites and domains, and verify the status of each with its owners on a monthly or quarterly basis. Additionally, scanners or DNS monitoring can be used to track domains associated with the company's IT infrastructure. Another layer of protection is provided by threat intelligence services, which can independently detect websites associated with the organization's brand.
Response: establish a policy that shuts down websites a fixed period after the end of their active use. Implement an automated domain registration and renewal system to prevent the loss of control over the company's domains.
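One cheap signal for the registry verification described above is whether each registered domain still resolves. This stdlib-only sketch flags domains that fail DNS resolution; the domain list is hypothetical, and a production check would also track WHOIS expiry dates rather than rely on DNS alone.

```python
import socket

def unresolvable(domains):
    """Return domains that fail DNS resolution -- a possible sign that a
    campaign site was shut down or a registration lapsed."""
    stale = []
    for d in domains:
        try:
            socket.gethostbyname(d)
        except socket.gaierror:
            stale.append(d)
    return stale

# Hypothetical registry entries; the .invalid TLD is reserved and never resolves
print(unresolvable(["example.com", "promo-2019.example-campaign.invalid"]))
```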
Unused network devices
Priority: high. Routers, firewalls, surveillance cameras, and network storage devices that are connected but left unmanaged and unpatched make for the perfect attack launchpad. These forgotten devices often harbor vulnerabilities and almost never have proper monitoring (no EDR or SIEM integration), yet they hold a privileged position in the network, giving attackers an easy gateway to escalate attacks on servers and workstations.
Prevalence: medium. Devices get left behind during office moves, network infrastructure upgrades, or temporary workspace setups.
Detection: use the same network inventory tools mentioned in the forgotten servers section, as well as regular physical audits to compare network scans against what's actually plugged in. Active network scanning can uncover entire untracked network segments and unexpected external connections.
Response: ownerless devices can usually be pulled offline immediately. But beware: cleaning them up requires the same care as scrubbing servers, to prevent leaks of network settings, passwords, office video footage, and so on.
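The reconciliation between scan results and inventory boils down to set arithmetic. In this sketch, the "seen" addresses stand in for the output of an actual ARP or ping sweep, and the inventory list is a placeholder for a CMDB export; both are assumptions for illustration.

```python
import ipaddress

def untracked_hosts(seen, inventory):
    """Return scanned addresses that have no inventory record."""
    inv = {ipaddress.ip_address(a) for a in inventory}
    return sorted(a for a in map(ipaddress.ip_address, seen) if a not in inv)

# Hypothetical scan output vs. a hypothetical CMDB export
seen = ["192.168.1.10", "192.168.1.23", "192.168.1.254"]
inventory = ["192.168.1.10", "192.168.1.254"]
print(untracked_hosts(seen, inventory))  # [IPv4Address('192.168.1.23')]
```

Each address in the resulting gap is a candidate for physical tracing and, if ownerless, disconnection.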
Engaging with the C-suite is not just about addressing security concerns or defending budget requests. It's about establishing and maintaining an ongoing discussion that aims to align security objectives with the interests of the business.