Enterprises will spend $240 billion on cybersecurity in 2026. That figure comes from Gartner, and it climbs every year — regardless of whether the previous year’s investment actually reduced risk. The vendors selling into that budget — Zscaler, Forcepoint, CrowdStrike, SentinelOne, and dozens like them — do not compete on price. They compete on trust. The pitch is consistent: deploy our platform, get visibility, get protection, watch the dashboard go green.
What happens when the dashboard lies?
This is not a theoretical question. In the past eighteen months, researchers and incident responders have documented significant bypasses or failures in nearly every major endpoint, DLP, and zero trust vendor in the market. The bugs are different. The products are different. What is identical across all of them is what happens next: a support ticket that gets closed, a partner that gets blamed, a contract clause that caps liability at three months of fees, and a management console that keeps reporting the agent as Connected.
I have experienced this firsthand. After discovering and filing CERT/CC case VRF#26-02-JDFCX against Forcepoint — a complete DLP bypass requiring zero admin rights — I watched the vendor close my support ticket as “third-party intervention,” deflect to a partner organization, miss two stated deadlines, and go public with no patch after forty-five days of coordinated disclosure. The specifics matter and I will get to them. But the specifics are not the story. The pattern is the story.
What $240 Billion Buys You (In Theory)
The sales motion for enterprise security tools follows a predictable arc. A vendor comes in, often with a certified partner, and maps their product to whatever the organization is worried about most — data exfiltration, ransomware, compliance, insider threat. The pitch includes a dashboard demo. The dashboard is always impressive. It has heat maps, threat scores, real-time telemetry, and a prominent status indicator that says something like “Agent Health: Optimal.”

Gartner projects global cybersecurity spending will reach $240 billion in 2026, a 12.5% increase over 2025. The typical enterprise splits that budget: roughly 40% to security software and platforms, 30% to personnel, and the remainder to hardware and outsourced services. The software share keeps growing as organizations move from appliance-based to platform-based security. Every dollar in that software budget is a bet that the platform does what the vendor says it does.
The problem is that the vendor is often the only one who knows if the platform is actually working.
This is the quiet structural flaw that most organizations never look at directly. You deploy an endpoint agent. The agent reports its own health back to the management console. The management console tells you the agent is healthy. But if the mechanism by which the agent detects threats is broken — silently, completely, without triggering any alert — the management console has no way of knowing. You are trusting the product to tell you whether the product is working.
The Bypass Problem Is Not a Vendor Problem. It’s an Industry Pattern.
In May 2025, researchers at Aon’s Stroz Friedberg published findings from an active incident investigation: a threat actor had fully bypassed SentinelOne’s endpoint detection and response solution to deploy a variant of Babuk ransomware. The technique was straightforward.
The method, called Bring Your Own Installer, circumvents SentinelOne’s anti-tamper feature by exploiting a flaw in the agent’s upgrade/downgrade process, resulting in an unprotected endpoint. During an agent upgrade, all SentinelOne processes terminate for roughly 55 seconds before the new processes start. If the Windows Installer process is killed during this window — before the new agent version comes up — both the outgoing and incoming agents remain inactive, leaving the system without EDR protection.
The feature that would have prevented this — Online Authorization, which requires upgrades to be approved through the management console — was not enabled by default at the time of the incident. SentinelOne has since changed that default for new customers, and they handled the disclosure professionally: they collaborated with Stroz Friedberg to privately notify other EDR vendors before the public report. That matters. But it also means that every customer who deployed SentinelOne before that change, with default settings, had a door open that they did not know about.
At DEF CON 33 in August 2025, researchers from AmberWolf presented a seven-month investigation into zero trust network access platforms. In Zscaler’s implementation, researchers discovered CVE-2025-54982, a SAML authentication bypass vulnerability where the system failed to validate that SAML assertions were properly signed. This flaw enabled complete authentication bypass, granting attackers access to both web proxies and Private Access services that route traffic to internal enterprise resources. Zscaler issued the CVE and patched. Netskope, which had two separate authentication bypass issues found in the same research, maintained a policy of not issuing CVEs for server-side vulnerabilities at all — which means their customers had no standard way to track, assess, or prioritize the exposure.
The pattern extends to the flagship products. On July 19, 2024, CrowdStrike distributed a faulty configuration update for its Falcon sensor software running on Windows PCs and servers. Roughly 8.5 million systems crashed and were unable to properly restart in what has been called the largest outage in the history of information technology. Fortune 500 companies lost an estimated $5.4 billion from the disruption.
None of this is to say these products are useless. SentinelOne does catch threats. Zscaler does enforce policy. CrowdStrike does detect adversarial behavior. The point is different: every one of these products has had a documented moment where it stopped doing what the dashboard said it was doing, and in several cases, the customer had no indication that anything was wrong.
My Finding: When Zero Privilege Gets You Full Bypass
In my own case, the issue is more direct than any of the above, and I think that makes it more instructive.
Forcepoint DLP Endpoint on macOS routes all browser data through two user-space helper processes before it reaches the root-level classification daemon wsdlpd. Those helpers — Websense Endpoint Helper and SafariExtension — run as the current user. Not as root. Not as a privileged service account. As whatever standard user is logged in.
SIGSTOP is a POSIX signal that suspends a process. It is uncatchable — no signal handler installed by the target process can intercept it. On macOS, same-user signal delivery is not restricted by sandbox profiles or entitlements. Sending SIGSTOP to both helpers takes one line of Python and requires zero admin rights.
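For readers who want to see the mechanics, here is a safe, self-contained sketch of same-user SIGSTOP delivery. It suspends a harmless child process it spawns itself rather than any vendor helper; the point is only that the suspension succeeds without privileges and cannot be intercepted by the target.

```python
import os
import signal
import subprocess
import time

# Spawn a harmless stand-in for a user-space helper process.
# (Suspending a vendor's real helpers is the attack itself; do not
# run this against production security tooling.)
child = subprocess.Popen(["sleep", "60"])

# SIGSTOP cannot be caught, blocked, or ignored by the target, and
# same-user delivery needs no privileges on macOS or Linux.
os.kill(child.pid, signal.SIGSTOP)
time.sleep(0.1)

# 'T' in the ps state column means the process is stopped.
state = subprocess.check_output(
    ["ps", "-o", "state=", "-p", str(child.pid)], text=True
).strip()
print(state)  # starts with 'T' while suspended

# Resume and clean up so nothing lingers.
os.kill(child.pid, signal.SIGCONT)
child.terminate()
child.wait()
```

The same two `os.kill` calls, pointed at a helper process's PID, are the entire attack surface when the vendor has not registered for kernel-level signal authorization.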
When those helpers are suspended, the classification daemon never receives a policy scan request. The product’s fail-open / fail-closed configuration setting becomes irrelevant, because no scan is ever triggered. Data that should be blocked transmits silently. No popup appears. No audit log entry is written. No SOC email is sent. And the management console, reading from the daemon which has seen nothing unusual, continues to report: Connected.
The video demonstration I recorded shows this in full: a baseline where uploads are correctly blocked and alerts fire, the script running, the same upload succeeding with zero response, and the console status unchanged throughout.
This vulnerability class is not new for Forcepoint. CVE-2019-6144, assigned by Forcepoint themselves for versions 19.04–19.08, describes an identical outcome: “This vulnerability allows a normal (non-admin) user to disable the Forcepoint One Endpoint and bypass DLP and Web protection.” They patched it in 2019. The protection regressed in the 2025 release.
Contrast that with CrowdStrike Falcon and Jamf Protect, both of which implement Apple’s Endpoint Security Framework to intercept SIGSTOP at the kernel level before it reaches their processes. When a SIGSTOP attempt hits a Jamf Protect process, the SOC gets an alert: TamperKillSignalAttempt at HIGH severity. CrowdStrike generates GenericDisableSecurityToolsDefenseEvasionMac. These are not classified features. They are standard implementations of a public Apple API. Forcepoint implements none of it.
This is not a configuration issue. There is no configuration change that makes SIGSTOP catchable by a process. The fix requires Forcepoint to implement ESF signal authorization, add a watchdog daemon, and fix the console status. Those are product decisions, not deployment decisions.
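As an illustration of what a watchdog would check, here is a minimal user-space sketch. The helper names are taken from the description above, but everything else is an assumption: a production watchdog would run as root, use the Endpoint Security Framework, and report into the console. This only shows the detection logic, where a missing or stopped ('T' state) helper means enforcement is degraded.

```python
import subprocess

# Helper names from the Forcepoint case; a real watchdog would be
# a privileged daemon, not a user-space poller like this sketch.
WATCHED = ["Websense Endpoint Helper", "SafariExtension"]

def helper_states(names):
    """Map each watched process name to its ps state, or None if absent."""
    out = subprocess.check_output(["ps", "-axo", "state=,comm="], text=True)
    found = {}
    for line in out.splitlines():
        state, _, comm = line.strip().partition(" ")
        for name in names:
            if name in comm:
                found[name] = state
    return {name: found.get(name) for name in names}

def enforcement_degraded(states):
    """Degraded if any helper is missing or suspended ('T' state)."""
    return any(s is None or s.startswith("T") for s in states.values())
```

A loop calling `enforcement_degraded(helper_states(WATCHED))` every few seconds, with an alert on a True result, is the entire concept. That Forcepoint ships nothing equivalent is the product decision at issue.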
The Partner Shuffle
Every major enterprise security vendor sells through a partner ecosystem. A certified implementation partner handles deployment, policy configuration, integration with existing systems, and ongoing management. The model has genuine advantages — deep specialization, local support, faster time to value.
It also creates a liability structure that vendors have learned to exploit.
When something goes wrong, the first response is almost always a variation of: your implementation is not correct. Forcepoint told me the bypass I demonstrated was “third-party intervention” and recommended MDM policies as a mitigation — meaning, build controls around the product to compensate for the product’s failure. When I filed with PSIRT, I was redirected to a “Dedicated support team.” When I pushed back, PSIRT committed to routing the case internally; the March 17 deadline then passed with no response. Forty-five days after filing with CERT/CC, I published.
CrowdStrike ran a version of this at a larger scale. When Delta Air Lines filed a lawsuit over the outage, CrowdStrike countersued, arguing that the damages Delta suffered were primarily the result of “Delta’s own negligence.” The company that pushed a kernel-level update with no staged rollout, a missing array bounds check, and no content validation — to 8.5 million machines simultaneously — cited the customer’s incident response as the primary problem.
The pattern is structurally similar across incidents of very different scales and different vendors. The vendor’s product fails. The vendor points to the implementation. The partner absorbs or deflects the claim. The enterprise, which has no contractual relationship with the software’s actual developers, is left with the loss.

The “Misconfiguration” Defense
There is a word that appears with suspicious regularity in vendor responses to security findings: misconfiguration.
It is a useful word because it contains a seed of truth. Real misconfigurations exist and cause real breaches. CISA’s own advisories identify misconfiguration as one of the most common attack vectors in enterprise environments. So when a vendor tells you that a finding is a misconfiguration, they are standing next to a real and documented phenomenon.
But there is a distinction that vendors regularly blur: the difference between a misconfiguration in the customer’s environment and a security feature that the vendor shipped disabled by default.
In the SentinelOne case, the Online Authorization feature — the one that would have prevented the bypass — was not enabled by default. A security control that exists but ships disabled is different from a customer misconfiguring a working control. Who chose the default? The vendor. Who has the obligation to decide whether a feature that prevents tamper attacks should be on or off out of the box? The vendor. Calling the absence of that default a customer misconfiguration is technically defensible and practically dishonest.
This distinction matters because the downstream consequence is different. If the customer misconfigured a working control, the fix is a configuration change. If the vendor shipped a security regression — as Forcepoint did by not carrying forward the protections introduced to address CVE-2019-6144 — no configuration change fixes it. The customer has no lever to pull. They are waiting for a patch that may not come with any urgency, because the vendor has already classified the finding as someone else’s problem.
What the Contract Actually Says
The conversation about vendor accountability almost always stays at the technical level. It should occasionally visit the legal level, because what the contract says is where the accountability question is formally settled.
Virtually every vendor agreement contains a limitation of liability clause, and in the overwhelming majority of vendor form agreements it is drafted so that the vendor carries little, and sometimes no, liability for breaches of the agreement, including security breaches. The standard structure: no consequential damages (lost profits, reputational harm, regulatory fines), and a maximum aggregate liability capped at fees paid in the preceding twelve months — sometimes three months.
That cap is not incidental. For a large enterprise paying several million dollars annually for a security stack, twelve months of fees is a fraction of the potential loss from a breach the product failed to prevent.
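The arithmetic is worth making concrete. With entirely hypothetical figures (a $3 million annual subscription, a twelve-month cap, a $50 million breach loss), the gap between the cap and the loss looks like this:

```python
# Hypothetical figures for illustration only; no real contract is cited.
annual_fees = 3_000_000    # yearly spend on the security platform
cap_months = 12            # typical cap: fees paid in the prior 12 months
breach_loss = 50_000_000   # assumed loss from a breach the product missed

liability_cap = annual_fees * cap_months / 12
recoverable_fraction = liability_cap / breach_loss

print(f"cap: ${liability_cap:,.0f}")
print(f"covers {recoverable_fraction:.0%} of the loss")
```

Under these assumptions the cap covers 6% of the loss; with a three-month cap, 1.5%. The other 90-plus percent sits with the customer regardless of whose product failed.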

Delta’s lawsuit against CrowdStrike claimed $500 million in losses. CrowdStrike’s contract almost certainly capped their liability at a small multiple of Delta’s annual subscription fees — a number that bears no relationship to $500 million. The courts will determine what is ultimately owed, but the contract structure means that even a vendor that unambiguously breaks something starts from a position of near-zero financial exposure.
As a starting position, customers should ask vendors to exclude breaches of the vendor’s security-related obligations from the liability cap. Most enterprises do not ask for this. Most vendors would push back hard if they did.
The Thing Nobody Mentions: The Dashboard Is the Problem
There is a detail in my Forcepoint finding that gets less attention than the bypass mechanics, but I think it is the most important part.
During the entire bypass window — from the moment the helper processes were suspended, through the file uploads, through the simulated exfiltration — the Forcepoint management console reported the agent status as Connected. Not degraded. Not tampered. Not warning. Connected.
The management console is reading from the root daemon, wsdlpd, which saw nothing wrong because no classification request ever reached it. From the daemon’s perspective, there was nothing to report. So it reported nothing. And the console, reading that nothing, translated it into health.
This matters beyond Forcepoint. In the SentinelOne bypass, when the installer process was terminated during the upgrade window, the system went offline in the SentinelOne management console — which is better than reporting healthy, but only slightly. An offline agent might be a network issue, an agent crash, or an active tamper attack. Without additional context, a SOC analyst looking at an offline agent might not act immediately.
The broader problem is that management consoles report what the agent tells them. If the agent is compromised, stopped, or bypassed, the console’s data quality is only as good as the agent’s. This is not a new concept in security — it is why you do not trust logs from a compromised host. But it is a concept that enterprise security buyers rarely interrogate when they are watching the dashboard demo.
The management console is not a source of truth about your security posture. It is a readout of the agent’s self-reported state. Those two things can diverge significantly, silently, and with no indication on the dashboard.
What Actually Changes This
The technical fixes for the problems I have described are known and achievable. Apple’s Endpoint Security Framework provides kernel-level signal interception. Watchdog daemons can detect stopped processes. Management consoles can be coded to report “Enforcement Degraded” instead of “Connected” when the enforcement chain is broken. These are engineering problems with existing solutions.
The harder problem is that vendors have limited financial incentive to prioritize them quickly. Liability caps protect vendors from the economic consequences of their own failures. The partner model gives them a credible deflection when customers complain. The disclosure process — when it works — can take months, and vendors who do not engage with CERT/CC timelines pay no penalty for missing them.
A few things that could move this:
Read the contract before you sign, not after something breaks. Ask for security-related failures to be carved out from the standard liability cap. Ask for a super-cap — a higher limit for security breaches specifically. Vendors who are confident in their products should not resist this. The degree of resistance is itself a signal.
Build an independent verification layer for your security tools. Do not trust the management console alone. Run a behavioral canary — a monitored test upload to a known-blocked destination — on a regular schedule. If the DLP stops the test, enforcement is live. If it does not, the console’s Connected status is not what it appears to be. Do this separately from whatever the vendor’s health check reports.
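A canary check of this kind is a few lines of code. The sketch below is a harness, not a finished tool: `attempt_upload`, the canary marker, and the commented-out paging hook are all placeholders for whatever transfer your DLP policy is actually configured to block.

```python
# A minimal behavioral-canary harness. attempt_upload is whatever
# performs the organization's canary transfer (e.g. an HTTP POST of a
# marked test file to a destination the DLP policy must block); the
# marker string and hook names here are hypothetical.
CANARY_MARKER = "DLP-CANARY-DO-NOT-ALLOW"

def dlp_enforcement_live(attempt_upload) -> bool:
    """Return True if the DLP blocked the canary upload.

    attempt_upload(marker) must return True if the data actually
    reached the destination, False if the transfer was blocked.
    A delivered canary means enforcement is NOT live, regardless of
    what the management console reports.
    """
    try:
        delivered = attempt_upload(CANARY_MARKER)
    except OSError:
        # A connection reset or proxy rejection counts as a block.
        return True
    return not delivered

# Wire the result into monitoring that is independent of the vendor
# console (both names below are placeholders):
# if not dlp_enforcement_live(post_canary_file):
#     page_the_soc("DLP enforcement degraded despite Connected status")
```

The design point is that the verdict comes from observed behavior at the destination, not from the agent’s self-reported health.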
When a vendor classifies your finding as a misconfiguration, ask for the specific configuration change. If the product has a vulnerability that requires a code fix, no configuration change will close it. Vendors who cannot point to a specific setting change — with documentation showing it prevents the issue — are using the misconfiguration framing as a stall.
Use CERT/CC when direct disclosure stalls. A vendor PSIRT that closes your support ticket in five days, redirects to a partner, and misses its own stated deadline is not coordinating disclosure — it is running out the clock. CERT/CC exists for exactly this. Coordinated public disclosure is not adversarial. It is the mechanism by which the rest of the industry learns what is broken.
Track CVE histories for your vendors, including regression patterns. A vendor that fixed a vulnerability class, received a CVE assignment for it, and then shipped a product that regresses to the same outcome has demonstrated a gap in their security development lifecycle. That is relevant information when evaluating whether to renew a contract.

The Accountability Gap Will Not Close Itself
Global cybersecurity spending is heading toward $240 billion in 2026. More money is flowing into this industry than at any point in its history. Breaches are also more expensive than at any point in history. The average cost of a data breach now exceeds $4.8 million globally. Both numbers keep going up, and the relationship between them is not what the vendor pitch decks suggest.
Some of that disconnect is about threat actors getting more sophisticated. Some of it is about attack surface expanding faster than defenses. And some of it is about a market structure where the people selling security tools carry almost none of the financial risk when those tools fail.
The SentinelOne researchers who found the BYOI bypass were doing incident response for a company that had already been hit by ransomware. The Zscaler SAML bypass was found by a team doing a seven-month research campaign that got presented at DEF CON. My Forcepoint finding came from being a security professional inside an organization actually deploying and testing the product.
In all three cases: standard users, standard deployments, documented bypasses, management consoles showing green.
The industry spent decades arguing that security is a shared responsibility — that vendors provide tools and customers are responsible for using them correctly. That framing has merit in genuinely shared-responsibility situations. It has no merit when the product’s failure mode is invisible, when the dashboard actively misreports the product’s state, when the vendor has a contractual cap that insulates them from the financial consequences of that failure, and when the only path to accountability is a researcher spending months working through disclosure processes that vendors can simply ignore.
Someone has to be responsible for the green dashboard. Right now, nobody is.
Manish Tripathy is an independent security researcher. The Forcepoint DLP bypass described in this article is documented at CERT/CC VRF#26-02-JDFCX and published at https://gist.github.com/usualdork/4a29935545d70f9d57f621438d2ef214, including proof-of-concept code and a video demonstration. CVE assignment is pending. No patch is available as of the date of publication.