What We Learned Red Teaming the Pentagon

A post-mortem on the state of access control, session architecture, and app hardening in 2026.

From 2020 to 2025, Silent Breach Labs identified three vulnerabilities affecting production US Department of Defense systems: two Insecure Direct Object Reference findings, one of which enabled unauthenticated account takeover, and a zero-day in an Adobe ColdFusion deployment that allowed unauthenticated arbitrary file read and exposed administrator credential material. This post is a post-mortem on the classes of failure these findings represent and what they should tell practitioners about the state of access control, session architecture, and application hardening in 2026.

IDOR Is Not a Web 1.0 Problem

The first finding was an Insecure Direct Object Reference on a profile endpoint: an authenticated user could retrieve another user's account data by modifying a UID2 cookie value. The system used sequential integers as user identifiers and resolved object access based on that client-supplied value without verifying that the requesting session held authorization over the referenced object.
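The failure mode can be sketched in a few lines. This is a hypothetical reconstruction, not the actual DoD code: a handler that treats a client-supplied `UID2` cookie as the object reference, with all names illustrative.

```python
# Hypothetical sketch of the vulnerable pattern: the handler trusts a
# client-supplied cookie ("UID2") as the object reference, so any
# authenticated caller can read any profile by editing the cookie.

USERS = {
    1001: {"email": "alice@example.mil"},
    1002: {"email": "bob@example.mil"},
}

def get_profile(cookies: dict) -> dict:
    # BUG: the identifier comes straight from the client and is never
    # checked against the session's own principal.
    uid = int(cookies["UID2"])
    return USERS[uid]

# An attacker logged in as user 1001 simply sends UID2=1002:
leaked = get_profile({"UID2": "1002"})
```

With sequential integer identifiers, the same request loop enumerates the entire user base.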

The persistence of this vulnerability class across mature, well-resourced organizations is not attributable to developer ignorance. It is a structural consequence of how authorization is typically implemented. When access control is handled locally, at the endpoint or controller level, it is treated as a feature property rather than a system property. Engineers implementing a profile retrieval endpoint write the data access logic and may or may not add an ownership check, depending on threat model awareness, code review rigor, and time constraints. At scale, across hundreds of endpoints developed by multiple teams over time, the probability that every object-level operation includes a correctly implemented ownership check approaches zero.

The architectural correction is to enforce object-level authorization in a centralized layer that all data access routes through, with an explicit deny-by-default posture. Frameworks in which access control applies by default and developers must explicitly opt out of it consistently produce better outcomes than those that leave opting in as a per-endpoint responsibility. The relevant pattern in API design is to derive the authorized principal from the verified session context on the server side and resolve object ownership from that principal, never from a client-supplied identifier treated as ground truth. Any system where the backend's authorization decision is downstream of an unvalidated client input has this class of vulnerability in its attack surface, regardless of current test coverage.
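A minimal sketch of the corrective pattern, with illustrative names: the principal comes from a server-side session store, and every object access routes through one `authorize()` chokepoint that denies by default.

```python
# Corrective pattern (illustrative): identity is derived from a
# server-side session store, and all data access passes through a
# single deny-by-default authorization chokepoint.

SESSIONS = {"tok-alice": 1001}          # session token -> principal id
PROFILES = {1001: {"email": "alice@example.mil"},
            1002: {"email": "bob@example.mil"}}

class Forbidden(Exception):
    pass

def principal_from_session(token: str) -> int:
    # The only trusted identity input is the verified session context.
    if token not in SESSIONS:
        raise Forbidden("no valid session")
    return SESSIONS[token]

def authorize(principal: int, object_id: int) -> None:
    # Deny-by-default: access is granted only on an explicit ownership match.
    if object_id != principal:
        raise Forbidden("principal does not own the referenced object")

def get_profile(token: str, requested_id: int) -> dict:
    principal = principal_from_session(token)
    authorize(principal, requested_id)  # every data path routes through this
    return PROFILES[requested_id]
```

The client-supplied identifier still appears in the request, but it is now an argument to an authorization decision rather than its input of record.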

The Authentication/Authorization Category Error

The second finding composed directly with the first. The IDOR primitive that disclosed user profile data, including email addresses, provided the enumeration capability necessary to target arbitrary accounts in a subsequent request. A password modification endpoint accepted a UID2 value and a new credential without verifying that the initiating session had ownership of the referenced account, without requiring the existing credential as a proof-of-intent control, and without issuing a secondary verification challenge. The result was unauthenticated account takeover at arbitrary scale, rated Critical.

This failure pattern reflects a category error that is common in access control implementations: conflating authentication state with authorization scope. A session that has passed authentication is trusted to identify a principal. It is not, without additional authorization logic, trusted to operate on any object reachable through the API. The distinction matters most at state-changing operations that affect security-sensitive resources, particularly credential modification, because the consequence of an incorrect authorization decision in that context is permanent account compromise that may go undetected.

The correct model treats credential mutation as a privileged operation requiring independent proof of ownership, typically the current credential, distinct from the session token that proves authentication. This is not belt-and-suspenders engineering; it is the recognition that a session token is a proof of identity, not a proof of entitlement. The authorization decision for a destructive or security-sensitive operation must be made against explicit entitlement criteria, not inferred from the presence of a valid session. Implementations that do not make this distinction will produce account takeover primitives wherever object references are enumerable and mutation endpoints lack ownership verification.
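In code, the distinction is a second, independent check, not a stronger session check. A sketch under the same illustrative naming as above, with a plain string store standing in for a proper credential backend:

```python
# Credential mutation as a privileged operation (illustrative sketch):
# the session proves identity, but the change is authorized only by
# independent proof of ownership -- the current credential.
import hmac

CREDENTIALS = {1001: "old-secret"}   # stand-in for a real password store

class Forbidden(Exception):
    pass

def change_password(principal: int, current: str, new: str) -> None:
    stored = CREDENTIALS.get(principal)
    # Proof-of-intent: a valid session alone is not sufficient; the
    # caller must also present the existing credential. compare_digest
    # avoids leaking match position through timing.
    if stored is None or not hmac.compare_digest(stored, current):
        raise Forbidden("current credential check failed")
    CREDENTIALS[principal] = new
```

A secondary challenge (email or hardware-token confirmation) would sit alongside this check in a production flow; the structural point is that neither is derivable from the session token itself.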

Unauthenticated Endpoints and Framework Opacity

The ColdFusion finding required no user account. The vulnerable endpoint was a publicly accessible CFC method, iedit.cfc invoked with method=wizardHash, which accepted a _metadata.classname parameter. By supplying a directory traversal path in that field, an unauthenticated caller caused the server to resolve and load an arbitrary file from disk within the permission scope of the ColdFusion service account. In validation this retrieved lib/password.properties, containing the SHA-256 hash of the administrator password, database connection strings, and API keys.

The mechanism is a variant of a well-documented class: application frameworks that construct file resolution paths from request parameters without path canonicalization or allowlist enforcement. What makes this instance instructive is where the failure originates. The vulnerable behavior was not in application code written by a DoD developer. It was in how ColdFusion's CFC method dispatch mechanism, under a specific combination of _cfclient=true, WDDX serialization, and the wizardHash execution path, processes attacker-controlled metadata fields and passes them into a file resolution operation. The application code was an unknowing participant in the exploitation path.

This is a systemic risk introduced by framework opacity. Complex application servers, particularly legacy stacks with large internal execution surfaces, contain behaviors that application developers are not expected to know about and that are not visible in normal code review. The attack surface of a deployed application is not limited to the code its developers wrote; it includes every internal behavior of the runtime that attacker-controlled input can reach. This has two practical implications. First, the external attack surface of any ColdFusion deployment must be treated as including all accessible CFC endpoints regardless of whether they appear in documented API references, and all such endpoints must be tested for unauthenticated method invocation and adversarial path handling. Second, file path construction from request parameters is categorically unsafe unless the resolved canonical path is verified against an explicit root allowlist after resolution. Pattern matching against traversal sequences is not a reliable control: encoding variants, platform-specific path separators, and application-layer normalization differences create bypass vectors that allowlist enforcement at the resolved path level eliminates by construction.
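The allowlist-after-resolution control described above can be sketched briefly. The root path is illustrative; the essential property is that containment is verified on the canonical path, after symlinks and `..` segments are resolved, so traversal sequences and encoding variants have nothing left to bypass.

```python
# Allowlist enforcement at the resolved path level (illustrative root):
# canonicalize first, then verify containment. Pattern-matching on
# "../" sequences before resolution is deliberately absent.
from pathlib import Path

ALLOWED_ROOT = Path("/var/app/public").resolve()

def safe_resolve(requested: str) -> Path:
    # Resolve symlinks and ".." segments first, then check containment.
    candidate = (ALLOWED_ROOT / requested).resolve()
    if candidate != ALLOWED_ROOT and ALLOWED_ROOT not in candidate.parents:
        raise PermissionError(f"path escapes allowed root: {requested}")
    return candidate
```

A traversal payload of the shape used in the finding fails closed here, because the resolved path simply is not under the allowed root.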

Credential Exposure and the Blast Radius of File Read Primitives

The operational impact of the ColdFusion file read was substantially amplified by two compounding factors in the deployment architecture. The ColdFusion service account had read access to configuration files containing credential material, and that material included password hashes constructed with SHA-256.

SHA-256 is not a password hashing function. It is a general-purpose cryptographic hash optimized for throughput. On current GPU hardware, a single unsalted SHA-256 digest can be computed at rates exceeding ten billion candidates per second. A hash recovered through a file read primitive that would provide no useful online attack surface due to rate limiting and lockout controls becomes a high-value offline target: the entire complexity budget of the password is exposed to brute-force at hardware speed, with no interaction with the target system required after the initial exfiltration. The effective security of a SHA-256 password hash under these conditions is bounded by the entropy of the password, not by any property of the hash construction.
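The arithmetic behind that bound is direct. A back-of-the-envelope calculation, assuming the ten-billion-candidates-per-second figure above:

```python
# Time to exhaust a password keyspace offline, assuming the ~10 billion
# SHA-256 candidates/second rate cited above.
GUESS_RATE = 10e9  # candidates per second (assumed)

def exhaust_days(charset_size: int, length: int) -> float:
    keyspace = charset_size ** length
    return keyspace / GUESS_RATE / 86_400  # seconds -> days

# 8 characters drawn from the 95 printable ASCII characters:
days_8 = exhaust_days(95, 8)    # under ~8 days to cover the full keyspace
# Each added character multiplies the work by 95; two more characters
# multiply it by 95**2 = 9,025:
days_10 = exhaust_days(95, 10)
```

An eight-character password, even fully random, falls in about a week under these assumptions; only entropy, not the hash, moves that number.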

The appropriate response is not solely a remediation of the hash algorithm, though migrating to bcrypt, Argon2id, or scrypt with appropriate cost parameters is a necessary baseline. The deeper issue is architectural: the presence of credential material in files accessible to an application service account means that any file read primitive, regardless of its origin, has a direct path to credential exfiltration. Secrets management infrastructure, whether HashiCorp Vault, AWS Secrets Manager, or a comparable system, exists to break this dependency. Credentials injected at runtime through a secrets manager are not present as flat-file artifacts on the application host and are therefore not recoverable through file read primitives that operate within the application's file system permissions. The blast radius of a file read vulnerability is a function of deployment architecture, not of the vulnerability itself. Reducing that blast radius is an engineering decision that must be made before a vulnerability is found, not after.
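The hash-algorithm half of the remediation can be illustrated with the standard library alone. The parameters below are illustrative, not production-tuned; Argon2id or bcrypt via a dedicated library would be the usual production choice, with scrypt shown here because `hashlib` ships it.

```python
# Contrast between a throughput-optimized hash and a memory-hard KDF,
# using only the standard library (parameters illustrative).
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# What the finding recovered: a bare, unsalted SHA-256 digest,
# crackable at hardware speed once exfiltrated.
fast = hashlib.sha256(password).hexdigest()

# A salted, memory-hard alternative: each guess costs the attacker
# real memory and time, and the salt defeats precomputation.
slow = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```

The second half of the remediation, moving the material out of flat files entirely, removes the artifact the file read primitive would have recovered in the first place.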

Vulnerability Chaining and the Limits of Per-Finding Severity Assessment

The relationship between the two IDOR findings illustrates a systematic limitation of standard vulnerability triage. Report #1004745, assessed in isolation, was High-severity information disclosure: an authenticated session could read another user's profile data. Under a normal remediation prioritization, this would be scheduled on a standard patch cycle. Report #1004750 used the output of #1004745 as a direct input: user email addresses harvested through the first finding were required to construct a targeted account takeover request through the second. The combined severity of the chain was Critical, with an unauthenticated precondition, but neither finding individually carried that rating or that precondition.

This is structurally predictable. Common Vulnerability Scoring System ratings and most organizational triage frameworks assess findings against a fixed threat model that treats each vulnerability as a discrete, independently exploited primitive. Real exploitation chains do not operate within that constraint. Attackers compose primitives across findings, systems, and time. An information disclosure finding that exposes identifiers used as trust inputs elsewhere in the application is not correctly characterized as a bounded disclosure. Its true severity is a function of the attack surface those identifiers unlock, which requires reasoning about information flow across the application rather than within the scope of a single endpoint.

Operationally, this means triage must include a compositional analysis step: for each finding that exposes identifiers, metadata, or state information, the question is where else in the application that information is used as a trust input, and what operations become accessible to an attacker who possesses it. This is not a theoretical exercise. It is the difference between assigning a finding a High rating and patching it in thirty days, and recognizing that it forms the first stage of a Critical unauthenticated takeover chain that requires immediate remediation.
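A toy model of that compositional step, using a hypothetical data shape (the report numbers are from this engagement; the field layout is invented for illustration): each finding records the identifiers it exposes and those it consumes as trust inputs, and a chain exists wherever one finding's exposures feed another's inputs.

```python
# Toy compositional-triage pass (hypothetical data model): a finding
# that exposes identifiers consumed elsewhere as trust inputs is a
# candidate first stage of a composite attack chain.

FINDINGS = {
    "1004745": {"exposes": {"email", "uid"}, "consumes": set(),
                "severity": "High"},
    "1004750": {"exposes": set(), "consumes": {"email", "uid"},
                "severity": "High"},
}

def chains(findings: dict) -> list[tuple[str, str]]:
    # Pair (a, b) means finding a's output is usable as finding b's input.
    return [(a, b)
            for a, fa in findings.items()
            for b, fb in findings.items()
            if a != b and fa["exposes"] & fb["consumes"]]
```

Even this trivial pass surfaces the #1004745 to #1004750 edge that per-finding scoring misses; real triage would extend the model with preconditions and per-edge severity escalation.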

Conclusion

None of these findings required novel exploitation techniques. What they required was a systematic adversarial analysis of the gap between the access control model the developers intended and the access control model the system actually enforced. That gap exists in most production systems and widens over time as systems grow, teams change, and the accumulated surface area of untested object references and unauthenticated endpoints increases. The work of a research-grade offensive team is to find and characterize that gap before an adversary does. The work of the engineering organization is to close it structurally, not symptomatically, so that the next generation of the same vulnerability class does not reappear in the next assessment cycle.

About Silent Breach:

Silent Breach is an award-winning provider of cyber security services. Our global team provides cutting-edge insights and expertise across the Data Center, Enterprise, SME, Retail, Government, Finance, Education, Automotive, Hospitality, Healthcare and IoT industries.
