All adversarial assessments (red teaming, pentesting, threat modeling) knowingly or unknowingly employ an engagement model, defined here as a set of implicit and explicit parameters governing the nature, scope, and procedures of the assessment. These parameters include the following:
- Requirements, constraints, and business rationale;
- Processes;
- The system of interest and its boundaries;
- Contexts and scenarios;
- Rules of engagement and degrees of reciprocity;
- Adversary modeling;
- Attack modeling;
- Risk analysis;
- Deliverables; and
- Issues of perception.
Even if the engagement designers don’t specify or build out all aspects of the engagement model, they should still consider each of them when planning the adversarial assessment. Doing so helps ensure that fewer “gotchas” emerge downstream.
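The parameter list above can double as a pre-engagement checklist. As a minimal sketch (the class, field names, and helper below are hypothetical, not a standard API), each parameter starts out unspecified, and a helper flags what the designers have not yet addressed:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class EngagementModel:
    """Checklist of engagement-model parameters; None = not yet specified."""
    requirements_and_rationale: Optional[str] = None
    processes: Optional[str] = None
    system_of_interest: Optional[str] = None
    contexts_and_scenarios: Optional[str] = None
    rules_of_engagement: Optional[str] = None
    adversary_model: Optional[str] = None
    attack_model: Optional[str] = None
    risk_analysis: Optional[str] = None
    deliverables: Optional[str] = None
    perception_issues: Optional[str] = None

def unspecified(model: EngagementModel) -> list[str]:
    """Return the names of parameters the designers have not addressed."""
    return [f.name for f in fields(model) if getattr(model, f.name) is None]

# Hypothetical, partially planned engagement:
plan = EngagementModel(
    requirements_and_rationale="Assess exposure of the payments API",
    system_of_interest="Payments service and its upstream auth boundary",
)
print(unspecified(plan))  # everything still left to pin down before kickoff
```

Even a lightweight aid like this makes the gaps explicit, which is the point: the unaddressed parameters are exactly where the downstream problems listed below tend to originate.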
We’ve encountered multiple red team engagements in which the designers failed to acknowledge, let alone address, key aspects of the standard engagement model. While at first glance this might leave the red team “free to explore,” in practice it means that one or more of the following problems is likely to emerge.
- Not specifying requirements, constraints, or business rationale. It's unclear whether and when the engagement has met its requirements, or how those requirements support the effort's business rationale. Worse, the engagement may serve no appropriate business rationale at all.
- Not specifying processes. Repeatability across engagements suffers, and the team finds it difficult to untangle post-engagement questions and concerns.
- Not specifying the system of interest and its boundaries. The engagement is too narrow, too broad, or dithering and unfocused (but the engagement manager and client can’t really tell). The relative risk to the various facets of the overall system is difficult to analyze.
- Not specifying contexts and scenarios. The engagement addresses implausible or improbable circumstances. The engagement extrapolates from a narrow or subjective case to a general case.
- Not specifying rules of engagement or degrees of reciprocity. The engagement is unnecessarily loose or tight. Risks to the client and the client's systems are not addressed before the engagement begins. The engagement team fails to anticipate simple countermeasures to their actions.
- Not specifying an adversary model. The red team implicitly models an adversary that is a mirror image of itself, yet the belief persists that the team accurately represents one or more real-world adversaries.
- Not specifying an attack modeling approach. The attack space is explored haphazardly and attacks are not documented adequately for further review and analysis. Future engagements revisit the same attack space while overlooking other areas.
- Not specifying a defensible risk analysis approach. Real risk analysis fails to occur. Attacks are highlighted for the wrong reasons, and ultimately the client is induced to apply resources to the “wrong” countermeasures.
- Not specifying deliverables. The engagement output is inappropriate as an input to the risk analysis or fails to address a meaningful requirement.
- Not considering issues of perception. Potential misperceptions and deceptions are overlooked and the effort points the client in the wrong direction (or worse, in a direction that directly benefits an adversary).
The point isn’t to spend all the client’s money and time designing the engagement, leaving nothing for the engagement team to explore actual attacks; rather, the point is to spend a bit more time up front to help ensure that the client’s money and time are focused on the right things. Jumping immediately into the attack phase of an adversarial assessment might seem like a great way to short-circuit burdensome “paperwork,” but it usually leads to ambiguity and, in many cases, a dangerous false sense of confidence.