Systems-Aware Red Teaming (SysART)

The need to red team systems has long been a Red Team Journal theme. Are we overstating the obvious? Maybe, maybe not. The systems we’re talking about are broader than hardware, software, or even “cyber” systems. Just about any system connects to other systems, and, like it or not, your system, if even minimally useful, is a node in a broader system, linked in sometimes unexpected ways to other systems (themselves nodes in broader systems). And, like it or not, your adversaries are not going to limit themselves to your definitions or boundaries.

The Nature of Systems

All of this, of course, raises the question: What is a system? To a systems engineer, a system is “An integrated set of elements, subsystems, or assemblies that accomplish a defined objective. These elements include products (hardware, software, firmware), processes, people, information, techniques, facilities, services, and other support elements.”1 In other words, a (designed) system is a chunk of the world that does something.
      Most red teamers will be comfortable with this definition because it describes the sorts of systems they red team. Ask a systems engineer, for example, what they’re working on, and they’ll say something like “System X.” Ask a red teamer what they’re red teaming and they’ll say something like “System Y.”
      This is too limiting. Systems are more than just the “things” we build and red team. Peter Checkland’s definition of system points us toward a broader view. According to Checkland, a system is

A model of a whole entity; when applied to human activity, the model is characterized fundamentally in terms of hierarchical structure, emergent properties, communication, and control. An observer may choose to relate this model to real-world activity. When applied to natural or man-made entities, the crucial characteristic is the emergent properties of the whole.2

      Crucially, he characterizes a system as a “model of a whole entity.” A model is “an intellectual construct,”3 a slice of the world that allows us to describe and, ideally, influence what we model. But what is “a whole entity”? That’s not so simple, and the task of defining the relevant “whole” from the perspective of an observer is what Checkland spends many articles and books explaining. Unfortunately, it’s something many red teamers neglect. After all, if you’re paid to red team “System Y,” why bother to build an “intellectual construct” of “a whole entity”? Well, for one, your adversary might be playing a game that involves much more than “System Y.” If so, you’re at an immediate disadvantage. Your adversary’s view, for example, might involve a “whole entity” in which he or she easily negates or circumvents “System Y.”
      Checkland’s definition is relevant for another reason: interesting “emergent properties of the whole” might not exist at the level of “System Y.” In other words, “System Y” on its own might not exhibit many emergent properties, but “System Y” interconnected with other enterprise systems and linked upstream and downstream to other enterprises just might. But you won’t know that if you don’t—at the very least—consider the larger system in which “System Y” operates.
      Finally, note Checkland’s condition “when applied to human activity.” Even if “System Y” is only, say, a hardware system, it’s almost certainly—via its hierarchical relationships, emergent properties, communication, and control—relevant to human activity. Come to think of it, is there anything worth red teaming that isn’t relevant to human activity? And remember, just about anything that’s relevant to human activity can be impaired, broken, or exploited.
      Yet Checkland’s definition doesn’t really address what might be the most complex, meaningful, and important system of all: the subjective perceptual system, which will vary not just between attacker and defender but between attacker A, attacker B, and attacker C. This system, or set of systems, is difficult to condense into a model and, what’s more, is subject to reciprocity of a sort that compounds complexity and amplifies uncertainty. While the defender attempts to hide and show information to his or her advantage (relative to the attacker), the attacker is simultaneously trying to hide and show information to his or her advantage (relative to the defender). It’s the essence of strategy, and it’s also in many ways the essence of red teaming.
      Ultimately, the real challenge isn’t to red team the “whole entity” in an unconstrained global sense; the real challenge is to decide where to draw the boundary between the “whole entity” and the system to be red teamed. Clearly, we can’t red team the whole world; not only is it infeasible, but no client has the funds to pay for such an effort. Besides, it would be out of date before it was finished.
      What we can do is spend some time up front to understand the relevant “whole entity” before we settle on the system to be red teamed. It might change the scope of the red teaming effort, but even if not, it will allow us to put the red team’s findings in context, which is especially important if we plan to run more than one red teaming engagement.
      Red teams often resist this level of analysis. It requires a different set of skills than most conventional red teamers possess (or even desire to possess). “Just let me at ‘System Y,’ and I’ll show you what I can do!” says the conventional red teamer,4 who probably knows a lot about how to break “System Y” but likely knows far less about what breaking “System Y” means within the broader “whole entity.” That’s fine, as long as someone is thinking about what it means. Simply hiring a red team and pointing them at “System Y” is no longer good enough. (Odds are that your adversaries, for example, are thinking about what exploiting “System Y” means.)

SysART

Again, we’ve written a lot about this issue over the years. We plan to keep writing about it, but we’re also ready to start doing something about it. Systems-Aware Red Teaming (SysART) distills our long-standing philosophy into a working method. With SysART, we help clients move intentionally from the “whole entity,” or conceptual frame, to a workable focus, or system of interest. Along the way, we apply our red teaming tools to posit and consider alternative frames based on the schemas of potential adversaries.
      A key aspect of SysART is the engagement model, which we’ve discussed previously. Together, the conceptual frame, the system of interest, and the engagement model set the stage for the actual red team brainstorm and follow-on analysis.
      In SysART, we emphasize the subjective perceptual plane (perception, misperception, deception, self-deception) using a set of proprietary tools. These tools apply during both the framing and brainstorming phases and help participants unfamiliar with deception, stratagem, and indirect strategies get up to speed quickly.
      If you equate red teaming with pentesting, you’ve no doubt noticed by now that SysART is not pentesting. It’s also not a method for sending snake eaters over, under, and around your guards, gates, and guns. That said, we believe that every pentesting and operational red teaming effort should be nested within a systems-aware framework.
      It’s a hard sell, we know; as we mentioned earlier, conventional red teamers often resist framing their efforts within the “whole entity,” arguing instead that framing distracts from the immediate hunt. To a degree, they’re right; every day is a skirmish, and companies can’t afford to disengage from the tactical struggle. At the same time, a systems context can facilitate a strategic perspective and enable strategic, system-wide decisions. Compared to the cost (pre- and post-breach) of ongoing tactical operations, an investment in a strategic framework is minimal. We believe it’s worth it.

Notes:

  1. INCOSE Systems Engineering Handbook, p. 265.
  2. Peter Checkland, Systems Thinking, Systems Practice, pp. 317–318.
  3. Ibid., p. 315.
  4. As far as we know, no conventional red teamers read RTJ or Reciprocal Strategies.