The red teamer is that rare contrarian who cares more about getting the job done right than poking someone in the eye or winning the corporate game. Red teamers mine information from everywhere (but view all information as suspect). Red teamers know when to talk and when to listen (and generally listen more than they …
The need to red team systems has long been a Red Team Journal theme. Are we overstating the obvious? Maybe, maybe not. The systems we’re talking about are broader than just hardware, software, or even “cyber” systems. Just about any such system connects to other such systems, and, like it or not, your system, if …
Muddying the waters today is the growing gap between cybersecurity red teamers and red teamers of other kinds. I’ve done my best to define the differences, but it wasn’t until this week that I explicitly realized that the confusion isn’t just about types of red teaming; it’s also about red teaming roles.
All adversarial assessments (red teaming, pentesting, threat modeling) employ an engagement model, defined here as a set of implicit and explicit parameters governing the nature, scope, and procedures of the assessment. Knowingly managing the engagement model is key to success in nearly all forms of adversarial assessment. Ignoring or skipping aspects of the engagement model can lead to ambiguity, a false sense of confidence, and other downstream problems.
Whether knowingly or not, red teamers, pentesters, and threat modelers employ what we call engagement models: the implicit and explicit frames governing the red teaming engagement. These models commonly include defining details such as the threat actor to be emulated, the threat actor’s assumed operational code, the engagement’s time horizon and scenario, the rules of engagement, and so on.
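The parameters listed above can be made explicit rather than left implicit. A minimal sketch of what that might look like, assuming a simple data structure (the field names here are illustrative assumptions, not Red Team Journal terminology):

```python
from dataclasses import dataclass, field

@dataclass
class EngagementModel:
    """Hypothetical container for the explicit parameters of an engagement."""
    threat_actor: str                  # the threat actor to be emulated
    operational_code: str              # the actor's assumed doctrine or behavior
    time_horizon: str                  # the engagement's time frame
    scenario: str                      # the engagement scenario
    rules_of_engagement: list = field(default_factory=list)
    # Surfacing implicit assumptions explicitly is the point of the exercise.
    implicit_assumptions: list = field(default_factory=list)

# Example: a filled-in (entirely invented) engagement model.
model = EngagementModel(
    threat_actor="opportunistic criminal group",
    operational_code="low-and-slow, avoids attribution",
    time_horizon="90 days",
    scenario="third-party vendor compromise",
    rules_of_engagement=["no production data exfiltration",
                         "business hours only"],
    implicit_assumptions=["defenders are not forewarned"],
)
print(model.threat_actor)
```

Writing the frame down this way forces the team to notice which parameters were never discussed, which is exactly where ambiguity and false confidence creep in.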
It is our opinion—and probably our opinion only—that a red teamer will learn more about red teaming by reading the short stories of Jorge Luis Borges than by reading any number of books on technical topics. “But how can it be,” the hardened technophile will ask, “that I have anything to learn from Borges, who lived and died in an era before big data, online shopping, and dank memes?” Of course, this question merely confirms the asker's status as a hardened technophile and underscores his/her urgent need to read Borges.
I went on a bit of a Gene Wolfe binge recently. If you like speculative fiction and haven't read any of his books or short stories, give them a try. One of the things you'll notice is that you can't trust the narrator. This is true of his highly regarded longer works such as The Book of the New Sun and The Fifth Head of Cerberus and of (most of?) his short stories such as "The Ziggurat," a disquieting and much-debated yarn.
Although we pride ourselves on thinking laterally and creatively, we red teamers are still human, and as humans, we share a host of "wetware" issues with our non-red teaming colleagues. The difference? We're aware of the issues (or at least we should be), and we (usually) try to do something about them. Even so, the issues persist.