Introduction
Antifragile was written by Nassim Taleb and published in 2012. Taleb correctly predicted the 2008 economic crisis, but not by using established predictive risk methods. He coined the term ‘antifragile’ as a response to ineffective risk practices. ‘Antifragile’, as Taleb outlines it, goes beyond something that can withstand adversity; it is something that thrives and improves because of it.
“Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.”
Fragility is summarised by Taleb as,
“vulnerability to the volatility of the things that affect it.”
How do fragility and antifragility apply in the context of a corporate security function? Can antifragility be implemented in a security function? Is a security function itself fragile?
As I read the prologue of Antifragile I liked the concept, but as I fought through one dry page after the next, I saw an elegant concept degrade. Antifragile talks about the fragility that is manifest within complex systems, in part due to interdependence. Yet the concepts within antifragility themselves demonstrate a high level of interdependence, leading the construct to become complex and subject to its own critique. As Taleb says, simplicity is sophistication, so what does the emergent complexity within antifragility say about antifragility?
But it’s within the constituent parts that we find value in the concepts, or at least I do. I find the whole to be less than the sum of its parts. So we can dissect, and learn in a more meaningful way from the isolated concepts, rather than from the total understanding of antifragile which, if held to the same standard Taleb lays out for predictive risk, cannot be understood. My cynical side wants to level the accusation of domain dependence at the author: the failure to recognise failures outside of the context in which they are usually seen.
There are some immediate consequences of Taleb’s conceptualisation for the practical aspects of a security function, but the well runs deeper. The implications of Taleb’s concepts apply to the security practitioner as well as to the security function and the tasks it executes. Some of the accepted cornerstones of a security function, such as risk management, are placed on the altar.
The Industry
How would we describe the security industry? Would it be fragile, or antifragile? I would suggest it is fragile. It is not able to withstand volatility very well and it places practitioners under immense pressure when there are black swan events. We’ll place to one side that security tends to focus on the wrong problems (we should choose better ones), and the issue of personal boundaries that practitioners should adhere to. Rather, we can consider that when there is significant volatility the sector jumps to respond in an undisciplined manner. But it fails to learn or improve, lurching from SolarWinds to Log4j and onwards having not learnt a damn thing. This, though, is the environment that is created for the industry. If we look to Zimbardo and the Stanford Prison Experiment, what is clear is that people within an environment will conform themselves to the expectations of that environment. And it’s in that environment that we find our security function.
Risk
“Fragility can be measured; risk is not measurable” is the strong proclamation made by Taleb. And there is good reason for this claim. The non-deterministic nature of randomness means that prediction is impossible with any degree of fidelity: infrequent, severe events aren’t predictable. We can look to risk approaches like FAIR and see the current state of thinking in the industry. Taleb talks about complex systems and how their interdependencies severely compromise our ability to understand them. Risk models tend to repackage the gambler’s fallacy: they assume determinism because historical data is used to model future events.
We can look to the many recent failures of predictive models to know that they are nothing more than speculation. How many accurately predicted the outcome of the Brexit referendum, or the election of Trump? These are binary considerations where the number of possible outcomes is small. How well could we fare when attempting to predict an unknown event at an unknown time? We are talking about the black swan events Taleb refers to. The complexity problem alone slams the lid, and nails it shut, on risk as a credible discipline. To keep arguing otherwise is not rational conclusion but wilful disregard for the fundamental nature of existence.
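To make the historical-data problem concrete, here is a minimal sketch (my own illustration, not any vendor’s model or Taleb’s): fit a normal distribution to a short “history” of losses that were actually generated by a heavy-tailed process, then ask both the fitted model and the true process how likely a ten-times-average loss is. The exact numbers depend on the random seed, but the gap is typically orders of magnitude.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical "history": 30 years of annual losses drawn from a heavy-tailed
# (Pareto) process -- a stand-in for a world with rare, severe events.
true_losses = (rng.pareto(a=1.5, size=30) + 1) * 1_000_000

# A naive risk model fits a normal distribution to that history and uses it
# to estimate the chance of a loss ten times the historical mean.
mu, sigma = true_losses.mean(), true_losses.std()
threshold = 10 * mu
p_model = 1 - stats.norm.cdf(threshold, loc=mu, scale=sigma)

# Empirical frequency under the true heavy-tailed process, by brute force.
simulated = (rng.pareto(a=1.5, size=1_000_000) + 1) * 1_000_000
p_true = (simulated > threshold).mean()

print(f"Model's estimate of a 10x-mean loss: {p_model:.2e}")
print(f"Frequency under the true process:    {p_true:.2e}")
```

The fitted model isn’t wrong about the history it saw; it is wrong about the tail it never saw, which is exactly where the damage lives.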
There is a paradox, however. Taleb describes Seneca’s barbell, which promotes antifragility as needing a balance between low-risk pursuits and high-risk ones. But not a middle ground made wholly of moderate risk, as this does not give the requisite bimodality to become antifragile. So there is to be a minimisation of exposure on one side alongside an appetite to take smaller risks in higher volume on the other. Would these be the risks that cannot be measured, I wonder? I get what Taleb is pointing to, but the incoherence in the use of language is frustrating.
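Still, the bimodality can be made concrete with a toy example (my numbers and scenario, not Taleb’s arithmetic): compare a barbelled split against a uniformly “moderate” exposure, under a severe shock and under a windfall.

```python
# Toy illustration of the barbell: the same total exposure split two ways,
# then hit by a severe shock and by a windfall. Parameters are illustrative.

def outcome(safe, risky, shock_loss, windfall_gain):
    """Return (value after shock, value after windfall) for a split position.
    The safe portion is assumed untouched by the shock; the risky portion
    takes the full loss or the full gain."""
    after_shock = safe + risky * (1 - shock_loss)
    after_windfall = safe + risky * (1 + windfall_gain)
    return after_shock, after_windfall

# Barbell: 90% in something very safe, 10% in small, volatile bets.
barbell = outcome(safe=0.9, risky=0.1, shock_loss=1.0, windfall_gain=10.0)

# "Moderate": everything in middling risk, which takes half the shock
# and captures only a fraction of the upside.
moderate = outcome(safe=0.0, risky=1.0, shock_loss=0.5, windfall_gain=0.5)

print(f"Barbell  -> shock: {barbell[0]:.2f}, windfall: {barbell[1]:.2f}")
print(f"Moderate -> shock: {moderate[0]:.2f}, windfall: {moderate[1]:.2f}")
# Barbell: worst case bounded at 0.90, upside 2.00.
# Moderate: worst case 0.50, upside only 1.50.
```

The barbell’s downside is capped by construction while its upside stays open; the moderate position caps neither.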
All of this leaves us in an uneasy place when discussing how it can be applied within a security function. For many security functions, risk is the fundamental basis under which they operate. A shift from viewing risk to viewing fragility would be seismic. It would mean that we aren’t looking at what could happen; we are looking at problems that demand resolution. A fragile system, process, or structure requires immediate fixing. As Taleb rightly states, “a system built on the illusion of understanding probability is bound to collapse”. Isn’t this what we have done with risk management? Isn’t this why we keep failing?
Simplicity, Optionality, and the Barbell
Taleb introduces a triad of sorts (yay), but these are concepts that are probably more useful when placed together as implementable constructs within a security function. As a function we make assessments of IT systems, business processes, and organisational structures, among many other things. If we look to understand the fragility of systems and processes, we can ask whether they demonstrate simplicity, optionality, or the barbell.
As a specific example, we can apply this to assessing solution architecture from a security perspective. We can see whether that architecture is simple or demonstrates design economy. We might seek to understand whether we are duplicating functionality, sending information across boundaries where it’s not necessary, or relying on ongoing manual intervention for maintenance. We can also ask whether the design reduces our optionality: do tool choices lock us in to a specific vendor or product set? Optionality gives us the means to respond to volatility because it leaves paths open. We might consider simplicity and optionality to be underpinned by the barbell, which reduces exposure on one side and maximises benefit on the other; a sketch of how such a review might be made repeatable follows below.
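Purely as an illustration (the questions, names, and structure below are my own construction, not a published standard), those prompts could be captured as a simple checklist that flags anything pointing towards fragility:

```python
from dataclasses import dataclass

# Hypothetical fragility checklist for reviewing a solution architecture.
# Questions and scoring are illustrative only.

@dataclass
class ArchitectureReview:
    duplicates_existing_functionality: bool      # simplicity
    crosses_unnecessary_boundaries: bool         # simplicity
    needs_recurring_manual_maintenance: bool     # simplicity
    locked_to_single_vendor: bool                # optionality
    hard_to_reverse_decisions: bool              # barbell downside
    small_reversible_experiments_possible: bool  # barbell upside

    def fragility_flags(self) -> list[str]:
        """Return the name of every answer that points towards fragility."""
        flags = []
        if self.duplicates_existing_functionality:
            flags.append("duplicated functionality")
        if self.crosses_unnecessary_boundaries:
            flags.append("unnecessary data boundary crossings")
        if self.needs_recurring_manual_maintenance:
            flags.append("recurring manual maintenance")
        if self.locked_to_single_vendor:
            flags.append("vendor lock-in reduces optionality")
        if self.hard_to_reverse_decisions:
            flags.append("irreversible design decisions")
        if not self.small_reversible_experiments_possible:
            flags.append("no room for small, reversible experiments")
        return flags


review = ArchitectureReview(
    duplicates_existing_functionality=False,
    crosses_unnecessary_boundaries=True,
    needs_recurring_manual_maintenance=True,
    locked_to_single_vendor=True,
    hard_to_reverse_decisions=False,
    small_reversible_experiments_possible=True,
)
print(review.fragility_flags())
```

The value is not in the score; it is in forcing the simplicity, optionality, and barbell questions to be asked on every review.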
There are many ways these concepts can be applied to processes and systems, and by understanding these aspects, fragility becomes something we can grasp with some degree of intuition. The idea can be applied throughout the layers of an organisation: the organisational structure, where we might take Garratt’s self-learning board to be antifragile; business processes, where we might see data-driven error correction and optimisation such as Lean Six Sigma; or incident management, where we seek to understand the root causes of issues. Some of these might be robust rather than antifragile, but they are not fragile unless the interactions between them create layered complexity. These methods are designed to accommodate failure and have a built-in drive to improve as an outcome of experienced volatility.
We could take stressors on processes, or experienced incidents, to be the required input that drives refinement, which speaks to Taleb’s point. In the case of incidents (security or otherwise), we don’t need to know when they will arrive; rather, we need to be ready to respond. This is non-deterministic. Incident managers don’t make predictions about how many disruptive incidents they will need to handle; they aren’t encumbered by the same hubris as risk managers. This is an inversion of domain dependence, where we fail to see problems in our own practice by not learning from others.
Adversity and the Practitioner
Taleb argues that there are biological reasons why randomness and stressors promote antifragility. The selfish gene is given by way of example of how biological evolution works in response to random variables. There are snuck premises, or implied assumptions, in his argument:
1. Evolution is progress towards antifragility, and
2. It is the fittest that survive.
But there are many creatures that are so specialised through the evolutionary process that they can only exist in a specific time and place. They are fragile. You might go as far as to say that less complex creatures, such as thermophilic bacteria, are better examples of antifragility, as they truly thrive under extreme stress and environmental volatility; but you might also state that they are fragile, as they fail outside their specific context. The invocation of survival of the fittest is a poor one to make, because it’s incorrect. What Taleb is invoking is the social Darwinism of the ’80s, the mindset of Gordon Gekko. To paraphrase Darwin, it is survival of the most adaptable to change, which is not what Taleb draws on to make his point. If we lean into critical thinking, we can see this line of argument is built on non sequitur and false premise. But if we accept his analogy in good faith, we can acknowledge the point he is trying to make.
Taleb makes the inference that it is adversity that promotes growth. Our struggles determine our success, as Mark Manson might describe it, or the obstacle in the path becomes the path, as Ryan Holiday might articulate it. They are both speaking to the same fundamental truth: humanity is ill-equipped to handle a peaceful existence; disturbance is required in some capacity. In an Adlerian sense, how we react to volatility is not dependent on what has come before, and we have the agency to choose.
The asymmetry of Seneca’s barbell, as articulated by Taleb, essentially speaks to reducing what there is to lose whilst maximising what can be gained. This lends itself to interpersonal relationships, where asymmetry is typical; even within peer groups a hierarchy will form for many reasons. We understand from Adlerian psychology that achieving anything cannot be done in isolation, so it would be optimal to encourage relationships where we have little to lose from holding them yet may gain significant future benefit. This speaks to the concept of optionality too.
So how do we embed this in a security function? We can create the environment to which the function will conform. By removing the crutch of compliance to frameworks and by promoting originality based on critical thinking, we can promote fulfilment through solving problems. This has the benefit of promoting growth within practitioners. The practitioners themselves will become antifragile, and their contribution to the organisation will be less susceptible to fragility. Taleb describes curiosity as antifragile, yet I find that many practitioners have an overreliance on frameworks and failed procedures. Curiosity is the path to truth, and truth is one of the fundamental values from which we should be operating as practitioners.
If we ourselves are able to be effective leaders, then those whom we lead will emulate our behaviour. But then we must be permissive of volatility, to a point, whilst being seen to be a paragon of stability. Machiavelli understood this incongruence between outward perception and action. The creative destruction Taleb references is reminiscent of Greene’s law of recreating yourself, or Maxwell’s law of sacrifice, or even the pain of development that Manson discusses. The point is one that has been known since antiquity: suffering is a required part of the human experience.
Neomania
The ongoing march to version 2.0 is an affliction of modernity. Within security I suspect this is partly the mindset of IT and partly a byproduct of how we perceive ourselves to be ‘mercenaries in an international geopolitical conflict between nation states’ while we thumb through our obligatory copy of Sun Tzu’s The Art of War. We covet the latest and greatest technology as we set the paradigm that we are in an arms race with the ‘hackers’, yet unironically complain as they keep compromising systems with the same decades-old SQL injection. Failing to learn is fragility.
Taleb discusses how, in the modern world, increasing technological knowledge is making things a lot more unpredictable. This is the same conclusion we reach if we look back to cybernetics and the work of Norbert Wiener. Although Taleb is casually dismissive of Wiener, he misses the point that they are both speaking to the increasing entropy embedded within increasing complexity. As Wiener discusses, the more links there are in the chain of communication, the greater the entropy of the system. The order we think the modern world gives us is therefore illusory, because the entropy is increasing. As the randomness of events increases, the predictability of future events becomes even less certain. This is an obvious point when you work it through: each additional interaction or node in the network of connections increases the complexity exponentially.
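A back-of-the-envelope sketch of that growth (my arithmetic, not Wiener’s or Taleb’s): the number of pairwise communication links grows quadratically with the number of components, while the space of possible joint states, even if each component is merely ‘fine’ or ‘failed’, grows exponentially.

```python
# Growth of interconnection as components are added.
# Pairwise links grow quadratically; the space of possible joint states
# (each component either "fine" or "failed") grows exponentially.

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

def joint_states(n: int) -> int:
    return 2 ** n

for n in (5, 10, 20, 40):
    print(f"{n:>3} components: {pairwise_links(n):>4} links, "
          f"{joint_states(n):,} possible joint states")
```

By forty components there are 780 links and over a trillion joint states; no amount of historical data samples that space meaningfully.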
Taleb makes a claim about neomania that does not sit well with security: that ‘what survives must be good at serving some purpose’. Well, no. This is not the case in security, and it is predicated on a similar assumption to the evolution argument Taleb makes. It is prone to the same flawed premise of survival of the fittest rather than the correct citation of survival of the most adaptable to change. We can point to many examples of security doctrine that are dicey at best.
Security has built itself out of false premises that are hostile to critique. Here we see in-group preference, the language of the tribe, in play. Too many egos need to maintain the status quo; I am reminded of Maxwell and how he describes creating a dependence. And that dependence is on a limited set of ideas that creates a tight boundary and surrounds itself with ignorance.
But if we follow Taleb down the path, are we led to the uncomfortable conclusion that technological progress is that of the neomaniac, a path that leads to chaos? Are we in agreement with Theodore Kaczynski? Or perhaps we should embrace the creative destruction of Nietzsche and Marx?
Conclusion
Can the concept of antifragility as described by Taleb be implemented? No, it can’t. Conceptually, antifragility falls on its own sword; its criticism, when turned back on itself, means that it doesn’t stand. But it does one thing very well, and that is acknowledge the chaotic nature of existence; perhaps the chaotic nature of the reasoning is there to promote gains for the reader from the volatility of the author’s reasoning. I’m sure Taleb will have a big word for that, maybe meta-antifragility.
Can the constituent components of antifragility be implemented? Yes, they can, and they have high utility. But this will stand against the fundamental tenets of current security thinking. It will be nothing less than a pitched battle. But let’s introduce some volatility!