Why does FAIR fail to address the issue of fundamental organisational protection?
Originally published 20/04/23
“You can hear them sigh and wish to die, You can see them wink the other eye, At the man who broke the bank at Monte Carlo.”
Introduction
I will outline a number of arguments challenging the FAIR methodology and its failure to address the fundamental issue of organisational protection. The focus will be on FAIR as defined in ‘Measuring and Managing Information Risk: A FAIR Approach’.
Firstly, I’ll define what I mean by the fundamental issue of organisational protection. Consider what protection actually is: the act of preventing damage, loss, injury, or harm. We are in that ballpark. But we can extend this to consider how we would respond in the event of harm occurring to the organisation.
I will give you the steel man position for FAIR. The following definition is derived from the text itself.
“The purpose of FAIR is to provide decision-makers with the best possible information about loss exposure and their options for dealing with it. FAIR provides a standardised methodology so that risk assessment is consistent across risk factors, repeatable, and defensible”.
What the authors of the FAIR methodology do well is identify a number of problems that exist within the industry, specifically that the terms we use are inconsistent. They accurately point out that decisions will be made irrespective of whether or not a methodology is in place. They have attempted to apply a methodology to risk assessment and break the problem into constituent components. The separation of primary and secondary risk is notable, and useful. They have made efforts to rationalise the irrational, and whilst I appreciate their efforts, there are problems that need to be discussed.
H₀: Den = μ
Conceptual problems
The practice of FAIR can be summarised as gathering or creating data, processing it through a model, and making a forecast about the loss expectancy that may occur. The first and obvious point is that none of this says anything about protecting the organisation or responding to harm. But let’s unpack the methodology.
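To make the shape of that pipeline concrete, here is a minimal sketch of the sort of workflow FAIR-style tooling tends to follow: expert-estimated ranges for event frequency and per-event loss are fed into a simulation and summarised as an annualised loss figure. The function names, ranges, and distribution choices are my own illustrative assumptions, not the book’s worked example.

```python
import random

def simulate_annual_loss(freq_low, freq_high, loss_low, loss_high, runs=10_000):
    """Crude FAIR-style forecast: sample an annual event count and per-event
    losses from expert-estimated ranges, and return the mean simulated annual
    loss. The uniform/triangular choices are my own illustrative assumptions."""
    totals = []
    for _ in range(runs):
        events = round(random.uniform(freq_low, freq_high))   # loss event frequency
        totals.append(sum(random.triangular(loss_low, loss_high) for _ in range(events)))
    return sum(totals) / runs

# Expert-estimated inputs: made-up numbers, which is rather the point.
print(f"Annualised loss expectancy: £{simulate_annual_loss(0, 4, 10_000, 250_000):,.0f}")
```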
When FAIR introduces ‘scenario building’, the model betrays a fundamental problem. It describes an ‘asset at risk’. This is in the singular, and that matters, because it is one of the major objections to this framework. Embedded within it is the assumption that the asset estate is known and can therefore be modelled, but we know from how organisations actually operate that asset identification is an endemic problem, let alone having enough information about each asset to feed into the model. This assumption alone means the model is always limited by the practitioner’s understanding of the landscape. That in itself would be enough to repudiate FAIR as a legitimate method of understanding organisational risk, especially in the context of modern approaches including ephemeral infrastructure and the sticky problem of shadow IT.
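To put a crude number on that limitation, using entirely hypothetical figures: whatever exposure the model reports can only ever sum over the assets you know about, so the headline figure silently scales with your asset-discovery coverage.

```python
# Entirely hypothetical numbers: the modelled figure can only ever sum over
# the assets you actually know about.
known_assets = 700             # assets in the register and therefore in the model
estimated_estate = 1_000       # what discovery and shadow-IT audits suggest exists
modelled_exposure = 3_500_000  # £ figure the model produces over known assets only

coverage = known_assets / estimated_estate
print(f"Asset coverage: {coverage:.0%}")
print(f"Naive extrapolation of unmodelled exposure: £{modelled_exposure * (1 / coverage - 1):,.0f}")
```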
The asset-oriented construct also highlights another issue with the methodology: IT centricity. It doesn’t speak to business risks; it speaks at the asset level. This is a bottom-up approach where a top-down approach would be better: start with the business objectives, move through strategy, then to the processes that support that strategy, and only then down to asset-level considerations. This is IT telling the business what to care about, using disingenuous framing to promote its own agenda. The asset-level approach also makes it unworkable at any scale beyond a very local setting, and it is resource intensive.
On the point of modelling single assets, we can look to the established disciplines from which FAIR seemingly draws, such as methods within epidemiology and biostatistics. FAIR misapplies these concepts in an overly simplified way. We would not apply an incidence rate, for example, to a singular ‘thing’; we would consider how many instances we might expect within the population being assessed. You would not expect a doctor to say to a patient ‘you have a 40% chance of contracting XYZ condition’. They might say that ‘from a population of 100 people with similar co-morbidities, we would expect 40 to get XYZ condition, but we would not know which 40’. These are fundamentally different statements: within that population there are likely 40 people with a 100% chance and 60 with a 0% chance. A population-level metric cannot be reduced to the singular form in the way FAIR seemingly requires. To then aggregate this into a financial figure is misleading for decision makers and makes no meaningful statement about where harm might occur.
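To illustrate the distinction with deliberately artificial numbers: a population in which 40 individuals are certain to be affected and 60 are certain not to be still shows a 40% incidence, yet no individual in it carries a 40% probability.

```python
# Deliberately artificial population: 40 members with probability 1.0 of the
# event, 60 with probability 0.0. The population-level incidence is 40%,
# yet "you have a 40% chance" is true of nobody in it.
population = [1.0] * 40 + [0.0] * 60

incidence = sum(population) / len(population)
individuals_with_40pct_risk = sum(1 for p in population if p == 0.4)

print(f"Population incidence: {incidence:.0%}")                                # 40%
print(f"Individuals with a 40% personal risk: {individuals_with_40pct_risk}")  # 0
```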
We see a similar theme in investment forecasting. It is abstracted into making forecasts about asset classes rather than individual investments; a portfolio is being modelled, not a single asset. Modern portfolio theory was important because it established the relationship between risk and reward, and some QRM methods employ tools such as the Efficient Frontier to optimise that relationship, but FAIR doesn’t reach that level of sophistication. Investment forecasting also has strict limits, under COBS rules, on how the models it presents to customers may be represented. How often have you seen ‘past performance is not a reliable indicator of future performance’ on an investment illustration? When did you ever see such a disclaimer on a risk model?
Now, you might say that FAIR can accommodate classes of assets (although the book discourages this), but we know this is not the intended application by virtue of the scenarios presented in its examples. Singular is the order of the day, and the methodology applied is wholly inappropriate. This problem then cascades throughout the model as it is aggregated upward to higher levels of abstraction to represent a holistic picture in financial terms. The problems are masked through abstraction, which lends a feigned credibility, again giving decision makers no meaningful basis on which to determine appropriate means of protection.
But the big betrayal is what it doesn’t contain. There is no reference to retrospectively checking the forecast for accuracy. Incident management, process efficiency (lean six sigma), threat modelling, and development methodologies all have root cause analysis, retrospectives, hypothesis testing, or at least ask ‘did we do a good job?’. No mechanism to determine whether the forecasts were right or wrong is present within FAIR. The consequence is that there is no emphasis on refining the model after the point of publication, and no control group, say, to check outputs against an industry-level perspective. This makes the output unverifiable and places it above scrutiny after the fact.
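None of this would be exotic to build. The sketch below shows the sort of minimal retrospective check that is conspicuously absent, assuming (hypothetically) that you kept both the forecast ranges and the losses that actually materialised; the figures are made up.

```python
# Hypothetical after-the-fact check: did realised annual losses fall inside
# the ranges the model forecast? FAIR prescribes nothing like this.
def forecast_hit_rate(forecasts, actuals):
    """forecasts: list of (low, high) £ ranges; actuals: realised £ losses."""
    hits = sum(1 for (low, high), actual in zip(forecasts, actuals) if low <= actual <= high)
    return hits / len(forecasts)

forecasts = [(50_000, 400_000), (20_000, 150_000), (10_000, 90_000)]  # made-up prior-year forecasts
actuals   = [620_000, 35_000, 0]                                      # made-up realised losses

print(f"Forecast ranges that contained reality: {forecast_hit_rate(forecasts, actuals):.0%}")
```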
Data Fabrication
You gotta be fucking kidding me! ‘Expert estimation’?
This is a real problem with FAIR insofar as it actively advocates having a punt at quantitative measures where data isn’t available. The authors seem a touch salty about criticism that they ‘make up numbers’ due to a lack of data. They lean on the legitimacy of using SME knowledge to fill the gap, but they make a straw man of the criticism, stating that criticising the use of an SME’s opinion is hypocritical because it is another SME levelling the criticism. They make no effort to challenge the criticism itself and mischaracterise the point of the critique, which is that they claim this is a robust model based on data while, with no apparent sense of irony, defending making shit up.
This is where I have a problem with Freund and Jones. They have sections within their text to counter perceived criticisms, and this is how you know they are not throwing straight dice. They strawman arguments repeatedly, using false dichotomies to dismiss the criticisms. In fairness, they do acknowledge that expert estimation is subject to cognitive biases, although they don’t provide any meaningful guidance around this beyond referencing it.
Sampling problem
The challenge here is that historical data is limited in utility, especially when viewing the problem from an asset-oriented perspective. How well is the risk of an event occurring represented by data from previous software versions, vulnerabilities that have since been patched, and new forms of exploit that aren’t yet known? There is no valid comparison to be made from historical data to the current situation unless you aren’t updating your estate, which, to be frank, means you have bigger problems. Even where data is available, you aren’t comparing apples with apples, and this speaks to a problem with the logic of the methodology. Would you use data from an army carrying muskets to establish the likely battle losses for an army carrying M16s? No, the idea is ludicrous, but this is exactly the problem with using historical data about IT estates.
Fallacy
The entire enterprise of FAIR is predicated on a logical fallacy, the Gambler’s fallacy, also known as the Monte Carlo fallacy (a delicious bit of irony). That is, where data is available and not fabricated, it is not representative. FAIR assumes there is validity in using historical data to predict anticipated losses and inform decisions, but this is not a given: your assets do not necessarily hold the same level of risk as described in historical or comparator data, nor can you assume that criminality will be comparable, given its non-deterministic nature.
But FAIR creates a wonderland of fiction atop a logical fallacy. Maybe we should call it Phallus in Wonderland?
Monte Carlo
The Monte Carlo model is an integral part of the FAIR methodology. There are many problems with this method.
The methodology requires that the underlying distribution of the data is known. I could mention standard deviations, but you might take that to mean you are ordinarily wronguns. That aside, FAIR permits, and even advocates, the creation of data, so the distribution of that data is on dodgy ground, and the Monte Carlo method should be ruled out on that basis alone. Monte Carlo models a probabilistic outcome derived from the population distribution. FAIR makes no effort to describe distribution models beyond an inadequate description of a normal distribution with skew. References are made to other books, but not where distributions are discussed, and this subverts the main generative output of the probabilistic model.
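To show why glossing over the distribution matters, here is a hedged sketch of my own construction, not the book’s: the same three-point ‘expert estimate’ of per-event loss yields materially different simulated annual figures depending on which distribution you quietly assume behind it.

```python
import random

def annualised_loss(sample_loss, events_per_year=2, runs=20_000):
    """Mean simulated annual loss given a per-event loss sampler."""
    return sum(sum(sample_loss() for _ in range(events_per_year)) for _ in range(runs)) / runs

low, mode, high = 10_000, 40_000, 500_000   # the same "expert estimate" in both cases

triangular = lambda: random.triangular(low, high, mode)
lognormal  = lambda: min(max(random.lognormvariate(10.6, 1.0), low), high)  # arbitrary heavy-tailed stand-in, clipped to the same range

print(f"Triangular assumption: £{annualised_loss(triangular):,.0f}")
print(f"Lognormal assumption:  £{annualised_loss(lognormal):,.0f}")
```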
The Monte Carlo method might be useful for understanding problems that are deterministic in nature, such as mortality and morbidity; criminality, however, is non-deterministic if you grant that individual autonomy, rather than structural societal issues, is the causal factor. Here we could veer off into a discussion about whether free will exists. The appropriateness of Monte Carlo is debatable, as its use must assume that free will does not exist if we relegate criminality to a deterministic factor.
Methods like Monte Carlo are not permissible for retail investment projections because they are probabilistic rather than deterministic in construct, and COBS rules require standardised deterministic projections, essentially pre-defined modelling scenarios that projections run through. This is for many reasons: it makes the projection repeatable as no random variables are used, it makes it defensible as the scenarios are consistent, and it can be reconstructed at a point in the future with the output replicated precisely. You could split hairs and claim that pseudo-random numbers could be used and the run would then be repeatable, but the point is not made in the text, and so my objection remains.
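For what it’s worth, the splitting-hairs version is trivial to demonstrate: seed the pseudo-random number generator and the ‘probabilistic’ output reproduces exactly. The sketch below assumes nothing beyond the standard library and made-up ranges; the point stands that the methodology itself doesn’t require it.

```python
import random

def run_forecast(seed, runs=10_000):
    rng = random.Random(seed)   # fixed seed makes the run reproducible
    return sum(rng.triangular(10_000, 500_000, 40_000) for _ in range(runs)) / runs

assert run_forecast(seed=42) == run_forecast(seed=42)   # identical output, every time
print(f"Reproducible forecast: £{run_forecast(seed=42):,.0f}")
```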
Whilst FAIR claims to be repeatable and defensible, and asserts that the use of Monte Carlo gives it credibility, it does exactly the opposite and undermines its claims at a fundamental level.
Threat Community (TCom)
Risk with RPG elements? I couldn’t let this slip without comment, it’s like top trumps . . . for c . . s
Ethical Issues
There are many ethical problems with deploying this kind of methodology within an organisation. We hold the highest virtue to be truth. This model does nothing to speak to truth or represent the real world. It is perception, opinion, and innuendo presented as fact. It claims a level of fidelity in the analysis that is absent and borrows reflected credibility from established disciplines.
This model relies on loss aversion bias to inform decisions about spending in a way that is disconnected from organisational objectives. Using mechanisms such as loss aversion bias amounts to leveraging fear. You can make an argument that these methods decrease the security of an organisation, as the perception becomes one mired in catastrophic prophecy. If the organisation proceeds to operate against a backdrop of perceived instability, then the perception may become reality and adversely affect how business is conducted.
As previously mentioned, this type of model assumes IT centricity, as it doesn’t speak to the organisational need but rather to the needs of the supporting functions. There is an assumed parity between IT cost and business cost that cannot be taken for granted. It becomes particularly problematic to present speculative information, based on poor data and methodology, and stack it up against more considered, planned business spending.
Conclusion
At this point it should be clear that the outputs of the model have nothing to say about protecting an organisation, because the basis of the projection is technically flawed, as I have outlined. The output therefore has little utility in determining how a protection approach should be defined, outside of relative comparisons, and it has nothing to say about how to respond to harm when it occurs. But the problem is more pernicious than that: it fails to understand what the priorities of the business are. Have you ever seen those included in a model?
These models require a significant investment of time to implement, time that could be spent doing actual work. The claim that this data informs better decision making does not stand up to scrutiny in my . . . err, 'estimation'.
A rebuttal of this argument will need to address the conceptual issues I have outlined with FAIR, the technical issues with FAIR, and the ethical issues with the application of FAIR. But these cannot be solved in its current conceptualisation. The apparent axioms on which it is predicated are incorrect. It is flawed at conception, flawed in execution, and misleading in practice.
But,
Does this do anything that protects the organisation?
No – the inverse is true, it distracts from activity that is protecting the organisation based on the capricious whims of whoever is doing the analysis.
Does this do anything to help respond to harm that has occurred?
No – it has nothing to say in this regard.
And to wrap up, let’s assess FAIR by its own standards. Seems fair, right?
Is it useful?
No – it is an incomplete model, with misapplied methodology.
Is it practical?
No – the asset-oriented approach assumes knowledge you likely don’t have, and time you probably don’t have either.
Are the results defensible?
No – the use of random number models and the disregard of the pre-requisites of the Monte Carlo method mean it’s unacceptable. Add to that the lack of retrospective validation and refinement of the model.
So, have fun running a FAIR practice, or as I will now be calling it, your probability density function.