The Hyperreal Risk - Why Security is a Simulation
When risk heatmaps and FAIR models become the Hyperreal Gospel
Introduction
Something isn’t quite right. Something is . . . off.
Security is sick.
The subjective nature of security isn’t easily understood, so we classify, categorise, and rationalise the subjective. We seek to make it objective, but doing so creates a secondary reality that is increasingly fragmented from the real. Frameworks, models, and metrics have become props in a ritual of governance on the stage of security theatre.
Jean Baudrillard called this simulacra: descriptions of reality are replaced by signs, and the abstraction becomes the reality. As we layer concepts upon each other, they become increasingly fractured away from what is true.
Confected content piles up all around us. A deluge of AI-generated slop has submerged descriptions of reality, leaving a sea of synthetic floaters in its place.
Four orders of simulacra
Abstractions are necessary because reality is hard to conceptualise when we are dealing in raw data. We need shorthand ways to understand it, representations that can serve as heuristics to aid comprehension. Is there a point at which these no longer serve comprehension? I suggest there is, and that we have already passed it.
Jean Baudrillard proposed four orders of simulacra which describe the level of abstraction from reality. There is a conceptual distinction we need to make. The simulation (or hyperreality) is the false reality; simulacra are the components of which it is composed. Simulacra are representations or signs that no longer refer to reality, instead becoming realities in themselves.
First Order - The representation of reality.
Packet captures, system logs, or other direct measurements.
Second Order - The perversion of reality.
A risk heatmap (5×5 grid) that reduces thousands of vulnerabilities into coloured boxes.
Third Order - Masking the absence of reality (looks real but not tied to any original form).
A compliance dashboard claiming “69% compliant with [your framework of choice]”.
Quantitative models like Annualised Loss Expectancy (ALE) and Monte Carlo simulations in FAIR, presenting probability estimations as truth (even when grounded in calibration); a toy sketch of this compression follows the list.
Fourth Order - Pure simulacra (no relation to reality; the simulacra become the reality).
A GenAI policy bot confidently generating bullet points from last year’s policy, recycled endlessly in PowerPoint decks.
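To make the climb through the orders concrete, here is a minimal sketch of the compression at work. The findings, the 5×5 banding, and the loss parameters are all invented for illustration; the Monte Carlo loop is only a crude stand-in for a FAIR-style ALE calculation, not anyone’s production model.

```python
import random
from collections import Counter

random.seed(7)

# First order: direct measurements. Each finding is a raw record
# (hypothetical data, generated purely for illustration).
findings = [
    {"id": f"VULN-{n}", "likelihood": random.randint(1, 5), "impact": random.randint(1, 5)}
    for n in range(1, 2001)
]

# Second order: the 5x5 risk heatmap. Two thousand findings collapse
# into 25 coloured cells and the individual records disappear.
heatmap = Counter((f["likelihood"], f["impact"]) for f in findings)

def colour(likelihood, impact):
    score = likelihood * impact            # arbitrary banding, as on most grids
    return "red" if score >= 15 else "amber" if score >= 8 else "green"

# Third order: a single annualised loss figure. A toy Monte Carlo in the
# style of FAIR / ALE, with assumed frequency and magnitude ranges.
def expected_annual_loss(max_events=6, loss_low=10_000, loss_high=250_000, runs=10_000):
    losses = []
    for _ in range(runs):
        events = random.randint(0, max_events)
        losses.append(sum(random.uniform(loss_low, loss_high) for _ in range(events)))
    return sum(losses) / runs

worst_cell = max(heatmap, key=heatmap.get)
print(f"{len(findings)} findings -> 25 cells; busiest cell {worst_cell} "
      f"holds {heatmap[worst_cell]} findings ({colour(*worst_cell)})")
print(f"'Expected annual loss': £{expected_annual_loss():,.0f}")
```

Each step is defensible in isolation; the point is that nothing printed at the end can be traced back to the findings it claims to summarise.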
Ah ha! I hear you say, isn’t the concept of simulacra itself at least a third order simulacra? Well, yeah, it is. I get the recursive irony.
The simulacra form the hyperreality, or the simulation. In a hyperreality the simulacra become the reality: they do not need to relate to reality in any way, yet these models guide action and thought. We can apply this line of thought to security as a practice, and we are left with an unsettling conclusion.
Security is a simulation.
How is security a simulation?
It’s easy to see how security practice and risk management provide levels of abstraction that create simulacra. There is a well-understood maxim that metrics remove information.
Measurement - a representation of reality. It preserves the detail of what occurred: we received 10 new pieces of work and completed 5.
Metric - an abstraction of reality. Detail is compressed into a symbolic figure: we are 50% complete.
Where measurements are aggregated into metrics, nuance is lost. The result appears objective, but it is a statistic, and we all know how those can be abused. Metrics are a shadow of reality, a perversion of truth.
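A trivial sketch of that compression, using the numbers above: once the counts are folded into a percentage, the original workload cannot be recovered, and very different realities land on the same figure.

```python
# Measurement: preserves the detail of what occurred.
received, completed = 10, 5

# Metric: a lossy compression of the measurement into a single ratio.
metric = completed / received              # 0.5 -> "50% complete"

# Entirely different workloads collapse onto the same metric;
# the detail needed to tell them apart is gone.
assert completed / received == 2 / 4 == 500 / 1000
print(f"{completed} of {received} -> {metric:.0%} complete")
```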
Consider how risk grids reduce complexity into coloured squares, or how CVSS scores collapse exploitable conditions into a number. Even compliance checklists become tick-box completion metrics that flatten into nonsense (there are many other objections to compliance, but that’s for another time). Each of these is a metric, not a measurement. They obscure reality in the pursuit of expedience, which leaves us at the uncomfortable conclusion that what we are governing is not real.
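As a concrete illustration of the CVSS collapse, here is a cut-down re-implementation of the CVSS v3.1 base score, limited to scope-unchanged vectors and using constants from the public specification; treat it as a sketch, not a reference implementation. An unauthenticated remote data exposure and an unauthenticated remote denial of service describe entirely different exploitable conditions, yet both flatten to the same 7.5.

```python
# Metric weights from the CVSS v3.1 specification (scope-unchanged only).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value):
    # CVSS v3.1 "round up to one decimal", written to avoid float surprises.
    scaled = int(round(value * 100000))
    return scaled / 100000 if scaled % 10000 == 0 else (scaled // 10000 + 1) / 10

def base_score(av, ac, pr, ui, conf, integ, avail):
    """Simplified CVSS v3.1 base score for scope-unchanged vectors."""
    iss = 1 - (1 - WEIGHTS["CIA"][conf]) * (1 - WEIGHTS["CIA"][integ]) * (1 - WEIGHTS["CIA"][avail])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac] * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Different exploitable conditions, identical number.
data_exposure = base_score("N", "L", "N", "N", "H", "N", "N")       # confidentiality only
denial_of_service = base_score("N", "L", "N", "N", "N", "N", "H")   # availability only
print(data_exposure, denial_of_service)                             # both 7.5
```

A 7.5 in a risk register tells you neither which conditions must hold for exploitation nor what is actually lost when it happens.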
If we measure compliance to a framework, then we are representing conformity to an abstraction. The framework measures a perception of a protective state, typically in the form of an attestation. We have an abstraction of an abstraction of an abstraction. At no point in this model is there a requirement to deal in reality, to understand, or to comprehend.
Increasingly, practitioners are looking to AI tools to do their thinking for them. Summarise this, analyse that. Baudrillard drew criticism for fatalising . . . something I feel some affinity towards . . . I have to wonder if using AI tools to generate simulacra pushes us further away from comprehension and further towards irrelevance. If we consider that AI draws its responses from a sample of a distribution, then we observe a parallel with the metrics generated from compliance scores. Rationalisation using AI as a basis compounds these abstractions to the point of delusion.
The worst part is, we perverted ourselves and thought it was all very clever. Welcome to security, where protection became fiction.
Nothing’s real. Everything is a copy, of a copy, of a copy.
The Narrator, Fight Club
Entropy
The trajectory of technological development contains a certain inevitability. This isn’t like Moore’s Law; it is something else, more akin to a Manhattan Project. I am not talking about the destruction of an enemy, I am talking about the destruction of ourselves.
Baudrillard offers us an intriguing hypothesis within the notes of his book. He suggests that Information = Entropy. He contrasts this with cybernetic information theory, in which information is negative entropy and it is communication that causes entropy. Baudrillard argues that the information gained about a system is already a neutralisation of the true state of that system, so the information is itself a form of entropy.
What if both are correct but temporally separated? Cybernetics assumes a purity to information, but does not account for the entropy information accumulates over time through the processes that produce and transmit it. This leads inevitably to Baudrillard’s interpretation that information is entropy. A stark example is the proliferation of AI-generated slop polluting the internet, which shows that the transition between states does not have to be linear and can happen quickly, just as Baudrillard suggests of simulacra.
We can consider that information was once durable against entropy, but we now process that information through AI. That system creates new information which is itself a form of entropy. The creation of this new information perpetuates a vicious cycle that ends in the decay of meaning. Within security, we can consider the expert estimation of Factor Analysis of Information Risk (FAIR) a mechanism that creates exactly this kind of entropy. Although FAIR leans on Bayesian methods to incorporate new data and decrease entropy, it is still built on shaky foundations.
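A toy way to watch that vicious cycle, under assumptions that are entirely mine rather than Baudrillard’s or FAIR’s: treat the “real” system as a simple categorical distribution, let each generation learn only from a sample of the previous generation’s output (a crude analogue of models trained on their own slop, or of estimates layered on estimates), and measure how far each copy drifts from the original.

```python
import random
from collections import Counter

random.seed(1)
CATEGORIES = list("ABCDEFGH")

def sample(dist, n):
    """Draw n observations from a categorical distribution {category: probability}."""
    return random.choices(list(dist), weights=list(dist.values()), k=n)

def refit(observations):
    """'Train' the next generation: the empirical distribution of what it saw."""
    counts = Counter(observations)
    return {c: counts.get(c, 0) / len(observations) for c in CATEGORIES}

def drift(p, q):
    """Total variation distance: how far the copy sits from the original."""
    return 0.5 * sum(abs(p[c] - q[c]) for c in CATEGORIES)

# The "true" state of the system, invented for illustration.
original = {c: 1 / len(CATEGORIES) for c in CATEGORIES}

current = original
for generation in range(1, 11):
    # Each generation only ever sees a sample of the previous generation's output.
    current = refit(sample(current, 200))
    print(f"generation {generation:2d}: drift from reality = {drift(original, current):.3f}")
```

The drift is not guaranteed to grow on every step, but over enough generations categories vanish and can never return; the copy becomes its own reference point, which is roughly the point of the entropy footnote.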
Baudrillard makes a sharp observation by introducing the concept of entropy. Although he does not make the statement explicitly, the inference is clear: entropy is the degree of disorder in meaning. The progression of simulacra runs alongside the progression of entropy within information systems.
Conclusion
The creation of the simulacra becomes the template by which reality is formatted. When we adopt these abstractions as orthodoxy, the abstractions become the reality. Your truth is probably not the truth, because of these abstractions. Risk models, frameworks, and compliance checklists become the poison in the well, just as AI content poisons its own training data. The contemporary interpretation of security frameworks has been climbing through the orders of simulacra since the late 1970s.
Security is no longer protection. It is performance. Little by little, reality is stripped away and synthetic replacements have become a caricature that satirises our purpose. Security is on the cutting edge in this regard, having predated the masochistic direction of AI by many decades. A peculiar example of technology catching up to security.
The solution to this problem is to reorient around truth and not compromise enquiry for expedience. This means that frameworks, compliance, and other methods of abstraction should be treated with scepticism. They are better used as indicators for further investigation, not the final word on the state of protection. The veil needs to be lifted on the compromises we have made, and more rigour needs to be applied to our critical thinking and our practice.
The hyperreality that has perpetuated itself enables the repudiation of accountability; the simulacra are the shield we hide behind. As practitioners we must strip away the fabricated superstructure. We can talk about causes over symptoms, but this thinking needs to be turned inward. In critical thinking we challenge assumptions, consider alternative hypotheses, and assess source reliability. Applying this reasoning to our own methodologies is a beneficial starting point for resolving the problem.
For many, it’s hard to step away from simulacra. They are the prisoners stuck in Plato’s cave. To them, proclamations of reality are nothing more than the writhing of a lunatic who no longer belongs. Even if abstractions start as necessary heuristics, clinging to higher-order simulacra keeps the inhabitants of Plato’s cave well and truly chained. They have formed their own hyperreality that has replaced reality. Challenges to the hyperreality are heretical . . . and we all know what happens to heretics.
But let us remind ourselves that we don’t need to free the prisoners from Plato’s cave. We can step back into reality. Their limitations are not our limitations.


