Right Said Ted
A discussion about serial killers, AI, and our dysfunctional relationship with technology.
Introduction
The relationship we have with technology has become a problem.
Our times are punctuated with technological devices that seem to be fuelling division and polarisation. Complicit indifference has allowed poisonous ideas to be disseminated digitally. Technology has become ubiquitous, but should we have let it? There are some who suggest that we shouldn’t have.
Jonathan Haidt discusses the role of technology in adolescent development, calling it the “greatest destruction of human capital in history”. Haidt notes that Gen Z are more anxious, attributing this to a “great rewiring of childhood”. As Haidt describes, technology becomes an inhibitor, preventing the formative experiences of development. Children are not as engaged in play, imaginative activities, conflict resolution, understanding social cues to form relationships, or participating in long-standing communities. The executive functions of the pre-frontal cortex are disrupted, leaving them less able to stay on task or plan. Their social skills are diminished, and they are considered an anxious generation plagued with an epidemic of mental illness.
One of the most notable and controversial critics of modernity was Ted Kaczynski, a terrorist. His ideas can easily be dismissed as the ramblings of a madman, but within his manifesto and other writings there are astute observations and predictions that have come to pass. In recent years he has garnered an online following and is affectionately referred to as “Uncle Ted” in some circles.
As technologists who apply the latest and greatest from the toybox into organisations, have we considered what that does to the people in those organisations? And what of the population from which we recruit: how is organisational stability adversely affected by the wider adoption of technology, given how it impacts people? The true impact of novel technologies like AI on society will not just be a change in job roles or the automation of tasks; it will be a change in our relationship with technology.
What exactly is security?
Let’s revisit what we actually mean when we talk about security. As many more informed commentators than myself have noted, security derives from the Latin securitas, meaning freedom from care. This means that security is a feeling about the perceived state of protection. Security is not something that is objectively measurable, whereas protection is. Security is not locks, fences, firewalls, anti-virus, SIEMs, or EDRs. These are the ‘things’ we use for protection and are not security in and of themselves, but the application of protective measures gives a feeling of security. We are talking about emotional responses, so we must consider the human element. This distinction becomes important as we start to introduce other concepts.
The Terrorist
Ted Kaczynski was the terrorist known as the Unabomber. His actions led to the loss of three lives and injured many others between 1978 and 1995. It was not that he had killed a particularly significant number of people; it was the unpredictability of the attacks and the extended periods of inactivity that left a persistent threat in the minds of the American public. And this is threat in the real sense, not the security colloquialism: Kaczynski had the capability to cause harm and had signified the intention to do so. The Unabomber evaded the authorities, leaving little trace for investigators other than his signature method of making his devices. He went to significant lengths to make sure that no components were traceable.
The terror he was able to instil in a nation was palpable and led to one of the most expensive manhunts of all time. Kaczynski had a genius-level intellect; his IQ was said to have exceeded Einstein’s. While at Harvard, Kaczynski had participated in CIA experiments related to the controversial MK Ultra programme. These experiments investigated methods of mind control through the use of hallucinogenic drugs and psychological torture. Although it’s not clear exactly what occurred during these experiments, the experience, combined with the social isolation of having been several years younger than his peers, will have put Kaczynski on a path towards becoming the Unabomber. He became a recluse and radicalised himself in isolation from society years before it became an internet trend.
Kaczynski is still discussed today and prompted much discussion following his suicide in 2023. There is an enduring appeal to those who refer to him as “Uncle Ted”, perhaps because he is less abstruse than Jacques Ellul, or perhaps there is something accessible about a perspective articulated in such uncompromising terms. He hit on something that still permeates the zeitgeist, which might be as simple as the increasing interest in doomsday scenarios and the destruction of humanity. Kaczynski’s perspective is all the more notable given it came from a time before the internet was ubiquitous and before the mass adoption of social media.
Motives of a terrorist
Kaczynski’s contention is best described in the opening paragraph of the manifesto Industrial Society and Its Future, which was printed in The Washington Post in 1995.
1. The Industrial Revolution and its consequences have been a disaster for the human race. They have greatly increased the life-expectancy of those of us who live in “advanced” countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities, have led to widespread psychological suffering (in the Third World to physical suffering as well) and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in “advanced” countries.
Industrial Society and Its Future, Theodore Kaczynski
The manifesto contained a pedantic use of the phrase “eat your cake and have it too”, corrected from the common “have your cake and eat it”. His brother recognised this turn of phrase as identifiably Kaczynski’s, and that recognition ultimately led to his capture.
We can distil Kaczynski’s ideas to a few main contentions.
The development of technology has led to psychological suffering and a lack of fulfilment.
This is predicated on two concepts he outlines: surrogate activities and disengagement from the ‘power process’. The proposition further asserts that this leads to a degradation of freedom. He contends that the changes in behaviour, and in how communities are structured to uphold a technological society, are the key factors in psychological suffering.
The continued development of technology will worsen the situation.
The further technology progresses, the more conditions will deteriorate and the greater the impact of psychological suffering.
The elimination of the modern technological society is required to avert disaster.
The conclusion drawn from the first two contentions gives rise to the justification of violence and intimidation to achieve political change, viz. the very definition of the term terrorism.
We can grant that Kaczynski’s diagnosis is correct, yet we must remain sceptical of the prescription. An overthrow of the technological order and the collapse of society would lead to enormous levels of human suffering. Kaczynski’s proposition is this: exchange the current conditions for a set of new conditions that lead to different outcomes over a longer timeline. These new conditions, he claims, will re-engage humanity with its nature and reduce psychological and physical suffering.
It is widely accepted that there is an ongoing mental health crisis, particularly among Gen Z, as described by Haidt. This gives credence to Kaczynski’s view that the relationship between people and technology is a detrimental one. We might consider that feelings of security are adversely impacted by the unstable psychological states that result from technology adoption. Given Kaczynski’s framing of the problem, we can infer that reduced use of technology would lead to healthier psychological states and a greater sense of security, even if the objective level of protection is reduced. This might seem counter-intuitive, however we must remember that security is a subjective interpretation of the perceived state of protection. Too much protection can decrease feelings of security as freedom is removed.
The power process and surrogate activities
There are two concepts outlined by Kaczynski that are relevant to this conversation. The first is the power process, which is defined as having four elements: goal, effort, attainment of the goal, and autonomy. Engagement with the power process is necessary for psychological health in Kaczynski’s world view, and it is easy to see how an increasing dependence on technology would incrementally preclude us from that process.
Obviously Kaczynski relates the power process to basic physical needs such as food and shelter, however the concept is not constrained to basic needs. This is where the concept of surrogate activities comes into the frame. Kaczynski argues that in a technological society basic needs are largely met, so people create other goals in order to maintain some level of psychological health. The examples given are “scientific work, athletic achievement, humanitarian work, artistic and literary creation, climbing the corporate ladder, acquisition of money and material goods far beyond the point at which they cease to give any additional physical satisfaction”.
Clearly Kaczynski’s proposition is untenable, as it fails to acknowledge that human existence takes place within the context of a community, where surrogate activities are an inevitable by-product. Kaczynski appears to be working from a conceptualisation of individual survival in isolation from a community, which is incongruent with his critique of ‘leftism’. In some respects Kaczynski fetishises a natural mode of living and is dismissive of any benefit brought about by technological advancement.
Kaczynski draws from the idealised conceptualisation of nature seen within liberal philosophy; he is providing a critique from within the same frame of reference. Where he differs is that he rejects the state-of-nature thought experiment that characterises liberal philosophy, advocating instead for small agricultural communities oriented around families as opposed to brutal individualism. He might be what we would now describe as a postliberal. This is interesting, as he advocates a position that laments the loss of small social groups yet lived in isolation himself.
AI and the neo-luddites
Given the emergence of novel technology such as Artificial Intelligence, it’s reasonable to give some thought as to how our relationship with technology impacts our feelings of security. Kaczynski’s views can be described as neo-luddite, and his view of technology is one of disdain. In this view, the use of technology definitionally degrades part of the human connection to the natural world, and as we increase usage we sacrifice some of our agency.
I admit, there is some irony that I speak as a technologist who finds most endeavours towards AI to be somewhat vulgar. To some, I might be considered a neo-luddite. My counterpoint to that would be that proponents of such technology have consistently failed to demonstrate due care, skill, and diligence when developing and deploying these technologies. But I ask, when you outsource your thinking to an AI tool, what does that say about the quality of your thinking in the first place?
Some egregious uses of this technology include the careless and often unlawful use of data to train these tools. Many companies in the UK will use their customer data, often including special category data, under the processing basis of legitimate interest. They are stretching what is acceptable beyond what is lawful, especially given that the EU AI Act had to soften legitimate interest to cover these tools, and that does not apply in the UK in the same way. What legitimate interest exists, exactly, in commodifying someone’s data for financial gain? I would go as far as to call this practice negligence. It’s hard to justify a legitimate use for someone’s data to satisfy vanity projects, yet here we are.
Neomania - The ongoing march to destruction
Nassim Taleb coined the term ‘Neomania’ to articulate the blind desire for the next technological advancement, and this is what we see. Kaczynski would describe it as science marching on for its own sake without regard for its impact on humanity. Taleb’s concept of Neomania is similar, although not identical, to Kaczynski’s. Neomania is about having the latest and greatest version of the gadgets and gizmos that typify modernity; Taleb describes it in a way that moves the focus away from utility and towards vanity. From the Kaczynski perspective, it is this feature of Neomania that removes people from the power process, as coveting the latest technological innovation is little more than a surrogate activity.
Neomania is the inevitable consequence of the liberal paradigm, which sits on the assumption that progress is good. It’s not by accident that the industrial revolution occurred alongside the emergence of liberal thought. The liberal paradigm is conflicted and incoherent at the most fundamental level: it asserts that freedom and equality can co-exist. This becomes a problem when we attempt to apply these ideas to how we validate novel technology. If you have read the rule sets of these technologies and have a functional understanding of virtue ethics, you will know this is true.
There is a huge appetite for AI tools to be deployed within organisations, yet when challenged as to why they want this technology you are met with the blank look of a departed mind awaiting its digital replacement. In and of itself, ‘Neomania’ denotes a lack of purpose and signifies a passive malaise in those afflicted. AI adoption is all too often for its own sake and not to address a business need. If you need an AI to read and summarise a document… you are lost.
Progress is an ever-present component of modern life: the continued march of incremental updates, the next thing to woo our adorations. This week it is AI or big data, last week it was the cloud. Who remembers blockchain? The pace of these technological improvements has been increasing for the last two hundred years, since the industrial revolution fundamentally changed the foundations of the societies humans live in. We moved from our rural communities to the city, our life expectancy increased, we became literate.
Perhaps there is something in Taleb’s conceptualisation of Anti-Fragility that might explain the ongoing march of progress. As we progress technologically, we become more encumbered by the system in which we exist, as Kaczynski explains. In the context of Anti-Fragility, this reduces our optionality, increasing our fragility and diminishing our ability to respond to adverse events. If we take at face value that Kaczynski is correct that feelings of security are related to our ability to take care of ourselves, then the increased adoption of technology is disempowering by virtue of its construct. We might consider freedom a proxy for optionality: a decrease in optionality is a decrease in freedom.
Conclusion
Am I against AI? No. It is just another technology.
Am I against the irresponsible use of AI? Most certainly.
Do I see AI being responsibly used? Rarely.
I have previously made the point that the best argument for the adoption of AI technology is to maintain economic productivity as we face declining global populations. Our interaction with this technology needs to be examined and considered. We know that social media causes harm at key developmental stages, and we are seeing the consequences in those now entering the workforce. The use of AI is set to exacerbate, not alleviate, this problem.
Technology gets applied for technology’s sake, but we need to consider what the implications of these decisions are. Kaczynski is broadly correct in how he identified the consequences of unhealthy relationships with technology, and his arguments are well reasoned. His solution to these problems is not helpful and reflects a highly destructive resolution that is totalising and absolute.
If we wish to avert the problems Kaczynski highlights, we need to act more responsibly with technology like AI. The industry talks about AI ethics, but it is mostly vacuous HR patter that fails to get to the root of the problem.
We are here as security practitioners to help protect organisations. The introduction of technologies like AI can harm organisations. Considering the impacts of these types of technology falls very explicitly within the purview of security practice. Some disagree, but they are wrong.