In an increasingly automated industry, what is the place of values in the future of security?
Originally published 01/02/23
Introduction
"I am Holly, the ship’s computer, with an IQ of 6000. The same IQ as 6000 P.E. teachers."
Holly – Red Dwarf
Something is afoot: indications of a step change in the capability, applicability, and utility of automation technologies are all around us. The days of automation being restricted to repeatable processes are numbered. Artificial Intelligence (AI) and Machine Learning (ML) technology is starting to be packaged into Software as a Service (SaaS) products. Off-the-shelf software is starting to offer assisted and automated decision making in many areas, even contentious areas such as recruitment, where determinations about the suitability of applicants for a role are made by an AI. This domain is no longer the exclusive playground of actuaries in dark corners of the office, huddled over laptops with more RAM than a field of sheep.
Let’s just clarify terms. When I say AI, I am not talking about a true artificial intelligence; I am talking about an advanced technical model that takes inputs, processes them, and generates outputs. These aren’t sentient machines, and they never will be in the way humans understand it, owing to how our consciousness is rooted in our biology and physicality. But that is a whole separate topic.
Business functions and IT areas seem enamoured with these tools and are keen to maintain their relevance through the adoption of these technologies. The problem is that the people making decisions about the deployment of these technologies are driven by easily measurable, short-term benefits. Nuance does not sit easily in this context, yet we cannot discuss the implications of this technology without addressing how values inform our choices about what types of tooling are acceptable and in what context.
This is bleeding edge technology, and inevitably, those riding that crest will bleed. Little consideration has been given to the ethical or moral implications these technologies will bring. Granted, there is some nominal effort from the Alan Turing Institute and the Government Office for Artificial Intelligence; however, this is vacuous, aspirational, boilerplate nonsense. Most of the big tech companies, such as Microsoft, Google, Facebook, and IBM, have their own published AI ethics or principles, but these could easily be dismissed as too low resolution to be of practical use and rife with self-contradiction.
Talking about security as part of risk management for a moment, we need to explore the risks associated with these technologies. We are dealing with risk that is not well understood. The impact is more fundamental than ‘high’ or ‘critical’ vulnerabilities, or how much revenue will be lost. The impact runs across a longer timeline and speaks to the core structure of organisations: their values. It is the replacement of human decisions by automated ones, and the true implications of this are not knowable over a short duration.
I am proposing that, within the concept of values, there are certain categories we need to consider, along with how risk will apply to, or be presented by, each of them. I’m not saying these are definitive, but they are a reasonable set of starting considerations to explore further:
The values of the practitioner.
The ethics of the organisation.
The ethics of the software provider.
The morality of using data to create the AI.
The morality of applying this technology to customers.
The embedded values contained within the AI.
The morals and laws of wider society.
The Current Context
It’s important to provide some framing of the current level of thinking around AI Principles, be it from government bodies or big tech companies. They exclusively talk about bias, fairness, accountability, transparency, safety, or variants of these. These are not neutral concepts, as they are anchored in a set of values and assumptions. These ‘principles’ carry specific connotations rooted in the language of collectivist ideology, and this means there is an issue in the definition of terms. At the extremes you could consider the tools to be a Trojan horse of values, seeking to enter by stealth and usurp those of users and organisations, but I suspect that the reality is less sinister. The developers of these tools are well-meaning nitwits who have adopted what they think people want to see, or what is saleable in the marketplace.
The Alan Turing Institute acknowledges problems in the meaning of words when it states that there are ‘different ways to characterise or define fairness’, yet it fails to elaborate or provide a definition. How you define ‘fairness’ can fundamentally reframe how you perceive the moral implications of an automation solution. Ambiguity in definition leads me to conclude that consideration of the fundamental concepts is absent.
Yet these principles translate into how these automation systems will function and what output they will give. This may be directly, via processing of information by the AI or ML model, or, more likely in the shorter term, by informing how the software provider will further develop the AI or ML code or training data to deliver outcomes aligned to their expectations. Curation of training datasets, via alteration or omission, means that the information used to train the AI can no longer be considered a reflection of reality; it becomes a subjective set of information representing the intention of the person who curated it. This is not hyperbole. This is happening now. The output becomes a reflection, not of the world as we observe it, but of desire. It is how they think the world should be.
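To make the point about curation concrete, here is a hypothetical sketch in Python. The records, fields, and filtering rule are all invented for illustration; the point is simply that any statistic (or model) derived from the curated dataset reflects the curator’s omissions rather than what was observed.

```python
# Hypothetical illustration of curation by omission. The data and the
# filtering rule are invented; no real pipeline works exactly like this.
from collections import Counter

# Toy "observed reality": outcomes recorded against an arbitrary attribute.
observed = [
    {"attribute": "A", "outcome": 1}, {"attribute": "A", "outcome": 0},
    {"attribute": "A", "outcome": 0}, {"attribute": "B", "outcome": 1},
    {"attribute": "B", "outcome": 1}, {"attribute": "B", "outcome": 0},
]

# Curation step: drop the records that do not fit the desired picture.
curated = [r for r in observed if not (r["attribute"] == "A" and r["outcome"] == 1)]

def positive_rate(records, attribute):
    """Share of positive outcomes for one attribute value."""
    subset = [r["outcome"] for r in records if r["attribute"] == attribute]
    return sum(subset) / len(subset) if subset else float("nan")

for label, data in [("observed", observed), ("curated", curated)]:
    print(label,
          {a: round(positive_rate(data, a), 2) for a in ("A", "B")},
          Counter(r["attribute"] for r in data))
# Anything trained on `curated` inherits the curator's intention: the base
# rate for group A has been changed by omission, not by observation.
```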
As we adopt these technologies, the embedded values within them become part of the organisations we operate in. The point here is not to critique the tacit values contained within AI or ML solutions, although there is a lot to be critical of, but to highlight that they are there: they are integral to the processing and output of these tools, and they will have an impact when adopted by businesses. Given that they exist, it becomes incumbent on businesses to identify this and understand the ramifications.
The Practitioner
Having grounded values is a cornerstone requirement for any credible security practitioner. The current state of automation is of no consequence to the values a security practitioner holds. Considerations of right and wrong are agnostic of technology and create the axioms through which we understand and interpret the world. They relate to how we conduct ourselves, and how we govern our interactions with others. But values are more than words; it is understanding of the concepts that underpin the labels that is important. In a cultural context where words are redefined at the capricious whims of the screeching mob, understanding of concepts is more important than ever.
A security practitioner must reflect on their own values and morals and be able to recognise where values are implied by organisations or embedded within automation solutions. An absence of self-reflection leaves the practitioner vulnerable to thought reform: a personal risk that should be identified when the practitioner applies their own trade to themselves.
If we extrapolate into the future, we must conclude that risk moves away from the technical implementation of a tool and orients around the moral impact of automation on organisations, the people within those organisations, and the customers of those organisations. Consideration must be given to how the adoption of these tools reframes how a business conducts itself in the world.
The availability of these tools means we must consider how a security practitioner applies their skills. Rather depressingly, at the faintest whiff of a capable natural language AI, ChatGPT, the internet is ablaze with folks gleefully thinking about how they can use AI tools to fulfil day-to-day tasks. This could be to generate code, write articles, draft job descriptions, documents, or any number of other duties they are being paid to undertake. But there are many problems in delegating your thinking to these types of systems.
The most obvious issue with using these tools is the data they are trained on. Tools such as ChatGPT use lowest-common-denominator information to arrive at their conclusions. If a security practitioner uses these tools rather than considering a problem and divining a solution themselves, they are not providing their insight. By way of analogy, this is not upgrading from a handsaw to a table saw; this is throwing away your handsaw, getting a carpenter to do the job for you, and not checking their work.
A security practitioner using these AI tools within their role is applying constraints to themselves. And this is the rub: what it gives is also what it takes. They will suffer through lack of experience and lack of knowledge. They erode their ability to use creative and novel thinking to solve problems. Truth is the first virtue of thought, but what truth can be learned by automating the process of understanding? Only that they are workshy, perhaps. Security could be characterised as a knowledge-based practice, but it is more: it requires creativity and abstract thinking. Delegate these to a machine and you become nothing, or worse, contemptible.
At what point can a practitioner no longer claim to have ‘done the work’ and receive payment in good conscience when they are using tools such as ChatGPT, midjourney, or stability.ai? As it stands, it is not even clear that using these tools is legal, given that the training data includes huge amounts of copyrighted material. There are current legal challenges, and even if the UK and US courts find that AI-generated work is transformative and the training data is fair use, that does not change the fact that it is transformative work by the AI and not by the user. Those challenges also mean there is an outstanding question around the lawfulness of how the tools use their training data. It would be reasonable to conclude that using these tools to complete work, and receiving payment for it, is unethical.
The place of values for the practitioner is this. Deep understanding of values serves as a form of protection from external pressures that would corrupt the practitioner’s personal values. It gives the practitioner the insight that fulfilment is derived from a sense of responsibility, from doing the job in hand to the best of their ability, and from the satisfaction of solving complex problems. Values contribute to longevity and wellbeing within the industry, and these technical advances mean there is no longer room for complacency. The importance of values for security practitioners is higher than ever.
The Organisation
The marketing literature around AI and ML technologies from the big tech companies assumes agreement with the values that underpin the principles being packaged into the tools. This means that security must consider the risks and impacts of an automation tool in a broader way than just the technical dimension. The practitioner must understand what the ethics of an organisation are and how these are demonstrated within the organisation.
The organisation’s ethics will manifest within business processes and in how decisions are arrived at. This could be who is offered insurance, who is offered credit, who is denied access to services, or similar outcomes. It is likely that there will be misalignment between the organisation’s values and the embedded values contained within an automation tool. How any misalignment is treated and considered must be given serious attention. At scale, the long-term impact of replacing human decisions with automated ones could reflect adversely in revenues and profitability.
We arrive back at the problem of the definition of terms. By way of example, let us look at fairness. Microsoft state that they conceptualise fairness using an ‘approach known as group fairness’. Their fairness assessment utilises capability from Fairlearn, which uses ‘demographic parity’ as one of its measures of fairness. On the surface the propositions seem reasonable; however, at their core is a built-in assumption that group affiliations are a suitable basis for deriving fairness. This appears to be misaligned with a legal structure that orients around the concept of the individual. Granted, some legislation does include group markers, but this is secondary to the individual at this time. On a generous interpretation, the hierarchy of values is misaligned.
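As a minimal sketch of what such a group-based check looks like in practice, assuming the open-source Fairlearn package and invented data, note that the metric is defined entirely over group-level selection rates, not over individuals:

```python
# A minimal sketch of a demographic parity check using Fairlearn
# (pip install fairlearn). The outcomes, decisions, and group labels
# below are invented for illustration.
from fairlearn.metrics import demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # actual outcomes
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]                   # automated decisions, e.g. offer / decline
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]   # a protected group marker

# Difference in selection rate (fraction predicted positive) between groups:
# 0.0 means parity; larger values mean one group is selected more often.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

Note that the individual records only matter insofar as they contribute to a group rate; the measure says nothing about whether any particular individual was treated fairly.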
Organisations looking to adopt these technologies need to understand what their position on these matters is, why it is that way, and how it reconciles with their legal obligations. Security practitioners also need to be aware of this when assessing tools, and understand that this technology carries implications beyond the capability of the tool itself. If we take the scenario of recruitment decisions, a business could be creating a pipeline of future legal settlements through a lack of understanding of the fundamental values a tool is predicated on.
Summary
Obviously, this is a large subject. I’ve not talked about deep fakes and how they could be used maliciously. I’ve not spoken about how an AI emulating human decisions could be subject to social engineering style attacks. I’ve not talked in depth about the possible ethical problems of using people’s data to train AI or ML models, models that would be well positioned to make obfuscated data identifiable, and therefore no longer obfuscated. I could go on, and on, and then a bit more.
I rarely talk about myself in this form and usually abstract this away, but I’d like to give a personal perspective. My view on the subject is heavily based on my conception of individualism, personal liberty, and freedom. But at the core of this sits truth. And the truth is this. The manifestation of these embedded values within technology is inevitable by virtue of the fact we seek to replicate aspects of humanity synthetically. We have a choice in what values these systems should be predicated on. Yet we have ignored the virtues that have made our societies successful and allowed weak people to set an agenda of self-contradicting bullshit. As I have looked deeper into this subject, I feel an increasing frustration at the total dereliction of responsibility. What is contained within this technology is antithetical to what I hold to be sacrosanct. I don’t know how to solve this more broadly, but what I do know is that I have my values that I will hold to.
I conclude that the place of values within security is to give us the framing for understanding the human condition. There are few groups of people placed with the technical context to understand the technology, the ability to influence, and the breadth of experience to contextualise this for businesses, the industry, and wider society. Our situation is our opportunity. So, let’s become the voices needed to advance the industry in a way that limits the potential harm this technology clearly presents to the stability of our society.