Christopher Whyte
CSO contributor

Are you prepared for the rise of the artificial intelligence CISO?

Opinion
21 Aug 2023 | 13 mins
CSO and CISO | Generative AI | IT Leadership

It’s inevitable that AI systems will be tasked with more and more cybersecurity responsibilities. It is time to start thinking about how the roles of human CISOs and AI will evolve.

Image: human wireframe AI. Credit: Addictive Creative / Shutterstock

If one were to solicit a list of the developments most often on the mind of CISOs, AI would certainly be near the top and will continue to be for years to come. After all, there is clear evidence that CISOs and cybersecurity professionals more broadly simultaneously see immense risk, opportunity, and potential prosperity in the adoption of machine learning (ML) and other AI developments across every dimension of private enterprise.

Moreover, according to the 2022 IBM Global AI Adoption Index, more than a third of companies have already deployed AI, and at least another 40% are exploring potential uses.

If AI is going to be a central pillar of cybersecurity developments for the foreseeable future, it’s worth talking about an oddity found in the discourse about its utility. Specifically, much of what is written about AI and cybersecurity splits apart the roles of human operators and the machine systems that will ideally resolve many of the digital world’s security and economic challenges.

The interaction between machines and humans is seen in quite dualistic terms. Simply put, this means that machines are tools that offer specialized advantages in diverse areas, while humans retain substantial amounts of operational control.

AI CISOs will be authorities on tactics, strategies, and resource priorities

There’s a degree to which this tendency is understandable. Absent the unlikely near-term development of credible artificial general intelligence (AGI) that can more fully simulate human agency, it’s true that AI systems will be nothing more than narrow-but-powerful exercises in task performance. Even generative AI applications, which increasingly seem likely to revolutionize certain areas of industry, are just pattern detectors that provide impressive predictive capacity given narrow inputs.

At the same time, however, support systems that are deployed broadly, and that operate on human judgment as encoded in training data, end-user actions, and developers’ structured inputs, will inevitably come to act on humans’ behalf and to operate with a degree of trust. After all, tools and models that demonstrate their ability to simulate the strategic, moral, and economic preferences of companies over time will find themselves given more responsibility vis-à-vis human operators.

The result is, in the simplest possible terms, the emergence of AI CISOs that will be de facto authorities on the tactics, strategies and resource priorities of entire organizations. Today’s human CISOs would do well to consider what this means for their business.

The AI CISO will arise out of the arms race between attackers and defenders

Imagine the following scenario. It is several years into the future and AI-augmented cyber campaigns of all kinds — influence operations, espionage activities, counter-critical infrastructure missions, etc. — are increasingly common. The average compromise of private industry systems occurs several orders of magnitude faster than in 2023, and the return on attack per hour of access for cybercriminals is two or three times better than it is today.

Where this is not true is in those situations where defensive AI — whether developed internally or procured from cybersecurity firms — has been deployed as a countermeasure to thwart intrusions. But such countermeasures are no silver bullet. Rather, they are effective tools that nevertheless seem to be in a state of perpetual beta, as the arms race logic of adversarial AI learning means that good defense feeds improved offense.

The logical outcome of such a situation is the AI CISO. After all, what has been in human hands for so long will necessarily become the purview of AI response systems. This includes not only basic tasks that respond to decision-making rulesets but also dynamic tasks. In part, this might mean the selection of defensive — or active defense — tactics and analysis of adversary strategy.

But it will also mean value judgments and moral considerations. What kinds of data or data access should be prioritized for protection, for instance, is a judgment call with inherently variable ethical foundations that reference shareholder interests, civic responsibilities, profit metrics, governance baselines, and compliance standards. At that point, human intelligence and machine intelligence converge in a meaningful fashion.
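
To make that concrete, here is a minimal, purely hypothetical sketch of how such value-laden prioritization might be written down as an explicit ruleset that an automated response system consults. Every category name and weight below is invented for illustration; nothing here comes from a real product.

```python
# Hypothetical sketch only -- categories, weights, and names are invented
# for illustration, not drawn from any real product or this article.
from dataclasses import dataclass

# Illustrative weights a security organization might set deliberately, encoding
# compliance duties, reputational concerns, and shareholder interests.
PRIORITY_WEIGHTS = {
    "regulated_personal_data": 1.0,   # compliance baselines (e.g., privacy law)
    "customer_facing_service": 0.8,   # reputational and civic responsibilities
    "revenue_critical_system": 0.7,   # profit metrics and shareholder interests
    "internal_tooling": 0.3,
}

@dataclass
class Asset:
    name: str
    categories: list[str]  # tags drawn from PRIORITY_WEIGHTS keys

def protection_priority(asset: Asset) -> float:
    """Score an asset so the highest-value targets are defended first."""
    return sum(PRIORITY_WEIGHTS.get(c, 0.0) for c in asset.categories)

assets = [
    Asset("payments-db", ["regulated_personal_data", "revenue_critical_system"]),
    Asset("build-server", ["internal_tooling"]),
]
for asset in sorted(assets, key=protection_priority, reverse=True):
    print(asset.name, protection_priority(asset))
```

The point is not the scoring arithmetic but that the ethical trade-offs live in a reviewable artifact rather than being buried implicitly in a model’s behavior.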

The upsides of an AI CISO/human alliance

There are potential advantages to this, as has already been alluded to. But there are concerning implications for this inevitable outcome too. For one, if the AI defender of tomorrow is to be best thought of as a sort of distributed machine-human interface, then today’s planners need to recognize that human agency in the future is something that will be represented rather than actively employed.

We’ve seen this before in the history of disruptive technologies and the outcomes aren’t always stellar. When humans turn to new technology, they often lose control over social, political, or economic processes they once directly shaped; alarmingly, the illusion of control often remains. So how can today’s human CISOs plan for the AI CISOs of tomorrow?

The upsides of leaning into the construction of systems that have de facto authority over diverse facets of the cybersecurity enterprise are fairly clear. If an AI arms race centered on evolving malign threats to Western enterprise is inevitable, then AI CISOs are the key to allowing defenders to keep up. The tightening timeframe of incident response means that systems built around rapid threat detection and analysis will almost certainly excel where human responders could not.

Likewise, the metrics that stem from the use of such systems will almost certainly lead to iteratively better machine learning models with clear value structures amenable to prioritizing threat mitigation. Efficiency, in short, is the clear advantage of the AI CISO.

The upside for cybersecurity governance

Perhaps a less obvious advantage is the upside that might be found for cybersecurity governance as a whole. For some time, experts in cybersecurity have been particularly concerned about the threat of cascading negative outcomes that might stem from the augmentation of tools with AI. The 2010 stock market flash crash is often brought up as an example of this nightmare scenario, in which many things go wrong faster than a human can act to prevent them.

In that case, the Dow Jones Industrial Average lost almost 1,000 points in roughly 36 minutes as automated trading algorithms reacted to odd market conditions (often attributed to a sell order several orders of magnitude outside normal parameters). While the market recovered, the episode temporarily erased more than $1 trillion in market value, entirely due to interacting algorithms.

It’s worth noting that the same logic underlying this common fear might actually play in favor of more standardized norms of responsible practice and accepted threat response in a world where AI CISOs interact with a common set of evolving adversary machine capabilities. It’s a fascinating idea for a space with relatively few norms around defender-attacker engagements.

Deploying AI products that learn best practices from a shared set of industry experiences means a standardization of knowledge about how cyber defense plays out in practice. For both the federal government and private governance initiatives, the cascade of such activities as the new normal of cyber defense offers enticing touchpoints for coordinating shared rules — both formal and informal — around cybersecurity as a national security consideration.

The potential for missteps

As appealing as the idea of AI CISOs that can effectively take the priorities and security requirements of human operators and execute them against rising offensive AI threats may be, the potential for missteps is also substantial.

As any lay user of an LLM like ChatGPT will tell you, the potential for outright inaccuracy and misinterpretation in the use of any AI system is considerable. Even assuming defensive AI systems can be brought within acceptable margins of usability, there is a real danger that the humans in the loop will believe they control outcomes that are beyond their ability to shape. In part, this might stem from a willingness to accept AI systems for what they appear to be — powerful predictive tools. But research into machine-human interactions tells us that there’s more to consider.

Recent work has emphasized that businesses and organizational executives are prone to overusing systems when a paradigmatic transformation of an existing company function has been promised or a large investment in a specific application has already occurred. In essence, this means that the bounds of what might be possible for such procurements gradually expand beyond what is practical, largely because the positive associations stakeholders make with “good business practice” create tunnel vision and wishful thinking effects.

There is a tendency to assume AI has human qualities

And with AI, this tendency goes further still. As with any sufficiently novel technological development, humans are prone to over-assign positive qualities to AI as a game-changer for almost any task. But psychological studies have also suggested that the customizability of AI systems — wherein an AI model might be capable, for instance, of building machine agents with distinct styles or personalities based on the breadth of training data — pushes users towards anthropomorphizing.

Assume that a cybersecurity team at a financial firm calls their new AI tool “Freya” because the real name of the application is the “Forensic Response and Early Alarm” system. In representing their AI system to executives, shareholders, and employees as Freya, the team communicates a human quality to their machine colleague. In turn, as research tells us, this inclines the average human towards assumptions about trustworthiness and shared values that may have no basis in reality.

The possible negative externalities of such a development are numerous, such as company leaders being dissuaded from hiring human talent because of a false sense of capacity or a willingness to discount discomfiting information about the failures of other companies’ AI systems.

Will reliance on AI systems lead to loss of human expertise?

Beyond these possible downsides of the coming age of AI CISOs, there are operational realities to consider. As several researchers have noted, reliance on AI systems is likely to be associated with a loss of expertise at organizations that otherwise maintain the resources to hire human professionals and retain an interest in the skills they might bring.

After all, the automation of more elements of the cyber threat response lifecycle means the minimization or removal of humans from the decision-making loop. This might occur directly as companies see that a human professional just isn’t often needed to conduct oversight on one or another area of AI system responsibilities. More likely, however, expertise loss may occur as such individuals are given less to do, prompting their migration to other industry roles or even a move to other fields.

One may ask, of course, why this would universally be a bad thing if such expertise is not often needed. But there’s an obvious answer — a lack of controls that prevent bias and emotion from impacting security situations. And the flattening of a company’s human workforce around novel AI capabilities also implies a poorer relationship between strategic planning and tactical realities.

After all, effective cyber defense and long-term planning around socio-economic priorities — business interests, reputational considerations, etc. — as opposed to merely technical ones require robust intellectual (read: human) foundations.

Finally, as others have observed, the coming age of AI CISOs carries the potential for autonomous cyber conflicts that emerge less from deliberate human decisions than from flaws in underlying models, bad data, or odd pathologies in the way that algorithms interact. This prospect is particularly concerning when one considers that AI CISOs will inevitably be assemblages of baked-in moral, parochial, and socio-economic assumptions. While this suggests a normalization of defense postures, it also means those baked-in human qualities could be systematically exploited to create vulnerability.

Human-machine symbiosis is coming

Recognizing that the logical outcome of the trajectory we find ourselves on today is a de facto symbiosis between human and machine systems is of paramount importance for security planners. The AI CISO is far less of a “what might be” and more something that inevitably will be — a real reduction in our control over the cybersecurity enterprise because of developments we will be incentivized to support. To best prepare for this future, companies must consider today the value in cyberpsychological research and the findings of work on technological innovation.

Specifically, companies across private industry would do well to avoid the situation where an AI CISO imbued with ethical and other sociological assumptions develops without prior planning. Any organization that envisions a robust AI capability as part of its operational posture in the future should engage in extensive internal explorations of what the practical and ethical priorities of defense look like.

That, in turn, should lead to a formal statement of priorities and a body that is charged with periodically updating those priorities to reflect changing conditions. Ensuring congruence between the practical outcomes of AI usage and these pre-determined assumptions will obviously be a goal of any organization, but waiting until AI systems are already operational risks priorities shaped more by AI usage itself than by independent evaluation.
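
As a purely illustrative sketch (the priority names, log format, and review cadence below are all assumptions), such a statement of priorities could even be kept machine-readable so the review body can periodically check the AI system’s logged decisions against it:

```python
# Hypothetical sketch -- priority names, log format, and thresholds are assumed
# for illustration; the idea is that stated priorities become something a review
# body can audit AI behavior against, not an implicit byproduct of usage.
from collections import Counter
from datetime import date

# Formal statement of priorities, highest first, owned and periodically revised
# by the designated review body.
STATED_PRIORITIES = ["regulatory_compliance", "customer_data", "service_uptime"]
LAST_REVIEWED = date(2023, 8, 1)

def audit(decision_log: list[dict]) -> list[str]:
    """Flag cases where the AI's observed choices diverge from stated priorities."""
    observed = Counter(entry["protected_category"] for entry in decision_log)
    warnings = []
    for higher, lower in zip(STATED_PRIORITIES, STATED_PRIORITIES[1:]):
        if observed[lower] > observed[higher]:
            warnings.append(
                f"AI favored '{lower}' over '{higher}' "
                f"({observed[lower]} vs {observed[higher]} protective actions)"
            )
    return warnings

# Example log of what the AI system actually chose to protect during incidents.
log = [
    {"protected_category": "service_uptime"},
    {"protected_category": "service_uptime"},
    {"protected_category": "regulatory_compliance"},
]
print(audit(log))  # flags 'service_uptime' being favored over 'customer_data'
```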

Employ the tenth-person rule

Any organization that envisions extensive AI usage in the future would also do well to establish a workforce culture and structure oriented on the tenth-person rule. This rule, which many industry professionals will already be familiar with, dictates that any situation leading to consensus among relevant stakeholders must be challenged and re-evaluated.

In other words, if nine of 10 professionals agree, it is the duty of the tenth to disagree. Anchoring such a principle of adversarial oversight at the heart of internal training and retraining procedures can help to offset some of the possible missteps to be found in expertise and control loss stemming from the rise of AI CISOs.
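
A minimal sketch of how that adversarial-oversight principle might be operationalized in a review workflow follows; the function and vote format are hypothetical, not an established tool.

```python
# Hypothetical sketch of the tenth-person rule as a workflow check: unanimous
# agreement is treated as the trigger for mandatory dissent, not as a green light.
def needs_devils_advocate(votes: list[bool]) -> bool:
    """Return True when consensus is unanimous and must therefore be challenged."""
    return len(votes) > 0 and all(votes)

review_votes = [True] * 10  # all ten reviewers approve the AI-recommended action
if needs_devils_advocate(review_votes):
    print("Consensus reached: assign a designated dissenter before acting.")
```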

Finally, inter-industry learning about what works for AI cybersecurity and related tools is a must. Specifically, there are strong market incentives to try products that are convenient but that may fall short in some other area, such as transparency about underlying model assumptions, training data, or system performance. Cybersecurity is a field ironically prone to path-dependent outcomes that see insecurity generated by the ghosts of stinginess past. Perhaps more so than with any other technological evolution in this space in the last three decades, cybersecurity firms must avoid choosing the convenient over the best. If they do not, then the coming age of AI CISOs may be one fraught with more pitfalls than promise.