What exactly is an AI system anyway? CISOs are increasingly relying on AI to support decision-making -- here’s how to look at the systems available in today’s products and what they can accomplish.
Most of the chatter about artificial intelligence (AI) in cybersecurity concerns the technology’s use in augmenting and automating the traditional functional tasks of attackers and defenders, like how AI will improve vulnerability scanning or how large language models (LLMs) might transform social engineering.
But there’s always an undercurrent of conversation about how “AI systems” will help decision-makers, as the cybersecurity profession acknowledges the growing importance of AI decision support systems in both the near and long term.
Much has been written about how AI will change the decision environment, from taking over responsibility for certain major decisions to tacitly shaping the menu of options available to CISOs. This attention is a positive development, not least because of the host of ethical and legal issues that can arise from over-trust in processes automated with the aid of machine learning.
But it’s worth pointing out that what is meant by an “AI system” is often glossed over, particularly in this decision support context. What exactly are different products doing to support the CISO (or other stakeholders)? How do different combinations of capabilities change the dynamics of planning, response, and recovery?
The truth is that not all decision-support AI is created equal, and the divergent assumptions baked into different products have real implications for organizations’ future capabilities.
The context of AI decision support for cybersecurity
What makes for an effective and efficient decision environment for cybersecurity teams? How should key decision-makers be supported by the personnel, teams, and other organizations that are connected to their area of responsibility?
To answer these questions, we need to address the parameters of how technology should be applied to augment specific stakeholder capabilities. There are many different answers as to what the ideal dynamic should be, driven both by differences across organizations and by distinct perspectives on what amounts to responsible stewardship of organizational security.
As cybersecurity professionals, we want to avoid the missteps of the last era of digital innovation, in which large companies developed web architecture and product stacks that dramatically centralized core functions across most sectors of the global economy.
The era of online platforms underwritten by just a few interlinked developer and technology infrastructure firms showed us that centralized innovation often restricts the potential for personalization for end users, limiting its benefits. It also limits adaptability and creates the possibility of systemic vulnerabilities when just a few systems are deployed everywhere.
Today, by contrast, the development of AI systems to support human decision-making at the industry-specific level generally tracks broader efforts to make AI both more democratically sensitive and more reflective of the unique needs of a multitude of end users.
The result is an emerging market of decision-support products that actually accomplish immensely diverse tasks, according to different vendor theories of what good decision environments look like.
The seven categories of AI decision support systems
AI decision support systems can be usefully split across seven categories: those that summarize, analyze, generate, extrapolate preferences, facilitate, implement, and find consensus. Let’s take a closer look at each.
AI support systems that summarize
This is the most common category and the most familiar to the average consumer. Many companies use LLMs and ancillary methods to consume large amounts of information and summarize it into inputs that can then feed traditional decision-making processes.
This is often much more than simple lexical summarization (representing data more concisely). Rather, summarization tools can produce values that are useful to a decision-maker based on their discrete preferences.
Projects like Democratic Fine-Tuning attempt to do this by portraying information as different cosmopolitan values that can be used by citizens to enhance deliberation. A CISO might use a summarization tool to turn an ocean of information into risk statistics that pertain to different infrastructural, data, or reputational dimensions.
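As a concrete illustration, here is a minimal sketch of what such a pipeline might look like: raw alerts go in, one short risk statement per dimension comes out. The call_llm function, the alert schema, and the dimension names are hypothetical placeholders, not any vendor’s actual interface.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API call;
    # replace the body with your provider of choice.
    return "[model-generated risk summary would appear here]"

# Toy alert records; a real feed would come from SIEM or IDS exports.
ALERTS = [
    {"source": "ids", "severity": "high", "asset": "payments-db"},
    {"source": "dlp", "severity": "medium", "asset": "hr-fileshare"},
]

DIMENSIONS = ["infrastructure", "data", "reputation"]

def summarize_risk(alerts: list[dict]) -> str:
    # Compress raw alerts into one short risk statement per dimension.
    prompt = (
        "Summarize these security alerts as one risk statement per "
        f"dimension ({', '.join(DIMENSIONS)}):\n" + json.dumps(alerts, indent=2)
    )
    return call_llm(prompt)

print(summarize_risk(ALERTS))
```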
AI support systems that analyze
The same methods can also be used to create analysis tools that query datasets to generate some kind of inference. Here, the difference from summative tools is that information is not just represented in a useful fashion; it is interpreted before a human applies their own cognitive skill set. A CISO might use such a tool, for instance, to ask what network flow data might suggest about adversary intentions in a particular time period.
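A minimal sketch of that workflow, assuming the same hypothetical call_llm placeholder as above: flow records are aggregated locally so the model is handed condensed evidence along with the question, rather than raw telemetry.

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return "[model-generated inference would appear here]"

# Toy NetFlow-style records: (destination_ip, destination_port, bytes).
FLOWS = [
    ("203.0.113.7", 443, 9_200_000),
    ("203.0.113.7", 443, 8_700_000),
    ("198.51.100.9", 53, 4_100),
]

def infer_adversary_intent(flows: list[tuple]) -> str:
    # Aggregate bytes per destination so the model sees evidence, not noise.
    by_dest = Counter()
    for ip, port, nbytes in flows:
        by_dest[(ip, port)] += nbytes
    evidence = "\n".join(
        f"{ip}:{port} -> {total} bytes" for (ip, port), total in by_dest.items()
    )
    return call_llm(
        "Given these aggregated outbound flows, what might they suggest "
        "about adversary intent during this period?\n" + evidence
    )

print(infer_adversary_intent(FLOWS))
```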
AI support systems that generate
Similarly, though distinct, some LLMs are deployed for generative purposes. This does not mean that AI is being deployed simply to create text or other multimedia outputs. Rather, generative LLMs are those that can create statements inferred from the preceding analysis of data.
In other words, while some AI decision support systems are summative in their operation and others define patterns in the underlying data, another set entirely is designed to take the final step of translating inference into statements of position. For a CISO, this is akin to seeing data deployed for analysis lead to statements of policy regarding a specific development.
AI support systems that extrapolate preferences
Quite aside from this focus on understanding data to produce inferentially useful outputs, other LLMs are being deployed to extrapolate the preferences of system users. This is the first of several AI deployments to emphasize the treatment of existing deliberation rather than the augmentation of deliberation.
From the CISO perspective, this might look like a system that is able to characterize preferences on the part of end users. The more effectively trained the system, of course, the better it should be able to extrapolate user preferences that align with security objectives. But the idea generally is to model security priorities so as to provide an accurate read of the fundamentals of practice at play in a given ecosystem.
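One way to picture the output of such a system is as a weight vector over security objectives that can then score candidate actions. In the sketch below, the weights and options are invented for illustration; a real system would extrapolate the weights from observed user behavior rather than hard-code them.

```python
# Invented preference profile: weights over security objectives that a real
# system would extrapolate from user behavior, hard-coded here for illustration.
PREFERENCES = {"availability": 0.2, "confidentiality": 0.5, "integrity": 0.3}

# Hypothetical response options and their estimated effect on each objective.
OPTIONS = {
    "take service offline": {"availability": -1.0, "confidentiality": 0.8, "integrity": 0.4},
    "hot-patch in place": {"availability": 0.6, "confidentiality": 0.3, "integrity": 0.2},
}

def score(effects: dict[str, float]) -> float:
    # Weight each option's estimated effects by the preference profile.
    return sum(PREFERENCES[objective] * value for objective, value in effects.items())

best = max(OPTIONS, key=lambda name: score(OPTIONS[name]))
print(best, round(score(OPTIONS[best]), 2))
```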
AI support systems that facilitate
Another use of generative AI to augment the decision environment is via the direct facilitation of discourse and informational queries. One need only think of the chatbots that have filled the product catalogs of so many vendors in just the past few years to see how many tools seek explicitly to improve the quality of discourse around security decisions.
AI support systems that implement
Whereas facilitation tools aim to moderate the discursive process, some projects take machine agency one step further, giving the chatbot agent responsibility for executing decisions made by stakeholders.
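A sketch of the safety-minded version of that pattern: the agent proposes, but nothing executes without explicit human sign-off. The action format and the execute hook below are assumptions for illustration, not a real product interface.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    command: str  # e.g., a SOAR playbook ID or firewall rule to apply

def execute(action: ProposedAction) -> None:
    # Placeholder for the real enforcement hook (firewall API, SOAR runner, etc.).
    print(f"executing: {action.command}")

def run_with_approval(action: ProposedAction) -> None:
    # Gate any agent-proposed action behind explicit human approval.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("action rejected; nothing executed")

run_with_approval(ProposedAction("Block suspicious outbound host", "block-ip 203.0.113.7"))
```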
AI support systems that find consensus
Finally, some tools are designed to discover areas of potential consensus across various perspective-driven inputs. This is different from generative AI capabilities in that the goal is to help mediate the tension between different stakeholders.
The method is much more personal in its orientation too, with the general idea being that LLMs (the Generative Social Choice project being a good example) can help define areas of mutual or exclusive interest and guide decision-makers towards prudent outcomes under conditions that might not otherwise be clear.
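As a deliberately naive stand-in for what these tools attempt (projects like Generative Social Choice are far more sophisticated), the snippet below separates the options every stakeholder accepts from those only one party supports; the stakeholders and options are invented.

```python
# Invented stakeholder positions on candidate incident-response options.
stakeholder_prefs = {
    "ciso": {"segment-network", "notify-board", "patch-now"},
    "it_ops": {"segment-network", "patch-next-window"},
    "legal": {"notify-board", "preserve-logs", "segment-network"},
}

# Mutual interest: options acceptable to every stakeholder.
consensus = set.intersection(*stakeholder_prefs.values())

# Exclusive interests: options only a single stakeholder supports.
all_options = set.union(*stakeholder_prefs.values())
contested = {
    option for option in all_options
    if sum(option in prefs for prefs in stakeholder_prefs.values()) == 1
}

print("consensus:", consensus)  # {'segment-network'}
print("contested:", contested)  # the three single-stakeholder options
```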
How should CISOs think about decision-support AI?
It’s one thing to identify these distinct categories of design for LLMs. It’s another entirely for a CISO to know what to look for when selecting the products and vendor partners to work with in building AI into their decision environment.
This is a decision complicated by two interacting factors: the products in question and the particular theory of best practice that a CISO aims to optimize for.
This second factor is arguably much harder to draw neat lines around than the first. In a sense, CISOs should work from a clear idea of how they acquire factual and actionable information about their areas of responsibility while at the same time minimizing the amount of redundant or misleading data in the loop.
This is obviously very much case-specific, given that cybersecurity serves the full gamut of economic and sociopolitical activities. But a decent rule of thumb is that larger organizations likely demand more from their information-aggregation methods than smaller ones do.
Smaller organizations might be able to rely on more purely deliberative mechanisms for planning, response, and the rest, simply because the potential for information overload is more limited. That should give CISOs a good starting point for picking which kinds of AI systems might be most useful for their particular circumstances.
To adopt or not to adopt? That is the CISO’s question
In a more basic sense, however, the calculation of whether or not to adopt these AI products remains fairly simple at this early stage of industry development. Summarization tools work fairly well compared with a human equivalent. They have clear problems, but those issues are easy enough to see, so there is limited need to be wary of such products.
Analysis tools are similarly capable but also pose a quandary for CISOs. Simply put, should the analytic elements of a cybersecurity team reveal information on which a CISO can act, or should they create a menu of options that constrains the CISO’s possible actions?
If the former, then analytic AI systems are already a worthwhile addition to the decision environment for CISOs. If the latter, then there’s reason to be wary. Is the inference offered by analytic LLMs yet trustworthy enough to base impactful decisions on? The jury is still out.
It’s true that a CISO might want AI systems that reduce options and make their practice easier, so long as the outputs being used are trustworthy. But if the current state of development is sufficient reason to be wary of analytic products, it’s also enough for us to be downright distrustful of products that generate, extrapolate preferences, or find consensus. At present, these product styles are promising but not nearly mature enough to mitigate the risks involved in adopting unproven technology.
By contrast, CISOs should think seriously about adopting AI systems that facilitate information exchange and understanding, and even about those that play a direct role in executing decisions. Contrary to the popular fear of AI that acts on its own, such tools already exhibit the highest reliability scores among users.
The trick is simply to avoid chaining implementation to preceding AI outputs that risk misrepresentation of real-world conditions. Likewise, chatbots and other facilitation methods that help with information interpretation often make deliberation more efficient, particularly for large organizations. Paired with the basic use of summative tools, these AI systems offer powerful methods for improving the efficiency and accountability of CISOs and their teams.