AI and cyber security: highs, lows, woes and foes


Manufacturers of all shapes and sizes are currently working out how best to leverage AI within their businesses. However, any corners that are cut in the rush for deployment can create risk. Katie Paxton-Fear, Lecturer in Cyber Security, Manchester Metropolitan University, speaks to The Manufacturer about how new AI capabilities can improve security and the vulnerabilities that occur in AI systems.

As well as being a lecturer in cyber security, Katie is also an ethical hacker and bug bounty hunter (which she joked is more a hobby than a job). This essentially means she searches for vulnerabilities within systems and is paid for each one she finds. Finding a vulnerability can take anything from as little as five minutes to several months, and the systems she targets range from consumer-focused apps right up to high-end military applications.

At Smart Manufacturing & Engineering Week Katie gave a talk focused on cyber security and AI. This covered some of the new cyber security techniques that are being enabled by AI, the new capabilities that exist and how adversaries are starting to use AI to hack manufacturing organisations.

She also discussed some of the risks that any company new to AI needs to consider and what the main priority areas should be, working through a number of AI and cyber security related FAQs.

How are UK manufacturers currently leveraging AI in their businesses?

KPF: It’s been very interesting to attend Smart Manufacturing & Engineering Week, as I saw so many exhibition stands featuring AI solutions of one kind or another. One point I covered in my talk is that while AI is often associated with relatively new platforms such as ChatGPT, Bing Chat or Meta Llama, the reality is that AI is nothing new to industry.

AI is fundamentally about statistics and data analytics: how a company not only gathers useful insights from its data, but actually starts to use them in a meaningful way. And when I say ‘use’, there’s an interesting security question around where and how that information is being exposed, and whether there is a safety issue to be dealt with.

There’s also a very interesting legal aspect centred around regulation. We’ve seen the UK produce the National AI Strategy as well as the UK Cyber Security Strategy. However, we’re also seeing regulations in Europe with the EU’s AI Act, while the US is looking very closely at how best to regulate the technology. The scope and variety of how AI is being used and deployed have expanded enormously, so governments around the world have recognised the need to regulate it appropriately.

AI is everywhere, and it’s quickly gone from science fiction to a tool some people use daily, whether that’s to write emails, generate an avatar for social media, or just edit a photo. With this breakneck speed a lot of people are worried about bias, legal implications or the safety of AI systems, and they’re not wrong.

We have an opportunity to slow down and think about the implications of AI before it has the chance to do harm. Similarly, as we explore the possibilities of AI and begin to implement it to help us be more efficient, many are worried about AI being used by adversaries in the same way. While we haven’t yet seen AI-driven attacks on critical infrastructure, AI-enabled fake news is something we see more and more often, especially around elections.

It’s not all doom and gloom, however. AI can do a lot of good, and actually most AI isn’t even new; it’s just statistics and models. We can take the best of AI, from detecting cyber attacks in their early stages to helping lift the load of documenting processes and policies.

Are attitudes towards AI technology changing?

The way that AI is being received is undoubtedly changing as people begin to see the benefits, and there’s been a lot of talk around how manufacturers can increase productivity. Indeed, at Smart Manufacturing & Engineering Week, there have been discussions around using data, actually thinking about the intelligence it provides, and then generating actionable insights to benefit business operations.

Of course, manufacturers have traditionally been quite change- and risk-averse, and while there is an undeniable desire to be quick off the mark with rapid AI deployments, it’s actually quite good that manufacturers are a little more circumspect and sceptical. When you’re looking at a technology like AI, you could potentially be handing over very sensitive data to a company that you may not trust. Maybe it’s not even the company you think it is, and perhaps they’re sending your data on to another third party.

That perspective is certainly the case when dealing with critical infrastructure, but it’s also true of standard safety infrastructure in general. Many manufacturers operate in hazardous and dangerous environments, so safety procedures are often very thorough and rigorous; that attitude is a good place to start in terms of using AI in a secure and responsible way.

What are the more common mistakes that manufacturers make with AI implementation and deployment?

There are two main issues as I see it. The first is trust: having faith in the output of AI to the extent of not having a human involved in the decision-making process. This is something that is becoming extremely prominent with emerging AI regulations – where is AI actually making decisions? And crucially, does the business or organisation even know what decisions AI is making in the first place?

The second is how AI is being used, potentially in a way that is unknown or unplanned for by the business. A manufacturer might have a very clear AI deployment for new robotics, for example, but does the same clarity exist around whether HR is using AI in its own decision making? That presents a risk.

And when we start to look at responsibility and regulations, there is also the question of third-party risk. You may be deploying AI securely, but can the same be said of your suppliers, and your suppliers’ suppliers? We have looked at many cyber attacks over the past few years, and we’ve heard a lot about supply chain security, where one failure can send ripples out to a host of third parties. I suspect we’ll start to see similar patterns emerge with AI.

Are attackers using AI to find vulnerabilities?

I don’t think we’re quite at the point where AI is going to start finding vulnerabilities, and we’re still in an era where we need human involvement. But I’ve certainly seen some really interesting techniques that support human decision making. You could have a situation where an ethical hacker like me attacks a system and is able to show the company that the vulnerability was down to a certain piece of software that wasn’t updated, for example.

That is something AI could feasibly start to recognise. And remember, within cyber security and manufacturing companies, AI is nothing new. There has been a lot of talk in recent years about AI, what it means and, at a broader level, what the consequences of the technology might be for us humans.

However, the fundamentals of AI are based on data analytics: gathering data from a variety of sources, presenting that data and making decisions to gain actionable insights. This is nothing new for manufacturing.

What would be your top tips?

The first thing that UK manufacturing can do is really embrace the regulations that have been put in place. We know that safety regulations, for example, are often written in blood for a reason, especially in a sector like engineering and manufacturing. We have the opportunity to make sure AI regulations aren’t written the same way, through mistakes and human cost. So it’s important for manufacturers to engage with the regulations and be open to what they mean; be part of those discussions and make sure you have a seat at the table.

It’s also important within manufacturing organisations to open up conversations around what might be called ‘shadow AI’, where the technology is being used but the organisation at large may not be aware. Perhaps someone in HR has rolled something out or maybe a developer has deployed their own piece of software. If you don’t know that AI is in use, you have no idea what your attack surface looks like in terms of security.

So, before you can ever get to the point of asking whether or not your AI is going to get hacked, you’ve first got to ask, am I using AI in the first place? If you are, where are you using it? How are you using it? And I would urge manufacturers to keep an eye out for key legislation that we’re going to see coming in around AI, especially around issues of responsibility.

For example, Air Canada’s chatbot gave incorrect information to a traveller in 2022, and the airline later argued that the chatbot was “responsible for its own actions”. However, when the case went before the British Columbia Civil Resolution Tribunal, that argument was rejected and the tribunal ruled that the chatbot was acting on behalf of the organisation, so the airline was responsible for its output. Air Canada had to pay damages and tribunal fees.

So, if manufacturers are going to trust AI to make those types of decisions, they need to honour its choices. This kind of legal framework is going to become more prevalent over the next 12 to 18 months, and it will be key for manufacturers to understand their own responsibilities around using AI and how it will impact their business going forward.

There is both risk and opportunity in AI, and the legislation now being introduced highlights this. Will AI be an ultimate force for good? A tool used by adversaries to enable their campaigns? Or just another risk to be accounted for? Only time will give us these answers, but we can take steps to be part of the conversation and lobby for policies now, before we end up with blood on our hands.
