New and emerging cybersecurity threats and attacker tactics

 

As cyberthreats continue to evolve nearly four decades after the first computer virus for PCs emerged in 1986, the cybersecurity landscape faces increasingly sophisticated challenges. While many are familiar with common threats like phishing and ransomware, newer, more targeted attacks are emerging, threatening the very foundations of our digital infrastructure.

Supply chain cyber-risks

Recent incidents have underscored the devastating potential of supply chain attacks. One alarming example is the XZ Utils backdoor (CVE-2024-3094), a critical vulnerability found in a widely used open-source compression tool. This attack, led by the ‘Jia Tan’ account, was a multi-year operation that began in 2021 and culminated in deploying a backdoor in 2024. Over time, the attackers embedded their exploit into the software, demonstrating how deeply supply chain attacks can infiltrate and exploit foundational software used across numerous organisations.

This incident serves as a critical reminder for organisations to scrutinize the security of their software supply chain. Open-source components can be weak links often maintained by small and underfunded teams. Organisations must monitor updates and patches to avoid introducing new vulnerabilities.

Open-source software issues

The XZ Utils incident highlights broader concerns within the open-source community. Malicious actors can insert backdoors into open-source projects with alarming ease. The Jia Tan account is just one example of how suspicious accounts can fly under the radar, quietly injecting malicious code into widely used software packages.

A recent analysis revealed that even PIP, the Python package management system, has a suspicious account with commit access. This raises serious concerns about the security of numerous critical Python packages. These accounts often make seemingly innocent contributions but could lay the groundwork for future exploits. This situation underscores the need for greater vigilance and verification within the open-source community. Organisations relying on open-source software must implement strict vetting processes and use tools to monitor and alert them to suspicious activity within their codebases.

The promise and perils of GenAI

GenAI offers transformative potential, as demonstrated by Klarna’s AI Assistant, which now handles the workload equivalent to 700 customer service agents. For Klarna, this translates into an estimated $40 million in annual savings, showcasing AI’s ability to enhance productivity and reduce operational costs.

However, the integration of GenAI comes with risks. Executives need to ensure that cybersecurity is a foundational consideration when adopting AI solutions. GenAI systems can be vulnerable to various threats, such as data poisoning, where attackers feed misleading data into AI systems, resulting in incorrect outputs. Additionally, these systems can face denial-of-service attacks, increasing costs and degrading performance, or privacy breaches where sensitive data is exposed.

Three key considerations when integrating GenAI are availability, system integrity, and privacy. Ensuring these aspects are robustly managed will help mitigate the risks associated with deploying AI systems at scale.

Best strategic defense tactics against cyberattacks

Orgnisations must adopt a multi-layered defense strategy to navigate this complex threat landscape. Here are some critical components:

1. Proactive security testing: red and blue team exercises

Red and blue team exercises simulate real-world cyberattacks, helping organisations uncover vulnerabilities before they can be exploited. For AI systems, these exercises should focus on assessing the robustness of models against harms such as hallucination, bias, and prohibited content like harassment. Organisations can stay ahead of potential threats by continuously evaluating and improving the security and ethical performance of AI systems.

2. AI-specific security measures, start leveraging ATLAS

Addressing AI-specific threats is crucial as AI becomes more integrated into business processes. Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) is a knowledge base complementary to MITRE ATT&CK that documents real-world adversarial tactics against AI systems. Organisations should use ATLAS to stay informed about these evolving threats and improve their defenses against attacks targeting AI technologies.

3. Zero-trust architecture, the journey to better access control

Adopting a zero-trust architecture is crucial in today’s environment, especially for systems integrating AI. This approach operates on the principle that no entity—whether inside or outside the network—should be trusted by default. Continuous verification of user identities and strict access controls are foundational elements.

However, for AI systems, data boundaries are equally important. AI models often process vast amounts of sensitive data, and ensuring that this data is adequately segmented and protected is critical. Establishing clear data boundaries prevents unauthorized access to sensitive information, reducing the risk of data leakage or manipulation. This is particularly vital in AI systems where data integrity directly impacts the outputs and decisions made by the AI.

By implementing a zero-trust architecture with strong data boundary controls, organisations can ensure that their AI systems operate securely, protecting both the data they process and the insights they generate.

The evolving threat landscape demands that organisations remain vigilant and proactive in their cybersecurity efforts. Organisations can better protect their digital assets by understanding the risks associated with supply chain vulnerabilities, open-source software, and the integration of GenAI, as well as by implementing strategic defense tactics. Cybersecurity is no longer just an IT issue. It’s a critical component of overall business strategy that requires attention at every level of the organization.


Comments