How CISOs can counter the threat of nation state espionage


The rise of DeepSeek has prompted the usual well-documented concerns around AI, but also raised worries about its potential links to the Chinese state. The Security Think Tank considers the steps security leaders can take to counter the threats posed by nation state industrial espionage.

Over 80% of global companies are now using AI to improve business operations. AI has also become a feature of individuals’ daily lives as we interact with chatbots, voice assistants, or predictive search technologies. But as AI diffusion grows, so too do the risks associated with its misuse – particularly by nation state actors engaged in espionage, cyber attacks, and supply chain compromise.

Recent developments like February’s AI Action Summit, President Trump’s executive order on AI and the UK government’s AI Opportunities Action Plan reveal two key themes. First, national interest is at the heart of government AI strategies, and second, AI has become an explicit focus of many national defence strategies. It is therefore no surprise that the emergence of powerful models such as DeepSeek’s R1 has renewed concerns about industrial espionage.

However, focusing on particular models, vendors or states misses a broader point: AI is already being weaponised to support cyber attack tactics, including reconnaissance and resource development, to target industries and their secrets. For chief information security officers (CISOs) and other security leaders, the question is how AI changes the threat landscape and how to respond accordingly. For startups and technology-driven industries this is even more pressing, as nation states have already been shown to target those at the cutting edge of technology. Adjustments to the roles of people, processes and technology in cyber security are therefore required to respond strategically to AI threats.

AI-augmented cyber operations

Nation state actors are increasingly integrating GenAI into cyber attacks to enhance their efficiency, automation and precision. More than 57 advanced persistent threat (APT) groups linked to nation states have been observed using AI in cyber operations, where it can automate research, translate content, assist with coding and help develop malware.

One of the most concerning challenges is the use of AI to craft highly convincing phishing messages, increasing both the pace and scale of cyber attacks. Large language models (LLMs) can generate plausible messages tailored to specific individuals and organisations, and criminals are deploying believable, personalised AI-generated deepfake videos, audio and images to enhance social engineering campaigns. The case of Arup, the design and engineering firm that lost $25 million as a result of a deepfake ‘CFO’, shows how convincing AI-enabled operations can gain meaningful access to companies.

Supply chain vulnerabilities

Beyond direct cyber attacks, threat actors are also targeting AI supply chains, from hardware to software. The infamous SolarWinds Sunburst attack demonstrated how sophisticated nation state actors can infiltrate enterprise networks by targeting supply chains. The risk extends to AI software as well: by embedding vulnerabilities at the manufacturing or development stage, adversaries can compromise a broad range of victims, profiting from economies of scale.

Supply chain vulnerabilities are a key trend dominating cyber security. The Bureau of Industry and Security’s recent prohibition on the import and sale of hardware and software for connected vehicles from certain nations highlights the US government’s growing concern. Malicious actors have published rogue Python packages posing as clients for LLMs like ChatGPT and Claude, delivering malware that can harvest browser data, screenshots and session tokens. Those procuring AI systems and their components need to consider both where the AI has come from and how users will interact with it.
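
A lightweight pre-install check can catch the typosquatting pattern behind many of these rogue packages. The sketch below is a minimal example, assuming a small internal allowlist of vetted LLM-related package names; the allowlist and similarity threshold are illustrative assumptions, not a vetted policy.

```python
# Minimal sketch: flag dependency names that look like typosquats of
# vetted LLM client packages before they are installed. The allowlist
# and cutoff are illustrative assumptions, not a production policy.
import difflib

KNOWN_GOOD = {"openai", "anthropic", "langchain", "transformers", "requests"}

def flag_suspicious(requested: list[str], cutoff: float = 0.8) -> list[tuple[str, str]]:
    """Return (requested_name, lookalike) pairs that are close to, but not
    exactly, a known-good package name -- a common typosquatting pattern."""
    findings = []
    for name in requested:
        lowered = name.lower()
        if lowered in KNOWN_GOOD:
            continue  # exact match to a vetted package
        matches = difflib.get_close_matches(lowered, KNOWN_GOOD, n=1, cutoff=cutoff)
        if matches:
            findings.append((name, matches[0]))
    return findings

if __name__ == "__main__":
    # 'openaii' and 'anthropc' mimic real packages and are flagged;
    # 'numpy' is neither vetted nor a lookalike, so it is not flagged.
    print(flag_suspicious(["openaii", "anthropc", "numpy"]))
```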

AI governance and security frameworks

To defend against AI-augmented nation-state threats, security leaders must adopt a range of strategies including AI governance frameworks, targeted training, robust data protection measures, third-party risk management processes, and proactive threat intelligence.

AI frameworks aligned with best practice – such as NIST’s AI RMF and ISO 42001 for governance, and guidance from MITRE, OWASP and the NCSC for security – provide the basis for a structured defence. By establishing clear roles and accountabilities for AI, policies defining acceptable and unacceptable use, and robust approaches to monitoring and auditing, organisations can build defences against exposing sensitive information.

The role of people and culture needs to change in response to AI risks. Training, starting with AI literacy to cover foundational AI awareness and its impact on security, can empower staff to spot, challenge, and mitigate AI cyber threats. An inventory of AI systems is a foundational part of AI governance. CISOs need to know where and how AI is being used across the enterprise, and technology companies need to know what and where their critical assets are.
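
A minimal sketch of what such an inventory entry might look like in practice is below. The fields and the risk rule are illustrative assumptions, not drawn from any particular standard.

```python
# Minimal sketch of an AI system inventory entry, assuming a simple
# internal register; field names and the risk rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                 # e.g. "support-chatbot"
    owner: str                # accountable business owner
    vendor: str               # supplier, or "internal"
    hosting: str              # "on-prem", "eu-cloud", "vendor-hosted", ...
    data_categories: list[str] = field(default_factory=list)  # data the system touches
    approved_uses: list[str] = field(default_factory=list)    # per acceptable-use policy

def high_risk(record: AISystemRecord) -> bool:
    """Flag systems that process sensitive data on infrastructure
    the organisation does not control."""
    return "sensitive" in record.data_categories and record.hosting == "vendor-hosted"

inventory = [
    AISystemRecord("support-chatbot", "cx-team", "ExampleVendor",
                   "vendor-hosted", ["customer", "sensitive"], ["support triage"]),
]
print([r.name for r in inventory if high_risk(r)])  # -> ['support-chatbot']
```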

Data protection measures

Data access controls can limit adversaries’ ability to exfiltrate proprietary secrets. Data segmentation to restrict AI models from processing sensitive data, privacy-enhancing technologies like encryption, and monitoring systems for unauthorised loss of corporate data make it harder for nation states to extract valuable intelligence. Applying data protection principles like minimisation, purpose limitation, and storage limitation can further both security and responsible AI objectives.
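
To make the minimisation principle concrete, the sketch below keeps only the fields an AI use case needs and masks obvious identifiers before a prompt leaves the enterprise boundary. The allowlist, field names and masking pattern are illustrative assumptions.

```python
# Minimal sketch of data minimisation before a record is sent to an LLM:
# drop fields the model has no need to see and mask emails in free text.
# The allowlist and regex are illustrative, not a complete PII scrubber.
import re

ALLOWED_FIELDS = {"ticket_id", "summary", "product"}  # purpose-limited allowlist
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def minimise(record: dict) -> dict:
    """Keep only fields the AI use case needs, and mask emails in strings."""
    kept = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    for k, v in kept.items():
        if isinstance(v, str):
            kept[k] = EMAIL.sub("[email removed]", v)
    return kept

ticket = {"ticket_id": "T-1001", "summary": "Reset for alice@example.com failed",
          "customer_ssn": "xxx-xx-1234", "product": "VPN"}
print(minimise(ticket))  # SSN dropped entirely, email masked in the summary
```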

Read more about DeepSeek

  • US politicians have introduced a bill seeking to ban the use of the DeepSeek AI tool on government-owned devices, citing national security concerns due to its alleged links to the Chinese state.
  • Researchers at Palo Alto have shown how novel jailbreaking techniques were able to fool breakout GenAI model DeepSeek into helping to create keylogging tools, steal data, and make a Molotov cocktail.
  • DeepSeek has found popularity with its reasoning model. However, based on geopolitical tensions and safety tests, there are questions about whether enterprises should use it.

Securing AI supply chains

Meanwhile, supply chain risk management helps prevent the infiltration of compromised AI tools. Important steps include conducting security assessments of third-party AI vendors, ensuring that AI models do not rely on foreign-hosted APIs that could introduce vulnerabilities, and documenting software bills of materials (SBOMs) to track dependencies and detect risks.
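
As a sketch of how an SBOM supports this in practice, the snippet below scans a CycloneDX-style SBOM in JSON form and flags components that appear on an internal risk list. The risk list and example package are assumptions for illustration; a production check would query a vulnerability feed such as OSV.

```python
# Minimal sketch: scan a CycloneDX-style SBOM (JSON) and flag components
# whose name/version pairs appear on an internal risk list. The risk list
# is a hypothetical example; real checks would query a vulnerability feed.
import json

RISK_LIST = {("examplepkg", "1.2.3")}  # hypothetical known-bad (name, version)

def flag_components(sbom_path: str) -> list[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):  # CycloneDX lists dependencies here
        key = (comp.get("name", "").lower(), comp.get("version", ""))
        if key in RISK_LIST:
            findings.append(f"{key[0]}=={key[1]}")
    return findings

# Usage: findings = flag_components("sbom.json")
```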

AI-driven threat detection and response

Finally, AI itself can be a tool to defend against AI-powered threats. AI-driven anomaly detection can identify suspicious behaviour and data loss patterns, adversarial AI can be deployed to test enterprise AI systems for vulnerabilities, and monitoring can be stepped up to catch AI-generated phishing and assess the effectiveness of controls. As AI-enabled cyber attacks accelerate beyond human response capabilities, automated monitoring and defensive systems are necessary to prevent exploitation of vulnerabilities at machine speed.
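
As a minimal illustration of anomaly detection on security telemetry, the sketch below applies scikit-learn’s IsolationForest to hypothetical outbound-transfer features; the feature set and contamination rate are illustrative assumptions rather than tuned values.

```python
# Minimal sketch of AI-driven anomaly detection on outbound-transfer
# telemetry, using scikit-learn's IsolationForest. Features and the
# contamination rate are illustrative assumptions, not tuned values.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row: [bytes_out_mb, distinct_destinations, off_hours_fraction]
normal = rng.normal(loc=[50, 3, 0.1], scale=[10, 1, 0.05], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of large off-hours transfers to many destinations -- the kind
# of pattern a slow exfiltration campaign might produce.
suspect = np.array([[900, 40, 0.9]])
print(model.predict(suspect))  # -1 flags an anomaly, 1 means inlier
```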

Clearly, the rise of AI-powered nation state threats demands a proactive and strategic response from security leaders. By adopting AI governance frameworks, enforcing strict data governance, securing supply chains, and leveraging AI-driven threat detection, enterprises can strengthen their defences against industrial espionage.

Elisabeth Mackay is a cyber security expert at PA Consulting

Originally published at ECT News
