NSA comes up with best practices for rolling out secure AI systems

According to the agencies, attackers targeting AI systems can exploit attack vectors unique to those systems as well as conventional cyberattack techniques.

On April 15, 2024, the US National Security Agency’s Artificial Intelligence Security Center (NSA AISC), in collaboration with the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom, unveiled a comprehensive guide (available as a PDF) for securely deploying artificial intelligence (AI) systems. The cybersecurity information sheet aims to help organizations enhance the security of externally developed AI systems.

Titled “Joint Guidance on Deploying AI Systems Securely,” the document was produced as part of an international effort to safeguard AI technologies from sophisticated and conventional cyber threats. It outlines a series of best practices critical to protecting AI systems against an array of attack vectors.

The guidance emphasizes securing the infrastructure that supports AI systems, rigorously testing AI models, and implementing automated rollback processes. It advocates a cautious approach to integrating new AI models into operational settings, urging organizations to check thoroughly for exposed APIs, actively monitor AI behavior, and routinely perform security audits and penetration testing.
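As one illustration of the exposed-API checks the guidance calls for, the minimal sketch below probes hosts for model-serving endpoints that answer without credentials. The hostnames, ports, and paths are assumptions chosen for demonstration (common TorchServe and TensorFlow Serving defaults plus a generic health path), not drawn from the document itself:

```python
"""Minimal sketch: probe hosts for model-serving APIs that answer without
credentials. All hostnames, ports, and paths are illustrative assumptions,
not taken from the joint guidance."""
import requests

# Assumed default endpoints: TorchServe's health check, TensorFlow Serving's
# REST port, and a generic health path. Adapt to your own asset inventory.
CANDIDATE_ENDPOINTS = [
    ("http://{host}:8080/ping", "TorchServe"),
    ("http://{host}:8501/v1/models/default", "TensorFlow Serving"),
    ("http://{host}:8000/health", "generic inference API"),
]


def scan_host(host: str, timeout: float = 3.0) -> list[str]:
    """Return a description of each endpoint reachable without auth."""
    findings = []
    for template, label in CANDIDATE_ENDPOINTS:
        url = template.format(host=host)
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException:
            continue  # closed port, connection refused, or timeout
        # A 2xx answer with no credentials supplied suggests an exposed API;
        # a 401/403 means the endpoint exists but at least challenges callers.
        if resp.status_code < 300:
            findings.append(f"{label} reachable without auth at {url}")
    return findings


if __name__ == "__main__":
    # Hypothetical internal hosts running inference services.
    for host in ("ml-staging.internal.example", "ml-prod.internal.example"):
        for finding in scan_host(host):
            print("WARNING:", finding)
```

In practice, findings from a probe like this would feed into the security audits and penetration tests the guidance recommends running on a routine basis.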

An excerpt from the PDF document provided by CISA.

Recognizing the complexity and varied nature of AI systems, the guidance advises tailoring security measures to the specific needs and threats of each environment. This includes sourcing AI systems from suppliers that follow a “secure by design” philosophy and prioritize the security of their products.

The NSA AISC and its partners have highlighted the dynamic nature of AI security. They note that as new vulnerabilities are identified and exploitation techniques evolve, ongoing updates and adjustments to AI systems and their security practices will be necessary.

This initiative builds upon previous documents such as “Guidelines for Secure AI System Development” and “Engaging with Artificial Intelligence,” which also focus on establishing robust security frameworks for AI technologies. CISA has encouraged organizations utilizing AI to consult these guidelines to better understand and implement the necessary security measures.