Major tech firms launch coalition for AI security standards

The goal is to fortify the security and integrity of AI systems by establishing and disseminating standardized frameworks, methodologies, and tools for secure-by-design AI development.
Coalition for Secure AI

The Coalition for Secure AI (CoSAI) was officially launched today at the Aspen Security Forum. Hosted by the global standards body OASIS, CoSAI is an open-source initiative that provides practitioners and developers with the necessary guidance and tools to create AI systems that are secure by design. This initiative seeks to address the fragmented landscape of AI security by fostering a collaborative ecosystem that shares methodologies, standardized frameworks, and tools.

CoSAI is backed by a diverse range of stakeholders, including industry giants like Google, IBM, Intel, Microsoft, NVIDIA, and PayPal. These founding Premier Sponsors and other founding Sponsors, such as Amazon, Anthropic, Cisco, and OpenAI, signify a strong commitment across the industry to enhance AI security and build trust among global stakeholders.

The primary focus of CoSAI is to develop comprehensive security measures that address both classical and unique risks associated with AI systems. These risks include model theft, data poisoning, prompt injection, scaled abuse, and inference attacks. The initiative aims to create robust best practices and standardized approaches to mitigate these vulnerabilities.

David LaBianca of Google, CoSAI Governing Board co-chair, highlights the need of this initiative: “CoSAI’s establishment was rooted in the necessity of democratizing the knowledge and advancements essential for the secure integration and deployment of AI. With OASIS Open’s help, we look forward to continuing this work and collaboration among leading companies, experts, and academia.”

Omar Santos of Cisco, another CoSAI Governing Board co-chair, emphasizes the coalition’s collaborative spirit: “We are committed to collaborating with organizations at the forefront of responsible and secure AI technology. We aim to eliminate redundancy and amplify our collective impact through key partnerships focusing on critical topics.”

Initially, CoSAI will focus on three primary workstreams:

  • Software Supply Chain Security for AI Systems: This workstream will enhance composition and provenance tracking to secure AI applications.
  • Preparing Defenders for a Changing Cybersecurity Landscape: This will address investments and integration challenges in AI and classical systems.
  • AI Security Governance: Developing best practices and risk assessment frameworks for AI security.

These workstreams are designed to evolve, with plans to introduce additional focus areas over time. CoSAI aims to create a well-rounded and thorough approach to AI security by involving a wide range of experts from industry and academia.

Participation in CoSAI is open to all, and the initiative welcomes additional sponsorship support from companies involved in AI and cybersecurity.

Posted by Alex Ivanovs

Alex is the lead editor at Stack Diary and covers stories on tech, artificial intelligence, security, privacy and web development. He previously worked as a lead contributor for Huffington Post for their Code column.