In a bid to strengthen the cybersecurity landscape, the United States federal government is calling on AI developers to treat security as an essential prerequisite. Pointing to the particular challenges AI systems pose, the government warns that fixing machine learning code after deployment is difficult and costly. The Cybersecurity and Infrastructure Security Agency (CISA) stresses that AI must be secure by design, in line with its ongoing campaign to build security into the core of design and development.
While stopping short of specific legislative proposals, CISA's recent blog post points to past research underscoring the complexity of machine learning. Acknowledging that AI has distinctive security requirements, the post lists fundamental, universally applicable security practices that remain relevant to AI software. That list has gained urgency as threat actors have repeatedly compromised AI systems by targeting vulnerabilities in their non-AI software components.
CISA reiterates that AI software should be developed, deployed, tested, and managed using security practices already endorsed by the wider community. It gives special attention to systems that process AI model file formats, urging strict protection against untrusted code execution and advocating the use of memory-safe programming languages. The agency also calls for assigning vulnerability identifiers, capturing a comprehensive software bill of materials for AI models and their dependencies, and applying fundamental privacy principles by default.
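CISA's warning about model file formats reflects a well-known risk: several common AI model formats are built on Python's pickle serialization, which can execute arbitrary code when a file is loaded. The sketch below (a minimal illustration, not taken from CISA's post; the `MaliciousPayload` class is hypothetical) shows the underlying mechanism: unpickling invokes an object's `__reduce__` method, so merely loading an untrusted "model" file can run attacker-supplied code.

```python
import pickle

class MaliciousPayload:
    """Hypothetical stand-in for a booby-trapped model file."""
    def __reduce__(self):
        # On unpickling, Python calls the returned callable with
        # the given arguments -- here a harmless print, but an
        # attacker could return os.system or similar.
        return (print, ("code executed during model load!",))

# Serialize the object, as if saving a "model" to disk.
blob = pickle.dumps(MaliciousPayload())

# Simply loading the blob triggers the embedded callable:
pickle.loads(blob)
```

This is why the guidance favors formats that store only tensor data (with no executable component) and memory-safe languages for the parsers that handle such files.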
As AI redefines industries, the call to prioritize security underscores a collective effort to fortify the digital landscape against evolving threats. This initiative signals a proactive approach towards security, positioning it at the forefront of AI innovation.