Senators Mark Warner and Thom Tillis introduced the Secure Artificial Intelligence Act to establish an AI Security Center and strengthen defenses against "counter-AI" attacks. The bill mandates the tracking of AI security breaches and the development of preventive techniques.

A new legislative proposal, the Secure Artificial Intelligence Act, has been introduced in the U.S. Senate to enhance the security of AI systems by tracking breaches. Senators Mark Warner (D-VA) and Thom Tillis (R-NC) presented the bill, which calls for the establishment of an Artificial Intelligence Security Center at the National Security Agency. This center would spearhead research into "counter-AI" techniques and develop measures to defend against such attacks.

The legislation directs the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) to maintain a database of AI breaches and near-misses. It categorizes counter-AI methods into data poisoning, evasion attacks, privacy-based attacks, and abuse attacks. Data poisoning involves inserting corrupted data into an AI model's training set, while evasion attacks subtly alter inputs to make a trained model misclassify them.
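To make the distinction between the two most-discussed categories concrete, here is a minimal illustrative sketch using a toy linear classifier (the classifier, weights, and numbers are hypothetical examples, not drawn from the bill; real attacks target far more complex models):

```python
def predict(weights, x):
    """Classify x as 1 if the weighted sum is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def evasion_attack(weights, x, budget=0.5):
    """Evasion: nudge each feature against the model's weights so the
    perturbed input crosses the decision boundary (a sign-step sketch)."""
    return [xi - budget * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

def poison_labels(dataset, fraction=0.3):
    """Data poisoning: corrupt a fraction of training labels so a model
    trained on the tampered set learns the wrong decision boundary."""
    n_poisoned = int(len(dataset) * fraction)
    return [(x, 1 - y) if i < n_poisoned else (x, y)
            for i, (x, y) in enumerate(dataset)]

weights = [1.0, -0.5]
clean_input = [0.4, 0.2]
adversarial = evasion_attack(weights, clean_input)

print(predict(weights, clean_input))   # 1 (original classification)
print(predict(weights, adversarial))   # 0 (small perturbation flips it)
```

The key difference: poisoning corrupts a model *before* training, while evasion fools an already-trained model at inference time.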

The Biden administration’s AI executive order had previously instructed NIST to set up “red-teaming” guidelines, where developers intentionally probe AI systems for vulnerabilities before public release. Companies like Microsoft have developed tools to facilitate the addition of safety measures to AI projects.

The Secure Artificial Intelligence Act must first clear committee review before it can be considered by the full Senate.


Ivan Massow, Senior Editor at AI WEEK. A lifelong entrepreneur, Ivan has worked at Cambridge University's Judge Business School and the Whittle Lab, nurturing talent and transforming innovative technologies into successful ventures.
