California state Senator Scott Wiener has proposed a bill to regulate artificial intelligence, including mandatory testing of large models and built-in emergency off-switches. The legislation underscores the need for preemptive action on the ethical and cybersecurity challenges posed by AI advancement.
California state Senator Scott Wiener has proposed the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act to regulate artificial intelligence (AI) within the state. Introduced on February 14, 2024, the bill seeks to establish the Frontier Model Division within the California Department of Technology. This new division would ensure the mandatory testing of large AI models before their deployment and require AI systems to have built-in emergency off-switches and hacking protections.
Senator Wiener emphasized the importance of preemptive action to mitigate safety risks related to AI technologies. While AI holds promise for improving government digital services and efficiency, its rapid advancement also brings new ethical and cybersecurity challenges.
The proposed legislation does not specify how the Frontier Model Division would coordinate with existing state AI policies or with officials such as statewide Chief Information Officer Liana Bailey-Crimmins. Both the California Department of Technology and the California Government Operations Agency have declined to comment on the bill.
If enacted, the bill could not only strengthen AI regulation in California but also influence policy across the nation.