Governor Gavin Newsom vetoed a landmark AI safety bill on Sunday, citing fears from the tech industry that it would discourage innovation and drive AI firms out of California. The governor criticized the bill for imposing stringent standards on even the most basic AI functions, regardless of how they are used.
In his veto message, Newsom noted that the legislation did not account for whether an AI system is deployed in high-risk environments or handles sensitive data. He stressed the need for workable safeguards and said he had enlisted leading generative AI experts to help the state build a sound framework for AI governance. He also directed state agencies to assess the risks of catastrophic events involving AI.
Democratic State Senator Scott Wiener authored the bill, which would have required safety testing for advanced AI models and established a regulatory body for frontier models, defined as systems that demand substantial computing power or cost more than $100 million to develop. He argued the legislation was necessary to protect the public from the rapid pace of AI development.
Wiener and other critics of the veto said the decision leaves the public exposed to harm, since voluntary commitments from the industry have tended to fall short. He argued it would be unwise to wait for a major catastrophe before acting.
Despite the veto, Newsom pledged to work with lawmakers on future AI regulation. He acknowledged that California may need to chart its own course, given that federal efforts to establish AI safeguards have stalled.
Tech industry groups, including the Chamber of Progress, praised the veto, asserting that California’s tech economy thrives on competition and innovation. Major companies like Google, Microsoft, and Meta had voiced opposition to the bill, while proponents included Tesla CEO Elon Musk and Amazon-backed Anthropic.
In a separate move, Newsom signed legislation requiring the state to evaluate potential threats posed by generative AI to California’s critical infrastructure, signaling ongoing efforts to address AI-related risks.