California Passes Groundbreaking AI Safety Law

California has officially stepped into the AI regulation arena with the passage of SB 53, a landmark law signed by Governor Gavin Newsom that sets new transparency and safety standards for large artificial intelligence companies. This first-of-its-kind legislation not only targets the biggest players in the AI space, like OpenAI, Meta, Anthropic, and Google DeepMind, but also signals a major shift in how emerging technologies will be governed at the state level.

At its core, SB 53 requires large AI developers to disclose their safety protocols and provides whistleblower protections for employees within those companies. It also mandates that companies report “critical safety incidents” to the California Office of Emergency Services. These incidents could include AI-powered cyberattacks or autonomous deceptive behavior, both of which fall outside the scope of the European Union’s AI Act, making California’s law one of the most comprehensive to date.

Not surprisingly, the bill has stirred debate in the tech world. OpenAI and Meta actively lobbied against SB 53, with OpenAI even publishing an open letter urging the governor not to sign it. Their concern? A fragmented, state-by-state approach to AI regulation could stifle innovation. But supporters, including Anthropic, argue that basic safety and accountability are not optional in the age of autonomous systems.

The passage of SB 53 also comes at a time when some tech leaders are investing heavily in federal and local political campaigns aimed at keeping AI regulation minimal. As major companies pour money into pro-AI super PACs, California is charting a different course, one that prioritizes public safety and transparency over unfettered growth.

The bill’s sponsor, Senator Scott Wiener, substantially revised this version of the legislation after Governor Newsom vetoed his broader AI safety bill, SB 1047, last year. This time around, Wiener actively engaged with major tech companies to revise the language and bring stakeholders to the table. The result is a bill that many see as a reasonable first step toward regulating a rapidly evolving industry without stalling innovation.

Another bill, SB 243, which focuses on AI companion chatbots, is currently awaiting the governor’s decision. If signed into law, it would require chatbot operators to implement safety protocols and would hold them legally accountable if their AI companions fall short of those standards. Together, these bills suggest a broader shift in California’s approach to AI governance, ethics, and accountability.

For creators, technologists, and employers in California, the message is clear: the era of AI self-policing is coming to a close. Regulatory compliance, ethical development, and proactive risk management are no longer just good practices; they are becoming legal imperatives.

Need help navigating AI-related compliance or building policies for your organization’s use of emerging technology?

Let’s talk. 

ARS Counsel provides strategic legal guidance for innovators shaping the future. Contact us today for a consultation.

Almuhtada Smith