As concerns over artificial intelligence (AI) grow worldwide, both the European Union and California have taken groundbreaking steps to implement comprehensive AI regulation frameworks. These initiatives, which come amid increasing scrutiny over the ethical and safety implications of AI technologies, mark significant milestones in global AI governance.
On August 1, 2024, the EU's Artificial Intelligence Act (AI Act) entered into force as the world's first comprehensive regulatory framework for AI systems. The Act introduces a risk-based approach to AI regulation, classifying AI tools according to the potential risks they pose and setting clear legal obligations for each category: practices deemed to carry unacceptable risk are prohibited outright, while high-risk systems face the most stringent requirements. General-Purpose AI (GPAI) models, which are capable of performing a wide variety of tasks, are subject to dedicated transparency obligations, with additional requirements for models deemed to pose systemic risk. Notable examples of GPAI systems include large-scale models like GPT-4 and image generators such as DALL-E.
The EU's AI Act takes a phased approach, with the ban on prohibited AI practices set to take effect in February 2025. Further rules governing GPAI models and high-risk AI systems will follow in 2025 and 2026. By calibrating obligations to each category's risk profile, the EU aims to ensure that AI innovations are harnessed responsibly while mitigating potential dangers to public safety.
Meanwhile, in California, the state legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) in late August 2024. The bill mandates strict safety measures for developers of "frontier" AI models, defined by the significant computational resources and cost required for training, on the grounds that such models could be misused in harmful ways, such as the creation of cyberweapons or the disruption of critical infrastructure. SB 1047 introduces annual third-party audits, incident reporting requirements, and whistleblower protections to ensure AI developers prioritize safety and accountability.
California's SB 1047, which is now awaiting approval from Governor Gavin Newsom, would come into effect in 2026. The legislation mirrors the EU's risk-based approach but is narrower in scope, focusing on high-powered AI systems rather than a broader classification of AI models. The bill has divided Silicon Valley: some tech industry figures, including Elon Musk, have backed the legislation, while others warn it could stifle innovation.
Together, these legislative moves by the EU and California signal a growing global consensus on the need for strong regulatory frameworks to manage the risks associated with AI. Both regions are setting the stage for other governments to follow suit, as the world grapples with the rapid development of AI technologies and their far-reaching impacts.