The European Union’s artificial intelligence law, the first of its kind in the world, officially came into effect on Thursday, marking a significant step in the bloc’s efforts to regulate the technology.
The Artificial Intelligence Act aims to protect the “fundamental rights” of citizens in the 27-nation bloc and promote investment and innovation in the AI industry.
The AI Act is a comprehensive rulebook for governing AI in Europe. It can also serve as a guide for other governments trying to regulate the rapidly advancing technology.
The AI Act applies to any product or service that uses artificial intelligence in the EU, whether it comes from a big tech company or a local startup. Its obligations are scaled across four levels of risk, and most AI systems are expected to fall into the low-risk category, such as content recommendation systems or spam filters.
“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” European Commission Executive Vice President Margrethe Vestager said. “With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”
The measures take effect in stages, and Thursday’s entry into force starts the clock on a phased rollout over the next few years.
AI systems that are considered too risky, like social scoring, certain predictive policing, and emotion recognition in schools and workplaces, will be completely banned by February.
New rules for general-purpose AI models like OpenAI’s GPT-4 system will be in effect by August 2025.
Brussels is setting up a new AI Office that will act as the bloc’s enforcer of the general-purpose AI rules.
OpenAI said in a blog post that it’s “committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented.”
By mid-2026, all regulations related to high-risk AI, such as algorithms determining loan approvals or operating autonomous robots, will be enforced.
Systems deemed to pose only a limited risk make up another category; they still must meet transparency obligations: chatbots have to be clearly identified as machines, and AI-generated content such as deepfakes must be properly labeled.
Companies that don’t comply with the rules face fines worth as much as 7% of their annual global revenue.