As of Sunday, February 2, the European Union has begun enforcing its AI Act, giving regulators the authority to ban AI systems deemed to pose an “unacceptable risk” to individuals or society. The date marks the first compliance deadline under the sweeping regulatory framework, which the European Parliament passed in March 2024 and which entered into force on August 1, 2024.

The AI Act categorizes AI systems into four risk levels:

  1. Minimal Risk – AI applications such as email spam filters, which face no regulatory oversight.
  2. Limited Risk – AI such as customer service chatbots, which is subject to light transparency obligations.
  3. High Risk – AI used in critical sectors such as healthcare and finance, which is subject to strict oversight.
  4. Unacceptable Risk – AI applications that are prohibited outright under the new compliance rules.

Among the banned AI use cases are:

  • AI-powered social scoring systems that assess individuals based on their behavior.
  • AI tools that manipulate decisions through subliminal or deceptive tactics.
  • AI that exploits vulnerabilities related to age, disability, or socioeconomic status.
  • AI that attempts to predict criminal activity based on appearance.
  • AI that infers personal characteristics like sexual orientation from biometric data.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with the narrow exceptions noted below).
  • AI that analyzes emotions in workplaces or schools.
  • AI systems that build or expand facial recognition databases using online images or surveillance footage.

Companies that violate these prohibitions face steep penalties: fines of up to €35 million (~$36 million) or 7% of global annual revenue, whichever is higher.
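To put that cap in perspective, consider a hypothetical company with €1 billion in global annual revenue: its maximum fine would be €70 million, since 7% of revenue exceeds the €35 million floor, whereas a smaller firm would face the flat €35 million ceiling instead.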

What Comes Next?

Although organizations are expected to be compliant as of February 2, full enforcement will not begin until August 2025, according to Rob Sumroy, head of technology at the law firm Slaughter and May. “By then, regulatory bodies will be in place, and fines will officially be imposed,” he noted.

Industry Response

Ahead of the compliance deadline, more than 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact, a voluntary pledge to begin applying the AI Act’s principles ahead of schedule. Notably, Meta, Apple, and French AI startup Mistral did not sign. Even so, legal experts expect most companies to comply with the Act’s prohibitions, since the banned AI applications are not widely used in mainstream business operations.

Possible Exemptions

While strict, the AI Act does allow for exceptions in certain cases:

  • Law enforcement may use biometric AI in public spaces for targeted searches, such as finding abducted persons or preventing imminent threats to life. However, regulatory approval is required, and AI alone cannot be used to make legally binding decisions.
  • Emotion-detecting AI may be permitted in workplaces and schools if used for medical or safety purposes, such as therapeutic applications.

The European Commission has promised additional guidelines in early 2025 to clarify enforcement and exemptions, but these have yet to be published. Experts also warn of potential legal complexities as the AI Act interacts with other EU regulations, including GDPR, NIS2, and DORA, particularly concerning overlapping compliance and incident reporting requirements.

As AI regulation takes shape, businesses operating in the EU will need to navigate this evolving legal landscape to ensure compliance while maintaining innovation.