On May 21, 2024, the Council of the European Union adopted the world’s first comprehensive law on AI, formally known as the Artificial Intelligence Act, which aims to regulate the development and use of AI in a way that is ethical and transparent and that respects fundamental rights.

Brief Recap of the AI Act

The European Union’s AI Act was first proposed by the European Commission on April 21, 2021. The draft initially sparked extensive debate among EU member states, industry stakeholders, and civil society, leading to numerous revisions and amendments intended to balance innovation with regulation. Throughout 2022 and 2023, the proposal was refined through consultations, expert input, and negotiations within the European Parliament and the Council. It was approved by the European Parliament on March 13, 2024, and formally adopted by the Council on May 21, 2024.

The primary goal of the Act was to create a comprehensive regulatory framework for artificial intelligence, aiming to ensure AI technologies were developed and used in ways that were safe, ethical, and in alignment with European values. This proposal came in response to the rapid advancements in AI and the need to address potential risks associated with its deployment.

The initial proposal outlined a risk-based approach to AI regulation, categorizing AI systems into different risk levels: unacceptable risk, high risk, limited risk, and minimal risk. It also included requirements for transparency, human oversight, and robustness for certain high-risk AI applications.

The adoption of the AI Act positioned the EU as a global leader in AI regulation, setting a precedent for other regions and countries to follow. The Act’s implementation aimed to foster trust in AI technologies while safeguarding the rights and safety of individuals, promoting ethical AI development and usage across member states.

Enactment and Commencement

The text will officially become law once it is signed by the Presidents of the European Parliament and of the Council. The law will enter into force 20 days after its publication in the Official Journal of the European Union and will be fully applicable 24 months after that date, with the following exceptions:

  1. Bans on prohibited practices will apply 6 months after the law takes effect.
  2. Codes of practice will be enforced 9 months after the law takes effect.
  3. General-purpose AI rules, including governance, will be applicable 12 months after the law takes effect.
  4. Obligations for high-risk systems will come into effect 36 months after the law takes effect.
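As a rough illustration of this schedule, the sketch below computes each milestone from an assumed entry-into-force date. The date used (August 1, 2024) is a placeholder assumption; the actual dates depend on when the text is published in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here because we start on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Assumed entry-into-force date, used purely for illustration.
entry_into_force = date(2024, 8, 1)

milestones = {
    "bans on prohibited practices (6 months)": add_months(entry_into_force, 6),
    "codes of practice (9 months)": add_months(entry_into_force, 9),
    "general-purpose AI rules (12 months)": add_months(entry_into_force, 12),
    "full applicability (24 months)": add_months(entry_into_force, 24),
    "high-risk system obligations (36 months)": add_months(entry_into_force, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when.isoformat()}")
```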


Applicability of the AI Act

The AI Act directly affects businesses operating within the EU, whether they are providers (i.e. those developing the systems), users (referred to as “deployers”), importers, distributors, or manufacturers of AI systems.

The Act will have a significant impact on businesses and employers by introducing new rules for the use of artificial intelligence. Companies deploying high-risk AI systems will face higher compliance costs and documentation requirements to ensure these systems are safe and transparent. They will also need to ensure that humans can oversee and intervene in AI decisions. Businesses will have the opportunity to test new AI technologies in controlled environments with regulatory support, which can help them innovate safely.

Small businesses and startups will get extra help to comply with the rules. While meeting these requirements might be challenging, it can also build consumer trust and give companies a competitive edge. However, failing to follow the rules could lead to hefty fines and legal issues.

The Act also provides exemptions, for example for AI systems used exclusively for military and defense purposes, as well as for systems used solely for research purposes.

Salient features of the AI Act

Risk-Based Classification:

Unacceptable Risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people, such as social scoring by governments, are prohibited.

High Risk: AI systems used in critical areas like employment, education, law enforcement, and essential public and private services must meet stringent requirements regarding risk management, data governance, transparency, and human oversight.

Limited Risk: AI systems that involve limited risk, such as chatbots, must adhere to transparency obligations to inform users they are interacting with an AI system.

Minimal Risk: AI systems with minimal risk are mostly exempt from additional requirements, encouraging innovation in low-risk areas.
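As a purely illustrative sketch (not legal advice), the snippet below shows how a compliance team might keep an internal inventory that maps example AI use cases to the Act’s four risk tiers. The use cases and tier assignments are assumptions chosen for illustration; a real classification must follow the Act’s annexes and expert assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # stringent obligations apply
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely exempt

# Hypothetical, illustrative mapping only.
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def risk_tier(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier recorded for a known use case."""
    return EXAMPLE_CLASSIFICATION[use_case]

for case in EXAMPLE_CLASSIFICATION:
    print(f"{case}: {risk_tier(case).value}")
```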

Transparency and Human Oversight:

High-risk AI systems must provide clear information about their functioning and limitations.

Users must be informed when they are interacting with an AI system.

Provisions to ensure human oversight, allowing humans to intervene or override decisions made by AI systems.
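To make these obligations concrete, here is a minimal, hypothetical sketch of a chatbot wrapper that discloses the use of AI to the user and flags conversations for human review. The function names and the escalation rule are illustrative assumptions, not requirements taken from the text of the Act.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are interacting with an automated AI assistant, not a human."

@dataclass
class ChatTurn:
    text: str
    needs_human_review: bool = False  # hook for human oversight and override

def respond(user_message: str, first_turn: bool) -> ChatTurn:
    """Hypothetical wrapper: disclose AI use up front and flag turns for escalation."""
    reply = f"Echo: {user_message}"             # placeholder for a real model call
    if first_turn:
        reply = AI_DISCLOSURE + "\n" + reply    # transparency: inform the user
    escalate = "human" in user_message.lower()  # crude trigger for human intervention
    return ChatTurn(text=reply, needs_human_review=escalate)

print(respond("Please connect me to a human", first_turn=True))
```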

Data and Governance Requirements:

High-risk AI systems must utilize high-quality datasets to minimize risks and biases.

Requirements for record-keeping and logging to ensure traceability of AI system decisions.

Obligations to maintain and provide documentation to demonstrate compliance.
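The record-keeping obligation can be illustrated with a small logging sketch. The snippet below writes a structured, timestamped record for each AI-assisted decision so that it can be traced later; the field names (system_id, model_version, human_reviewer, and so on) are assumptions made for illustration rather than fields prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_audit")

def log_ai_decision(system_id: str, model_version: str, input_summary: str,
                    decision: str, human_reviewer: str | None = None) -> None:
    """Write a structured, timestamped record of an AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,
        "decision": decision,
        "human_reviewer": human_reviewer,  # evidence of human oversight
    }
    logger.info(json.dumps(record))

# Hypothetical usage
log_ai_decision(
    system_id="cv-screening-tool",
    model_version="2024.05.1",
    input_summary="candidate application #4821",
    decision="shortlisted",
    human_reviewer="hr.officer@example.com",
)
```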

Regulatory Sandboxes:

Creation of controlled environments where businesses can test innovative AI systems under regulatory supervision to foster innovation while ensuring compliance.

Market Surveillance and Enforcement:

Establishment of authorities at both EU and national levels to monitor and enforce compliance. Penalties for non-compliance, including significant fines, similar to the General Data Protection Regulation (GDPR).

Support for Innovation:

Measures to support small and medium-sized enterprises (SMEs) and startups, including reduced regulatory burdens and access to regulatory sandboxes.

Encouragement of public-private partnerships and funding initiatives for research and development in AI.

Harmonization and International Cooperation:

A unified framework that aligns AI regulations across all EU member states.

Provisions for international cooperation to align with global standards and promote the EU’s approach to trustworthy AI worldwide.

Banned Applications

The new regulations prohibit specific AI applications that pose risks to citizens’ rights, such as biometric categorization systems based on sensitive traits and the indiscriminate collection of facial images from the internet or CCTV footage for facial recognition databases. Additionally, emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling or characteristic assessment, and AI designed to manipulate human behavior or exploit vulnerabilities are also banned.


Consequences of non-compliance

Non-compliance with the prohibited AI practices shall be subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance with certain other provisions relating to operators or notified bodies shall be subject to administrative fines of up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
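As a worked example of the “whichever is higher” rule, the sketch below computes the upper bound of each fine for a hypothetical undertaking; the turnover figure is an assumption chosen only to show when the percentage cap exceeds the fixed amount.

```python
def max_fine_prohibited_practices(annual_turnover_eur: float) -> float:
    """Cap for prohibited-practice breaches: EUR 35 million or 7% of turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

def max_fine_other_breaches(annual_turnover_eur: float) -> float:
    """Cap for the other listed breaches: EUR 15 million or 3% of turnover, whichever is higher."""
    return max(15_000_000, 0.03 * annual_turnover_eur)

# Hypothetical undertaking with EUR 2 billion worldwide annual turnover
print(max_fine_prohibited_practices(2_000_000_000))  # -> 140000000.0 (7% exceeds EUR 35 million)
print(max_fine_other_breaches(2_000_000_000))        # -> 60000000.0  (3% exceeds EUR 15 million)
```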

Conclusion

In conclusion, the EU AI Act represents a crucial milestone in regulating the use of artificial intelligence, ensuring it aligns with ethical standards and fundamental rights. By categorizing AI systems based on risk levels and imposing stringent requirements, the Act aims to foster trust in AI technologies while safeguarding individuals’ rights and safety. Its significance lies in providing a comprehensive framework for AI governance, thereby promoting responsible innovation and ensuring a level playing field for businesses. Moving forward, the Act will play a pivotal role in shaping the future of AI, balancing innovation with regulation to create a safer and more transparent AI landscape for businesses and society as a whole.


Disclaimer: This is an effort by Lexcomply.com to contribute towards improving the compliance management regime. Users are advised not to construe this service as legal opinion and to seek the views of subject-matter experts.
