The European Parliament has adopted a landmark law known as the Artificial Intelligence Act, marking a significant step towards regulating artificial intelligence (AI) within the European Union. This legislation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also fostering innovation and establishing Europe as a leader in the field of AI. The Act was endorsed by MEPs with 523 votes in favor, 46 against, and 49 abstentions, reflecting a broad consensus on the importance of balancing the benefits of AI with the need to safeguard against its potential risks.
The Artificial Intelligence Act is designed to ensure safety and compliance with fundamental rights, while also boosting innovation. It establishes obligations for AI based on its potential risks and level of impact. The regulation bans certain applications that threaten citizens' rights, including biometric categorization based on sensitive characteristics, emotion recognition in the workplace and schools, predictive policing based solely on profiling a person, and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. These measures are aimed at safeguarding human rights and privacy.
The Act also introduces transparency and risk-management rules for AI systems. General-purpose AI (GPAI) systems and their underlying models must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. More powerful GPAI models that could pose systemic risks will face additional requirements, such as performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Artificial or manipulated images, audio, or video content (“deepfakes”) must be clearly labeled as such.
To support AI innovation, the Act promotes regulatory sandboxes, which are controlled environments established by public authorities for testing AI before its deployment. These sandboxes will be accessible to SMEs and start-ups, allowing them to develop and train innovative AI technologies before they are placed on the market.
The Act also includes provisions for the right of consumers to launch complaints and receive meaningful explanations about decisions made by high-risk AI systems that significantly impact their rights. This is part of a broader effort to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
The legislation is expected to enter into force twenty days after its publication in the Official Journal and will be fully applicable 24 months after its entry into force, with exceptions: bans on prohibited practices apply six months after entry into force, codes of practice after nine months, general-purpose AI rules including governance after 12 months, and obligations for high-risk systems after 36 months.
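The staggered timeline above can be laid out concretely. The sketch below computes the milestone dates from a given entry-into-force date; the date used here is a hypothetical placeholder for illustration, since the actual date depends on publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, assumed for illustration only;
# the real date is twenty days after Official Journal publication.
entry_into_force = date(2024, 8, 1)

# Transition periods from the Act's staggered application schedule.
milestones = {
    "bans on prohibited practices": 6,
    "codes of practice": 9,
    "general-purpose AI rules": 12,
    "full applicability": 24,
    "obligations for high-risk systems": 36,
}

for name, months in milestones.items():
    print(f"{name}: applies from {add_months(entry_into_force, months)}")
```

The point of the calculation is that compliance deadlines differ by obligation: a provider of a prohibited system has only six months, while some high-risk obligations allow three years.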
The Artificial Intelligence Act represents a significant milestone in the EU’s efforts to regulate AI, aiming to ensure that the technology is developed and used in a way that is safe, ethical, and beneficial for society. It reflects the EU’s commitment to balancing the potential benefits of AI with the need to protect fundamental rights and ensure that the technology is used responsibly.
Will the Act be enforced by the EU or by member states?
The enforcement of the Artificial Intelligence Act (AI Act) will be primarily carried out by national competent market surveillance authorities in each Member State of the European Union (EU). This approach ensures that the Act is enforced at the national level, allowing for a tailored application of the regulations to the specific contexts and legal frameworks of each country. The European AI Office, established within the Commission, will play a crucial role in overseeing the AI Act’s enforcement and implementation across the member states. Its mission includes fostering collaboration, innovation, and research in AI, as well as engaging in international dialogue and cooperation on AI issues. This dual enforcement mechanism, combining national authorities with a centralized European office, aims to create a cohesive and effective framework for the regulation of AI technologies within the EU.
The European AI Office will also be responsible for setting standards, coordinating enforcement efforts, and ensuring that the Act is implemented in a manner that respects human dignity, rights, and trust. This dual enforcement approach is designed to address the challenges of ensuring consistency and compliance across the EU, which has been a concern with previous regulations such as the General Data Protection Regulation (GDPR). The European AI Office will work alongside national authorities to ensure that the AI Act is effectively enforced, with the European AI Board serving as a coordination platform and advisory body to the Commission.
The AI Act introduces a strict enforcement regime, with national authorities designated by each EU Member State supervising compliance within their territories. The European Artificial Intelligence Board, composed of representatives from member states, will ensure consistent application of the law. The Act also outlines significant fines for non-compliance, ranging from 7.5 million euros or 1% of a company's total worldwide annual turnover up to 35 million euros or 7% of turnover, whichever is higher, depending on the infringement and the size of the company. These fines are designed to deter non-compliance and ensure that entities adhere to the Act's requirements.
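Because each fine cap is the higher of a fixed amount or a percentage of worldwide annual turnover, the effective maximum grows with company size. The sketch below illustrates that arithmetic using the commonly cited tiers; the figures and tier names are assumptions for illustration and should be checked against the statutory text, not treated as legal advice.

```python
# Commonly cited maximum-fine tiers under the AI Act (assumed for
# illustration): (fixed cap in EUR, share of worldwide annual turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # banned AI applications
    "other_obligation": (15_000_000, 0.03),       # most other violations
    "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
}

def max_fine(infringement: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed cap or the turnover-based cap."""
    fixed_cap, pct = FINE_TIERS[infringement]
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with EUR 1 billion turnover committing a prohibited practice:
# 7% of turnover (EUR 70M) exceeds the EUR 35M fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))
```

Note that for smaller companies the fixed cap dominates, so the turnover-based ceiling mainly bites for large firms.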
In short, enforcement of the AI Act will be a collaborative effort between the EU and its member states, with the European AI Office coordinating and overseeing implementation, balancing national adaptability against the goal of a unified regulatory framework for AI technologies.
How will the Act affect the development and deployment of AI technology?
The Artificial Intelligence Act (AI Act) will significantly impact the development and deployment of AI technology within the European Union (EU) by introducing a comprehensive regulatory framework aimed at ensuring that AI technologies are developed and used responsibly. This legislation will have several key effects on AI development and deployment:
- Promotion of Responsible AI Development: The AI Act will encourage the development of AI technologies that are safe, transparent, and accountable. It will require AI developers to adhere to strict ethical guidelines and transparency requirements, ensuring that AI systems are developed with human dignity, rights, and trust in mind. This will likely lead to a shift in the AI development industry towards more ethical and responsible practices.
- Enhanced Transparency and Accountability: The Act introduces measures to increase transparency and accountability in AI systems. This includes requirements for AI developers to disclose summaries of the content used for training and other key information to users and other stakeholders, as well as to report serious incidents and AI system failures. These measures are designed to build trust in AI systems and ensure that they are developed and used in a way that is transparent and accountable to all stakeholders.
- Protection of Fundamental Rights and Privacy: The AI Act aims to protect fundamental rights and privacy from the risks associated with high-risk AI applications. This includes bans on certain AI applications, such as biometric surveillance and predictive policing AI systems, and measures to protect against the indiscriminate scraping of biometric data. These protections will likely lead to a reevaluation of the types of AI applications that are developed and deployed, with a focus on those that do not infringe on individuals’ rights and privacy.
- Regulatory Sandboxes for Innovation: The Act promotes the establishment of regulatory sandboxes, which are controlled environments for testing AI technologies before they are deployed. These sandboxes will provide a safe space for innovation, allowing AI developers to experiment with new technologies without the risk of immediate enforcement of the AI Act. This will encourage innovation in AI technology while ensuring that any new technologies are developed and tested responsibly.
- Consumer Protection: The AI Act includes provisions for the right of consumers to launch complaints and receive meaningful explanations about decisions made by high-risk AI systems. This will ensure that consumers have a voice in how AI technologies are developed and used, and that they are protected from the potential risks associated with these technologies.
- Enforcement and Compliance: The Act introduces a strict enforcement regime, with significant fines for non-compliance. This will ensure that AI developers and operators adhere to the Act’s requirements, and that there are consequences for those who do not. The enforcement of the Act will be carried out by national competent market surveillance authorities in each Member State, with the European AI Office playing a central role in overseeing the Act’s enforcement and implementation across the member states.
In summary, the AI Act will have a profound impact on the development and deployment of AI technology within the EU, promoting responsible AI development, enhancing transparency and accountability, protecting fundamental rights and privacy, encouraging innovation through regulatory sandboxes, ensuring consumer protection, and introducing a strict enforcement regime for compliance. These measures are designed to ensure that AI technologies are developed and used in a way that is safe, ethical, and beneficial for society.