Higor Barbosa

EU AI ACT: A Skeptical View on the Regulation of Artificial Intelligence (AI)



Artificial Intelligence (AI) is a technology with the potential to transform various aspects of human life, from health and education to the economy and security. However, along with these opportunities, AI also poses risks and challenges, including violations of fundamental rights, discrimination, behavioral manipulation, and threats to individual autonomy.


In light of these considerations, it is crucial to establish a legal and regulatory framework that ensures the ethical, safe, and responsible development and use of AI, respecting the values and principles of society. In this regard, the European Union (EU) has emerged as a global leader in AI regulation, introducing in April 2021 a legislative proposal called the EU AI Act. This proposal aims to create a set of harmonized rules for AI within the economic bloc.


After an extensive process of public consultation and negotiation among European institutions, this regulatory framework reached a political agreement on December 8, 2023. While it still needs approval from the European Parliament and the EU Council before becoming law, the EU AI Act already represents a historic milestone in AI governance in Europe and worldwide.


Key Objectives and Impacts of the EU AI Act

The primary goal of the EU AI Act is to ensure that AI is developed and used in accordance with the values and fundamental rights of the EU, such as human dignity, democracy, equality, the rule of law, and respect for diversity. To achieve this, it establishes an asymmetric approach based on a risk assessment matrix, categorizing AI applications into four groups: prohibited, high-risk, limited-risk, and minimal-risk (a short illustrative sketch of this tiering follows the list below).


- Prohibited Applications: These are applications incompatible with EU values and principles, such as the use of AI in manipulative techniques or social scoring systems. Such applications are banned in the EU, subject to severe penalties for violations.


- High-Risk Applications: These involve AI that can cause significant harm to the rights and freedoms of individuals or to public safety, covering sectors such as infrastructure, transportation, health, and education. They are subject to specific obligations, including a prior risk assessment, data quality and safety assurance, transparency, human oversight and intervention, as well as monitoring, auditing, and registration in a European database of high-risk AI systems.


- Limited-Risk Applications: This category covers cases where AI may have a moderate impact on the rights and freedoms of individuals, such as chatbots and other systems that interact directly with people or generate synthetic content. These applications are subject to transparency obligations, informing users about the nature and objectives of the AI.


- Minimal-Risk Applications: Applications like games or virtual assistants are exempt from specific obligations but must adhere to general data protection, consumer protection, and competition rules.
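Purely as an illustration, and not as part of the Act itself, the four-tier logic described above can be sketched as a small triage helper. The tier names follow the Act, but the example use cases, the mapping, and the obligations_for function below are hypothetical simplifications of the obligations summarized in this article.

from enum import Enum

class RiskTier(Enum):
    # The four tiers of the EU AI Act's risk-based approach
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of example use cases to tiers, loosely following the
# categories discussed above; the Act defines these in its text and annexes,
# not in a lookup table like this.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.PROHIBITED,
    "subliminal manipulation": RiskTier.PROHIBITED,
    "exam scoring in education": RiskTier.HIGH,
    "medical triage support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game NPC behavior": RiskTier.MINIMAL,
}

def obligations_for(tier: RiskTier) -> list[str]:
    # Very rough summary of the obligations mentioned in this article
    if tier is RiskTier.PROHIBITED:
        return ["banned from the EU market"]
    if tier is RiskTier.HIGH:
        return [
            "prior risk assessment",
            "data quality and safety assurance",
            "transparency and human oversight",
            "monitoring, auditing and registration in the EU database",
        ]
    if tier is RiskTier.LIMITED:
        return ["transparency: inform users they are interacting with AI"]
    return ["general data protection, consumer protection and competition rules"]

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.value}")
        for duty in obligations_for(tier):
            print(f"  - {duty}")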


Overall, the EU AI Act aims to foster a trustworthy AI ecosystem in Europe by encouraging the development and use of ethical, safe, and responsible AI applications that contribute to social well-being, economic growth, and EU competitiveness. At the same time, it seeks to protect the rights and freedoms of people affected by AI, guaranteeing their access to information, explanations, control mechanisms, and avenues for redress.


Moreover, it is important to emphasize that this regulatory framework also aims to set a global standard for AI regulation, serving as a reference and inspiration for other countries and regions seeking to adopt similar measures.


Challenges and Ethical Implications of the EU AI Act

Despite representing a step forward, the EU AI Act faces challenges and ethical implications that require attention. One of these challenges is keeping the regulation relevant and adaptable in a rapidly evolving AI innovation landscape: given the constant and dynamic nature of technological change, the rules must be periodically reviewed and updated on the basis of evidence and public consultation.


Another challenge is ensuring the effective implementation and rigorous enforcement of the EU AI Act, relying on cooperation and coordination among various actors and levels of governance. To address this, the EU AI Act proposes the establishment of a European AI body to oversee, guide, and support the enforcement of rules throughout the EU, in collaboration with national supervisory authorities.


Regarding ethical implications, one concern is algorithmic bias, since AI systems can reproduce or amplify prejudice, stereotypes, and discrimination. The EU AI Act seeks to mitigate this bias by requiring high-quality and diverse data, transparency, oversight, and periodic monitoring for high-risk AI applications.


Another ethical implication is ensuring transparency and accountability in AI systems, which are crucial for user and stakeholder trust, understanding, and participation. To this end, the EU AI Act aims to strengthen institutional accountability by requiring that high-risk and limited-risk AI applications be accompanied by labels, instructions, and documentation informing users about their features and risks.


Implications of the EU AI Act for Brazil

Brazil, like the EU, has made significant efforts toward AI regulation, as demonstrated by the bill under discussion in the National Congress (PL 2338/2023), which shares similarities with the EU AI Act. Both adopt a risk-based approach, prohibiting the use of AI in situations of excessive risk and identifying critical sectors.


On the other hand, the Brazilian bill presents differences and gaps compared to the EU AI Act. For example, it contains no provision explicitly addressing the use of AI in automated content recommendation systems, a limited-risk application covered by the EU AI Act. Another gap is the absence of a clear definition of how the rules will be implemented and enforced, and of how coordination among actors and levels of governance will work; these aspects are crucial for the effectiveness and consistency of the regulation.


However, it is important to note that, despite distinct characteristics and specificities, the EU AI Act can serve as inspiration and reference for Brazil, facilitating dialogue and cooperation in the areas of innovation and technology. This collaboration can promote the exchange of knowledge, strengthening the convergence and harmonization of rules and standards. Strategic cooperation of this kind can thus benefit both parties in global AI governance, advancing common values and encouraging other countries and regions to adopt best practices.
