University Institutes

Navigating AI Ethics: EU's Regulatory Approach and Theoretical Foundations

Author: Gabriel Rodríguez Molina

Universidad Complutense de Madrid-ICCA


Format: In-person


Abstract:

The attention surrounding AI development is tied to numerous widely publicized cases and debates, ranging from algorithms influencing election outcomes and the spread of online misinformation to the ethical dilemmas posed by self-driving cars, as well as research advances in genomic sequencing and drug discovery.


Given the diverse challenges posed by such an uncertain technology and the complexity of the domain, regulators have recognized the urgency of responding to it (Justo-Hanani, 2022). Policymakers, regulators, governments, and public authorities worldwide have found themselves increasingly confronted with these impacts (Council of Europe, 2017), prompting calls for the development and use of AI to be situated "within the bounds of our fundamental norms, values, freedoms, and human rights" (European Economic and Social Committee, 2017).


This presentation aims to examine the institutional response within the European Union by analyzing its theoretical foundations and the ethical principles and dilemmas presented in, and derived from, documents published by the EU and the AI Act (AIA). We will look at the initial draft ethics guidelines published by the High-Level Expert Group (HLEG) on behalf of the European Commission (HLEGAI, 2018); the subsequent Ethics Guidelines for Trustworthy AI (HLEGAI, 2019), a text explicitly embraced by the AIA; the HLEG's Policy and Investment Recommendations for Trustworthy AI; and the White Paper on AI - A European Approach to Excellence and Trust (European Commission, 2020), a Commission document that set out the risk-based approach to AI and its associated policies (Floridi, 2021). The primary focus of the analysis, however, is the final AI Act regulation, an initiative aimed at promoting human-centric and trustworthy AI, protecting against the potentially harmful effects of AI-enabled systems, and fostering innovation (Wagner et al., 2024).