SoBigData Articles

Children's Vulnerability in the EU AI Act

Children interact with AI technologies through a variety of devices, such as smart toys, virtual assistants, video games, and adaptive learning platforms. While these innovations offer significant advantages, they also present substantial risks to children's privacy, safety, and overall well-being due to children's inherent vulnerability, which stems from their "lack of psychophysical maturity and corresponding legal incapacity"[1].

EU Regulation 2024/1689 on AI emphasizes the protection of vulnerable groups interacting with AI systems[2], including children, by addressing their specific needs within the digital environment. To this end, the so-called AI Act incorporates key international standards, including Article 24 of the Charter of Fundamental Rights of the European Union and the United Nations Convention on the Rights of the Child, notably General Comment No. 25 on children's rights in the digital environment, and it includes crucial provisions designed to address age-related vulnerabilities. These references establish a framework for child protection in AI governance and delineate the responsibilities of developers towards younger users.

HIGHLIGHTS 

  • Recognition of children's rights: The final text explicitly acknowledges children as a distinct vulnerable group deserving of specialized protection (Recital 28). This is a notable enhancement over earlier drafts: it has been observed that, during its development, the AI Act shifted from a product-safety-focused approach to one oriented towards the protection of fundamental rights[3].
  • Prohibition on exploiting age-related vulnerabilities: Article 5(1)(b) aims to prevent the cognitive manipulation of the behavior of vulnerable individuals, including children. To this end, the AI Act prohibits the placing on the market or use of AI systems that exploit such vulnerabilities (e.g., manipulative toys and applications designed to materially distort user choices and cause significant harm).
  • High-risk classification for educational AI: AI systems applied in educational settings are classified as high-risk and must be subject to rigorous management and oversight, with particular emphasis on protecting children’s rights (Annex III, Article 6).
  • Consideration of vulnerability for high-risk systems: Article 7(h) incorporates vulnerability as a factor for updating the list of high-risk AI systems, including age along with other criteria like power imbalances, status, authority, knowledge, and socio-economic conditions.
  • Transparency requirements: The Act mandates that AI-generated content, including deepfakes, must be clearly disclosed and labeled, ensuring that users, particularly minors, are aware of its artificial nature (Recitals 133 and 134).
  • Ongoing monitoring and compliance: The Act establishes mechanisms for continuous assessment and enforcement to ensure compliance with regulations and mitigate negative impacts on vulnerable individuals (Articles 7(h), 27, 79(2), 90).
  • Protection in regulatory sandboxes: Within the framework of AI regulatory sandboxes, individuals in vulnerable conditions, including children, must be “adequately protected” (Article 60).

 

IMPACT

The EU AI Act marks a significant leap in AI regulation by specifically addressing the vulnerabilities of young users and introducing rigorous safeguards for digital safety. However, several challenges remain. A key concern is the regulation of deepfake technology. Although the Act mandates transparency and labeling for AI-generated content, including deepfakes, these measures may not be sufficient to mitigate the significant psychological harm children may nonetheless suffer, especially when deepfakes are used for harmful purposes such as gender-based violence and cyberbullying[4]. Additionally, practical concerns exist regarding the enforcement of bans on exploitative AI practices and the effectiveness of risk management systems designed to protect children's rights. Ensuring the effective implementation and enforcement of the vulnerability protections outlined in Article 5 will be crucial[5]. To effectively address the complex challenges posed by AI technologies and ensure robust protection, in particular for children, it is essential that the Commission develop detailed guidelines that fully integrate the principles of the United Nations Convention on the Rights of the Child and General Comment No. 25.


📅 Join us on October 29 for the Awareness Panel: Regulating data and AI in the age of EU digital reforms, where we will further discuss this topic! 

 

[1] For a general overview: D. Amram, Children (in the digital environment), in Elgar Encyclopaedia of Law and Data Science, G. Comandé (dir.), Elgar, 2022, p. 155 ff.

[2] For an overview of the concept of vulnerability within the AI Act: G. Malgieri, Human vulnerability in the EU Artificial Intelligence Act, Oxford University Press blog.

[3] S. Lindroos-Hovinheimo, Children and the Artificial Intelligence Act: Is the EU Legislator Doing Enough?, European Law Blog, 2024.

[4] In this sense: Centre for Digital Governance, The false promise of transparent deepfakes: How transparency obligations in the draft AI Act fail to deal with the threat of disinformation and image-based sexual abuse, 2022; N. Kurian, EU AI Act: how well does it protect children and young people?, CFI Blog Series, University of Cambridge, 2024.

[5] 5Rights Foundation, EU adopts AI Act with potential to be transformational for children's online experience.