
UN agencies call for urgent, child-centered AI rules as tech outpaces protections

Global push for child-centered AI governance

The International Telecommunication Union (ITU) and a coalition of UN agencies have issued a strong call for rapid, child-centered governance of artificial intelligence (AI). As AI becomes embedded in classrooms, messaging apps, social platforms and search tools, experts warn that protections for children have not kept pace with technological advances. The coordinated appeal emphasizes that safeguarding children’s rights must be at the core of any AI policy, design and deployment, whether in education, health or digital safety.

Key pillars of the proposed framework

Experts outline a set of core components for child-centered AI governance:

  • Data protection and privacy: robust safeguards to minimize data collection about children, with clear explanations of how data is used.
  • Transparency and explainability: user-friendly disclosures about AI tools, their purposes, and the potential risks they pose to young users.
  • Safety and protection from harm: proactive measures to prevent exposure to inappropriate content, manipulation or targeted abuse.
  • Consent and control: age-appropriate consent processes and easy-to-use settings that empower families and schools.
  • Digital literacy and empowerment: curricula and resources that help children understand AI, data rights and online safety.
  • Accountability and governance: clear responsibility for developers, platforms and governments, with independent oversight and remedies for breaches.

Education sector at the center

AI is increasingly used to personalize learning, grade assessments and support administrative tasks in schools. The UN points to potential benefits, including tailored instruction, early intervention for students with learning gaps and more efficient administration. Yet it warns that without child-centered policies, AI could exacerbate inequalities, invade privacy or expose minors to manipulation, misinformation and unsafe online environments.

Practical steps for policymakers

To transform principles into practice, the UN agencies recommend several concrete actions:

  • Adopt national AI guidelines for children: integrate child rights into all AI-related policy frameworks.
  • Establish independent oversight bodies: monitor compliance, audit data practices and enforce penalties for violations.
  • Mandate privacy-by-design: require safety and privacy features in all educational AI tools from the outset.
  • Support inclusive access: ensure that AI-enabled educational resources are accessible to all students, including those with disabilities.
  • Foster international cooperation: align standards across borders to manage cross-border data flows and AI services used by children globally.

Balancing innovation with protection

The push from ITU and its UN partners recognizes that AI can unlock new opportunities in learning, health and youth engagement. However, the same technology raises complex questions about consent, bias, and the long-term impact of algorithmic systems on developing minds. The statement urges governments to balance the push for innovation with steadfast safeguards, ensuring that children’s rights are never an afterthought in the race to deploy the latest tools.

What comes next

With the call to action underscoring the need for urgent, child-centered frameworks, policymakers, educators and tech developers are urged to collaborate. The aim is a harmonized set of guidelines that are practical to implement, adaptable to different national contexts and capable of evolving as AI technologies mature. Stakeholders hope these measures will lay the groundwork for safer, more equitable digital environments where children can learn, connect and grow—without compromising their rights or wellbeing.