Categories: Human Rights & Technology Policy

CHR Advocates AI that Upholds Workers’ Rights and Democratic Values

Introduction: A Call for Human-Centric AI Governance

The Commission on Human Rights (CHR) of the Philippines has issued a clear warning: the widespread deployment of artificial intelligence (AI) must not erode workers’ rights or fundamental freedoms. In its position paper, the CHR emphasizes that while AI offers numerous benefits—from efficiency to innovation—its development and deployment must be anchored in human rights, democratic principles, and international standards.

Key Rights at Risk and the CHR’s Concerns

As AI technologies increasingly influence daily life and employment, concerns about security of tenure, protection against unemployment, privacy, non-discrimination, and freedom of expression come to the fore. The CHR notes that constitutional protections, data privacy laws, and cybercrime statutes must guide AI governance. The right to work, enshrined in Article 6 of the International Covenant on Economic, Social and Cultural Rights (ICESCR), remains a cornerstone of the state's responsibilities. The CHR urges that AI systems not undermine people's ability to earn a living through freely chosen work.

A Human Rights-Based Framework for AI

The CHR argues for a foundational Magna Carta for Responsible AI—an overarching framework that consolidates AI-related legislative measures. This proposed charter would be rooted in human rights principles, require transparency, ensure multi-stakeholder participation, and align with international standards. In essence, AI should enhance dignity, freedom, equality, and other fundamental rights rather than erode them.

Democratic, Ethical, and Legal Anchors

To fulfill its promise, AI governance must be anchored in democratic values and ethical considerations. The CHR highlights a need to balance innovation with protections for privacy, data security, and equal treatment. This approach aims to prevent discrimination in algorithmic decision-making and safeguard freedom of expression—rights that are central to a functioning democracy and are enshrined in national laws and constitutional protections.

What This Means for Policy and Practice

The CHR's position paper outlines concrete steps for policymakers, technology developers, and employers. First, legislation and regulatory guidance should embed a human rights-based approach to AI, ensuring that systems are transparent and explainable, respect workers' rights, and do not compromise job security without just cause. Second, the Magna Carta for Responsible AI would consolidate disparate AI measures into a single, coherent framework, streamlining oversight and accountability. Third, engagement with civil society, labor groups, industry, and the public is essential to reflect diverse perspectives and protect vulnerable workers who may be disproportionately affected by automation.

International and National Obligations

As a State Party to the ICESCR, the Philippines bears a responsibility to safeguard the right to gain a living through freely chosen work. The CHR’s emphasis on aligning AI policy with international human rights standards reinforces the country’s obligation to uphold dignity and equal opportunity in the face of rapid technological change. It also underscores the necessity of enforcing existing laws—such as the Data Privacy Act and the Cybercrime Prevention Act—while avoiding overreach that could stifle innovation.

Conclusion: Toward Responsible, Rights-Respecting AI

AI is a powerful tool for progress, but its benefits must be shared equitably and responsibly. The CHR’s recommendations aim to ensure that AI development does not infringe workers’ rights or suppress essential freedoms. By committing to a Magna Carta for Responsible AI and embedding human rights at every stage of governance, the Philippines could build a model where technology advances with dignity, equality, and the protection of fundamental rights for all workers.