The AI for Good Global Summit took place in Geneva on 8 July 2025. Credit: ITU/Rowan Farrell
The Summit brought together governments, tech leaders, academics, civil society, and young people to explore how artificial intelligence can be directed toward the Sustainable Development Goals (SDGs) – and away from growing risks of inequality, disinformation, and environmental strain, according to the UN.
“We are the AI generation,” said Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU) – the UN’s specialized agency for information and communications technology – in a keynote address. But being part of this generation means more than just using these technologies. “It means contributing to this whole-of-society upskilling effort, from early schooling to lifelong learning,” she added.
ABUJA, Nigeria, Aug 14, 2025 (IPS) – Artificial Intelligence (AI) is reshaping the world at a speed we’ve never seen before. From helping doctors detect diseases faster to customizing education for every student, AI holds the promise of solving many real-world problems. But along with its benefits, AI carries a serious risk: discrimination.
As the global body charged with protecting human rights, the United Nations—especially the UN Human Rights Council and the Office of the High Commissioner for Human Rights (OHCHR)—has a unique role to play in ensuring AI is developed and used in ways that are fair, inclusive, and just.
The United Nations must declare AI equity a Sustainable Development Goal (SDG) by 2035, backed by binding audits for member states. The stakes are high. A 2024 Stanford study warns that if AI bias is left unchecked, 45 million workers could lose access to fair hiring by 2030, and 80 percent of those affected would be in developing countries.
The Promise—and Peril—of AI
At its core, AI is about using computer systems to solve problems or perform tasks that require human intelligence. Algorithms drive the systems that make this possible—sets of instructions that help machines make sense of the world and act accordingly.
But there’s a catch: algorithms are only as fair as the data they are trained on and the humans who designed them. When the data reflects existing social inequalities, or when developers overlook diverse perspectives, the result is biased AI. In other words, AI that discriminates.
Take, for example, facial recognition systems that perform poorly on people with darker skin tones; hiring tools that favor male candidates because they were trained on data from male-dominated industries; or a LinkedIn verification system that accepts only NFC-enabled national passports, which the majority of Africans do not yet possess. These are more than technical glitches; they are human rights issues.
What the UN Has Already Said
The UN is not starting from scratch. The OHCHR has already sounded the alarm. In its 2021 report on the right to privacy in the digital age, the OHCHR warned that poorly designed or unregulated AI systems can lead to violations of human rights, including discrimination, loss of privacy, and threats to freedom of expression and thought.
The report asked powerful questions we must keep asking—questions that go to the heart of how AI will shape our societies and who will benefit or suffer as a result. UNESCO, another UN agency, has also taken a bold step by adopting the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument of its kind. This recommendation emphasizes fairness, accountability, and transparency in AI development, and calls for banning AI systems that pose a threat to human rights.
The Danger of Biased Data
A major driver of AI discrimination remains biased data. Many AI systems are trained on historical data—data that often reflects past inequalities. If a criminal justice algorithm is trained on data from a system that has historically over-policed Black communities, it will likely continue to do so.
Even well-meaning developers can fall into this trap. If the teams building AI systems lack diversity, they may not recognize when an algorithm is biased or may not consider how a tool could impact marginalized communities. That’s why it’s not just about better data—it’s also about better processes, better people, and better safeguards.
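The mechanism is easy to see in miniature. The sketch below uses entirely hypothetical data and a deliberately naive scoring rule, not any real vendor's system: a "model" fit on skewed historical hiring records simply learns the skew and hands it back as a score.

```python
# Toy illustration with hypothetical data: a naive scoring model fit on
# biased historical hiring records reproduces the bias in those records.
from collections import Counter

# Historical hires: 90 from group "A", 10 from group "B" --
# a record of past inequality, not of merit.
historical_hires = ["A"] * 90 + ["B"] * 10

# The "model" scores an applicant by how often their group appears
# among past hires (pure base-rate learning, no safeguards).
counts = Counter(historical_hires)
total = sum(counts.values())

def score(group: str) -> float:
    """Return the fraction of past hires from this group."""
    return counts[group] / total

# Two equally qualified applicants receive very different scores,
# purely because of the groups recorded in the training data.
print(score("A"))  # 0.9
print(score("B"))  # 0.1
```

Nothing in the code inspects qualifications at all; the disparity comes entirely from the data it was given, which is the core of the argument above.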
In one of the most significant AI discrimination cases moving through the courts, the plaintiff alleges that Workday’s popular AI-based applicant recommendation system violated federal anti-discrimination laws because it had a disparate impact on job applicants based on race, age, and disability.
Judge Rita F. Lin of the US District Court for the Northern District of California ruled in July 2024 that Workday could be an agent of the employers using its tools, which subjects it to liability under federal anti-discrimination laws. This landmark decision means that AI vendors, not just employers, can be held directly responsible for discriminatory outcomes.
In another case, University of Washington researchers found significant racial, gender, and intersectional bias in how three state-of-the-art large language models ranked résumés, favoring white-associated names over equally qualified candidates with names associated with other racial groups.
The financial impact is staggering. A 2024 DataRobot survey of over 350 companies revealed that 62% lost revenue due to AI systems that made biased decisions—proving that discriminatory AI isn’t just a moral failure, but also a business disaster.
What the UN Can—and Must—Do
To prevent AI discrimination, the UN must lead by example and work with governments, tech companies, and civil society to establish global guardrails for ethical AI.
This could include:
Clear Guidelines: Building on UNESCO’s Recommendation and OHCHR’s findings, set rules for inclusive data collection, transparency, and human oversight.
Inclusive Participation: Create a Global South AI Equity Fund to involve diverse voices in AI policy-making.
Human Rights Impact Assessments: Require evaluations before AI tools are rolled out.
Accountability Mechanisms: Establish an AI Accountability Tribunal within the OHCHR.
Digital Literacy: Promote global education on AI and rights.
Intersectional Audits: Test for combined biases like race, gender, and disability.
AI is not inherently good or bad—it is a tool. Its impact depends on how it is used. If not carefully managed, it could deepen inequalities and create new forms of discrimination. But with human rights at its core, AI can uplift rather than exclude.
Ahead of the UN General Assembly meeting in September, the call is clear: declare AI equity a Sustainable Development Goal by 2035, with binding audits for all member states. The future of AI—and human dignity—depends on it.
Author: Gift Nwammadu is a Mastercard Foundation Scholar at the University of Cambridge, where she is pursuing an MPhil in Public Policy with a focus on inclusive innovation, gender equity, and youth empowerment. A Youth for Sustainable Energy Fellow and Aspire Leader Fellow, she actively bridges policy and grassroots action. Her work on systemic barriers to inclusive development has been published by the African Policy and Research Institute.