
CustomGPT.ai Blog

June 19th is Juneteenth in the U.S. – How Does AI Impact Social Justice?

Juneteenth, June 19th, commemorates the end of slavery in the United States. Also known as Emancipation Day, it marks the day freedom came, by executive decree, to the roughly 250,000 enslaved people of Texas, when 2,000 Union troops arrived in Galveston Bay.

The Emancipation Proclamation took effect in 1863, declaring freedom for enslaved people in Confederate-held territories, but it was not until June 19th, 1865, that the order was enforced in Texas, the last Confederate state with institutional slavery. This event contributed to the eventual end of slavery in the United States, which was legally abolished with the ratification of the 13th Amendment on December 6, 1865.

Today, despite generations of progress, human rights issues and violations persist around the globe. The campaign for social justice, where everyone’s human rights are respected and protected and there are equal rights and opportunities for all, continues. 

Will AI Impact Social Justice? 

AI is described as a “keystone” technology, which means it will permeate and influence every aspect of our lives. It’s being used to benefit humanity in every sphere, from combating climate change and its impacts to developing new medicines and improving workplaces. AI’s benefits for science in research, analysis, modeling, and forecasting are unparalleled. Applied for the benefit of all, for social good, AI could make all our lives better. 

In social justice research, education, and protection, AI can help identify and highlight issues and inequalities. However, the dangers of AI include ethical concerns, bias, discrimination, and even historical inaccuracies, as well as potential threats to humanity. 

“Artificial intelligence (AI) has great potential to benefit society, but the technology’s full potential can only be realized if it is representative of the diversity of populations it impacts throughout every step of its development.”

“A Blueprint for Equity and Inclusion in Artificial Intelligence,” World Economic Forum (WEF).

AI’s Use Cases for Social Good

McKinsey says that as AI advances, its "potential to address social issues defined by UN Sustainable Development Goals expands":

“In fact, AI is already being used to further all 17 UN Sustainable Development Goals (SDGs)—from the goal of eliminating poverty to establishing sustainable cities and communities and providing quality education for all.”

The firm adds that generative AI “opens up more possibilities.” It published a report in 2018 outlining 170 use cases to benefit society, including for equality and inclusion. Its latest findings cover 600 use cases. 

The AI for Good Global Summit 2024, held in Geneva in May, focused on human interaction with AI but highlighted that a third of humanity, lacking internet access, is excluded from the AI revolution. The summit also shared examples from the AI for Good Innovation Factory, among them a startup producing affordable prosthetics for amputees, including children. The startup uses a smartphone for scanning, brain-controlled technology, and simplified fitting, eliminating the need to travel to a hospital.

At the summit, UN Secretary-General António Guterres noted AI’s capability to revolutionize agriculture, housing, and disaster management and how AI could deliver education and healthcare to remote areas. 

He added that “AI could be a game-changer for the Sustainable Development Goals (SDGs).” But he also cautioned that AI’s full potential requires addressing its risks, including bias, misinformation, and security threats. 

Google is working with the UN to use data and AI to track progress towards the UN’s SDGs and to map it globally. Its initiative, Data Commons, brings together web pages, images, maps, videos, and now publicly available data from government statistical organizations and bodies like the World Bank and the United Nations. The platform synthesizes data into single graphs and is freely available to everyone, including students, researchers, non-profits, and policymakers.

Addressing AI’s Ethical and Bias Problem

For decades, UNESCO’s mandate has been to ensure that science and technology develop with strong ethical guardrails. Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO, says that without ethical guardrails for AI, “it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms.” Ramos says AI has created many opportunities but adds:

“These rapid changes also raise profound ethical concerns. These arise from the potential AI systems have to embed biases, contribute to climate degradation, threaten human rights, and more. Such risks associated with AI have already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups.”

UNESCO produced the first global standard on AI ethics, ‘Recommendation on the Ethics of Artificial Intelligence,’ in 2021. This framework, adopted by 193 countries, has the protection of human rights and dignity as its “cornerstone.” It recommends transparency and fairness and stresses the importance of human oversight of AI systems. It also includes extensive policy action areas. 

UN Secretary-General António Guterres says AI must never stand for “Advancing Inequality,” and outlines UN initiatives and recommendations including that developing countries need technical assistance and investments to participate in and benefit from the AI revolution. 

Universities and research institutions worldwide are working to highlight and address bias and discrimination in AI, including the University of Toronto’s Artificial Intelligence for Justice Lab and Stanford University’s Human-Centered Artificial Intelligence. 

The American Bar Association (ABA) points to the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” released by the White House in 2022, and to Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued October 30, 2023. It also describes EU restrictions on some AI systems that pose risks to basic rights and freedoms. The EO observes that AI has deepened discrimination and bias in some settings and that irresponsible deployments have intensified existing inequalities. The ABA Task Force on Law and Artificial Intelligence, launched in 2023, addresses the legal challenges of AI and their ethical implications.

AI will certainly impact social justice and social good, and it’s clear there are many high-level efforts to address its risks, including the risk that it exacerbates inequality and discrimination. AI regulation is evolving; businesses and organizations will need to be ready for it, and each must also individually commit to responsible AI for social good.

Frequently Asked Questions

Is AI a social justice issue?

Yes. AI becomes a social justice issue when it changes who gets access to information, services, and opportunity. Barry Barresi, a social impact consultant, describes one constructive use this way: “Powered by my custom-built Theory of Change AIM GPT agent on the CustomGPT.ai platform. Rapidly Develop a Credible Theory of Change with AI-Augmented Collaboration.” That is the upside: AI can help organizations plan and scale social-good work. The risk is that AI can also reinforce exclusion if it reflects bias, discrimination, or incomplete histories rather than the diversity of the communities it affects.

How does AI affect social justice in education?

MIT’s Martin Trust Center made entrepreneurship knowledge available 24/7 in 90+ languages with zero reported hallucinations. Doug Williams, Product Lead at the Martin Trust Center for MIT Entrepreneurship, said: “For the Martin Trust Center for MIT Entrepreneurship, we needed a Generative AI platform that would provide trustworthy responses based on our own data. We chose the CustomGPT solution because of its scalable data ingestion platform which enabled us to bring together knowledge of entrepreneurship across multiple knowledge bases at MIT.” In education, that matters because access improves when students can get reliable help outside class hours and in more languages, not only when an instructor is available live.

Can AI improve social equity for underserved small businesses?

AI can improve equity for underserved small businesses when it lowers the cost of customer support, knowledge sharing, and online engagement. Evan Weber described that practical benefit this way: “I just discovered CustomGPT, and I am absolutely blown away by its capabilities and affordability! This powerful platform allows you to create custom GPT-4 chatbots using your own content, transforming customer service, engagement, and operational efficiency.” The social-justice value is not that AI removes inequality by itself, but that affordable and usable tools can give smaller teams more capacity to serve customers and compete.

How can nonprofits use AI for social media without spreading misinformation?

Use AI only with vetted sources and keep a human review step for sensitive claims. Elizabeth Planet, Nonprofit Leadership Coach & Advisor, said: “I added a couple of trusted sources to the chatbot and the answers improved tremendously! You can rely on the responses it gives you because it’s only pulling from curated information.” For nonprofits sharing information about rights, services, or policy, the safest workflow is to ground the system in trusted materials, require citations or source checks, and have a staff member review posts before publication.

Can AI make government services fairer?

It can, but only if access does not become digital-only. AI can make routine public information easier to find and available at any time, which may help people get answers faster. At the same time, the AI for Good Global Summit highlighted that a third of humanity still lacks internet access, so fairness requires more than automation. The strongest approach is to use AI for routine questions while preserving phone, in-person, accessibility, and human-escalation options for complex or sensitive cases.

What are the biggest bias risks when AI is used for social justice work?

The biggest risks are biased source material, missing community representation, and overconfident answers in high-stakes areas such as education, benefits, housing, or civil rights. One published benchmark found that CustomGPT.ai outperformed OpenAI on RAG accuracy, which suggests grounding answers in approved sources can reduce factual error. But accuracy alone does not remove bias. The World Economic Forum warns AI reaches its full social value only when it is representative of the diversity of the populations it affects throughout development, so teams still need inclusive source selection and human review.
