Minority-led and BIPOC-centered artificial intelligence (AI) models stand in the gap to reduce digital inequities in emerging technology and biases within large language models. As of early 2023, generative AI models have reemerged as a viral point of interest among technologists and non-technologists alike, thanks to the recent launch of OpenAI’s GPT-4 language model and interactive chatbot. In fact, we used GPT-4 to co-generate portions of this very blog (a testament to the collaborative possibilities of emerging technology rather than the fear of replacement).
The fact of the matter is that artificial intelligence capabilities are significant to the future (and present) of work, and underrepresented business communities can no longer afford to ignore the impact of these emerging technologies. Yet AI is not new, and neither is the GPT series, which launched initially in 2018 with several iterations to follow.
Discrimination, racism, and bias are not new for BIPOC communities either.
So, in this blog, we spotlight minority-led and BIPOC-centered AI models and organizations that are changing the face of the industry but tend to receive less mainstream attention (largely due to lack of funding). On top of developing disruptive AI models, these same organizations often practice ethical AI that both reduces discrimination and centers the voices and lived experiences of BIPOC people.
But first…
How can generative AI models be biased?
Natural language processing (NLP) models use data sets and machine learning to generate output, so much so that the output mirrors that of a human being. The trouble is that many of the same negative social practices holding back an equitable future are replicated within the data sets that produce NLP output.
According to the study Responsible AI in Africa, “AI bias occurs when an algorithm’s output becomes prejudiced due to false assumptions based on the data fed into it (Silberg, 2019).”
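The mechanism is simple enough to sketch in a few lines of Python. The toy corpus and group names below are invented for illustration: a "model" that does nothing more than count word associations in a skewed data set will faithfully reproduce that skew in its predictions, which is the essence of the bias Silberg describes.

```python
from collections import Counter

# Hypothetical toy corpus: a skewed data set in which "group_b"
# co-occurs with a negative word far more often than "group_a".
corpus = [
    "group_a is friendly",
    "group_a is friendly",
    "group_a is kind",
    "group_b is dangerous",
    "group_b is dangerous",
    "group_b is kind",
]

# Count which word follows each group term -- a crude stand-in for the
# word-association statistics a real NLP model learns from its data.
associations = {}
for sentence in corpus:
    subject, _, adjective = sentence.split()
    associations.setdefault(subject, Counter())[adjective] += 1

def most_likely_word(subject):
    """Return the most frequent association -- the 'model' output."""
    return associations[subject].most_common(1)[0][0]

print(most_likely_word("group_a"))  # friendly
print(most_likely_word("group_b"))  # dangerous
```

No malicious rule was written anywhere in that code; the prejudiced output falls directly out of the lopsided data it was fed, which is exactly why curating training data matters.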
Here are 5 recent Western examples of AI-driven bias, and the BIPOC change makers who called them out:
GPT-3’s bias against Muslims. Abubakar Abid, machine learning team lead at Hugging Face, conducted a workshop in 2020 to test earlier iterations of GPT. He found that the model was 10 times more likely to mention violence about and toward Muslims than about other religious groups.
Social media algorithms. During the pandemic, TikTok was flagged for limiting the promotion of Black creators’ content, while Meta’s advertising practices remain in question for discrimination and blatant racism, such as when Meta’s AI labeled a video featuring Black men as ‘primates.’
Anti-Black facial recognition. In 2023, Randall Reid was mistakenly arrested and jailed after the Baton Rouge Police Department used facial recognition technology to identify him. Reid was later found to be roughly 40 lbs. lighter than the actual suspect in the video footage.
Fintech discrimination against LatinX and Black homeowners. A 2019 empirical study drew attention to discriminatory consumer lending against LatinX and Black households in fintech, finding that these borrowers may be profiled in weaker competitive environments and targeted with higher mortgage rates.
Ousting of Dr. Timnit Gebru. In 2020, Dr. Timnit Gebru, former co-lead of Google’s ethical AI team, was allegedly fired for publishing scholarly research that called out the risks of large language models and the bias of search algorithms against women and people of color.
Moreover, your favorite search engines, like Google (powered by BERT) and Bing (powered by Microsoft’s Turing Natural Language Generation, or T-NLG, model), produce results based on the NLP models driving them and the way people interact with and create content. Therefore, if a data set is limited, or a mass audience’s biases validate some content over other information, AI models begin to perpetuate those same biases. The examples above are great illustrations of this.
Thus, as problems arise from growth in new industries, new solutions abound to mitigate the impact.
What is a minority-led AI model?
Minority-led AI models are developed by individuals within underrepresented communities, such as BIPOC, women, and members of the LGBTQ+ community. These models are designed to address the biases and inequalities that exist in current AI models, which often perpetuate systemic discrimination and exclusion, such as the failure of sensors to recognize darker skin tones or the misclassification of non-Western dialects.
Artificial intelligence (AI) has become an increasingly common part of our lives, from voice assistants to self-driving cars. However, as AI continues to evolve and become more pervasive, it has become apparent that many AI models are not designed to meet the needs of all individuals, or to include them at all, particularly the needs of underrepresented communities. Minority-led AI models and research organizations are growing in number to mitigate these design flaws.
7 minority-led and BIPOC-centered AI models + organizations you should know:
We mentioned before that AI and its related technological advances have been around for some time and continue to evolve in uncertain ways. Ethical AI helps maintain governance as Big Data grows, while BIPOC and women technologists, along with startup founders, develop the models that challenge unethical standards. (Check out this read from LatinXinAI: Navigating Murky Waters: A Framework for Ethical Governance Inspired by the EU AI Act)
Here are 7 minority-led and BIPOC-centered AI models and organizations to pay attention to in 2023:
The DuBois™ Model – The Seams Social Labs team developed the patent-pending DuBois AI model, which uses natural language processing to transform the way urban planners deliver solutions using the voice of the community. The DuBois is being trained to understand regional and colloquial dialects and accents within the community.
Co.Census – An AI-powered urban planning software that collects community and stakeholder feedback and prioritizes ethical analysis.
Robin.AI – A U.K.-based AI-powered contract editor built to transform the legal process and contract negotiation.
Moment.AI – An autonomous vehicle company delivering in-cabin AI built around physiological intelligence to alert drivers and vehicles of potential danger.
Accel.AI – A LatinX founded social impact AI consulting and research firm centering AI ethics.
Black in AI – A nonprofit organization for Black professionals in artificial intelligence.
Masakhane – A grassroots NLP community for Africa, by Africans on a mission to spur NLP research in African languages.
Quite a few innovators recognize the problem: underrepresented communities are being left out… again. Yet, the resolve has taken shape across the world, including in regions expanding AI development such as West and South Africa. Hundreds of data scientists and innovators in Africa participate in initiatives like the AI4D Africa Language Challenge and conferences like the Conference on Neural Information Processing Systems (NeurIPS), developing new AI systems and NLP models that center native African languages such as Wolof, Ndebele, Shona, Swati, Swahili, Xhosa, and Zulu.
In Africa, the challenges take on a different character, with data set bias against ethnicities, tribal affiliations, and other cultural nuances. Additionally, companies not founded by Black Africans are overtaking the current AI scene on the continent. Chinese-founded firms on a mission to become world leaders in AI are expanding unethical uses of continental data sets, such as the collection of African facial recognition data in countries like Zimbabwe.
The benefits of minority-led and BIPOC-centered AI models
One of the key benefits of minority-led AI models is that they are developed with a greater understanding of, and care for, the unique challenges and perspectives of underrepresented communities. They are better equipped to address issues such as racial bias, gender bias, and discrimination against marginalized groups. AI models developed in this context help ensure that the digital divide does not widen and that underrepresented business communities can embrace digital transformation, AI/ML, and other emerging technologies, and advance as a result.
Moreover, minority-led AI models provide opportunities for individuals from underrepresented communities to participate in AI development and innovation, promoting diversity, equity, and inclusion in the tech industry. By doing so, these models can help address the longstanding lack of diversity in the tech industry, while also shaping more equitable and inclusive AI systems.
We saw another example of this when a team of Black data scientists at the data analytics firm Civis Analytics developed an AI model to identify neighborhoods at risk of being undercounted in the 2020 US Census, helping ensure that underrepresented communities were accurately counted and able to receive adequate resources and representation.
In Conclusion
Artificial intelligence is here to stay and is evolving rapidly. As underrepresented business communities consider their digital transformation efforts, it is paramount to consider how ethical AI, minority-led AI models, and the right digital strategy partner can impact business growth.