How Develop Diverse avoids AI bias
We’ve all heard the horror stories about AI. They might sound like something from a movie, but from self-driving cars hitting pedestrians to facial recognition technology wrongly classifying world-class athletes as criminals, most of us approach AI with a healthy dose of caution.
But as AI increasingly becomes part of the way we operate, from shopping to recruitment, we know it’s important for you to understand exactly how we implement it here at Develop Diverse, and how we make sure our end-to-end processes stay unbiased.
In this post, we’ll explain exactly how the Develop Diverse platform leverages AI to identify instances of bias in communication, and why we’ve taken a research-backed approach that blends human expertise with cutting-edge technology.
What AI is, how it works, and how we use it at Develop Diverse
AI is an integral part of how the Develop Diverse platform works. But before we can explain how we use it, we need to unpack a few key terms:
- AI: AI refers to the simulation of human intelligence processes by computers. Its applications range from robots and self-driving cars to satnav and Apple’s Siri assistant.
- Machine learning: Machine learning is a branch of AI. It enables computers to learn and improve based on data.
- Natural language processing: Natural language processing (NLP) is a branch of machine learning that helps computers understand and analyse human language (see the short sketch after this list).
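To make that last definition concrete, here’s a tiny sketch of NLP in practice using the open-source spaCy library. The library and the example sentence are purely our illustration; this post doesn’t say which NLP tooling Develop Diverse uses.

```python
# A tiny NLP example: split a sentence into tokens and infer each
# word's grammatical role. (Requires the small English model:
# python -m spacy download en_core_web_sm)
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("We are hiring an ambitious leader.")

# Tokenisation and part-of-speech tagging are the kind of structure
# a computer needs before it can analyse language at all.
for token in doc:
    print(token.text, token.pos_, token.lemma_)
```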
The Develop Diverse platform uses both machine learning and NLP, alongside a (human) research team working with the latest studies in psycholinguistics and sociology. That combination helps us make sure our platform stays unbiased even as language changes.
Jihane Kamil, Former Head of Data Science at Develop Diverse, explains.
“Lots of software companies use AI to build their models (how the computer recognises patterns) because they trust the AI,” she says. “The problem with AI tools is that they’re inherently biased, because they use data from the internet.
“As an example, one of the language models that the majority of machine learning data scientists use is called word embeddings. But if you use those models, you’ll find they’re built on sexist stereotypes because of the data they’re trained on.
“The difference is that at Develop Diverse, we use AI with caution. The computer itself is neutral, but the data that we humans build isn’t, because we’re all inherently biased. Instead of using AI to build our models, we use it to analyse the bias within them. AI helps us complement our knowledge of bias.”
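Jihane’s point about word embeddings is easy to reproduce. The sketch below queries the publicly available word2vec vectors trained on Google News via the gensim library; the specific embedding model and analogy are our illustration, not necessarily the ones her team examined.

```python
# Demonstrating the kind of bias baked into off-the-shelf word
# embeddings (the Google News vectors are a ~1.6 GB download).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Solve the analogy "man is to doctor as woman is to ...?".
# The top answers reflect gendered stereotypes in the news text
# the vectors were trained on, not anything about the job itself.
for word, score in vectors.most_similar(
    positive=["doctor", "woman"], negative=["man"], topn=3
):
    print(f"{word}\t{score:.2f}")
```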
Cutting-edge research led by humans, not AI
At Develop Diverse, our team of researchers works with the data outputs that come from AI to identify bias and to make sure our technology doesn’t fall into the same traps as other software leveraging AI.
“Let’s use sexism as an example,” Jihane says. “If you give me a sentence, there’s no equation that can tell you with 100% accuracy whether it’s biased. Because of that, a lot of our focus at Develop Diverse is on building a huge data set. Our research team — who are experts in linguistics, DEIB, sociology, socio-politics and socio-psychology — analyse that data and label which words are biased and why. That process takes at least six months for a new language.”
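To picture what that labelling work produces, here’s a hypothetical record structure: each entry captures a term, a label, and the reasoning behind it. The fields and examples are our sketch of “which words are biased and why”, not Develop Diverse’s actual schema.

```python
# Hypothetical expert-labelled records -- illustrative only.
labelled_examples = [
    {
        "term": "ninja",
        "language": "en",
        "label": "biased",
        "dimension": "gender",
        "rationale": "Masculine-coded jargon that research links to "
                     "deterring some applicants from applying.",
    },
    {
        "term": "collaborative",
        "language": "en",
        "label": "neutral",
        "dimension": None,
        "rationale": "No exclusionary association found in the literature.",
    },
]
```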
Once biased language has been identified, the whole team evaluates the analysis together. The resulting data is then fed into the Develop Diverse platform to help it learn what biased language looks like.
“Once we have that data set from our research team, we feed it to a model,” Jihane says. “Instead of learning from biased data from the internet, the machine will learn from these examples that were labelled by our experts in DEI.”
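A heavily simplified sketch of that idea, using scikit-learn: train a classifier only on expert-labelled examples instead of raw internet text. The four-example data set and the model choice here are our stand-ins for the real curated data set and production model.

```python
# Train a text classifier on expert-labelled examples only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Expert-labelled job-ad snippets: 1 = biased phrasing, 0 = neutral.
texts = [
    "We need a rockstar ninja who thrives under pressure",
    "Seeking a dominant, aggressive salesman",
    "We are looking for a collaborative project manager",
    "Experience with Python and SQL is required",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# "ninja" was labelled biased by the experts, so the model flags it.
print(model.predict(["Hiring a ninja developer"]))  # -> [1]
```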
Every time the Develop Diverse platform identifies a biased word, it highlights the word and suggests an inclusive alternative, as the simplified sketch below shows.
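In its simplest form, that highlight-and-suggest step could look like this. The flagged words and alternatives are illustrative placeholders; the real platform draws on the researcher-built data set described above.

```python
# Scan text for flagged terms and offer an inclusive alternative.
import re

# Illustrative placeholder data, not Develop Diverse's actual word list.
SUGGESTIONS = {
    "ninja": "skilled specialist",
    "manpower": "workforce",
    "chairman": "chairperson",
}

def review(text: str):
    """Return (flagged word, suggestion, character position) tuples."""
    findings = []
    for word, alternative in SUGGESTIONS.items():
        for match in re.finditer(rf"\b{word}\b", text, flags=re.IGNORECASE):
            findings.append((match.group(), alternative, match.start()))
    return sorted(findings, key=lambda f: f[2])

print(review("The chairman asked for more manpower."))
# -> [('chairman', 'chairperson', 4), ('manpower', 'workforce', 28)]
```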
So how does Develop Diverse make sure that its model stays unbiased over time?
“Research is our core value as a company,” Jihane says. “What makes us different is that we build our own data sets based on the knowledge of DEIB and linguistic experts. The output of our model comes from a curated data set, built by a diverse team of researchers and experts, that controls for bias as much as possible.”
Find out more about how we leverage AI to help our customers scale an inclusive culture by booking a demo with one of our team members.