Is AI racist? In our previous episode with Sébastien Krier, we spoke about Twitter’s racist algorithm that was cropping Black faces out in favour of white ones. In this week’s episode, Louis Byrd, founder and Chief Visionary Officer of Goodwim Design, takes this conversation further as we discuss racist chatbots and exclusive data sets.
In his UX article, Louis explains how his Samsung Smart refrigerator does not understand him. In fact, many machines struggle to understand Louis and people who look like him. In his Are You A Robot? interview, he gives reasons why this might be. Firstly, this type of conversational AI technology might not be as advanced as we expect it to be. Secondly, he believes it comes down to how it has been designed, especially the exclusionary data sets behind it.
“Not enough data is inclusive.”
Louis highlights that most of the data sets available on the market distort real-world perceptions because they do not include everyone. For example, when machines are designed, they might only use data from one sample set to train an AI model. Without expanding these data sets, such machines will only continue to distort real-world perceptions as they advance, and they will remain faulty for those who weren't taken into account in the first place.
What can we do to get more inclusive data? A key point Louis makes is to acknowledge global history and how certain groups of people have been excluded from it. Companies could also leverage their technologies (ethically) to collect data from more people, for example via Siri and other conversational AI. However, it is vital that people understand how their data is being collected.
What do you think we can do to make data and AI more inclusive? Join our conversation in our Slack community!