Are You A Robot?

2021-01-07

Season Two Recap and Review

Charles Radclyffe

Season Two Finale! Charles Radclyffe, founder of EthicsGrade and sponsor of Are You A Robot?, joins us to discuss his highlights of Season Two and his predictions for future AI advancements.


In the latest Are You A Robot? episode, Charles and Demetrios share their excitement about the great conversations we’ve had in Season Two. We’ve had many exciting guests, each bringing something different to the AI ethics conversation.

A common theme running through Season Two has been how our data is used. Merve warns how COVID-19 track-and-trace systems can be used unethically. The topic resurfaced in the Are You A Robot? Slack community, with news that COVID privacy data is available to police in Singapore. Robbie takes this conversation further, raising questions about how we are represented in the digital world through the data collected about us. Because our data is collected in different forms, it eventually becomes fragmented: it is unclear where our data is, how it was collected, and what it is being used for.

Another key question raised this season is how exactly we should treat technology. Some technologies are rejected when first introduced. For example, Dylan explains that when the phonograph was invented, people were taken aback and rejected it; it ended up changing our relationship with music and how we listen to it. In Jason’s episode, we take this relationship even further, discussing whether we should treat technology like humans. In our Season Two recap episode, Charles shares his view on how robots should be treated: we should treat technology with respect, even awe. However, technology doesn’t have agency like humans do, so politeness isn’t necessary.

Additionally, regulation has been a key topic. Paul explains that there are two main regulatory approaches, depending on the type of technology and its use. For machines affecting human safety, it is vital to make sure they work before they are deployed. With technology affecting human rights, however, that is difficult to do: whether something is fair cannot be measured scientifically; it comes down to how it is implemented. Since we already have equality legislation and systems of redress, Paul suggests it is more practical to fix these issues after implementation.

In his conversation about The Social Dilemma, Zachary compares the reaction to Y2K with the dangers of social media. Charles and Demetrios explain that regulation is different now: before, money was spent to fix the problem, whereas now the response is performative.

Performative regulation is an issue: as Charles explains, the marketing of AI leads us to think there will be no issues with the technology we use, or that any problems are accidental. As Dan suggests, it is vital to build an ethical AI team to avoid these issues. Although it might seem costly, it will be worth it in the long run; companies might otherwise suffer down the line through losing customers, bad PR, and even legal battles.

The work EthicsGrade is doing will help technology companies think about AI and best practices, especially around AI ethics. Charles explains that soon, on the EthicsGrade website, you will be able to compare the ethics rating of Toyota with that of Vauxhall. Make sure to keep an eye out for updates on the EthicsGrade Twitter and LinkedIn!

What are your thoughts? Join our Slack channel and join the conversation!
