Are You A Robot?

2020-12-28

Human Rights and AI Obligations

Paul MacDonnell

In this episode of ‘Are You A Robot?’, Paul MacDonnell joins us to discuss AI regulation.

Paul is the Executive Director of the Global Digital Foundation, a European think tank that examines the challenges arising from the widespread adoption of technology.

In this episode, Paul and Demetrios raise an interesting point about how we react to new technologies when they are first introduced, usually with some form of rejection. Paul gives the example of Elizabeth I, who refused to grant a patent for a knitting machine for fear that it would cause unemployment. Today, the fear of 5G towers is an extreme example of the same reaction. But what about AI, its uses, issues, and regulation?

In his most recent paper, Paul examines all of these factors, and he expands on them in this episode. He strongly believes that AI is not capable of handling issues where bias is involved, because such judgements depend on human subjectivity. As we have seen previously on Are You A Robot?, AI algorithms built on human bias lead to problematic ethical issues.

“I don’t see that AI is approaching anything like ever getting to agency. This is because of key differences between the specific nature of human intelligence and artificial intelligence.”

How do we regulate to make sure different types of AI are fair? Paul explains that there are two main approaches, depending on the technology and its use. For machines that affect human safety, it is vital to verify that they work before they are deployed. With technology that affects human rights, however, this is difficult: whether something is fair cannot be measured scientifically in advance; it comes down to how the technology is implemented. Since we already have equality legislation and systems of redress, Paul suggests it is more practical to address these issues after implementation.

Do you have any ideas? Join our Slack community and let us know!

