Watch Out For the Predictable Surprise!
We have seen many disastrous stories stemming from AI technologies. Could anyone have seen them coming? In this episode, Rohan Light joins us to discuss predictable surprises and how we can mitigate these risks.
A surprise is predictable when a decision-maker remains oblivious to an emerging threat or problem that people closer to the decision problem recognise for what it is. The root of the issue is epistemic (‘what is true?’), while the decision-maker faces an instrumental decision (‘what to do?’).
When an organisation comes under scrutiny for unethical decisions, it often turns out that many people knew the outcome could have been avoided. Many of the current wave of AI service-delivery failures that draw significant ethical objections are predictable surprises.
In this episode, Rohan argues that these are all risk factors, hidden by three blind spots.
He acknowledges that mitigating these risks is challenging. After recognising a threat and working out whether it is worth pursuing, an organisation needs to mobilise its resources to address it. And after years of work on an AI project, it cannot simply be stopped outright.
In fact, Rohan believes these risks can never be eliminated entirely. People cannot react quickly enough to change, because it takes time to process what is happening, and many threats simply cannot be foreseen. No matter how much data we have on risks, some issues will always be inevitable.
What are your thoughts? Join the conversation in our Slack channel!