When We Fail to be Ethical by Design
In this episode, Ryan Carrier joins us to discuss the vital work that's happening at ForHumanity, a charity he founded to examine the specific and existential risks associated with AI and Automation.
“I had this kind of an epiphany – ‘is that the side of history I want to be on?’”
After a successful 25-year career in finance, Ryan decided to start a non-profit charity. One of his main motivations for building the charity was foreseeing the impact AI risks would have on society. Ryan wanted to ensure we could mitigate and control AI risks to get the best results for humanity.
One of the key steps in building ethical AI is assessing algorithms via audits. The tech space is currently an anarchy — there are no rules, and tech advancements are exciting. However, Ryan explains that we now have to play catch-up in regulating these advancements, and auditing is one way we can do so.
ForHumanity plans to make the rules in a grassroots way. There is potential for the big finance auditing firms to carry out AI audits themselves, but AI auditing is far more complicated. Ryan thinks that no one is qualified to carry out AI audits just yet; it is a multidisciplinary area. When thinking about AI ethics, we also need the ‘softer’ sciences — for example, philosophy and sociology.
What are your thoughts? Join our Slack channel and join the conversation!