How Can We Efficiently Regulate AI?
In our interview with David, we discuss how engineers can build AI in line with regulation. We also explore the tension between innovation and law, and how we can create regulatory structures that let start-ups, innovators, and everyone else build new AI applications.
David has a strong background in building ethical and safe AI. Until recently, he worked on machine learning and computer vision for autonomous flight systems, where he was responsible for several of the company's core products. Together with the European Union Aviation Safety Agency (EASA), he also led the definition of the first guidelines for the safe and robust use of artificial intelligence in aviation.
At Lakera, one of David's main aims is to bring AI from the prototyping/research phase to real-world applications with the highest safety, trustworthiness, and ethics, in line with current and future regulations.
In our interview, David highlighted that the EU made it comparatively easy to define safe and trustworthy AI when it published its Ethics Guidelines for Trustworthy AI in 2019. However, many organisations have yet to put these principles into practice. David argued that mature companies that handle ethics well have integrated their principles from the beginning: ethics is considered before any AI model is even built. Others take a flawed approach, building the technology first and running the risk of thinking about safety only later.
By contrast, David argued, the EU's recent draft AI regulation makes it much less clear what counts as high-risk AI. This ambiguity could affect companies worldwide that have been building 'high-risk' AI for years — how will it shape their future innovation processes?
What are your thoughts? Join our Slack channel and share them with us!