The AI Assurance Club kicked off last month in London with AI Assurance: The State of Play, at which speakers from industry and government discussed where AI assurance stands today. Key themes included the importance of a coherent standards landscape, the need to avoid the pitfalls of GDPR, and the value of a global perspective on AI governance.
Dr Daniel Hulme, CEO of Satalia and Chief AI Officer at WPP, started us off with a provocation: definitions of AI are useless. Our attention would be better focused on concrete problems addressed through the prisms of safety, security and governance. David McNeish of the UK’s Centre for Data Ethics and Innovation (CDEI) gave a snapshot of the Centre’s current work on AI assurance, including its Roadmap to an Effective AI Assurance Ecosystem. David emphasised that different stakeholders face different drivers of, and barriers to, assurance; what matters is applying the right assurance technique at the right time. Turning to Europe, Nicholas Moës, a researcher and member of the CEN-CENELEC Joint Technical Committee on AI, walked us through the proposed AI Act. With thousands of amendments now tabled by MEPs, key dividing lines are emerging. We can be fairly confident, however, that third-party conformity assessments for high-risk AI systems will remain in the final text.
As well as academic and policy experts, we heard from organisations already providing AI assurance services. Dr Sara Zannone of Holistic AI explained the company’s full range of AI assurance services, from general risk assessment to certification and monitoring. Finally, Ryan Carrier, Founder and CEO of ForHumanity, presented its community-based Independent Audit of AI Systems programme. ForHumanity is also currently developing a GDPR certification scheme for the UK Information Commissioner’s Office. As both Sara’s and Ryan’s presentations made apparent, there is growing interest in, and demand for, AI assurance services.