What is AI assurance?
Across sectors, AI presents huge opportunities for innovation, consumer benefit and business value. Already, it has been harnessed to drive significant scientific advancements, and to transform and improve many products and services. To fully realise these opportunities, however, greater trust in AI-enabled systems must be established.
Across jurisdictions, AI governance landscapes are evolving rapidly. Key developments include overarching and sector-specific regulation, as well as standards and principles. While these instruments set out rules and criteria for the development and use of AI, businesses and users need reliable information and tools to put them into practice. This is particularly pressing for businesses operating with complex and dynamic AI supply chains.
AI assurance addresses this challenge.
At its core, AI assurance refers to a range of services for checking and verifying AI systems and the processes used to develop them, and for providing reliable information about their trustworthiness to potential users. Businesses, including those procuring AI systems, can then better understand, assess and manage risks, while demonstrating regulatory compliance. As an approach, AI assurance draws heavily on methods developed in the accounting profession. It encompasses techniques including different types of impact assessment and compliance audit, as well as more objective forms of assessment and verification such as performance testing.
Please see our AI Analysis page for details of all our publications related to AI. These include policy briefings, white papers, roundtable and event outcome reports, and our global trends report.
Read Towards Multi-Actor Governance in Five Practical Steps, John Higgins and Paul MacDonnell (2020).