This blog shares insights from a recent interview with Cathy Cobey, EY's Global Trusted AI Leader.
One of the key insights is that organizations need a robust inventory management process so they can easily identify and catalogue their AI models.
Cathy stressed that these practices often inspire market growth.
External audit opinions and certifications of AI models or systems are still in development, as standards that can serve as evaluation criteria for both technical performance and ethical practices are not yet available.
Work is also still required to develop audit accreditation and the certification programs themselves, although accounting bodies such as CPA Canada and the AICPA are already considering assurance standards, evaluation criteria, and auditor credentials.
The ethical development of AI is critical, particularly given the risk that machine-to-machine decisions could affect where personal data ends up. The Standard for Personal Data Artificial Intelligence (AI) Agent addresses this by describing the technical elements required to develop AI ethically while keeping a human involved in all decision making.
Fast forward nearly thirty years: AI's rapid growth, combined with a continued lack of governance controls and audit standards for data sets, is only increasing the risks of AI at scale.
Only by exploring these diverse angles and asking robust questions can leaders then assess the risks of unintended, unfair, or illegal bias.