10 Comments
Patrick Senti

Great read and insight. What I find lacking most is the actual impact on engineering practice and the requirements for AI systems. After all, responsible AI is a matter of doing the right things in the right way and avoiding the wrong things and wrong practices. I see companies focus too much on 'compliance' by ticking off checkboxes and filing comments in huge multi-sheet Excel workbooks, while the actual engineering practice is hardly ever changed or even discussed. This creates a weird dystopian simulation of compliance, where all these spreadsheets construct a representation of the system and its engineering that fits the expectations of whatever it is supposed to comply with. I am exaggerating a bit, of course. In my opinion, it must be done the opposite way: change engineering practices and focus on outcomes first, then collect evidence to demonstrate compliance.

James Kavanagh

Couldn't agree more, Patrick. So many times I've seen checkbox compliance that is disconnected from engineering practice; over time the two diverge so far that what remains is a fabricated, artificial theatre of compliance. It's sad that the dystopia you describe is far too common: auditors and internal compliance people poring over spreadsheets and narratives that bear no real relation to the actual systems built.

I use three terms - high-integrity assurance, checkbox assurance and malicious compliance - to describe three different mindsets around assurance, only one of which is useful. The big difference with high-integrity assurance is that it's done with a shared goal of safer outcomes: trust exists between engineers and compliance teams, and they work with the same documents and the same tools. I've only ever seen it in organisations with strongly aligned leadership and teams who actually understand each other's domains enough to collaborate effectively.

I'm writing a little more on this now and will publish another article shortly that explores your point. (I also wrote a bit about it on LinkedIn before: https://www.linkedin.com/pulse/shift-left-mindset-ai-safety-james-kavanagh-ony5c/)

Thanks for reading and your feedback. I'm very much enjoying the writing and hearing the perspectives of others. I really appreciate it.

Nicole Jahn - AIGP, CPMAI

Any recommendations on courses for audit training?

James Kavanagh

So I personally haven't done an AI audit course - a long time ago I did the ISACA CISA and other audit courses. The one I've heard the most about (though I don't have any first-hand experience with it) is the BABL course - their curriculum looks good, and you know it's coming from actual practitioners. I know it's not an audit course (it's more AI governance and AI technical safety), but I have to plug the folks at https://aisafetyfundamentals.com/. Those courses I have joined, and they're terrific (and free).

Ayşegül Güzel

I participated in both the BlueDot AI Safety Fundamentals course and the AI Auditor Certificate Program by BABL AI (which opened the door for me to become an AI auditor and which I strongly recommend). So, I’m happy to answer any questions you may have, @Nicole Jahn.

Nicole Jahn - AIGP, CPMAI

Great, thanks James!

Andrew Durrett

Thank you, James. An eye-opening article. After many years at a major tech company, I totally agree with you that the resource-allocation wall and the challenge of getting leadership alignment are formidable roadblocks. This has always been true, though, for every new ISO standard. Still, there should be a genuine sense of urgency among all the AI companies and anyone heavily involved. I am an ISO 42K auditor for NSAI (a certification body), so please send any interested companies our way. :)

James Kavanagh

Thanks Andrew. I think the urgency has been building, although I'm not sure how much of that is genuinely driving practice improvements versus gaining the certification for the sake of third-party procurement. In lots of cases, I'm also seeing the scope being misrepresented or misunderstood - for example, Amazon's ISO 42001 cert covered only 4 AWS services, and Microsoft's only Copilot and Copilot Chat. Google claims the broadest certification, covering Google Cloud, Google Workspace and Google Gemini - although I'm a bit dubious about the integrity of that sweeping scope claim.

Andrew Durrett

So true. I could see companies pushing for the third-party procurement opportunities. We will be watching those who seek certification to make sure their objectives and risk sources match the reality of their organization. Scope sweep and inadequate AIMS frameworks will be easy to spot, given that almost all of the applying companies will already have ISO 9K, 27K, etc. It will be difficult for them to hide how all these management systems integrate (or not).

I predict that in the first quarter of next year we will see a lot of smaller, nimble tech companies getting certified, out of interest and the need to apply AI everywhere it makes sense - content creation and data analytics for their main workstreams as well as the heavier applications. For those small, successful, nimble companies, leadership will be easily aligned and resources available.

Looking forward to that, although the AIMS framework is most desperately needed in the big tech companies and the AI providers who lead the common-use service industries (fello, control4, tesseract, quillbot, clearview, powerbi, etc.). I am stunned they are not certified yet, and a bit worried we have too many big children out there playing with matches.

Karen Smiley

Good to see attention to the meaning of ISO 42001 certification for AI companies, James! I'm curious:

- What was the organizational scope of the AWS and Google certifications? (I'm assuming it wasn't the entire company in either case)

- What aspects of ethics does the ISO 42001 standard actually cover? For instance, does it cover ethical use of data labeling suppliers or proactive management to identify and mitigate biases?

Thanks for any insights you can share!
