6 Comments
Suhrab Khan:

This 100-day roadmap brilliantly balances urgency with practical governance. Embedding AI risk management into existing processes and culture is exactly what makes oversight both effective and sustainable.

I talk about the latest AI trends and insights. Do check out my Substack; I am sure you’ll find it very relevant and relatable.

Neural Foundry:

The distinction between policies and mechanisms really resonates with me. So many organizations get stuck in the documentation phase, thinking that writing comprehensive policies equals governance. But without actual mechanisms that enforce those policies automatically, you're just hoping people will do the right thing under pressure. The Bezos quote about good intentions never working captures this perfectly. I've seen teams where bias testing was "required" but kept getting skipped due to deadlines, until it was finally baked into the deployment pipeline itself.

James Kavanagh:

100%. Policies on paper are like comfort blankets: they might make you feel better, but they do more harm than good when there's real danger. Mechanisms were so central to governance at Amazon that we used them everywhere, and I really believe they're the missing ingredient in real applied governance. You might find useful how I expanded on them in this article: https://blog.aicareer.pro/p/mechanisms-for-ai-governance

Taras Kovalchuk:

Thanks for sharing this, James! It's a very helpful resource. I particularly appreciate its pragmatic take on implementation. More specifically, the acknowledgment that you can't immediately ban all unapproved AI use, and that a perfect AI policy is a goal to work towards, not a day-one requirement.

James Kavanagh:

Thanks Taras - definitely, every policy is just a work in progress. But I have seen some organisations where the executive leadership says 'We've got to use AI, we're innovating with AI', while their internal policies say 'Don't use AI unless it's 100% approved by IT and Legal'. The disconnect is impossible to navigate for an individual trying to do the right thing (and get their work done). I'd be interested to know how the template policy I published on greenlights and guardrails worked in that sense for your context: https://blog.aicareer.pro/p/green-lights-and-guardrails-crafting

Taras Kovalchuk:

James, I share your view from the article that organisations should establish straightforward approval processes and foster open dialogue to prevent concealed use of AI tools. Ultimately, this approach will help develop a healthy workplace culture around AI adoption.
