As someone who is both an aviation enthusiast and a provider of AI governance tools, the only stance I can take is one of complete agreement. Risk management documentation might help reduce legal risk, but if the documents are disconnected from the underlying technical reality, and if no one in leadership can understand the implications of what the risk analyses really say, then it's all just kabuki theatre.
Wise words.
In my years in IT in the energy sector supporting complex data systems, I've seen humans gaming systems to inflate metrics, exfiltrating proprietary data for financial gain, ignoring maintenance concerns for budget reasons, ignoring safety standards to protect their records, and committing outright espionage.
Now we have AI agent systems making decisions that, left unchecked, could cascade into destruction, and while we're trying to pull logs, debug, and restore from potentially contaminated backups, they're 11 steps ahead. Was it really worth saving money using DeepSeek if it just took your data back to home base?
I'm working on it.
Great piece, James. Really liked the framing around culture, especially the idea that we need to allow the right culture to emerge, rather than try to legislate it into existence.
The more rules we layer on, the more we incentivise people to play the system instead of actually building for safety.
A very worthwhile read.
Thanks for reading, Camilo, glad you enjoyed it.
I have been saying this for years and it's still true: "AI governance today suffers from a troubling disconnect: it's populated by policy and legal professionals without deep technical understanding or experience, while the engineers and data scientists who actually build and monitor AI systems are treated as subjects of governance rather than partners in creating it."
FWIW, I do predict (using myself as an example) that more engineers and engineering leaders will make their way into the field.
Loved this post. Thank you.
Excellent insight. We've been working for a few years to apply the methodologies of human factors in industrial security to the practice of Model Risk Management and Human Oversight, but it's a long road ahead. GRC managers seem to prefer qualitative risk assessment to evidence-based analysis. https://arxiv.org/abs/2009.08127 and https://hal.science/hal-04046408v1/file/ihm2023_baudel_en.pdf
Is this your work? It's great.