Your First 100 Days Leading AI Governance
A brief playbook for those critical first days in your new role, working to turn good intent into the real practice of making AI safe, secure and lawful.
Congratulations, you’re the new AI Governance Lead!
Now what?
You’ve been hoping for this day, working towards it, reading and absorbing, learning as much as you possibly can. You’ve probably even been advocating for this role to exist in your organisation. But what happens now? Well, day 1 might find you sifting through draft policies, fielding unsure emails, uncovering shadow-AI projects and facing leadership’s mandate to “get AI risk and compliance under control”. The clock is ticking. There is so much to do. How do you even start to transform good intentions into real, tangible AI governance, and do it fast? You know it’s make or break in 100 days.
There’s something special about the framing of 100 days in a new role. Clearly the round number is attractive, but it’s not arbitrary. It’s short enough to sustain urgency, long enough for real impact, time enough to show your true prospects for success or failure. It’s a leadership benchmark that came from a president who inherited a catastrophe. When Franklin Roosevelt took office in March 1933, the United States was in free fall. A quarter of the workforce was unemployed. Banks were collapsing at a rate of hundreds per week — nearly 5,000 had failed in the months before his inauguration. The day before he was sworn in, governors in 38 states had ordered their banks closed entirely.

Roosevelt’s response was extraordinary in both speed and scope. Within 36 hours of taking office, he declared a national bank holiday and called Congress into emergency session. Over the next 100 days, he pushed through 15 major pieces of legislation creating the foundations of his New Deal. He gave his first “fireside chat” eight days in, explaining his banking reforms directly to millions of fearful Americans by radio. He didn’t wait for perfect solutions. He acted, learned, and adapted.
Of course, what made those 100 days so impactful wasn’t just the pace of legislation. Roosevelt took immediate visible action to demonstrate control. He built a coalition of advisors from different disciplines, his famous “Brain Trust” that included economists, lawyers, and policy experts who fiercely disagreed with each other. And he communicated directly with a frightened public about what he was doing and why.
On Day 1 of your new role leading AI Governance, you’re not facing a banking crisis (let’s hope). But the parallels are closer than you might think. There’s a gap between what leadership believes is happening with AI and what’s actually happening on the ground. In your organisation, there are AI systems already in use and deployed without proper oversight, employees experimenting with tools nobody approved, and a vague expectation that things should be “ethical” and “under control”. You’ve most likely inherited a situation already in motion, and your first actions will set expectations. Your early coalitions will determine what’s possible later, and your wins in the first few weeks will build or destroy the credibility you need for harder fights ahead. You have 100 days - spend them wisely.
Doing AI Governance is a newsletter by AI Career Pro about the real work of making AI safe, secure and lawful.
The AI Governance Practitioner Program is a training and tools program that provides the knowledge and practical skills you need through a 16-course online training schedule. You can learn more at aicareer.pro or watch the short video below.
Please subscribe (for FREE) to our newsletter so we can keep you informed of new articles, resources and training available. Plus, you’ll join a global community of more than 4,000 AI Governance Practitioners learning and doing this work for real. Unsubscribe at any time.
If that’s what you face - the AI governance role, not the financial crisis - or the role you’re aiming for, then this article is for you. It’s a playbook for your first 100 days. It draws from discussions I’ve had over the past six months with so many new, aspiring and experienced AI Governance practitioners, combined with lessons from my own experience. I really hope you find it useful.
Within the AI Governance Practitioner Program, we focus on exactly this moment in your career: the day someone taps you on the shoulder and says, “Can you own AI for us?” It sounds flattering - and it is, you’re being entrusted with a complex, strategic challenge that your leadership team somehow know is vital, even if they can’t quite articulate why. But you and I both know what it really means: you’re suddenly accountable for AI risk, safety and compliance in an organisation that’s already experimenting with models, datasets, agents and vendors. No one hands you a playbook. There’s no neat job description. Your certifications and studies gave you a foundation, but now you’re on your own — making decisions, setting direction, and responding to crises. AI Governance is no longer a subject of study and inquiry, it’s on you now. You’re about to embark on possibly the most challenging and rewarding stage of your entire professional career.
So what do you do first?
How do you map where AI actually lives in your organisation? How do you build a coalition with engineering, cybersecurity, legal and audit without becoming the department of “no” or being sidelined? How do you move past high-level principles to draft concrete policies — good enough to start using, even if they’ll evolve later? And how do you turn those policies into real mechanisms: quick wins in data governance, lightweight risk assessments, early guardrails around shadow AI and vendor risk? How do you make sure you create an AI Governance program that is genuine and impactful? Oh, and fun!
That’s what follows, or at least the best I can suggest in a short article. Throughout, I’ll link to deeper articles on each topic. And if you really do want to dive deep, then I do encourage you to consider joining us, so you can learn these skills directly with your peer practitioners, along with tools and templates, within the AI Governance Practitioner Program. I have been so humbled and honoured to have more than 280 people join the program in just the first month; I do hope to see you there too.
But let’s go - Day 1 - what do you do?
Map Your AI Landscape (Days 1–15)
Your very first step is taking stock of how AI is currently used (and misused) in your organisation. In practice, I’ve found there’s often an astonishing gap between official AI initiatives and what employees are doing on the ground. So don’t rely on org charts or lofty strategy docs. You have to go find the reality:
Inventory your AI systems and projects. Meet with product teams, data science leads, and business units. What AI systems are in development or deployed? Which vendors and models are being used? Get these in a spreadsheet – you need a basic AI system inventory from day one as your single source of truth (a minimal sketch of what each record might capture follows this list). This will reveal if critical systems, like a customer-facing ML model, lack oversight.1
Go hunt for shadow AI. Ask IT and security to help identify unsanctioned AI tools in use. Chances are, employees have signed up for cloud AI services or are plugging sensitive data into chatbots without approval. Your IT network security guys will be delighted to give you a report. In fact, surveys found over 38% of employees admit to sharing sensitive work info with AI tools without permission2. Network monitoring shows the typical enterprise already runs ~67 AI apps – about 90% with no IT sign-off. Those numbers are eye-opening. They mean your organisation likely has dozens of quiet “experiments” that could spill secrets or create compliance nightmares. Gather these facts now, before you get surprised by incidents. (By the way - that’s another thing about the 100-day limit - find the problem before the magic 100 and it’s their problem; find it afterwards and it’s your problem. Happy hunting!)
Identify high-risk use cases. Not all AI is equal. Try to pinpoint any AI use that involves sensitive customer data, affects human welfare or legal rights (like AI in hiring, lending, medical decisions), or any agentic AI that makes autonomous decisions.3 These are where the consequences of failure are highest, and where you’ll want to focus risk mitigation first. For each, figure out any looming deadlines – is a new AI feature about to launch? An external AI product about to be procured? Prioritise those areas where a lack of governance could imminently cause harm or legal trouble.
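If it helps to make that inventory concrete, here’s a minimal sketch of what a single inventory record might capture, assuming you start in a simple spreadsheet or a small script of your own. The field names and example systems are purely illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory - your single source of truth."""
    name: str
    owner: str                   # accountable business or product owner
    status: str                  # "in development", "deployed", "procured"
    vendor_or_model: str         # third-party API, open model, in-house build
    data_sensitivity: str        # "public", "internal", "personal"
    customer_facing: bool
    autonomous_decisions: bool   # any agentic or unattended decision-making?
    risk_tier: str = "unassessed"
    last_reviewed: str = ""      # date of last governance review, if any

inventory = [
    AISystemRecord("Support chatbot", "Customer Ops", "deployed",
                   "vendor LLM API", "personal", True, False),
    AISystemRecord("CV screening pilot", "HR", "in development",
                   "in-house model", "personal", False, True),
]

# A first triage question: which systems touch people or make decisions
# but have never been through any governance review?
needs_review = [s for s in inventory
                if (s.customer_facing or s.autonomous_decisions)
                and s.risk_tier == "unassessed"]
for system in needs_review:
    print(f"Review needed: {system.name} (owner: {system.owner})")
```

Even a sketch this small forces the useful questions: who owns it, what data it touches, and whether anyone has actually looked at it.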
Ok, 15 days, two weeks in. You weren’t expecting to keep your weekends, were you? By now you should have a clear map of AI in your organisation – what exists, what’s coming, and where the biggest gaps are. This reality check will ground all your next steps. It likely also arms you with a few hair-raising examples to make the case for urgent action (e.g. “Team X plans to deploy a customer-facing GPT-5 tool next month with no privacy review”, “Team Y is planning an agentic app for customer ordering with no real human review on decisions”).
Use these findings to get executive attention early. But be thoughtful about how you do it. You’re a leader building coalitions, not filing reports on people. Snitches get stitches!
Build Your Coalition (Days 2–30)
Get used to doing multiple things in parallel. Multi-task like an executive chef. But realise you can’t do this alone from a corner office. So your next move is to rally the people who will design, enforce, and live with the AI governance mechanisms you put in place. That means forging alliances across at least five groups.
Engineering and Data Science. The developers and data scientists building AI systems are your closest partners, don’t treat them like subjects of governance.4 I promise you, they have the power to negate every single thing you try if you do. Engage them early. Sit beside them. Learn their workflow: CI/CD pipelines, model release cycles, sprint cadences. Their insights will make sure your policies are technically feasible rather than a facade, hopelessly disconnected from engineering reality. If you plan to introduce an AI model review step, design it to fit their rhythm, not some three-week committee process that halts releases. Enlist a respected engineer as a champion on your governance committee so others see this isn’t just a legal exercise. Engineers build stuff and break stuff. They want to move fast, and genuinely they want to move fast without breaking things. You can help them do that.
Cybersecurity. Loop in your CISO or security team straight away. They’re natural allies. Many AI risks like data leaks, model vulnerabilities, and adversarial attacks manifest as security issues. Your security colleagues can help set data handling rules and technical controls: monitoring to flag large uploads to external AI services, encryption and access controls for training data. They likely own existing incident response processes, which you’ll want to extend to cover AI incidents5. Treat security as co-architects, not reviewers. Cybersecurity pros are amazing, they have an exceptional ability to figure out how things will break. So find ways to channel that.
Legal and Compliance. Your legal counsel and compliance officers are probably already tracking emerging AI regulations. Partner with them to anticipate legal obligations and translate broad laws into specific controls. But be warned: a purely compliance-driven approach can overemphasise paperwork and checklists. So make sure your legal team understands that compliance doesn’t equal safety if it’s all just box-ticking. Don’t allow yourself to get lost in regulatory mapping or paperwork impact assessments. Take shortcuts.6 Balance their expertise with the operational focus from your other stakeholders. But lawyers have an astonishing ability to construct and communicate ideas, and although you don’t need to be a lawyer to work in AI Governance, you need them as friends to call. So cultivate those friendships.
Risk and Audit. If your company has an enterprise risk management function or internal audit, get them involved. They bring systematic methods for identifying and assessing risks. Work with them to integrate AI risks into the existing risk register. Don’t allow AI to exist in a risk silo. One suggestion: evolve your existing Risk Management Policy to include AI, rather than creating a standalone “AI Risk Policy” that nobody outside your office will read.7 The risk team will also be crucial allies when explaining AI risks to the board. The folks who do compliance and audit know the truth that compliance alone won’t deliver safety or security, and they will naturally want to help you. Take the time to understand their pressures to produce evidence and follow process. You’ll need them.
Business units / Product. And (I hope) it goes without saying, you need the business units and some key business leaders on board. Governance serves the business, full stop. You can never lose sight of how AI creates value for customers, generates revenue, or reduces costs. If you disconnect governance from those realities, you become irrelevant fast — or worse, an obstacle to be routed around. Find business leaders who understand that safe AI is sustainable AI, and that one bad incident can destroy years of customer trust. They’ll help you prioritise which risks actually matter to the organisation, not just which ones look scary in a risk register. They’ll advocate for resources when you need them. And when you have to push back on a reckless deployment, their support will be the difference between being heard and being ignored. Governance without business backing is just paperwork. Get them on your side early.
Bringing these groups together early sets the tone: AI governance is cross-functional by design. Think about setting up an AI Governance Committee in your first month; try to have your CIO/CTO or other senior leader chair it, with the head of Engineering, a lead Data Scientist, Head of Legal, CISO, and key business unit leaders as members. In a mid-sized company, it might be a handful of people wearing multiple hats. The exact structure matters less than the message: AI governance is everyone’s responsibility and it’s got senior leadership on board.
By Day 30, aim to have this core team in place and your mandate clearly communicated. Host a kickoff to align on the mission to accelerate AI innovation in a safe, secure, and lawful way. When engineers, lawyers, and security sit at the same table, you’re starting to bridge the translation gaps that plague so many organisations. No more AI projects launched in secret. No more compliance policies written in a vacuum.
Month 1 down. 30 days in. Awesome work. Time to step it up.
Quick Wins: Visible Guardrails (Days 30–45)
With your coalition forming, it’s time to deliver some quick wins. Early tangible actions build credibility for the AI governance program and, frankly, prevent disasters while you work on longer-term frameworks. Think in terms of guardrails, those immediate measures that keep things on track without freezing innovation. What you learned in the previous 30 days will guide you, but here are three high-impact moves you might make:
Publish an interim AI Use Policy. If your organisation doesn’t already have an AI Use Policy, fast-track a basic version now.8 This is a simple handbook for all employees on safe, secure, lawful AI use. Lay out in plain language the green lights and red lines for using AI in their daily work. Encourage uses like AI-assisted brainstorming, drafting, and data analysis, with the caveat that sensitive data must never be entered and humans remain in the loop9. Explicitly ban high-risk behaviours: no feeding confidential or personal data into external AI tools, no letting AI make decisions without oversight, no uses that would violate privacy or intellectual property law. Tell people what they can do, not just what they must not do. Get the core rules out now. Make it a living document you’ll refine over time.
Rein in shadow AI — carefully. Based on your earlier discovery, you likely have a host of unsanctioned AI tools in use. Resist the urge to send a draconian memo: “Effective immediately, all use of AI without approval is forbidden.” Just don’t do it, and don’t ask your leadership to do it. You could technically ban everything, but that drives AI use underground and breeds resentment. Instead, create channels to bring experimentation into the light. Set up a lightweight approval process — a web form or email alias where employees can request review of an AI tool they’ve found useful (a minimal sketch of such an intake check appears after this list). Promise fast turnaround (48-hour security and privacy vetting) so you’re enabling agility, not stifling it. Publish a whitelist of approved AI services and update it continuously. If your company subscribes to an enterprise AI platform, make sure everyone knows that’s the go-to solution, not random free apps. The message: “Yes, go ahead, you can use AI, but use our vetted tools or ask us to vet new ones.” Employees are far less likely to work around your rules if there’s a clear, reasonable path to use new technology with your blessing. Pair that advice with some basic training and awareness.
Triage critical projects. You need to make sure there are no AI time bombs about to detonate on your watch. Use your inventory to flag any high-risk AI project nearing release or currently in production without oversight. Arrange rapid risk assessments for these. If a team is about to launch an AI-driven feature to customers, pause and audit: Does it involve personal data or legal risks? Has it been tested for bias or errors? It’s better to delay a launch by a couple of weeks than to release an unsafe system and pull it back amid complaints or regulatory scrutiny. Review any major AI procurements in progress — don’t sign that vendor contract until you’ve examined the product’s risks and demanded necessary safeguards10. You can’t rely on a vendor’s glossy brochure - demand contracts, and your newfound legal and audit friends will be only too happy to help you. You need evidence and auditability at every link in the supply chain. If a vendor can’t provide transparency into their model and data, that’s a red flag to address before deployment.11
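For the shadow AI intake idea above, here’s a minimal sketch of how a request for a new tool might be triaged against a published whitelist, assuming a simple internal form feeds it. The tool names, categories and routing rules are illustrative assumptions, not a recommended product list.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative whitelist of vetted AI services - publish it and update it continuously.
APPROVED_TOOLS = {
    "enterprise-ai-platform": "general assistant, approved for internal data",
    "code-assistant-pro": "code completion, no customer data",
}

@dataclass
class ToolRequest:
    requester: str
    tool_name: str
    intended_use: str
    handles_personal_data: bool
    submitted: datetime

def triage(request: ToolRequest) -> str:
    """Route an employee's AI tool request: already approved, fast-track, or escalate."""
    if request.tool_name in APPROVED_TOOLS:
        return f"Already approved: {APPROVED_TOOLS[request.tool_name]}"
    # The promised 48-hour turnaround keeps people from routing around you.
    review_due = request.submitted + timedelta(hours=48)
    if request.handles_personal_data:
        return f"Escalate to privacy and security review, due {review_due:%Y-%m-%d %H:%M}"
    return f"Standard security vetting, due {review_due:%Y-%m-%d %H:%M}"

print(triage(ToolRequest("j.smith", "new-transcription-app",
                         "meeting notes", True, datetime.now())))
```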
By the 45-day mark, these quick wins should be taking effect. You’ve issued clear guidance for AI use, closed the obvious data leak paths, and caught the scariest risks in flight. I hope you’re starting to have fun, while you’re making governance visible through concrete action. Other people around the company will notice. That visibility matters. It shows that AI governance isn’t PowerPoint slides about “ethical, responsible AI” — it’s real mechanisms keeping the company and customers safe. Sooner or later, every organisation discovers that you just can’t paperwork your way to safety. But you’re not doing that - you’re delivering real guardrails, in practice. Nice.
Not yet in the role? This is the time to prepare. You can learn the practices of AI Governance in the 16-course online AI Governance Practitioner Program at AI Career Pro.
The first track is available now and includes courses that span all of the knowledge and practical skills you would need in your first 100 days leading AI Governance, along with the tools and templates you can apply immediately in your work. This video explains that program and what you can expect as it goes way beyond textbook theory to the real-world knowledge, practical skills, templates, tools and connections you need. Check it out.
Set the Governance Foundations (Days 45–70)
45 days in - you’re doing great. With immediate risks mitigated and your team coming together, you can turn to building the long-term governance framework. Resist the urge to start this too early — but now, you’re ready. Think of this as laying the foundation upon which all future AI work will rest.
Finalise the AI Governance Policy. This is your master policy that defines how your organisation will oversee and control its use of AI. It’s not a formality; it’s the strategic blueprint for AI in your company. Focus on getting a concise, practical document approved by leadership. It has to establish the purpose and scope of AI governance (why we govern AI in the first place, and what this policy applies to) and it has to articulate the organisation’s commitment to safe, secure and lawful AI. Figure out how to tie these objectives to recognised AI principles, but don’t stop at platitudes. If your company espouses “fairness” as a principle, your policy should commit to concrete measures like regular bias testing and mitigation. If “accountability” is a value, the policy needs to assign clear responsibility to specific roles — the CTO will chair the AI Governance Committee and must approve all high-risk deployments. Translate those lofty mile-high principles into enforceable commitments. By all means, align the policy with standards your organisation already follows: if you have ISO 27001 for security, extend those structures to AI; if you’re pursuing ISO 42001 for AI management, map the policy to its requirements. Aim to get this policy ratified by your executive committee or board within your 100 days. It becomes your charter for everything that follows.
Define roles and responsibilities. As part of the policy or a supporting charter, nail down who’s accountable for what. You’ve assembled the cross-functional committee, so now you want to formalise its role: the AI Governance Committee will oversee AI risks, approve high-risk uses, and monitor compliance. Define responsibilities for business units and project teams too. Product owners have to make sure their AI projects undergo risk assessment and adhere to your controls. Clarify the escalation paths: when does something come to the central committee versus being handled by a project team? The goal is all about embedding governance into normal operations with clear decision pathways. You need signals to flow efficiently to where decisions need to be made. In practice, this might mean team managers or departmental leads make decisions for low-risk work, escalating significant issues to the central committee. Decide on resources: do teams need to assign a point person to liaise with you? Will you have a central AI risk team supporting projects? Get these pieces in place early so everyone knows how to engage.
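To make the idea of clear decision pathways a little more tangible, here’s a small sketch of how a risk tier might route a decision to the right level. The tier names and owners are assumptions for illustration; your own policy will set the actual thresholds.

```python
def decision_owner(risk_tier: str) -> str:
    """Route an AI decision to the right level, based on an assumed three-tier model."""
    routes = {
        "low": "Team manager or departmental lead decides; log it in the inventory",
        "medium": "Business unit point person decides; notify the AI Governance Committee",
        "high": "AI Governance Committee approval required before deployment",
    }
    return routes.get(risk_tier, "Unclassified - run a risk assessment first")

for tier in ("low", "medium", "high", "unclassified"):
    print(f"{tier}: {decision_owner(tier)}")
```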
Embed AI into existing governance. In large organisations, the smart move is to plug AI governance into familiar structures rather than reinventing the wheel. Don’t create bureaucracy for its own sake. If there’s already a Software Architecture Review Board or a Privacy Review in your development lifecycle, integrate AI considerations there. Build an AI risk check into project stage gates or the product launch checklist. If procurement has a vendor risk process, add AI-specific due diligence questions on data protection and model transparency. Many elements of AI governance, like model documentation, testing, and monitoring, can be folded into existing QA, security, or risk management processes. This avoids policy sprawl. One important win perhaps: update your Risk Management Policy or IT Governance Policy to explicitly include AI. Add a section mandating risk assessment for new AI systems, assign the AI Governance Committee to oversee high-impact AI risks, and reference the AI Use Policy for user-level guidance. By embedding AI in existing policy, you signal that it isn’t some alien silo. Your work is not a fad or pet project. It’s just part of normal business risk and technology management. Employees are so much more likely to follow processes that fit their usual way of working than brand-new standalone requirements.
By roughly day 70, you should have the skeleton of governance in place: a policy that says what you’re doing and why, a committee and roles that say who’s doing it, and initial integration into organisational processes showing how you’re doing it. These are your governance foundations. They turn what can be abstract goals into a genuine system of accountability. When done right, this foundation demonstrates to leadership and regulators that the organisation is serious about AI oversight. If customers ask for evidence, if regulators or auditors come knocking, that approved AI Governance Policy is going to be the first artefact you hand over.
Operationalise Risk Management (Days 70–100)
With policy and structure in place, the focus shifts to making AI risk management an ongoing, operational practice. This is where you move from framework to day-to-day execution. In your first 100 days, you won’t have a fully mature risk management process — don’t kid yourself, that takes time to refine — but you can set up the core elements.
Risk identification and assessment. Introduce a systematic way to identify risks in AI projects. Lean on existing frameworks to jump-start this — NIST’s AI Risk Management Framework breaks risk management into functions like Map, Measure, Manage, and Govern, ensuring you cover everything from context setting to monitoring. Supplement this with real-world perspectives. The MIT AI Risk Repository and the AI Incident Database can educate teams on what actual failures look like across hundreds of documented cases. Don’t go overboard creating theoretical risk checklists that nobody reads. Pick a few practical tools that teams will actually use. One approach could be to require an AI Risk Assessment for each new system, covering questions like: what’s the intended use and could it be misused? What harm could a mistake cause and how likely is it? What safeguards are in place? Provide a template so it’s not a blank page. By Day 100, pilot this assessment on at least one or two projects, ideally the high-risk ones you triaged earlier. The feedback will help you improve the process. And remember, AI risks are dynamic. Models drift, new vulnerabilities emerge. Plan for periodic re-assessment. You’re instituting a continuous loop, not a one-time audit.
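The template itself can be very simple. Here’s a minimal sketch of the kind of structured questions a per-system assessment might capture; the fields and the worked example are illustrative, not the NIST AI RMF or any formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """A lightweight per-system risk assessment - a starting template, not a standard."""
    system_name: str
    intended_use: str
    plausible_misuse: str            # how could this be misused?
    worst_case_harm: str             # what harm could a mistake cause?
    likelihood: str                  # e.g. "rare", "possible", "likely"
    affected_groups: str             # customers, employees, the public
    safeguards: list = field(default_factory=list)
    residual_risk: str = "unassessed"
    reassess_by: str = ""            # risks drift, so schedule the next review

assessment = AIRiskAssessment(
    system_name="Customer ordering agent",
    intended_use="Take and confirm routine customer orders",
    plausible_misuse="Prompt injection to trigger unauthorised refunds",
    worst_case_harm="Financial loss and damage to customer trust",
    likelihood="possible",
    affected_groups="customers",
    safeguards=["human review above a spend threshold", "rate limiting"],
    residual_risk="medium",
    reassess_by="quarterly",
)
print(assessment)
```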
Control implementation. Based on identified risks, you need to drive adoption of controls — the specific measures that actually reduce risks12. Some controls will be technical: rate-limiting an AI model’s actions, requiring human review for certain outputs, bias testing protocols. Others are procedural: training, documentation, approval checkpoints. Rather than writing hundreds of controls from scratch, start from existing libraries and tailor them. The Cloud Security Alliance and others provide some solid AI control catalogues. Choose a core set that addresses your top risks. Categorise them by domains — Privacy, Bias, Security, Transparency — so teams understand the goals. Lean on the Compliance Megamap as a starting point if you like13. (Oh, and if you like it, you won’t believe what we have coming soon to the Practitioner Program - more on that another time!)
But here’s what matters most: controls need to become mechanisms, not just documented requirements. A policy might require teams to test their models for bias. It might even provide detailed instructions and metrics. But it still depends on teams remembering, caring, and executing correctly under pressure. A mechanism, by contrast, makes bias testing a mandatory gate in the deployment pipeline. It automatically runs standardised tests, blocks deployment until issues are resolved, and improves its test suite based on what it learns from production incidents. The consistency comes not from human discipline but from system design. As Jeff Bezos put it: “Good intentions never work, you need good mechanisms to make anything happen.”
Every mechanism needs an owner14. If you say “all models must undergo bias testing before launch,” decide who performs that testing and who is accountable if it doesn’t happen. Without clear ownership, controls decay into abandoned processes that teams work around. Start with a few high-impact mechanisms rather than an overwhelming spreadsheet. By Day 100, you might mandate model card documentation for any new model — capturing its purpose, training data, and limitations — as a required step before release. You might require the security team to red-team any especially sensitive AI system before deployment. The key is embedding these into the normal workflow, not bolting them on as afterthoughts.
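One way to picture the policy-versus-mechanism difference is a deployment gate that simply refuses to promote a model unless the checks pass. The sketch below is illustrative: the bias metrics, threshold, model card location and evaluation stub are all assumptions, and in practice this logic would live inside your CI/CD pipeline rather than a standalone script.

```python
from pathlib import Path

def run_bias_tests(model_id: str) -> dict:
    """Stub for your standardised bias test suite; returns metric name -> value."""
    # In a real pipeline this would call your evaluation harness.
    return {"demographic_parity_gap": 0.03, "equal_opportunity_gap": 0.08}

def deployment_gate(model_id: str, max_gap: float = 0.05) -> bool:
    """A mechanism, not a policy: release is blocked until the checks pass."""
    card = Path(f"model_cards/{model_id}.md")
    if not card.exists():
        print(f"BLOCKED: no model card found for {model_id}")
        return False
    failures = {metric: value
                for metric, value in run_bias_tests(model_id).items()
                if value > max_gap}
    if failures:
        print(f"BLOCKED: bias metrics over threshold: {failures}")
        return False
    print(f"PASSED: {model_id} cleared for release")
    return True

deployment_gate("credit-scoring-v2")
```

The point isn’t the specific checks; it’s that nobody has to remember to run them.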
Monitoring and incident response. Establish that AI risk management isn’t set-and-forget. Work with IT, ops, or product teams to set up basic monitoring on AI systems in production. This could be as simple as requiring a monitoring plan for each AI: what metrics or indicators would signal the model is behaving unexpectedly? Error rates, unusual input patterns, user complaint spikes. Make sure someone is reviewing those indicators, at least periodically.
Plug into your company’s existing incident response process15. But update it to include AI-specific scenarios: “AI system provides harmful recommendation to customer,” “Data leak via AI output,” “Model produces discriminatory outcomes.” Define escalation paths — if an AI incident occurs, how should staff report it, and who convenes to investigate? Pre-define triggers for action. For example: “If more than 5% of outputs in an hour are flagged by users, shut down the service and escalate.” Having these thresholds agreed in advance removes hesitation in the heat of the moment. It gives your operations people the mandate to act without waiting for committee approval.
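As a sketch of how a pre-agreed trigger removes hesitation, here’s the 5%-flagged-outputs rule expressed as a simple automated check. The metric names and the shutdown step are assumptions about your monitoring setup, not a prescribed implementation.

```python
def check_flag_rate(outputs_last_hour: int, flagged_last_hour: int,
                    threshold: float = 0.05) -> str:
    """Pre-agreed trigger: act on a flag-rate spike without waiting for a committee."""
    if outputs_last_hour == 0:
        return "No traffic this hour - nothing to evaluate"
    rate = flagged_last_hour / outputs_last_hour
    if rate > threshold:
        # In production this would call your kill switch and open an incident.
        return (f"TRIGGERED: {rate:.1%} of outputs flagged - "
                "shut down the service and escalate to the on-call owner")
    return f"OK: {rate:.1%} flagged, below the {threshold:.0%} threshold"

print(check_flag_rate(outputs_last_hour=1200, flagged_last_hour=90))
```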
The cultural dimension matters as much as the technical. Encourage blameless reporting — you want employees to speak up about AI issues or near-misses without fear. If people hide problems to avoid punishment, you end up with paper compliance while real risks fester. Sidney Dekker’s Just Culture framework is useful here: distinguish between behaviours to encourage (early reporting, raising concerns), behaviours to coach (well-intentioned shortcuts, over-escalation), and behaviours to sanction (knowingly disabling safety controls, suppressing reports). Make it clear that discovering and reporting an AI flaw will be praised, not punished.
Over the first 100 days, try running a tabletop exercise — a simulated AI failure scenario with your cross-functional team. It will expose gaps in your response plan while stakes are low. Maybe two teams assumed the other would notify customers. Maybe no one knew who had authority to shut down the system. Find those issues now, not during a real crisis.
By Day 100, you will have kick-started an AI risk management program that aligns with your policy and leverages your whole coalition. It won’t be perfect. Expect to refine risk scoring, add controls, and adjust processes as you go. But you’ve moved from abstract principles to concrete action. You’re treating AI risks in a structured way, not as one-off surprises. And you’re building a culture that values speaking up and iterating on safety, rather than one of fear or wilful ignorance.
Cultivate an AI Safety Culture (Every day)
A thread running through all these steps is culture, the attitudes and behaviours towards AI across the organisation. You can have every policy and process in the world, but if the culture is hostile to governance, those mechanisms will atrophy and ossify (I love those two words, by the way, they are the perfect and absolute opposite of good governance).
So as you execute your 100-day plan, pay attention to the soft side of change.
Tone from the top. Get your senior leadership vocally supporting the AI governance effort. When the CEO or business unit heads say “We’re committed to doing AI right, not just fast,” it empowers everyone to prioritise safety. Secure an executive champion who will reinforce that message in company forums. Write their emails for them if you need to, but leadership have to make clear that governance isn’t a bureaucratic drill — it’s about protecting the business and customers. If you hear “AI Cowboys” in management pushing teams to move fast and ask forgiveness later, have a frank conversation about the risks. Bring your new coalition of allies to bear. Part of your role is educating leaders that reckless AI deployment can backfire horribly, and that safe AI enables sustainable innovation rather than hindering it.
Bridge the divide. Work actively to break down silos between disciplines. Champion cross-functional collaboration beyond the committee room. Set up brownbag sessions where engineers teach legal how a machine learning model gets built, and legal briefs engineers on upcoming AI regulations. Foster mutual respect: data scientists and lawyers should see each other as partners with different expertise, not adversaries. Act the same way, find the amazing in each other’s disciplines. When you draft policies, involve representatives from multiple teams so the language resonates with all of them. The biggest failure mode in AI governance is when the builders and the governors stop talking. It’s sad to see and completely avoidable. But it’s not somebody else’s job - you are the bridge.
Training and awareness. In these first 100 days, arrange training sessions or communications to bring the wider workforce along. Even a one-hour all-hands introducing the new AI Use Policy, with examples of do’s and don’ts, can go a long way. Share anonymised stories of AI close-calls so people understand why these rules matter — the Samsung source code leak, the Amazon hiring incident. Encourage audience participation - they’ll have stories of their own. Create quick-reference materials or an internal site with AI governance FAQs. Equip employees with the knowledge to make good choices day-to-day. Over time, you’ll need more in-depth training for specific roles, like AI risk training for developers, audit training for internal auditors, but start with broad awareness now. An informed organisation is your first line of defence.
Reinforce positive behaviour. Culture change isn’t just about warning people what not to do. It’s about celebrating what they do right. Did a team delay a launch because they found a bias issue in testing and fixed it? Praise that in the next town hall. Did an employee report an AI-generated output that looked like a privacy red flag? Thank them publicly. These stories reinforce that governance is everyone’s job and that it’s valued. You want employees to internalise that doing the safe, right thing will be recognised, not seen as hindering progress. Over time, these reinforcements create an environment where safe, secure and lawful AI is just part of how we do things here, not an external burden imposed by compliance.
Just Culture, not witch hunts. I discussed the Just Culture framework in the context of incident response, but it applies more broadly. If someone admits they unknowingly violated the AI use guidelines, your response should be “Thank you for telling us — let’s fix the process so it doesn’t happen again,” not immediate disciplinary action. This atmosphere is critical because the field is new and even the experts are learning. You want issues raised and addressed, not swept under the rug. When people feel safe to speak up, you catch failures in their infancy rather than after they’ve caused real damage.
A strong, adaptive culture will underpin all your governance mechanisms and make them effective. Without it, you risk compliance theatre — people going through motions to satisfy checkboxes while real risks remain hidden. By minding culture from day one, you steer towards genuine safety: where employees at all levels do the right thing because it’s ingrained, not just because a policy tells them to.
The Road Ahead
Your first 100 days as AI Governance Lead will be a whirlwind. If you’re not yet in that whirlwind, now is the time to learn — before someone taps you on the shoulder and the clock starts. You can learn the practices of AI Governance in the AI Governance Practitioner Program at AI Career Pro. The first track is available now and covers the knowledge and practical skills you’d need in your first 100 days, along with tools and templates you can apply immediately.
By the end of 100 days, you’ll have crafted core policies, stood up new processes, and hopefully prevented a few fires. Importantly, you’ll have shown that AI governance is practical and achievable. It’s not an academic exercise but a real operational function that supports the business. By Day 100, you should have early wins on the board and a foundation set for the longer journey.
Keep in mind that the next 100 days, and the 100 after that, will involve refining what you built, scaling it, and responding to new challenges. Technology will evolve, regulations will shift, new risks will emerge. That’s expected — you’ve designed your governance to be adaptive. Measure success not by the volume of paperwork but by the health and effectiveness of your mechanisms over time. Are issues being caught and addressed early? Are teams integrating safe practices into their workflows? Are you learning from minor misses to prevent major incidents? These are the metrics that matter.
Remember that your role is as much about enabling innovation as controlling risk. Done right, governance doesn’t block every new idea — it creates the confidence to pursue AI aggressively, knowing the guardrails will steer you away from cliffs. Strive to be seen as the person who makes AI work safely for the company, not the person who slows everything down.
I used to tell people that in the last few days of their first 100 days, they should build the business case for the coming year — map out benefits, costs, resources needed. I don’t give that advice anymore. Instead I say: take a holiday. Unwind. Recover. Get ready for the next sprint. This is a marathon. Of course, if you’re anything like me, you’re probably going to sketch out that business case while you’re on holiday anyway — but at least you’ll be in the right headspace to do it!
With the trust you build and the structures you implement, your organisation can harness AI’s potential without stumbling into its hazards. That’s the reward of these first 100 days: setting a course where AI can thrive, safely, securely, and lawfully.
Good luck. You’re off to a solid start.
A personal note to close. Writing this article, re-reading 50 articles from this year, and working with the stream of practitioners joining the AI Governance Practitioner Program, has so positively confirmed my decision to start AI Career Pro and contribute what I can to supporting this practitioner community’s growth. It’s the most meaningful work I’ve done. I’m genuinely honoured and humbled by the enthusiasm and commitment of every single person joining the program to go beyond knowledge and theory to the real messy and practical work of doing AI governance right. And I’m inspired by every leader who takes on this momentous task of making sure that AI, perhaps the most significant technology shift of our lifetimes, is done safely, securely, and lawfully.
Thank you.


