Two years ago, when boards asked about AI, they wanted to know whether the company was doing anything with it. The answer was almost always yes, or at least yes-adjacent: something with a model, something with a pilot, something on the roadmap.
That's no longer the question.
The questions I'm hearing from boards now are different in kind. They're not about capability. They're about accountability. And most technology leaders I see going into those rooms aren't ready for them.
What changed
The shift happened gradually and then quickly. A series of high-profile incidents — AI systems producing harmful outputs, companies unable to explain model decisions in legal or regulatory contexts, shadow AI use creating liability exposure the organisation wasn't aware of — changed what boards feel responsible for asking.
The context also shifted: regulatory frameworks around AI are maturing in Europe and the UK, governance expectations from institutional investors are increasing, and the gap between what companies say they do with AI and what they actually do is becoming harder to sustain. Boards are asking harder questions because the stakes for getting the answers wrong have gone up.
The four questions
These aren't the only questions, but they're the ones I see organisations caught off guard by most often.
"Do you know how AI is being used in this company?"
This is not a question about the AI projects the technology team is running. It's a question about the whole organisation. How many employees are using AI tools in their daily work? Which tools? For what purposes? On what data?
The honest answer at most companies is: we don't know exactly. Research suggests that a majority of employees are using AI tools in their work, much of that use unsanctioned and much of it involving company data, and most organisations have no systematic visibility into it. That's the governance gap. Boards are now being asked to take accountability for it.
A credible answer here doesn't require having solved the problem. It requires having looked at it honestly and having a plan. "We haven't formally assessed this, but we're doing so" is a reasonable starting point. "We have a complete picture" is usually not credible without evidence.
"What happens when the model gets it wrong?"
This is a question about accountability, not engineering. When an AI system produces a wrong output that leads to a bad decision — a loan denial that shouldn't have happened, a medical recommendation that was incorrect, a customer interaction that caused harm — who is accountable? What is the remediation process? What does disclosure look like?
The honest answer requires having thought through specific failure scenarios for each AI system in use. Not in the abstract. Concretely: what are the worst-case outputs this system could produce, who would bear the consequences, and what controls exist to detect and address them?
Most technology leaders can answer this for the AI projects they actively own. Fewer can answer it for systems that were deployed twelve months ago and are now running mostly unattended. Even fewer can answer it for AI tools employees are using in their own workflows.
"Are we in compliance?"
The regulatory landscape for AI is still developing, but several frameworks are now law or close to it: the EU AI Act's risk classifications and conformity requirements for high-risk systems apply from August 2026. Data protection frameworks (GDPR, UK GDPR) have always applied to personal data processed by AI systems; enforcement has become more active.
The question boards are actually asking is: has someone looked at this? Is there a legal or compliance opinion on our exposure? Are we tracking the obligations that apply to us?
The answer doesn't have to be "fully compliant with everything." It needs to be "we know what applies to us, we have a view on where we are, and we have a plan for the gaps." That's a credible answer. "We haven't formally looked at this" is increasingly not.
"What are our competitors doing?"
This one sounds easier than it is. The expected answer is a confident assertion that the company is at or ahead of the industry. But the credible version requires actually knowing — based on something more than assumption — what the competitive landscape looks like. Which competitors have deployed AI in ways that affect customer experience, operational efficiency, or product differentiation? What's the pace of change in the industry? Where would falling behind matter and where wouldn't it?
If the technology leader doesn't know this, the board usually does. That asymmetry is uncomfortable.
The CTO's job in these conversations
It is not to be the AI evangelist. The board already believes AI is significant — they've been reading about it for two years. What they need from the technology leader is honest translation.
What does the technology actually do, specifically, in concrete terms? What are the risks, named clearly and assessed honestly? What would a reasonable governance posture look like for a company of this size and risk profile? What don't we know yet?
The instinct to emphasise upside and minimise downside in board settings is understandable. It's also the thing that most undermines credibility on exactly these questions. Boards that feel they're being managed tend to ask harder questions at the next meeting.
The technology leaders I've seen handle these conversations well do roughly the same things: they've briefed themselves on the specific regulatory and governance context, they've thought through failure scenarios for their most consequential AI systems, and they're honest about what they don't yet know. That combination — prepared, concrete, honest about uncertainty — tends to land well.
If you're preparing for a board conversation about AI and want to think through the framing, I'm happy to help — get in touch.