One of the less-discussed consequences of faster software development is what happens to governance. Not security in the abstract sense — people talk about that constantly — but the specific, practical challenge of meeting compliance requirements as the speed at which code moves into production increases.
For most organisations, compliance has historically worked through manual gates. A security team reviews a deployment. An architecture review board signs off on a design. An auditor checks configuration after the fact. These processes were designed for a world where significant changes happened infrequently enough that human review at each step was feasible.
That world is changing.
The Velocity Problem
When AI tools increase individual developer productivity substantially — and the evidence that they do is reasonably strong — the deployment frequency of a given team tends to increase with it. Features that took two weeks to build and ship now take three days. That is the intended outcome, and it is genuinely valuable.
But a compliance process designed around two releases per sprint does not automatically scale to eight. The bottleneck shifts. Either the compliance team grows proportionally (rarely practical), or the review quality degrades under volume, or the reviews start getting skipped because velocity is prioritised over process.
None of those outcomes is acceptable for teams operating in regulated industries, or for any team that takes reliability and security seriously. The solution is not to slow the developers down. The solution is to make compliance programmable.
What Policy as Code Actually Means
The core idea is straightforward: instead of expressing compliance requirements as documentation that humans read and apply, you express them as code that the system applies automatically. The policy lives in version control, gets reviewed like any other change, and executes at deployment time.
In practice, this means that a Kubernetes cluster can reject a deployment that does not meet your security baseline before it reaches production — without anyone having to review it manually. An infrastructure change that violates cost or configuration policy gets blocked at the CI stage, with a clear message about why. A service template that lacks the required logging configuration simply cannot be provisioned.
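To make that concrete, a baseline check of the kind an admission controller enforces can be expressed as ordinary code. The sketch below is in Python rather than a real policy language, and the manifest shape and the three rules (no privileged containers, non-root execution, pinned image tags) are illustrative examples of a security baseline, not any particular organisation's policy:

```python
def check_deployment(manifest: dict) -> list[str]:
    """Return a list of policy violations; an empty list means admit."""
    violations = []
    pod_spec = manifest.get("spec", {}).get("template", {}).get("spec", {})
    for container in pod_spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        ctx = container.get("securityContext", {})
        image = container.get("image", "")
        if ctx.get("privileged"):
            violations.append(f"container {name!r} runs privileged")
        if not ctx.get("runAsNonRoot"):
            violations.append(f"container {name!r} must set runAsNonRoot")
        if ":" not in image or image.endswith(":latest"):
            violations.append(f"container {name!r} must pin an image tag")
    return violations


# A deployment that violates all three rules is rejected with reasons,
# before it ever reaches production.
manifest = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "api", "image": "registry.example/api:latest",
         "securityContext": {"privileged": True}},
    ]}}}
}
for violation in check_deployment(manifest):
    print("DENY:", violation)
```

The point is not the specific rules but the shape: the policy is a pure function from a proposed change to a list of reasons, which makes it testable and reviewable like any other code.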
The tooling for this has matured considerably. OPA (Open Policy Agent) has become a de facto standard for policy enforcement across the stack, handling everything from Kubernetes admission control to API authorisation to Terraform plan validation. Kyverno offers a more Kubernetes-native approach with lower barrier to entry. Both are in use across large production environments, which means the failure modes are better understood than they were a few years ago.
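The Terraform case follows the same pattern. A CI job can walk the JSON plan (`terraform show -json`) and block before apply. The sketch below assumes the plan's standard `resource_changes` structure; the required-tags rule itself is a hypothetical example of a configuration policy, written in Python for illustration rather than in Rego:

```python
# Hypothetical policy: every created or updated resource must carry
# these tags. Real policies would live in version control alongside
# the infrastructure code they govern.
REQUIRED_TAGS = {"owner", "cost-centre"}


def plan_violations(plan: dict) -> list[str]:
    """Scan a Terraform JSON plan and return blocking violations."""
    violations = []
    for rc in plan.get("resource_changes", []):
        change = rc.get("change", {})
        actions = change.get("actions", [])
        # Only changes that create or modify infrastructure are in scope.
        if "create" not in actions and "update" not in actions:
            continue
        after = change.get("after") or {}
        tags = set(after.get("tags") or {})
        missing = REQUIRED_TAGS - tags
        if missing:
            violations.append(
                f"{rc.get('address')}: missing required tags {sorted(missing)}")
    return violations


plan = {"resource_changes": [
    {"address": "aws_s3_bucket.logs",
     "change": {"actions": ["create"],
                "after": {"tags": {"owner": "data-eng"}}}},
    {"address": "aws_instance.old",
     "change": {"actions": ["delete"], "after": None}},
]}
for violation in plan_violations(plan):
    print("BLOCK:", violation)
```

In a real pipeline the equivalent check would typically run as an OPA or Conftest step against the plan JSON, failing the job with the same kind of message.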
The Part That Is Harder Than Installing a Tool
The challenge with Policy as Code is not selecting the tooling. It is the work of deciding what the policies should actually be and keeping them current.
Writing a good policy requires understanding both the technical system and the compliance requirement it is meant to enforce. That understanding usually lives in different teams, and getting it into the same conversation is organisationally difficult. A security engineer who has never deployed a containerised workload and a platform engineer who has never read a compliance framework tend to talk past each other in specific ways. Bridging that gap takes sustained effort.
There is also the question of what happens when a policy is wrong or out of date. A manual compliance gate can be overridden through human judgment when circumstances warrant it. A policy that blocks a legitimate deployment because it was written for a context that no longer applies is a different kind of problem. Escape hatches need to exist, and they need to be auditable.
The teams that get this right tend to treat compliance as a product rather than a constraint. They have someone accountable for the quality of the policies themselves — not just their existence — and they invest in the feedback loops that surface when policies are causing more friction than they prevent.
Why This Matters More Now
The intersection with AI-assisted development is worth being explicit about. When developers are using AI tools that generate code more quickly, the incidence of configuration drift, non-standard patterns, and inadvertent policy violations tends to increase. Not because AI-generated code is worse, but because there is simply more of it, and it covers more surface area.
Policy as Code does not solve that problem entirely, but it is the only approach that scales with the velocity. Manual review at the end of the pipeline will always be slower than automated enforcement at the beginning.
This is also relevant for the AI agents that are beginning to interact with deployment infrastructure directly. An agent with permission to modify infrastructure configuration is, from a compliance standpoint, a new kind of actor. It needs the same guardrails as a human engineer, expressed in the same programmable way that the rest of the compliance regime operates.
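In code terms, that principle is simple: the policy function does not branch on who is asking. A minimal sketch, with an illustrative single-rule policy standing in for the real policy engine; the actor identity exists only for the audit trail:

```python
def apply_change(actor: str, manifest: dict, policy, audit: list) -> bool:
    """Run a change through the policy gate, identically for any actor."""
    violations = policy(manifest)
    # Humans and agents leave the same audit record; neither bypasses
    # the gate, and neither gets a special code path.
    audit.append({"actor": actor,
                  "violations": violations,
                  "admitted": not violations})
    return not violations


def no_privileged(manifest: dict) -> list[str]:
    """Illustrative stand-in for a full policy evaluation."""
    return ["privileged workload"] if manifest.get("privileged") else []


audit = []
print(apply_change("jane@example.com", {"privileged": False},
                   no_privileged, audit))
print(apply_change("deploy-agent-1", {"privileged": True},
                   no_privileged, audit))
```

Giving an agent the same gate as a human is cheaper than building a parallel review process for it, and it keeps the audit trail uniform.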
The direction here is clear. The implementation is genuinely complex. If your organisation is working through how to build a governance model that holds up under increased velocity — whether from AI tools, team growth, or both — it is worth thinking about carefully. I am happy to discuss what that work typically involves.