The EU AI Act requires human oversight. No one is measuring it.
Articles 4, 14, and 26 demand demonstrable oversight competence. Yet the market offers no validated tool to assess whether people are psychologically equipped to provide it. That gap is not a niche problem — it is a systemic compliance risk. Organisations that rely on training completion records to demonstrate compliance are building their governance case on the wrong foundation. The question is not whether your people have been trained. It is whether they are equipped to act when it matters.
Read article →
Why AI ethics training doesn't create AI-ready leaders
Knowledge of principles is necessary but insufficient. What determines governance effectiveness is whether people possess the psychological conditions to act on that knowledge when it counts: under pressure, and against the pull of automation bias. The research here is unambiguous: awareness does not translate to action unless those conditions are in place.
Read article →
Psychological safety is a governance condition, not an HR initiative
When reframed as a property of decision systems rather than individual traits, psychological safety becomes measurable, actionable, and directly relevant to AI governance — and to regulatory compliance. Edmondson's research provides the theoretical foundation. ALMA provides the governance-specific operationalisation.
Read article →
From awareness to capability: what AI governance assessment should actually measure
The difference between knowing what AI governance requires and being able to deliver it is not a training gap. It is a measurement gap. Current assessment tools measure what organisations have built: frameworks, policies, technical controls. None measures whether the people operating those structures are psychologically equipped to do so.
Read article →
The four layers of AI governance — and why most organisations only address three
Technical controls, compliance mechanisms, and authority allocation are necessary. They are not sufficient. Layer 4 — Human Oversight Capacity — is where governance either holds or fails. It is also the layer that no existing framework explicitly measures, and the one that the EU AI Act implicitly demands.
Read article →