Your organisation is deploying AI. But can your people actually govern it?
The EU AI Act requires demonstrable human oversight of AI systems. Most organisations have no way to measure whether their people are psychologically ready to provide it. We change that.
Compliance programmes build processes. They don't build the people who run them.
Organisations across Europe are investing heavily in AI governance frameworks, policies, and training — yet remain fundamentally blind to whether their leaders and teams feel safe to challenge AI outputs, escalate concerns, or override systems when it matters.
Traditional maturity assessments tell you where you stand on a generic five-stage model. They don't tell you whether your people can exercise the judgement that the EU AI Act explicitly demands. The gap is not awareness. The gap is measurable readiness.
The problem is not that psychological and cultural factors are immeasurable. The problem is that they have not been translated into decision-relevant, governance-usable proxies.
Your teams completed the AI ethics training. Can they actually override a system when it matters?
Organisations routinely conflate training delivered with capability embedded. A completed e-learning module does not mean a manager can challenge an automated lending decision under time pressure, or escalate an anomalous output from a clinical decision-support system.
Knowledge of AI principles is necessary but insufficient. What determines governance effectiveness is whether people possess the psychological conditions — safety, ownership, critical engagement — to act on that knowledge when it counts.
See how ALMA measures this
Why do clients come to us?
Because they have governance structures on paper but no confidence that their people can operate them under pressure. Frameworks exist. Human readiness is assumed, not assessed.
What are they trying to achieve?
Demonstrable compliance with the EU AI Act — not just documentation, but provable human oversight capacity that survives regulatory scrutiny.
What hurdles do they face?
No measurement of psychological readiness. Cultural barriers to challenge and escalation. Leadership that delegates governance to compliance functions rather than embedding it operationally.
What options do they have?
Internal self-assessments (limited rigour), generic maturity scans (low value, no action), or The Responsible AI Center's diagnostic-to-intervention methodology that connects measurement directly to value.
High-risk AI systems must comply by August 2026. The clock is running.
"Compliance" under the EU AI Act is not a documentation exercise. Articles 4, 14, and 26 require organisations to demonstrate that the people operating and overseeing AI systems possess genuine competence — not just completed training records.
Moving from current state to demonstrable compliance takes 12–18 months when done properly: assessment, targeted intervention, behavioural embedding, and re-measurement. Organisations that start in 2025 will be ready. Those that wait will not.
Talk to us about your timeline
AI Literacy Requirements — Article 4 in force
All providers and deployers of AI systems must ensure appropriate AI literacy across their workforce. No validated measurement standard yet exists.
GPAI Model Obligations
General-purpose AI model providers must comply with transparency and copyright requirements. Human oversight of GPAI outputs becomes a governance priority.
High-Risk AI Systems — Full Compliance Required
Organisations deploying high-risk AI must demonstrate human oversight capacity under Articles 14 and 26. This is the critical deadline. Assessment-to-embedding takes 12–18 months.
From diagnosis to embedded capability. A clear path — not a maturity curve.
We don't place you on a five-stage model and wish you luck. Every engagement follows a structured discovery-to-sustainment methodology that links measurement directly to action and value.
Find the real gaps
Where exactly are the gaps between your people's current mindset and what the EU AI Act requires? We map concrete readiness shortfalls across five governance-critical dimensions — not generic maturity labels.
Measure the exposure
What is the organisational cost of those gaps? We build conservative models that translate oversight deficits into compliance risk and operational exposure — giving decision-makers the numbers they need.
Close the gaps
Which specific interventions will close the gaps — and in what sequence? We deliver a phased adoption roadmap focused on behavioural change and measurable outcomes, not slide decks.
Embed the capability
How will you embed oversight capacity into governance structures that outlast any single programme? We design for permanence — with governance mechanisms, re-assessment cycles, and clear accountability.
Most organisations govern three layers. The fourth is the one that matters most.
Effective AI governance requires more than technical controls and compliance checklists. It requires that the humans in the system can actually exercise oversight — under pressure, in ambiguous situations, against the grain of automation bias.
Layer 4 — Human Oversight Capacity — is where most organisations have a blind spot. It is also the layer that Articles 4, 14, and 26 of the EU AI Act explicitly address. And it is the layer that The Responsible AI Center uniquely measures.
Discover how ALMA measures Layer 4
Layer 1 — Technical Controls
Model monitoring, bias detection, data governance, algorithmic auditing
Layer 2 — Compliance Mechanisms
Policies, documentation, regulatory reporting, risk registers
Layer 3 — Authority Allocation
Decision rights, escalation protocols, accountability structures
Layer 4 — Human Oversight Capacity
Psychological readiness, critical engagement, conscious ownership
Our unique focus
Most organisations invest heavily in Layers 1–3. Layer 4 is assumed, not assessed.
Structured engagement. Measurable outcomes.
Every service is anchored in the Value-Focused Discovery methodology and designed to produce outcomes that can be demonstrated to regulators, boards, and audit committees.
Value-Focused Discovery
Before we propose anything, we diagnose the real situation. ALMA provides the measurement foundation. The discovery output is a prioritised readiness diagnostic with concrete intervention recommendations — not a traffic-light slide deck.
Learn about ALMA
Leadership & Team Development
Closing the gaps ALMA identifies. Targeted workshops, coaching, and development programmes designed around specific ALMA findings — not generic AI ethics training. Always tied back to ALMA measurement with pre/post assessment.
Enquire about programmes
Ongoing Advisory
Governance capability erodes without sustained attention. Retainer-based advisory for organisations navigating rapid AI deployment — including periodic ALMA re-assessment, governance design reviews, and regulatory update briefings.
Discuss advisory options
Before we propose anything, we diagnose the real situation.
ALMA is not a maturity scan. It is a governance diagnostic that measures whether your people possess the psychological conditions required for effective AI oversight — as mandated by Articles 4, 14, and 26 of the EU AI Act.
Five dimensions. One governance picture.
ALMA measures five dimensions of AI oversight readiness, each mapped directly to observable governance behaviours and EU AI Act compliance requirements. Together, they provide a complete picture of your organisation's human oversight capacity.
Psychological Safety
Can your people question AI outputs, report errors, and challenge decisions without fear of consequences?
Art. 14 — Human Oversight
Growth Orientation
Do they believe AI competence can be developed — or do they see it as fixed and outside their control?
Art. 4 — AI Literacy
Adaptive Flexibility
Can they tolerate ambiguity and revise their approach as AI technology and regulation evolve?
Art. 4 — AI Literacy
Conscious Ownership
Do they take personal accountability for AI decisions — or delegate responsibility to the system?
Art. 26 — Deployer Obligations
Critical Engagement
Do they actively verify, question, and challenge AI outputs — or passively accept them?
Art. 14 — Human Oversight
ALMA governance diagnostic — individual and team readiness profiles
A maturity scan tells you that you're at "Level 2". ALMA tells you that 40% of your managers cannot psychologically challenge AI-driven decisions — and shows you exactly which interventions will change that.
How ALMA works
Assessment administration
50 validated items, 15–20 minutes per participant. Available for individuals, teams, or organisation-wide deployment. Five-point Likert scale with reflective questions.
Diagnostic analysis
Psychometrically validated scoring across five dimensions. Individual profiles, team aggregates, and organisational heat maps identifying governance risk clusters.
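To make the analysis step concrete, the sketch below shows the general shape of such a scoring pipeline. It is a minimal illustration, not ALMA's validated scoring model: the dimension names mirror the five ALMA dimensions above, but the item-to-dimension mapping, the reverse-keying, and the below-3.0 risk threshold are assumptions made purely for illustration.

```python
# Illustrative sketch only. ALMA's actual items and scoring model are
# psychometrically validated and not public; the item-to-dimension map,
# reverse-keying, and the 3.0 risk threshold below are hypothetical.
from statistics import mean

DIMENSIONS = [
    "psychological_safety",
    "growth_orientation",
    "adaptive_flexibility",
    "conscious_ownership",
    "critical_engagement",
]

def score_participant(responses: dict[str, int],
                      item_map: dict[str, tuple[str, bool]]) -> dict[str, float]:
    """Average five-point Likert answers into five dimension scores.

    responses: item_id -> answer (1-5)
    item_map:  item_id -> (dimension, reverse_keyed)
    Assumes every dimension is covered by at least one item.
    """
    buckets: dict[str, list[int]] = {d: [] for d in DIMENSIONS}
    for item_id, answer in responses.items():
        dimension, reverse_keyed = item_map[item_id]
        # Reverse-keyed items are flipped so that 5 always means "more ready".
        buckets[dimension].append(6 - answer if reverse_keyed else answer)
    return {d: mean(values) for d, values in buckets.items()}

def team_risk_clusters(profiles: list[dict[str, float]],
                       threshold: float = 3.0) -> dict[str, float]:
    """Share of a team scoring below the (hypothetical) readiness threshold —
    the raw material for an organisational heat map."""
    return {
        d: sum(profile[d] < threshold for profile in profiles) / len(profiles)
        for d in DIMENSIONS
    }
```

Aggregating individual profiles this way is what turns fifty item responses into the team aggregates and heat maps described above: a dimension where, say, half a team falls below threshold surfaces as a governance risk cluster.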
Intervention roadmap
A prioritised opportunity map linking specific gaps to value at risk, with a phased intervention roadmap and conservative quantification of compliance exposure.
Re-measurement & tracking
ALMA is a baseline, not a one-time snapshot. Periodic re-assessment tracks progress and validates that interventions are producing measurable change in oversight capacity.
What you receive
- Individual and team readiness profiles across all five dimensions
- Organisational heat map identifying governance risk clusters by role, function, and level
- Prioritised opportunity map linking gaps to specific value at risk
- Conservative quantification of compliance exposure under the EU AI Act (see the sketch after this list)
- Phased intervention roadmap with sequenced, adoption-focused recommendations
- Sensitivity analysis: what happens if key assumptions change?
- Executive summary for board and audit committee reporting
- Baseline measurement for tracking progress through re-assessment
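As a toy illustration of what "conservative quantification" and "sensitivity analysis" mean here, consider a deliberately simplified expected-exposure model. Every figure and parameter below is a hypothetical assumption (the 40% gap rate echoes the illustrative figure used earlier on this page); it is not an ALMA output or ALMA's actual methodology.

```python
# Hypothetical illustration of conservative exposure quantification.
# The formula, parameters, and every figure are assumptions for
# demonstration only; they are not ALMA outputs or client data.

def oversight_exposure(gap_rate: float,
                       decisions_per_year: int,
                       failure_rate_given_gap: float,
                       cost_per_failure: float) -> float:
    """Expected annual exposure from an oversight readiness gap.

    gap_rate:               share of overseers unable to challenge AI outputs
    decisions_per_year:     AI-assisted decisions the organisation makes
    failure_rate_given_gap: chance an unchallenged bad output becomes an incident
    cost_per_failure:       blended cost per incident (remediation, regulatory,
                            reputational), chosen conservatively
    """
    return gap_rate * decisions_per_year * failure_rate_given_gap * cost_per_failure

# Baseline under deliberately conservative assumptions.
baseline = oversight_exposure(gap_rate=0.40, decisions_per_year=10_000,
                              failure_rate_given_gap=0.005, cost_per_failure=25_000.0)
print(f"Baseline annual exposure: EUR {baseline:,.0f}")  # EUR 500,000

# One-way sensitivity analysis: how does exposure move if one assumption shifts?
for factor in (0.5, 1.0, 2.0):
    shifted = oversight_exposure(0.40, 10_000, 0.005 * factor, 25_000.0)
    print(f"Incident rate x{factor}: EUR {shifted:,.0f}")
```

The sensitivity loop is the point of the final deliverable above: decision-makers see how the exposure estimate moves when a key assumption is halved or doubled, rather than being handed a single unexplained number.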
How ALMA differs from what you've seen before
Generic maturity scans produce labels. ALMA produces decisions. Here's the difference:
| Generic Maturity Scan | ALMA Governance Diagnostic |
|---|---|
| Places you on a 5-stage model with a generic score | Identifies specific behavioural gaps across five governance-critical dimensions |
| Generic recommendations that apply to every organisation | Prioritised intervention roadmap linked to your actual readiness profile |
| Self-reported process maturity — what people say they do | Psychometrically validated indicators of actual oversight capacity |
| One-time snapshot with no measurement baseline | Baseline measurement that tracks progress through periodic re-assessment |
| Tells you what you already know, and nothing about what to do | Reveals the human factors governance frameworks cannot capture |
| No connection to EU AI Act articles or regulatory requirements | Directly mapped to Articles 4, 14, and 26 of the EU AI Act |
Built on psychological science. Designed for governance practitioners.
ALMA's approach isn't opinion-based. It is built on three decades of peer-reviewed research in organisational psychology, translated into governance-usable measurement instruments.
Psychological Safety
Amy Edmondson's foundational research on team psychological safety — reframed as a property of AI governance decision systems, not merely an HR aspiration. When people feel unsafe to challenge AI outputs, governance fails silently.
Growth Mindset & Adaptive Capacity
Carol Dweck's growth mindset research applied to AI governance contexts. Organisations where people believe AI competence is fixed — not developable — systematically underinvest in the human layer of oversight.
Automation Bias
Decades of research on automation bias — the tendency to over-rely on automated systems — translated into measurable governance indicators. Critical Engagement and Conscious Ownership dimensions directly address this failure mode.
The Responsible AI Center's academic foundation is established through strategic research partnerships, with papers targeting leading governance and public policy journals. This dual academic-practitioner approach ensures ALMA's practical applications are built on rigorous scholarly validation.
Thinking that shapes the field
Research-backed perspectives on AI governance, human oversight, and the gap between compliance and capability.
The EU AI Act requires human oversight. No one is measuring it.
Articles 4, 14, and 26 demand demonstrable oversight competence. Yet the market offers no validated tool to assess whether people are psychologically equipped to provide it.
Read article →
Why AI ethics training doesn't create AI-ready leaders
Knowledge of principles is necessary but insufficient. What determines governance effectiveness is whether people possess the psychological conditions to act on that knowledge when it counts.
Read article →
Why The Responsible AI Center exists
We founded The Responsible AI Center because we saw the same pattern everywhere: organisations investing millions in AI governance frameworks, yet unable to answer a simple question — can our people actually govern AI?
Mulya van Roon
Founder & Principal Advisor — The Responsible AI Center
Mulya van Roon helps organisations move beyond checkbox AI compliance towards governance that actually works. Based in Amsterdam and active across Brussels and the wider EU policy landscape, he translates the EU AI Act and broader trustworthy AI frameworks into actionable governance structures, risk processes, and operational playbooks.
His career spans nearly two decades across Microsoft, KPMG, and IBM/Kyndryl — predominantly in highly regulated industries where governance is foundational, not optional. He is a Member of the European Commission's Apply AI Alliance, contributing to how AI regulation is operationalised across Europe.
Mulya is the architect of ALMA — the AI Literacy Mindset Assessment — a governance diagnostic that measures whether leaders and teams are genuinely equipped to oversee AI-enabled decisions, not merely trained to do so.
His conviction: the next frontier of AI governance is not about what AI systems do. It is about whether the people governing them are fit for purpose.
Research collaborations & academic validation
Our approach is not opinion-based. ALMA's measurement framework is built on peer-reviewed research in organisational psychology, with ongoing academic validation through strategic research partnerships.
Papers targeting leading governance and public policy journals are in preparation, co-authored with academic collaborators including Joana Michalska. This dual academic-practitioner foundation ensures that ALMA's practical applications are built on rigorous scholarly validation — not consulting intuition.
The insight that created The Responsible AI Center
The insight that created The Responsible AI Center was deceptively simple: psychological and cultural factors in AI governance are not immeasurable. They simply haven't been translated into decision-relevant, governance-usable proxies.
Every existing tool measures what organisations have built — frameworks, policies, technical controls. None measures whether the people operating those structures are psychologically equipped to do so. The Responsible AI Center was founded to close that gap — and ALMA is the instrument that makes it measurable.
Mission: bridging the gap between psychological science and governance practice.
Let's start with the real question — can your people govern AI?
A 30-minute discovery conversation to understand your specific governance challenge. No generic pitch. No obligation. Just a focused conversation about your situation.