AI Governance Maturity Assessment

Understand where your organisation stands on AI governance across five key dimensions. The assessment takes approximately 10–15 minutes to complete.

Before you begin

We collect your details to send you a personal results summary and to build sector-level benchmarks.


AI Governance Maturity Assessment – Results

Overall Maturity Score (out of 5)

Maturity Radar

Your scores across the five governance dimensions.

Dimension Breakdown

Priority Actions

📊 Sector Benchmarks Coming Soon
As more organisations in your sector complete this assessment, Fair Accord will publish anonymised sector-level averages so you can compare your scores. Your responses contribute to building this picture.


How to Use the AI Governance Maturity Assessment

A practical guide to scoring accurately, gathering the right evidence, and turning your results into action.

📋 Before you score
- Base each score on evidence — policies, logs, meeting minutes, dashboards — not on what you plan to do.
- If you are between two levels, choose the lower one and note what is missing.
- Dimensions can be scored independently — you do not need to complete them in order.
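The scoring rules above can be sketched in code. This is an illustrative sketch only: the dimension labels (A–E), the example values, and the assumption that the overall score is the unweighted mean of the five dimension scores are ours, not defined by the assessment itself.

```python
import math

# Hypothetical dimension scores on the 1-5 maturity scale; a value like
# 2.5 means "between Developing (2) and Defined (3)".
raw_scores = {"A": 2.5, "B": 3.0, "C": 1.8, "D": 3.9, "E": 2.0}

# Rule from "Before you score": if you are between two levels,
# choose the lower one.
scores = {dim: math.floor(level) for dim, level in raw_scores.items()}

# Assumed aggregation: overall maturity as the mean of the five dimensions.
overall = sum(scores.values()) / len(scores)

# Rule from "How to use your results": your weakest dimension is your
# greatest exposure, so surface it first.
weakest = min(scores, key=scores.get)

print(scores)   # {'A': 2, 'B': 3, 'C': 1, 'D': 3, 'E': 2}
print(overall)  # 2.2
print(weakest)  # C
```

Rounding down rather than averaging between levels keeps the score honest: a level is only claimed when all of its evidence is in place.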
1. What each maturity level means
A plain-English picture of your organisation at each stage.
Level 1 – Initial: Ad hoc & reactive
AI is used without coordinated oversight. Decisions are made individually or per-project. There is no shared policy, no defined ownership, and no consistent process. Risk is managed by luck rather than design.
Level 2 – Developing: Awareness without consistency
Some teams have started to think about AI governance, but practices vary widely. Policies exist in draft or for specific projects only. Progress depends on individual champions rather than shared systems.
Level 3 – Defined: Structured & communicated
The organisation has documented and approved its approach to AI governance. Roles are clear, lifecycle controls exist, and most teams follow the same process. Governance is visible — though measurement is still being strengthened.
Level 4 – Managed: Measured & data-driven
Governance is actively monitored using defined metrics and KPIs. Leaders receive regular risk reports. Incidents trigger root-cause analysis and policy updates. AI governance is embedded in how decisions are made, not treated as a compliance checkbox.
Level 5 – Optimising: Proactive & continuously improving
AI governance is a strategic differentiator. The organisation anticipates emerging risks, engages with regulators, and uses feedback loops to keep policies ahead of practice. Continuous improvement is built into the culture.
2. What counts as evidence
Concrete artefacts that support each level.
Level 1 – Initial
Policy & Roles: No documents; individuals decide locally
Risk & Lifecycle: No inventory; testing informal or absent
Monitoring & Metrics: No dashboards, logs, or reporting

Level 2 – Developing
Policy & Roles: Draft policy; email threads showing informal escalation
Risk & Lifecycle: Spreadsheet list of AI projects; basic test notes
Monitoring & Metrics: Manual checks for one or two systems; incident log started

Level 3 – Defined
Policy & Roles: Approved AI policy; RACI chart; board sign-off minutes
Risk & Lifecycle: Central inventory with risk ratings; documented test criteria; lifecycle checklist
Monitoring & Metrics: Defined KPIs/KRIs; standard incident report template; monitoring for all material systems

Level 4 – Managed
Policy & Roles: Committee terms of reference; performance objectives referencing AI governance
Risk & Lifecycle: Go/no-go gate records; independent review reports; post-mortem write-ups
Monitoring & Metrics: Real-time dashboards; threshold alerts; root-cause analysis reports; bias/fairness metrics

Level 5 – Optimising
Policy & Roles: Policy update log tied to incidents and regulation changes; succession plans
Risk & Lifecycle: Red-team and stress-test results; continuous risk-scoring pipeline
Monitoring & Metrics: Predictive indicators; scenario models; contributions to external benchmarks or standards bodies
3. Who should be involved
Best run as a cross-functional conversation, not a solo exercise.
🏛️ Chief Risk Officer / Compliance (Core)
Holds the authoritative view on policy, escalation thresholds, and regulatory obligations. Essential for scoring Dimensions A and C accurately.
💼 Business Owners of AI systems (Core)
Know which AI tools are in use, what decisions they influence, and what monitoring is actually in place day-to-day. Grounds the assessment in operational reality.
⚙️ Technical Lead / AI/ML Engineer (Core)
Can speak to the development lifecycle, testing rigour, and monitoring infrastructure. Critical for the Dimension B and D questions.
📊 Data & Analytics Lead (Key)
Provides evidence on data quality controls, bias measurement, and fairness assessments. Often holds the inventory of models and datasets.
⚖️ Legal / Data Protection Officer (Key)
Confirms the legal basis for AI uses, advises on regulatory exposure, and validates evidence for Dimension C documentation requirements.
🎓 HR / Learning & Development (Consult)
Can evidence training programmes, awareness campaigns, and how AI governance is embedded in onboarding and performance objectives (Dimension E).

Tip: Schedule a 90-minute working session with Core and Key roles present. Share the assessment in advance so each person can gather relevant evidence. Treat disagreements on scores as useful data — they often reveal governance gaps.

4. How to use your results
Turning a score into a governance improvement roadmap.
1. Identify your lowest-scoring dimensions first
Your weakest dimension is your greatest exposure. Governance is only as strong as its least mature component — a high score in culture means little if risk management is ad hoc. Focus improvement effort where the gap is largest.
2. Set a realistic target level for the next 12 months
Aim to move each lagging dimension up by one level. Jumping multiple levels in a single year is rarely sustainable. Define the specific artefacts or practices that would evidence a higher score, and assign ownership to named individuals.
3. Brief your board or leadership on the overall score
A scored maturity assessment is a concise, evidence-grounded way to bring AI governance onto leadership agendas. Use the radar chart output to show strengths and gaps at a glance, without technical detail.
4. Benchmark against your sector
As more organisations complete the assessment, Fair Accord will publish sector-level benchmarks. Your score will be compared anonymously so you can see where you stand relative to peers — and calibrate your ambition accordingly.
5. Reassess in 6–12 months
Governance maturity is not a one-time audit. Plan to repeat the assessment after implementing improvements. Use the change in scores as evidence of progress for regulators, auditors, or boards — and to sustain momentum internally.
Want support with your roadmap?
Fair Accord works with organisations to translate assessment results into a prioritised governance action plan. Visit fairaccord.com/contact to start a conversation.