The CAIR Index (Country AI Resilience Index)

How well countries protect people during the AI transition.

CAIR Index 2025 — Top 10 Countries (V2.1, trend-adjusted)

The CAIR Index measures how effectively countries protect
human lives, livelihoods and rights in the age of artificial intelligence.
It focuses not on how large a country’s AI sector is today, but on whether human outcomes are improving alongside technological adoption.

Countries with rising life expectancy, improving economic fairness and stronger employment security score higher, even if their AI capacity is modest.

Countries with rapid AI acceleration but worsening human conditions score lower.

CAIR Index 2025 (V2.1)

Rank Country CAIR Score
1 Sweden 0.77
2 Finland 0.76
3 Singapore 0.74
4 Netherlands 0.74
5 Germany 0.72
6 Japan 0.70
7 Canada 0.70
8 United Kingdom 0.64
9 France 0.62
10 United States 0.60

More countries and full scoring will be added as additional datasets are incorporated.

How the CAIR score works

CAIR uses two components:

  • CAIR-Base — current performance on AI readiness, life expectancy, inequality and unemployment.
  • CAIR-Shift — whether these human outcomes are improving year-on-year.

The final CAIR score is calculated as 70% current conditions + 30% direction of change.
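Written out with the component names above, that is simply: CAIR = 0.70 × CAIR-Base + 0.30 × CAIR-Shift.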

This ensures the CAIR Index rewards countries where AI is being used to improve life, not simply to accelerate technology. Countries with high AI investment but worsening quality of life score poorly.

What CAIR measures

CAIR aims to use data that is factual, objective and corruption-resistant. The current edition of the CAIR Index uses:

  • Human Wellbeing — life expectancy, public health and suicide trends.
  • Economic and Social Equity — income distribution (Gini), unemployment and related poverty indicators.
  • AI Readiness & Adoption — government AI readiness, automation capacity and digital infrastructure.
  • Trajectory (Year-on-Year Shift) — whether human outcomes are getting better or worse over time.

This combination provides both a snapshot and a trajectory, making CAIR the first global index to assess whether AI benefits the population over time.
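As a simple illustration of how these pillars might be organised in the underlying data, the sketch below groups indicators by pillar; the field names are shorthand for this page only, not the official dataset schema.

```python
# Shorthand grouping of indicators by CAIR pillar.
# Names are illustrative only, not the published field names.
CAIR_PILLARS = {
    "human_wellbeing": ["life_expectancy", "public_health", "suicide_trend"],
    "economic_social_equity": ["gini_index", "unemployment_rate", "poverty_indicators"],
    "ai_readiness_adoption": ["gov_ai_readiness", "automation_capacity", "digital_infrastructure"],
    "trajectory": ["year_on_year_shift_in_the_above"],
}
```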

Why we built the CAIR Index

Most AI rankings measure innovation, investment, patents and computational capacity.

These metrics say little about whether ordinary people are actually better off.

Human Initiative built the CAIR Index because:

  • AI readiness without human protection is not progress.
  • Countries must be assessed on the real impact of AI on all people’s lives.
  • Improvements in human wellbeing matter more than raw technical output.
  • Developing countries should receive recognition for positive trajectories, not be penalised for historical disadvantages.

CAIR brings responsibility, fairness and human impact into global AI measurement.

Method summary

In this early version, CAIR uses:

  • Normalisation of each metric on a 0–1 scale.
  • Inversion of negative metrics (inequality, unemployment, suicide rate).
  • A geometric mean so weak areas drag the score down.
  • A human-first rule so high AI investment cannot mask declining quality of life.
  • A trend component to reward countries improving wellbeing year-on-year.
  • Red-line filters so very low life expectancy, extreme inequality or very high unemployment cap a country’s score.

Full technical documentation will be published with the expanded dataset release.
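Ahead of that release, the sketch below shows one minimal way the steps listed above could fit together. It is illustrative only: the metric names, normalisation bounds and red-line cap are hypothetical; only the 70% / 30% blend of current conditions and trend, the geometric mean, the inversion of negative metrics and the idea of a red-line cap are taken from the description on this page.

```python
"""Minimal, hypothetical sketch of the CAIR-style scoring steps described above."""
from math import prod


def normalise(value, lo, hi, invert=False):
    """Scale a raw metric to 0-1; invert 'bad' metrics such as Gini or unemployment."""
    x = (value - lo) / (hi - lo)
    x = min(max(x, 0.0), 1.0)
    return 1.0 - x if invert else x


def geometric_mean(scores):
    """Geometric mean, so a weak pillar drags the whole score down."""
    return prod(scores) ** (1.0 / len(scores))


def cair_score(pillars, shift, red_line_breached=False, cap=0.40):
    """Blend current conditions (70%) with year-on-year trend (30%),
    then apply a hypothetical red-line cap if any threshold is breached."""
    base = geometric_mean(list(pillars.values()))
    score = 0.70 * base + 0.30 * shift
    return min(score, cap) if red_line_breached else score


# Usage with made-up inputs and hypothetical normalisation bounds:
example_pillars = {
    "wellbeing": normalise(82.0, 50.0, 90.0),                  # life expectancy (years)
    "equity": normalise(0.30, 0.20, 0.65, invert=True),        # Gini coefficient
    "employment": normalise(0.06, 0.02, 0.30, invert=True),    # unemployment rate
    "ai_readiness": normalise(75.0, 0.0, 100.0),               # readiness index (0-100)
}
print(round(cair_score(example_pillars, shift=0.6), 2))  # 0.74 for these made-up inputs
```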

Data sources

CAIR uses only independent international datasets, including:

  • World Bank
  • OECD
  • World Health Organization (WHO)
  • UNCTAD
  • WIPO (World Intellectual Property Organization)
  • International Federation of Robotics (IFR)
  • Oxford Insights AI Readiness Index
  • Pew, Edelman and IPSOS for selected opinion and trust trends

Governments do not supply data to CAIR and cannot influence their own score.

Comments & collaboration

Human Initiative welcomes constructive discussion, data corrections and collaboration on improving the CAIR Index.