The short version

We do not score a job as risky just because AI can produce something that looks superficially similar to its output. We score risk higher when a large share of the role is digital, repeatable, draftable, searchable, checkable, or routable by software, and lower when the role depends on physical presence, regulated accountability, human trust, care, persuasion, leadership, or messy real-world judgement.

What raises risk

  • Repeatable text or admin tasks
  • High volume support, routing, or summarising
  • Clear rules and low ambiguity
  • Digital-only work with few physical constraints
  • Low accountability if the first draft is wrong

What lowers risk

  • Physical presence and manual skill
  • Regulated accountability
  • Human care, trust, and persuasion
  • Complex real-world context
  • Leadership, judgement, taste, and ownership

Score bands

  • 0-34: lower change. AI is more likely to support the role than replace it.
  • 35-64: medium change. Some tasks are exposed, but human judgement remains a major part of the role's value.
  • 65-100: high change. High change does not mean the role is worthless; it means the task mix is likely to be reshaped faster.
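For readers who want the bands pinned down precisely, the thresholds above can be sketched as a small function. This is a minimal illustration of the mapping described in the text; the function name `band` and the exact return labels are our own, not part of any published tool:

```python
def band(score: int) -> str:
    """Map a 0-100 exposure score to its editorial band.

    Thresholds come from the bands described above: 0-34 lower,
    35-64 medium, 65-100 high. Boundaries are inclusive, so a
    score of exactly 34 is still "lower change".
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 34:
        return "lower change"
    if score <= 64:
        return "medium change"
    return "high change"
```

Note that the boundary values (34, 64) fall into the lower of the two adjacent bands, so there are no gaps or overlaps across the 0-100 range.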

How to read the score

  • A score is about task exposure, not personal worth.
  • A high score can still be an opportunity if you move into review, strategy, trust, or ownership.
  • A low score does not mean ignore AI; it means use it to remove admin drag.
  • Local labour markets, regulation, employer budgets, and customer expectations can change the outcome.
  • The safest workers are usually those who combine AI fluency with domain expertise.

Editorial limits

We avoid pretending that anyone can predict the labour market perfectly. Scores are deliberately directional and should be used for career planning, not as a guarantee about redundancy, hiring, salary, or investment decisions.

Sources