Is your AI producing
inequitable outcomes for the people you're here to protect?
Research published in 2025 found that AI tools used by more than half of England's local authorities are introducing gender bias into care decisions — and there is currently no requirement for providers to test or disclose bias results. Talk Forward's AI Equity Consulting helps health and social care organisations find the gap before regulators, inspectors, or service users do.
50%+
of England’s local authorities use AI tools with documented bias
— LSE / NIHR, 2025
0
vendors currently required to test or publish bias results
— Current regulatory position
£17.5m
maximum ICO fine for AI-driven discrimination under UK GDPR
— ICO, UK GDPR
— THE EVIDENCE
This is not a future risk.
It is happening now.
The bias embedded in AI tools used across health and social care is documented, measurable, and directly linked to safeguarding outcomes. The organisations deploying these tools are — in most cases — unaware of the specific risks they carry.
Gender Bias — Adult Social Care
AI tools are downplaying women's health needs in care assessments
Research from the LSE and NIHR found that leading AI models describe women's health and care needs in significantly softer, less urgent terms than identical needs presented by men. Since access to social care is determined by perceived need, this directly produces inequitable care allocation.
Source: Rickman et al., BMC Medical Informatics and Decision Making, 2025
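The pattern is straightforward to probe. Below is a minimal sketch of a paired-prompt test: identical case notes are run through a model twice with only the gender swapped, and the two outputs are compared for urgent versus softened language. The word lists and the `generate_summary` wrapper are our own illustrative placeholders, not the study's published protocol.

```python
# Minimal sketch of a paired gender-swap bias probe.
# `generate_summary` is a hypothetical wrapper around whatever LLM
# your case-management or note-taking tool uses; supply your own client.

CASE_NOTES = (
    "{name} is 84, lives alone, has missed medication twice this week, "
    "and reports difficulty washing and dressing."
)

# Illustrative (not exhaustive) markers of urgent vs. softened phrasing.
URGENT_TERMS = {"unable", "requires", "risk", "urgent", "significant"}
SOFTENED_TERMS = {"manages", "copes", "mild", "some difficulty", "despite"}

def count_terms(text: str, terms: set[str]) -> int:
    lowered = text.lower()
    return sum(lowered.count(term) for term in terms)

def paired_gender_test(generate_summary) -> dict:
    """Run identical notes for a male and a female subject, then
    compare urgency-related language in the model's two outputs."""
    results = {}
    for name in ("Mr Smith", "Mrs Smith"):
        summary = generate_summary(CASE_NOTES.format(name=name))
        results[name] = {
            "urgent": count_terms(summary, URGENT_TERMS),
            "softened": count_terms(summary, SOFTENED_TERMS),
        }
    return results
```

A robust test uses many case vignettes and statistical comparison, but even this toy version makes the bias question concrete enough to put to a vendor.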
Disability — Benefit & Assessment Systems
Automated systems exclude disabled people through biased screening
Amnesty International documented in 2025 how DWP automated decision systems produce discriminatory outcomes for people with disabilities — reducing access to welfare and social support through algorithmic screening that was never designed with equity in mind.
Source: Amnesty International, 2025
Racial Disproportionality — Child Protection
Predictive tools trained on historical data amplify existing racial bias
Black and minoritised children are already referred into child protection at significantly higher rates. AI risk tools trained on historical case data replicate these patterns at scale — embedding disproportionality into the automated layer of professional decision-making.
Source: Documented across multiple local authority studies, 2023–2025
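The mechanism needs no sophistication to reproduce. In the synthetic sketch below (invented numbers, illustrative only), two groups have identical underlying need, but the historical referral labels carry a built-in disparity; a standard classifier trained on those labels then predicts the same disparity.

```python
# Illustrative only: synthetic data showing how a model trained on
# historically disproportionate referral labels reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)   # 0 = majority, 1 = minoritised
need = rng.normal(0, 1, n)      # underlying need, same distribution for both

# Historical labels: identical need, but a higher referral rate for
# group 1: the bias we are pretending the archive contains.
referral_prob = 1 / (1 + np.exp(-(need + 0.8 * group - 1.0)))
referred = rng.random(n) < referral_prob

model = LogisticRegression().fit(np.column_stack([group, need]), referred)
preds = model.predict(np.column_stack([group, need]))

for g in (0, 1):
    print(f"group {g}: predicted referral rate = {preds[group == g].mean():.2%}")
# The gap in predicted rates mirrors the gap baked into the labels.
```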
Governance Gap — Public Sector
Most organisations lack the expertise to assess the tools they're using
Research confirms most local authorities and NHS trusts lack the technical expertise to evaluate AI vendors' claims, understand the bias risks of their tools, or assess the long-term implications of their technology choices. The procurement is moving faster than the governance.
Source: Multiple UK public sector AI governance reviews, 2024–2025
Your organisation retains legal liability for discriminatory AI outputs
Under the Equality Act 2010 and UK GDPR, organisations are responsible for the decisions their AI tools influence — regardless of what the vendor's contract says. The ICO can impose fines of up to £17.5 million or 4% of global turnover, whichever is higher, for discriminatory data processing. "We relied on the vendor" is not a defence.
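For scale: because the statutory maximum is the higher of the two figures, the 4% limb governs once global turnover exceeds £437.5m. A one-line illustration (ours, not ICO guidance):

```python
def max_ico_fine(global_turnover_gbp: float) -> float:
    """Higher of £17.5m or 4% of annual global turnover (UK GDPR maximum)."""
    return max(17_500_000, 0.04 * global_turnover_gbp)

print(f"£{max_ico_fine(2_000_000_000):,.0f}")  # £2bn turnover -> £80,000,000
```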
WHAT WE OFFER —
Three ways to work
with Talk Forward
Tier 01
AI Equity Rapid Review
A focused half-day to full-day assessment of your organisation's current AI tool landscape. Designed for organisations who want to understand their equity risk exposure before committing to a full audit — and for those preparing for inspection, procurement, or leadership scrutiny.
What's included?
→ Structured mapping of all AI and algorithmic tools currently in use (a sketch of the record format follows this list)
→ Assessment of training data sources and known bias risk factors
→ Identification of which protected characteristics carry highest risk
→ Review of whether any bias testing or governance oversight exists
→ Written summary report with prioritised immediate actions
→ 30-minute debrief call with your senior lead
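To make the structured mapping concrete, each tool we find becomes a record along these lines. The field names below are our own illustrative sketch, not a fixed standard; the exact schema is agreed with each client.

```python
# Illustrative shape of one entry in an AI tool inventory.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str                        # e.g. a case-note summarisation assistant
    vendor: str
    embedded_in: str                 # host system, e.g. case management platform
    underlying_model: str | None     # often unknown; that itself is a finding
    training_data_disclosed: bool
    decisions_influenced: list[str] = field(default_factory=list)
    protected_characteristics_at_risk: list[str] = field(default_factory=list)
    bias_testing_evidence: str | None = None
    governance_owner: str | None = None   # named accountable lead, if any
```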
Tier 02
AI Equity Safeguarding Audit
A comprehensive, evidence-based assessment of how your AI tools interact with your EDI and safeguarding obligations — across workforce, governance, decision-making, and service user outcomes. Delivers a board-ready report with RAG (red/amber/green) ratings, prioritised recommendations, and a 90-day action plan.
What's included?
→ Full AI tool inventory and governance assessment
→ Training data and model bias risk analysis
→ Review of AI influence on safeguarding referrals, assessments, and case notes
→ Staff survey and focus groups on AI confidence and challenge mechanisms
→ Service user equity outcome analysis across protected characteristics
→ Board and leadership accountability assessment
→ Structured report with RAG ratings, executive summary, and board presentation deck
→ 90-day action plan with measurable milestones
Tier 03
Ongoing AI Equity Governance Retainer
Retained support to maintain and strengthen your AI equity governance over time. Most organisations complete an audit and then lack the internal capacity to implement its recommendations and manage the evolving AI landscape. This retainer bridges that gap — keeping your governance current as technology, regulation, and your organisation's tools change.
What's included? (Monthly)
→ Monthly governance call with your responsible lead
→ Quarterly review of any new AI tools being procured or introduced
→ Regulatory update briefings as guidance evolves
→ Support with any bias complaints, incidents, or escalations
→ Annual re-audit at a discounted retainer rate
→ Priority access to Talk Forward training and workshops
OUR APPROACH —
The five dimensions we examine
Our AI equity framework applies Talk Forward's established equity lens, developed through our EDI Safeguarding Audit methodology, to the specific risks that AI tools introduce in health and social care.
01
Access
Which groups are systematically included or excluded by your AI tools' design, training data, or outputs?
02
Experience
How do different groups experience the systems and decisions that your AI tools influence?
03
Outcomes
Who benefits from your AI tools — and who is disadvantaged by the decisions they shape?
04
Power & Voice
Who influences how AI tools are chosen, governed, and challenged within your organisation?
05
Safeguarding & Harm Reduction
Where are the inequities in your AI tools creating direct risk of safeguarding failure or harm?
WHO IS THIS FOR —
Built for the organisations
already navigating this
AI Equity Consulting is designed for senior leaders in health and social care who are responsible for safeguarding governance and know that their organisation's use of AI needs scrutiny — but lack the specialist support to do that well.
Local Authorities
Director of Children's / Adults' Services · Head of Safeguarding · Safeguarding Board Chairs
You're operating AI tools in child protection, adult social care, and early help — often without visibility of the bias risks they carry. Our audit gives you the evidence base your board needs and a clear implementation roadmap your teams can act on.
Multi-Academy Trusts
CEO / Executive Principal · DSL Lead · Head of Operations
AI tools in schools — from safeguarding platforms to attendance monitoring systems — carry equity risks that most DSLs are not equipped to assess. We provide the independent scrutiny that governance frameworks now demand.
NHS Trusts & ICSs
Chief Digital Information Officer · Named Nurse / Designated Doctor · Head of Safeguarding
Clinical AI tools are being deployed faster than governance frameworks can keep pace. We work with NHS organisations to assess how AI outputs are influencing safeguarding decisions — and whether those outputs are equitable for all patient groups.
Charities & Third Sector
CEO · Head of Policy · Data or Technology Lead
Funders and commissioners are beginning to require evidence of AI governance. We help voluntary sector organisations understand their risk exposure, meet their equalities obligations, and demonstrate credible oversight to the stakeholders who matter.
— WHY ACT NOW
The regulatory landscape
is already moving
AI governance for public sector organisations is not a future compliance requirement. It is a current legal obligation — one that most organisations are not yet meeting.
Equality Act 2010
Requires: Organisations are legally responsible for discriminatory outcomes produced by AI tools — regardless of vendor contracts. AI-influenced decisions must not disadvantage people with protected characteristics.
Risk of non-compliance: Employment tribunals, judicial review, reputational harm, ICO investigation

UK GDPR / ICO
Requires: Organisations must actively monitor AI systems for discriminatory outcomes — not assume algorithmic objectivity. Data Protection Impact Assessments are required for high-risk processing.
Risk of non-compliance: Fines up to £17.5m or 4% of global turnover, whichever is higher

ATRS
Requires: The Algorithmic Transparency Recording Standard requires public sector bodies to disclose and explain AI use in decision-making. Expected to extend beyond central government.
Risk of non-compliance: Regulatory non-compliance; reputational exposure on publication

Ofsted / CQC
Requires: Inspectors are scrutinising AI governance in safeguarding-relevant contexts. An inability to demonstrate oversight of AI tools is increasingly read as a governance weakness.
Risk of non-compliance: Adverse inspection findings; leadership challenge; intervention
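What "actively monitor" can look like in practice is a recurring disparity check over the decisions an AI tool influences. The sketch below is our own illustration; it uses the four-fifths rule as a screening heuristic, which is a useful flag rather than a legal test.

```python
# Illustrative disparity screen over AI-influenced decisions.
from collections import defaultdict

def disparity_screen(decisions: list[tuple[str, bool]], threshold: float = 0.8):
    """decisions: (group_label, favourable_outcome) pairs.
    Flags any group whose favourable-outcome rate falls below
    `threshold` times the best-performing group's rate."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate < threshold * best) for g, rate in rates.items()}

# Example: assessment approvals logged by (hypothetical) group category.
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 55 + [("B", False)] * 45
print(disparity_screen(log))
# Group B's 55% approval rate is below 0.8 * 80% = 64%, so it is flagged.
```

Run routinely, a check like this is also the kind of evidence a DPIA reviewer or inspector will expect to see.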
— COMMON QUESTIONS
What organisations
ask us first
Do we actually use any AI tools?
Almost certainly. AI tools in health and social care are often embedded in systems you already use — case management platforms, note-taking tools, risk screening tools, safeguarding software. Most organisations are using more AI than they realise, and very few know what the underlying models are or what data they were trained on. The Rapid Review is specifically designed to map this.
Is this a technical, data-science-led audit?
No. Our consultants are practitioners, not data scientists. We work at the governance level — assessing whether your organisation understands the tools it uses, whether the right oversight structures are in place, and whether the right questions are being asked of vendors and decision-makers. We translate the technical into the organisational.
How is this different from a standard EDI audit?
A standard EDI audit looks at how equity operates in your workforce, culture, policies, and service delivery. An AI Equity Audit focuses specifically on the layer of automated and algorithmic decision-making that increasingly sits underneath all of that. The two complement each other — and for organisations that want both, we offer an integrated package.
Can this be combined with your existing audit packages?
Yes — and we'd recommend it. Organisations that want a complete picture of how equity operates in their practice should be examining both the human systems and the AI systems. We can integrate AI Equity assessment into any of our existing packages, or commission it as a standalone piece. Contact us to discuss a combined approach.
What do we receive at the end of an audit?
You'll receive a structured report with prioritised recommendations, a 90-day action plan, and a board-ready presentation. For organisations who want ongoing support to implement findings and maintain governance as the AI landscape evolves, our retainer service provides monthly oversight, regulatory updates, and an annual re-audit at a discounted rate.
— READY TO START —
Find out where your organisation stands — before someone else does
A 30-minute discovery call is the starting point. We'll talk through your current AI landscape,
your governance position, and where the most significant risks are likely to sit.
