{{org_field_logo}}
{{org_field_name}}
Registration Number: {{org_field_registration_no}}
Using AI in Domiciliary Care Policy
1. Introduction and Purpose
Artificial intelligence (AI) tools are becoming more common in health and social care for administrative tasks, care planning and note‑taking. AI can free up staff time and improve efficiency, but it also introduces risks if not used carefully. {{org_field_trading_name}} is committed to harnessing AI responsibly, in a way that supports our clients’ independence, dignity and safety. This policy sets out how AI should and should not be used, how we ensure compliance with data protection and CQC standards, and the roles and responsibilities of staff and managers.
Our overall aim is to ensure that AI augments rather than replaces professional judgement and human interaction. We recognise that using AI without proper oversight can expose staff and the organisation to professional and legal risk and lead to inaccurate records or breaches of privacy. We therefore require that any AI-generated notes, letters or care plans are always checked and approved by a human before being recorded, and that no personally identifiable information is shared with AI platforms unless we have a lawful basis and a contractual arrangement ensuring confidentiality.
2. Scope
This policy applies to all staff of {{org_field_trading_name}}, including managers, care workers, administrators and contractors, when providing domiciliary personal care and support in clients’ homes. It covers any use of AI or AI-enabled tools, including generative AI (e.g., ChatGPT‑like services) for note‑taking, letter‑writing or decision support. It also covers AI‑powered devices and software used by the organisation, such as scheduling systems, falls sensors, voice transcribers, or language translation tools. It does not apply to simple rule‑based systems that do not learn from data (e.g., a payroll calculator), though data protection duties still apply.
3. Legislative and Regulatory Framework
Use of AI must comply with the Health and Social Care Act 2008 (Regulated Activities) Regulations 2014, the Care Act 2014 (where adult safeguarding duties are relevant), the Children Act 1989 and 2004 (for under‑18s), the Mental Capacity Act 2005, and the UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018. The NHS Digital Clinical Risk Management Standards (DCB0129 for manufacturers of health IT and DCB0160 for organisations deploying it) and the Digital Technology Assessment Criteria (DTAC) apply when procuring AI tools; they require risk assessment, governance structures and evidence of clinical safety. CQC expects providers to maintain a hazard log, designate a responsible clinical safety officer (or equivalent senior person), and keep clear audit trails demonstrating that AI is used with appropriate human oversight and not as a substitute for professional judgement (cqc.org.uk). Information Commissioner’s Office (ICO) guidance emphasises that organisations must determine and document the lawful basis for each AI processing activity (ico.org.uk). The organisation also completes the NHS Data Security and Protection Toolkit and complies with the Accessible Information Standard.
4. Definitions
Artificial intelligence (AI) refers to technologies that simulate human intelligence by learning from data, producing dynamic responses rather than static outputs. Generative AI, a subset of AI, uses models trained on vast datasets to produce new content (e.g., text, images) in response to prompts. Such models can create plausible‑sounding outputs but do not “think” like humans; they predict the next word in a sequence and may introduce factual errors or “hallucinations”. Personal data means any information relating to an identifiable individual, whether directly (e.g., name, NHS number) or indirectly through combinations of data (e.g., diagnoses, addresses); even data stripped of obvious identifiers can sometimes be re‑identified. AI platforms refers to any online or offline AI tool, whether free or paid, used by staff.
5. Principles
Our use of AI is governed by these principles:
- Human oversight and accountability – AI is a support tool, not a decision‑maker. Staff remain responsible for ensuring that care notes, care plans and communications are accurate and appropriate. The organisation accepts ultimate accountability for what is recorded in service users’ records.
- Data minimisation and confidentiality – Staff must never share personally identifiable details with AI platforms unless a Data Protection Impact Assessment (DPIA) has been completed and the sharing is part of an approved process. Names, addresses, NHS numbers and unique combinations of diagnoses should not be entered into public AI tools. Even if names are removed, combinations of medical history, location and age can re‑identify a person, so information should be fully anonymised or summarised.
- Transparency and consent – People should be informed when AI is used to assist in their care. Where AI tools are used for transcription or summarising visits, staff must document the person’s consent and explain the purpose and storage of the recording, in line with CQC guidance. For administrative functions not directly related to individual care, implied consent may suffice but transparency is still required (cqc.org.uk).
- Accuracy and safety – AI outputs can contain errors or biases; therefore, every AI-generated document must be checked by a competent member of staff before being filed or actioned. Staff should treat AI outputs as drafts to be reviewed and corrected; failure to correct errors is regarded as a breach of professional duty.
- Fairness and bias mitigation – AI systems may reflect biases from their training data. Staff must be alert to biased recommendations and avoid discrimination. Decisions that affect care plans or eligibility must not be made by AI alone.
- Legal and ethical compliance – Use of AI must comply with the UK GDPR, the Data Protection Act 2018, relevant health and social care laws, and the Caldicott Principles. When using AI developed by third parties, the organisation must check the vendor’s compliance with DCB0129, meet its own obligations under DCB0160, ensure appropriate UKCA or CE marking if the AI qualifies as a medical device, and confirm data residency and security provisions.
6. Roles and Responsibilities
- Board of Directors / Owner – Provide strategic oversight, approve AI initiatives, ensure sufficient resources for training and governance, and sign off Data Protection Impact Assessments (DPIAs).
- Nominated Individual ({{org_field_nominated_individual_first_name}} {{org_field_nominated_individual_last_name}}) – Responsible for overall compliance with CQC regulations and this policy; ensures that any AI adoption is incorporated into the Statement of Purpose and that appropriate regulatory notifications are made.
- Registered Manager ({{org_field_registered_manager_first_name}} {{org_field_registered_manager_last_name}}) – Leads day‑to‑day implementation, ensures risk assessments and DPIAs are completed, maintains a hazard log of AI-related risks, designates a digital/clinical safety officer if needed, and reviews AI-generated notes for quality. The Registered Manager ensures that staff understand that the organisation is ultimately responsible for the accuracy of records and that AI outputs do not absolve staff of accountability.
- Data Protection Officer ({{org_field_data_protection_officer_first_name}} {{org_field_data_protection_officer_last_name}}) – Advises on compliance with data protection laws, oversees DPIAs, ensures that data processing agreements are in place with AI vendors, and supports staff in anonymising data.
- Care Staff and Administrators – Must only use AI tools approved by the organisation, attend training, anonymise data before entry, refrain from sharing identifiable information, check all AI outputs, and record in care notes when AI has been used. Staff must report any AI errors, data breaches or concerns to their line manager immediately.
- Digital Lead / Clinical Safety Officer (as designated) – Oversees technical evaluation of AI tools, ensures compliance with DCB0160, maintains documentation of clinical risk management, and provides technical support.
7. Consent and Information Governance
We respect individuals’ rights to know when AI is involved in their care. Before using AI for recording or summarising a visit, staff must explain the purpose, how the recording will help with care, and how it will be stored or disposed of. Consent should be documented. For administrative tasks unrelated to individual care, transparency notices may suffice, but we will still inform people that AI tools are used in our operations.
No personal data will be processed through AI platforms without a lawful basis under the UK GDPR. For each AI tool, we will conduct a DPIA and record the lawful basis (e.g., legitimate interests, vital interests, contract). We will ensure that special category data (e.g., health information) is processed only with an additional condition and that processing is necessary and proportionate. Where possible, we will rely on in‑house or contracted AI tools that store data within the UK/EU and provide contractual safeguards.
We will anonymise data before entering it into AI platforms. Anonymisation means removing or aggregating any identifiers (names, dates of birth, addresses, NHS numbers) and ensuring that combinations of information cannot reasonably re‑identify someone. Staff must be mindful that simply removing the person’s name is insufficient. When in doubt, staff should consult the Data Protection Officer.
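As an illustration only, the sketch below (in Python, used here purely as an example) shows one way the Digital Lead might screen text for common direct identifiers before it is pasted into an approved AI tool. The patterns shown are hypothetical and incomplete: automated redaction cannot reliably catch names or indirect identifiers, so it supplements, and never replaces, the human checks described above.

```python
# Illustrative sketch only. The patterns below are hypothetical examples,
# not an approved anonymisation method: regular expressions cannot catch
# names or indirect identifiers, so a human check is always required.
import re

# Hypothetical patterns for common direct identifiers.
PATTERNS = {
    "NHS NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "DATE OF BIRTH": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

note = "Client born 02/03/1949, NHS number 943 476 5919, postcode SW1A 1AA."
print(redact(note))
# Client born [DATE OF BIRTH REMOVED], NHS number [NHS NUMBER REMOVED],
# postcode [POSTCODE REMOVED].
```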
8. Procurement and Implementation
AI tools may only be procured by authorised managers following a risk assessment. We will:
- Verify that the AI supplier complies with UK GDPR and data security standards and that there is a formal data processing agreement. Using free or unverified tools without proper contracts is prohibited because of the risk of unauthorised disclosure and loss of control over personal data.
- Confirm whether the AI tool qualifies as a medical device under MHRA regulations and, if so, ensure it has an appropriate CE or UKCA mark and classification (cqc.org.uk). We will obtain evidence of clinical safety and regulatory compliance.
- Complete the DTAC checklist and ensure the developer meets DCB0129 (the manufacturer’s standard). As an adopter, we will meet DCB0160 by appointing a responsible person, completing a hazard log and risk management plan, and documenting risk mitigation (cqc.org.uk).
- Test the AI tool in a controlled environment with anonymised data, evaluate outputs for accuracy and bias, and obtain sign‑off from the Registered Manager before full deployment.
- Provide staff training on how to use the tool, limitations, and how to recognise and correct AI errors. We will include training on bias, fairness and digital inclusion.
- Establish procedures to audit AI usage, including random checks of AI-generated notes, and maintain logs of data submitted to AI tools and corrections made.
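As an illustration of the final point above, the sketch below shows one possible shape for an AI-usage audit log entry. The field names are hypothetical, not a prescribed schema; any real log format should be agreed with the Data Protection Officer and the Digital Lead.

```python
# Illustrative sketch only: a hypothetical AI-usage audit log entry.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageLogEntry:
    tool_name: str           # which approved AI tool was used
    staff_member: str        # who submitted the prompt and reviewed the output
    purpose: str             # e.g. "draft visit summary"
    data_anonymised: bool    # anonymisation confirmed before submission
    output_reviewed: bool    # human check completed before filing
    corrections_made: str    # summary of errors corrected in the draft
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry; the tool and staff names are invented for illustration.
entry = AIUsageLogEntry(
    tool_name="ExampleDraftingTool",
    staff_member="J. Smith",
    purpose="draft visit summary",
    data_anonymised=True,
    output_reviewed=True,
    corrections_made="Removed a hallucinated medication reference",
)
print(entry)
```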
9. Use of AI in Daily Practice
When using AI for drafting care notes, letters or plans, staff must:
- Prepare – Determine if AI use is necessary. If simple templates or manual notes suffice, avoid AI. If AI is used, enter only anonymised or pseudonymised information. For instance, instead of specifying the client’s name and address, describe the situation in general terms (e.g., “A 75-year-old client in London who needs support with medication”).
- Generate – Use the AI tool to draft the document. Do not rely on generative AI for clinical decision-making; any decision support suggestions should be considered advisory and checked against professional guidelines. Recognise that AI may produce plausible but incorrect statements.
- Review – Carefully read the AI-generated draft. Check facts, tone and person-centred language, ensure no “hallucinated” details have been inserted and no relevant information is missing, and correct the text manually. Where appropriate, review the final wording with the service user to confirm accuracy and respect for their preferences.
- Record – Once verified, copy the corrected text into the client’s care record. Document in the record that AI was used as a drafting tool and that the final version was reviewed by a named staff member. Any errors identified should be logged and fed back to the AI administrator for continuous improvement.
- Secure – Delete any local or temporary file containing the AI draft, and ensure the AI tool does not retain the data beyond the session (if retention cannot be controlled, the tool should not be used).
10. Managing Risk and Incident Reporting
We recognise that AI systems may introduce errors, perpetuate bias or breach confidentiality. Staff must report any issues related to AI use, including:
- Inaccurate or misleading AI outputs.
- Unintentional inclusion of personal data in AI outputs.
- Evidence of bias or discrimination.
- Technical faults or security concerns.
- Feedback from clients expressing discomfort with AI involvement.
All incidents must be recorded as significant events and investigated by the Registered Manager. Where appropriate, we will notify the local authority, the ICO, CQC or other regulators, and we will inform affected clients. Lessons learned will be shared in team meetings and used to update this policy and related procedures.
11. Bias Mitigation and Equality
AI outputs can reflect biases present in their training data. To mitigate this, we will:
- Select AI tools whose developers can demonstrate steps to reduce bias and provide transparency about training data and model performance.
- Provide staff training on recognising and challenging biases in AI outputs.
- Monitor outcomes across different groups (e.g., by age, ethnicity, disability) to identify patterns of unequal treatment and adjust practice accordingly.
- Ensure AI tools are not the sole source of information when making decisions about care, to prevent discriminatory outcomes.
12. Training and Competency
All staff using AI must complete initial and refresher training covering:
- Basics of AI and its limitations.
- Data protection principles, anonymisation techniques and security.
- How to use approved AI tools, review outputs and correct errors.
- How to explain AI use to clients and obtain consent.
- Recognising biases and ensuring equality.
- Roles and responsibilities under this policy.
Training records will be maintained, and staff competence will be assessed through supervision and audit.
13. Monitoring, Audit and Review
We will monitor AI use by auditing a sample of AI-generated documents each quarter. Audits will assess adherence to anonymisation, completeness of human review, accuracy of final records, and effectiveness of bias mitigation. Any issues will be addressed promptly.
This policy will be reviewed annually or sooner if there are significant changes in law, guidance or technology. It will also be updated following incidents, audits or new evidence. The Registered Manager is responsible for initiating the review and consulting the Board, Data Protection Officer and frontline staff.
14. Related Policies and Documents
This policy should be read alongside:
- Data Protection and Confidentiality Policy.
- Record Keeping Policy.
- Consent Policy.
- Safeguarding Policy.
- Professional Boundaries Policy.
- Complaints Policy.
- Incident Reporting and Learning Policy.
- Recruitment and Training Policy.
- Equality, Diversity and Inclusion Policy.
- Risk Management Policy.
Responsible Person: {{org_field_registered_manager_first_name}} {{org_field_registered_manager_last_name}}
Reviewed on: {{last_update_date}}
Next Review Date: {{next_review_date}}
Copyright © {{current_year}} – {{org_field_name}}. All rights reserved.