AI and the Workplace

a seminar by Daniel Barnett

9 June 2025

Answers to the Red Flag / Green Flag questions from the Workbook

1. An algorithm screens out CVs with gaps of more than 12 months.

Red Flag

This practice risks indirect discrimination, particularly on the grounds of sex, disability, or age. For example, women who have taken maternity leave, people recovering from illness, or older workers with career breaks could all be disproportionately excluded. Under the Equality Act 2010, ostensibly neutral criteria can be discriminatory if they disproportionately impact a protected group and cannot be justified as a proportionate means of achieving a legitimate aim. Unless the employer can demonstrate both necessity and minimal discriminatory impact, this practice carries a real risk of tribunal claims.

2. Line managers use ChatGPT to write employee appraisals.

Red Flag

This is risky, not because of the tool itself, but because of the potential for breach of confidentiality and data protection rules. If personal performance data, disciplinary history, or health-related content is fed into ChatGPT (a public, non-UK-hosted model), it may breach UK GDPR, in particular the data minimisation and security principles in Article 5. Even if no data breach occurs, it raises ethical concerns about transparency and undermines the integrity of performance management. Safe usage would require anonymisation, internal-only tools, and clear policy guidance.

3. An employee uploads a confidential grievance summary into Copilot for wording suggestions.

Red Flag

This is a clear breach of confidentiality and potentially a personal data breach under UK GDPR. Grievance documents often include names, accusations, health details, or protected characteristics - all of which constitute personal data, and some of which (such as health information) qualify as special category data. This scenario could trigger legal claims, reputational fallout, and mandatory ICO reporting, depending on severity.

4. You use ChatGPT to shortlist candidates but invite them to contest the decision.

Green Flag

This approach aligns with UK GDPR Article 22, which restricts decisions based solely on automated processing. By inviting candidates to contest and review outcomes, the employer introduces a human-in-the-loop safeguard. However, the system must still be explainable, fair, and free from bias. Employers must also update their privacy notices and carry out a Data Protection Impact Assessment (DPIA) before deploying such a tool. Provided those legal safeguards are in place, this can be a compliant and transparent use of AI.

5. AI ranks candidates for interview based on writing style and tone.

Red Flag

Using subjective markers like tone or style - especially without transparency - creates a high risk of indirect discrimination and bias. For example, neurodivergent applicants or those with English as a second language may be unfairly penalised. Unless the employer can demonstrate that such traits are directly relevant to the role and that the tool is bias-tested and auditable, this use of AI is unlikely to be defensible under the Equality Act 2010 or UK GDPR fairness principles.

6. ChatGPT is used to draft an internal training guide on interview technique, without inputting any sensitive data.

Green Flag

This is a low-risk, productivity-enhancing use of AI, provided no personal, confidential, or commercially sensitive information is input. The key safeguards are confirming ownership of any intellectual property in the output (if relevant) and ensuring that employees verify and edit the output before use.

7. Your AI usage policy requires that all AI-generated work must be reviewed and approved by a human before use.

Green Flag

This is good practice and aligns with accountability principles under UK GDPR and fairness obligations under employment law. It reinforces the idea that AI is an assistive, not autonomous, tool. It also helps guard against both accuracy errors and unlawful automated decision-making.

8. The employer includes a clause in offer letters requiring candidates to confirm their CV was not generated by AI alone.

Green Flag

This type of honesty clause is enforceable and provides leverage if misrepresentations come to light during probation. It also sets expectations early, without banning AI entirely. However, clarity is needed: the clause should focus on truthfulness of content rather than authorship style alone, and allow for reasonable assistance (e.g. spellcheck or rephrasing).