Click the links below to read more about some upcoming changes in employment law, as well as our special feature on managing risks associated with using Artificial Intelligence (AI) in the workplace:
- Introduction of Neonatal Care (Leave and Pay) Act 2023 on 6 April 2025
- National Minimum Wage and National Living Wage: April 2025 increases
- AI at work: Managing Risks
Introduction of Neonatal Care (Leave and Pay) Act 2023 on 6 April 2025
The current legal position is that the parents of children receiving neonatal care have no standalone entitlement to take leave. However, from 6 April 2025, employers need to be aware of the new Neonatal Care (Leave and Pay) Act 2023, which provides eligible parents with up to 12 weeks of leave, paid for those who qualify, when their baby requires neonatal care, in addition to other entitlements.
The new rules include:
- A day one right to one week of leave for each qualifying period that a child has spent in neonatal care.
- A maximum of 12 weeks of leave.
- Leave must be taken in weekly blocks and may be taken in addition to other types of statutory family leave. Leave must be taken within 68 weeks of the child’s birth.
- The neonatal care must begin within 28 days of the child’s birth and last for a continuous period of at least seven days.
- The employee must have a qualifying parental or other personal relationship with the child.
- Employees are protected from detriment or dismissal relating to leave.
- Employees who are taking, or have recently returned to work from, leave are entitled in a redundancy situation to be offered suitable alternative employment in priority to other employees.
- A minimum of 26 weeks’ service is required to receive neonatal care pay.
- The employee must have received normal weekly earnings, for a period of eight weeks ending with the relevant week, of not less than the lower earnings limit.
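The pay conditions above can be read as a simple multi-part test. Purely as an illustration (not legal advice), the headline conditions might be screened like this; the field names, the function, and the lower earnings limit figure are assumptions for the sketch, not statutory wording:

```python
from dataclasses import dataclass

# Assumed weekly lower earnings limit, for illustration only; check the
# current HMRC figure before relying on it.
LOWER_EARNINGS_LIMIT = 123.00

@dataclass
class Employee:
    weeks_of_service: int
    avg_weekly_earnings: float  # over the eight weeks ending with the relevant week
    care_started_within_28_days: bool   # neonatal care began within 28 days of birth
    care_lasted_7_continuous_days: bool  # care lasted at least seven continuous days

def eligible_for_neonatal_care_pay(e: Employee) -> bool:
    """Rough screen against the headline pay conditions listed above."""
    return (
        e.weeks_of_service >= 26
        and e.avg_weekly_earnings >= LOWER_EARNINGS_LIMIT
        and e.care_started_within_28_days
        and e.care_lasted_7_continuous_days
    )
```

A real case will turn on the detailed regulations and definitions, so this sketch is only a first-pass checklist.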
National Minimum Wage and National Living Wage: April 2025 increases
On 29 October 2024, the Government announced that from 1 April 2025 the rates of the National Minimum Wage (NMW) will be:
- National Living Wage (NLW) 21 and over: £12.21 (6.7% increase)
- 18-20 year old rate: £10.00 (16.3% increase)
- 16-17 year old rate: £7.55 (18% increase)
- Apprentice rate: £7.55 (18% increase)
- Accommodation offset: £10.66 (6.7% increase)
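The percentage increases above can be verified against the previous rates. As a quick sanity check (the April 2024 starting figures in the comments are drawn from outside this article and should be confirmed against official sources):

```python
# April 2024 -> April 2025 hourly rates; 2024 figures are assumed for illustration.
rates = {
    "NLW (21 and over)":    (11.44, 12.21),
    "18-20 rate":           (8.60, 10.00),
    "16-17 rate":           (6.40, 7.55),
    "Apprentice rate":      (6.40, 7.55),
    "Accommodation offset": (9.99, 10.66),
}

for name, (old, new) in rates.items():
    pct = (new - old) / old * 100
    print(f"{name}: £{old:.2f} -> £{new:.2f} ({pct:.1f}% increase)")
```

Running this reproduces the rounded percentages quoted above (6.7%, 16.3%, 18%, 18%, 6.7%).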
The increase to the 18-20 rate is intended as a step towards aligning it with the NLW, creating a single adult rate and ending age-based wage differences. These changes also mark the first time the NLW has taken the cost of living and inflation into account.
AI at Work: Managing Risks
According to a 2024 Forbes poll, 79% of Brits have used generative AI to help them at work.
What is AI?
AI refers to the ability of a computer or machine to do things that usually require human intelligence, like learning from previous experiences to understand and respond to language, decisions, and problems. Generative AI, like ChatGPT, can create original content such as text or images in response to user prompts, which can be useful in a wide range of work settings.
AI risks and potential claims in the workplace
AI can generate incorrect or ‘hallucinated’ information that appears credible but is factually wrong. There is also a lack of transparency in AI processes. AI decisions, especially in complex models, can be difficult to interpret, creating a ‘black box’ effect: it can be difficult or even impossible to understand why an AI system reaches certain conclusions.
The AI system being used may also have a hidden bias, as it only performs as well as its training data allows. For example, Reuters reported in 2018 that Amazon had stopped using an algorithmic recruitment tool because it tended to favour male applicants. The tool had been trained on Amazon’s recruitment data from the previous ten years, which led the algorithm to teach itself that male candidates were preferable to female candidates.
Data protection issues with AI
For AI tools to work, data is gathered from an increasingly wide range of sources. For example, tools such as ChatGPT rely heavily on data which is ‘scraped’ from the internet, an automated process of extracting data from the web and importing it locally.
Theoretically the information you input could be fed back to another user. These data sources could include personal data, special category data and data about minors.
AI systems holding vast amounts of personal information can also be targets for cyber-attacks and interference, especially if information is kept and stored for longer than necessary.
AI and discrimination in the workplace
- Direct discrimination: An AI system might make a particular decision because it has been trained to consider an employee’s protected characteristics. If so, less favourable treatment of that employee by the AI system as a result could lead to claims of direct discrimination.
- Indirect discrimination: Consider a situation where an automated shift allocation tool uses AI to analyse data on the previous availability and productivity of workers. If this analysis then leads to the offer of reduced shifts and, therefore, reduced pay to employees who have low availability or productivity due to a disability, this could lead to indirect discrimination claims.
AI and unfair dismissal
Managers may not understand how the algorithms work in the AI system, and how to interpret and use any resulting data. This can be problematic when AI forms the basis for important decisions, such as dismissing an employee for conduct or capability issues, with the resulting risk of an unfair dismissal claim being brought by the employee.
AI and employment cases
- Manjang v Uber Eats Ltd and others (ET/3206212/21):
- This case involved an Uber Eats driver who claimed indirect race discrimination on the basis that Uber Eats’ facial recognition technology (FRT) failed to identify him.
- He argued that the technology is less accurate when used by people who are black, non-white or of African descent, leaving them more vulnerable to losing their jobs.
- His claim was supported by the Equality and Human Rights Commission.
- A final hearing was scheduled before the East London Employment Tribunal over 17 days in November 2024; however, the claim settled in March 2024 before that hearing took place.
- Commerzbank AG v Keen [2006] EWCA Civ 1536:
- This case does not involve AI, but instead involved a decision about payment of a discretionary bonus.
- The Court of Appeal clarified in this case that the implied obligation to maintain mutual trust and confidence often requires employers to provide an explanation for any decision taken under a contractual discretion.
- Employers who rely on AI to make decisions about disciplinary action, dismissal, pay or promotions may therefore struggle to explain and justify the basis for the decision in a way that complies with this duty.
Using AI tools in recruitment
The Information Commissioner’s Office (ICO) has issued guidance on using AI tools in recruitment in compliance with UK data protection law. It is relevant for organisations that use, or are considering using, an AI tool in their recruitment, as well as for those who develop or provide AI recruitment tools.
The ICO’s seven key recommendations are:
- Fairness
- Transparency and explainability
- Data minimisation and purpose limitation
- Data protection impact assessments
- Data controller and processor roles
- Explicit processing instructions
- Lawful basis and additional condition
The UK Government has also released a Responsible AI in Recruitment guide, which warns that AI recruitment algorithms can be unfair, can learn to emulate human bias, and can perpetuate the digital exclusion of minorities. It also cautions that AI can process personal information in ways that are not transparent or explainable, or can rely on consent that is not valid and informed.
In summary, the Government and the ICO advise that employers using AI tools for recruitment should make sure they:
- process personal information fairly
- explain the processing clearly
- keep any personal information collected to a minimum.
Tips to manage AI risks in the workplace
Employers who use AI can manage the risks by:
Adopting workplace policies such as:
- a general AI usage policy to clarify what tools employees can use, under what circumstances, and for which tasks.
- a transparency policy where the employer will clearly tell staff when and how AI tools are being used, particularly in recruitment and performance management.
Ensuring human oversight by:
- ensuring a trained human manager reviews AI-driven outcomes rather than relying solely on automated systems, so a human makes any final decisions.
- educating HR teams and line managers on how AI processes work, potential biases, and data privacy obligations.
Ensuring the AI system is audited and monitored by:
- carrying out regular bias checks
- monitoring AI-generated outcomes to spot patterns of discrimination
- conducting Data Protection Impact Assessments whenever introducing a new AI tool that processes personal data.
- Ensuring the impact on employees or candidates with protected characteristics is considered carefully before using any AI system.
If you cannot explain how a system reaches its conclusions, it should probably not be used for decisions with serious legal consequences, such as dismissals or promotions.
Employers should document the reasoning behind any AI use and be prepared to disclose to employees or tribunals an explanation of how their AI systems work, in case their use is challenged.
The contents of this article are intended for general information purposes only and shall not be deemed to be, or constitute legal advice. We cannot accept responsibility for any loss as a result of acts or omissions taken in respect of this article.