Artificial intelligence (AI) could fundamentally reshape how workplace disputes emerge, develop and are resolved across Britain. As employers increasingly turn to AI-powered hiring algorithms and performance management systems, a debate is emerging about whether the technology will make day-to-day work more efficient or create an entirely new category of employment law challenges.
The emerging legal environment
UK employment tribunals are already grappling with unprecedented cases involving AI decision-making. The landmark case of Mr Manjang, who brought a racial discrimination claim over the AI-based facial recognition checks used by Uber Eats, represents just the tip of the iceberg. The Equality and Human Rights Commission's involvement in that case signals that AI workplace discrimination is being taken seriously at the highest levels.
The challenge for employment law lies in applying existing frameworks to technology that operates in ways human decision-makers never could. Traditional concepts of bias, unfair dismissal and discrimination must be reinterpreted when algorithms make split-second decisions based on thousands of data points.
AI as both problem and solution
The paradox of AI in workplace decision-making is striking. The TUC's "AI Bill Project" found that, among working adults in the UK:
- 71% oppose AI being used in performance management and bonus decisions
- 77% oppose AI being used to make hiring decisions
- 86% oppose AI being used to make firing decisions
However, the same technology is being deployed to resolve workplace disputes more efficiently. Acas, the UK's workplace conciliation service, recently announced plans to use AI and digital services to help it manage caseloads such as the record 117,000 individual disputes it handled in 2024-25.
The organisation aims to maintain its impressive 70% settlement rate for individual disputes whilst scaling operations through technology. This reflects a broader trend of using AI to manage the very conflicts that AI systems often create.
Regulatory challenges
The UK government’s “pro-innovation” stance on AI regulation means there’s no comprehensive liability framework for AI-related workplace harm. This regulatory gap creates uncertainty for both employers and workers. Companies struggle to understand their liability when AI systems make controversial decisions, whilst employees face complex legal challenges in proving algorithmic bias or discrimination.
Industry perspectives and concerns
Business leaders recognise the growing threat to workplace relations. With workplace conflict costing the British economy £28.5bn annually and 44% of workers reporting increased organisational conflict, there’s pressure to find technological solutions.
But technology alone, many argue, isn't the fix. It must be paired with a strong culture of trust, psychological safety and clear policies that empower people rather than replace them.
The human element under threat
Perhaps the greatest challenge lies in preserving human agency within increasingly automated workplaces. Employment law has always recognised that workplace relationships involve complex human dynamics that require nuanced understanding.
As AI systems take over recruitment, performance evaluation and disciplinary processes, the risk emerges of reducing complex human situations to algorithmic outputs. This mechanisation of workplace decisions may create more disputes than it prevents.
There's a long road ahead
The intersection of AI and workplace decision-making will likely define the next decade of workplace relations in Britain. Success will depend on developing regulatory frameworks that harness AI's efficiency whilst protecting worker rights and maintaining human oversight where it matters most.