As artificial intelligence (AI) becomes more deeply integrated into the workplace, many employers are exploring its use in recruitment, performance evaluation, promotion, and even termination decisions. While AI promises efficiency, cost savings, and data-driven objectivity, it also introduces serious legal and ethical risks that cannot be overlooked.
Bias and discrimination risks
One of the biggest misconceptions about AI is that it is inherently neutral. In reality, AI systems reflect the data they are trained on. If the historical data contains bias, whether based on race, gender, age, or another protected characteristic, AI can perpetuate or even amplify that bias. A recruitment algorithm trained on CVs from a predominantly male industry may learn to favour male applicants, even though no one designed it to. This would leave the employer open to accusations of running a discriminatory recruitment process.
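By way of illustration only, the short Python sketch below trains a simple model on synthetic, deliberately skewed "historical hiring" data. Every name and number in it is hypothetical, but it shows the mechanism: a past preference in the data becomes a learned preference in the model.

```python
# Illustrative sketch only: synthetic data showing how historical bias
# leaks into a trained model. All values here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)           # 0 / 1, purely synthetic labels
skill = rng.normal(0, 1, n)              # the genuinely job-relevant signal
# Historical hiring decisions: skill mattered, but past recruiters
# also favoured group 1 (the bias encoded in the data).
hired = (skill + 0.8 * gender + rng.normal(0, 1, n)) > 0.5

# A model trained on those outcomes simply learns to reproduce the preference.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

probe = np.array([[0.0, 0], [0.0, 1]])   # identical skill, different group
print(model.predict_proba(probe)[:, 1])  # higher hire probability for group 1
```

Note that simply deleting the gender column is not a complete answer: correlated details in a CV, such as career gaps or certain hobbies, can act as proxies and carry the same signal.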
Similarly, if an employer dismissed an employee for poor attendance based solely on AI-produced data, without considering the possibility of an underlying disability, the employer could find themselves facing a complaint of disability discrimination.
The Equality Act 2010 requires employers to ensure that the policies and procedures they adopt are not discriminatory. By relying on AI without scrutiny, employers may inadvertently introduce discrimination into their decision-making, exposing themselves to claims in the employment tribunal.
Lack of transparency
Many AI systems produce results without a clear explanation of how they were reached. In employment decisions, this lack of transparency poses serious problems and creates a real risk of litigation: an employer who cannot explain how a decision was reached will struggle to defend it before a tribunal.
Employers should view AI as a support tool, not a decision-maker. AI can help flag trends or provide recommendations, but final decisions should always involve human oversight.
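As a minimal, purely illustrative sketch of what "support tool, not decision-maker" can look like in practice (the ai_screen helper and all names below are hypothetical, not a real API), the idea is that the AI output is recorded as an input to a decision that a named person takes and can explain:

```python
# Illustrative sketch of "AI recommends, a human decides".
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    suggestion: str   # e.g. "shortlist" or "reject"
    rationale: str    # the system should be able to say why

def ai_screen(candidate_id: str) -> Recommendation:
    # Hypothetical placeholder for whatever scoring tool is in use.
    return Recommendation(candidate_id, "reject", "low keyword match")

def final_decision(rec: Recommendation, reviewer: str, approved: bool) -> dict:
    # The decision record always names a human reviewer; the AI output is
    # kept as an input to the decision, never as the decision itself.
    return {
        "candidate": rec.candidate_id,
        "ai_suggestion": rec.suggestion,
        "ai_rationale": rec.rationale,
        "decided_by": reviewer,
        "outcome": rec.suggestion if approved else "escalate for review",
    }

rec = ai_screen("A-1042")
print(final_decision(rec, reviewer="HR manager", approved=False))
```

The design point is that the record always names the human decision-maker, which is precisely what a tribunal will ask about.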
Employee efficiency and data protection
AI can be a good way of increasing efficiency by taking over the routine administrative tasks that employees carry out.
However, employees who use AI tools to process or input personal or sensitive information may breach data protection laws. For example, submitting client data, patient records, or internal HR information to an external AI tool (especially one not vetted by the employer) could result in any of the following; a simple illustrative safeguard is sketched after the list:
Unauthorised data sharing with third parties
Breach of confidentiality obligations
Regulatory penalties for mishandling personal data
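As one purely illustrative safeguard (the patterns below are deliberately crude assumptions, not a complete or real control), an employer might screen text for obvious personal data before it ever reaches an external tool. In practice, technical filters of this kind only work alongside a clear policy and training.

```python
# Illustrative sketch of a pre-submission check before text is sent to an
# external AI tool. The patterns are crude examples, not a real control.
import re

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # rough NI-number shape
]

def safe_to_submit(text: str) -> bool:
    """Return False if the text appears to contain personal data."""
    return not any(p.search(text) for p in PII_PATTERNS)

prompt = "Summarise this grievance from jane.doe@example.com"
if safe_to_submit(prompt):
    print("OK to send to the external tool")
else:
    print("Blocked: remove personal data before submitting")
```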
Accountability and liability issues
Employees might assume that if AI generates a response or decision, the tool is responsible. In reality, legal and professional accountability remains with the human user.
If an AI tool makes a recommendation that leads to a contractual error, a discriminatory outcome, or even reputational harm to the business, it is often the employee who will be asked to explain the decision.
Intellectual property
The use of AI can raise complex intellectual property (IP) issues. For example:
If AI helps generate creative or written content, who owns the copyright?
If AI is trained on copyrighted material, is the output legally safe to use?
Could your use of AI violate another company’s IP rights?
Employees using AI tools to create content may unknowingly expose their employers, or themselves, to legal risks.
AI can be a powerful tool in workforce management, but it is no substitute for human judgment and fairness. It should support, not replace, the skill and accountability of professionals.
Employers must tread carefully, balancing innovation with responsibility, to avoid costly mistakes that can harm both their employees and their organisations. Used wisely, AI can enhance decision-making. Used blindly, it can lead to litigation and reputational damage. An employer alive to these issues will ensure that it has a clear AI policy and that the policy has been communicated to its employees.
Saltworks' employment team can help you to draft an AI policy and provide training to employers and their teams.