Artificial intelligence is increasingly becoming part of everyday business operations. But when an AI tool makes a mistake, can the employee who used it be held to disciplinary account?
More and more employees are using generative artificial intelligence tools to draft texts, analyze data, or automate certain tasks. Time savings, increased efficiency, and reduced human error are among the many promises. But what happens when an incorrect decision, a faulty document, or inaccurate information results from the use of AI? Can the employer sanction the employee who relied on such a tool? The answer is more nuanced than it may seem.
AI remains a tool, not a responsible party
In labor law, disciplinary responsibility is based on the employee’s conduct. An artificial intelligence system, no matter how sophisticated, has neither legal personality nor its own responsibility. As a result, the use of AI does not relieve employees of their duty of care, loyalty, and diligence. In principle, the work delivered remains their responsibility, even when it has been partially or entirely generated by an automated tool.
Disciplinary fault: it all depends on the context
Not every AI-related error automatically justifies a sanction. The employer must assess the situation based on factors such as the degree of autonomy granted to the employee in using the tool, the existence (or absence) of internal guidelines governing AI use, the nature of the role and the level of expertise expected, and whether the error was foreseeable.
An employee who follows established procedures and uses a company-approved tool is not in the same position as someone who, without authorization, relies on an uncontrolled AI system for sensitive decisions.
The key role of internal rules and training
The question of sanctions inevitably leads to the broader issue of AI governance within the company. Without a clear policy, adequate training, and precise guidelines, it will be difficult for an employer to hold an employee accountable for an error made through a tool whose use is tolerated—or even encouraged. Conversely, explicit, well-known, and proportionate internal rules strengthen legal certainty for both employers and employees.
An error made by AI does not automatically exempt employees from all responsibility, but neither does it justify systematic sanctions. In a context of accelerated digital transformation, the main challenge lies less in sanctioning than in setting a clear framework, training staff, and fostering responsible use of artificial intelligence within the company.
By Nicolas Tancredi.