Artificial intelligence at work: what employers need to know

June 20, 2025 by Beci Community

Artificial intelligence (AI) is fundamentally transforming the way people work, interact with each other and make business decisions. It's already omnipresent in our daily working lives, whether through automatic writing tools, performance analysis systems or recruitment assistants. But what does the law say? And what precautions should employers and their teams take?

A silent revolution, already underway

Many companies are already using AI tools without necessarily knowing that they are doing so. Automatic email summaries, suggested replies in messages, document translation and content generation are just some of the functions integrated into familiar software (Microsoft 365 Copilot, Notion AI, ChatGPT, etc.). These tools can boost productivity, automate repetitive tasks and free up time for higher added-value tasks.

But these advances also raise many questions about accountability, transparency and data protection.

The AI Act: a European legal framework in progress

The European Union has introduced an ambitious regulatory framework with the AI Act, establishing harmonised rules for artificial intelligence whose obligations take effect in stages from 2025. The aim of the regulation is to build trust in AI while encouraging innovation.

It distinguishes four levels of risk:

  • Unacceptable risk: AI prohibited (e.g. behavioural manipulation, social scoring).
  • High risk: strictly regulated use, particularly in HR (e.g. automated recruitment systems, performance evaluation).
  • Limited risk: obligation to inform users (e.g. chatbots, AI-generated content).
  • Minimal risk: few or no constraints (e.g. spellcheckers, spam filters).

Any employer using a high-risk AI tool will have to comply with documentation requirements, ensure the transparency of the system and offer employees appeal mechanisms.

Best practices for employers

Given this situation, companies should anticipate compliance requirements and set a clear framework for how employees use these technologies. Here is some useful advice:

  1. Map all AI tools used internally, whether they are integrated into commercial software or custom-developed.
  2. Inform and train employees about the capabilities, limitations and risks associated with AI.
  3. Avoid surveillance misuse: automated monitoring tools can quickly become intrusive and counterproductive.
  4. Ensure GDPR compliance: using AI does not exempt organisations from their data protection obligations.
  5. Implement an internal AI usage policy, clarifying responsibilities, authorised tools, prohibited uses and validation procedures.

Conclusion

Artificial intelligence is no longer science fiction. It's already at the heart of the tools used every day in the workplace, sometimes even without the users' knowledge. While it is a powerful lever for innovation and efficiency, it cannot be used without a framework. The AI Act provides a regulatory foundation that requires heightened awareness from all stakeholders: employers, developers and users alike. For businesses in Brussels, it's time to get ready: audit internal practices and train teams so that they can evolve with confidence in this new technological environment.

By Nicolas Tancredi – Lawyer-Associate – DWMC


