Whether with ChatGPT, smart AI assistants or self-programmed applications – Artificial Intelligence accelerates and simplifies work in many companies. So far, however, it has done so without a dedicated legal framework. The EU AI Act changes this and at the same time creates pressure to act. In this blog article, you will learn everything L&D managers need to know about the AI Act: from the basics of the law to the next steps for personnel development.
1. Basics: This Is What the EU AI Act Is All About
The EU AI Act is in fact the world's first comprehensive legislation on the regulation of Artificial Intelligence. As the first regulation of its kind, it provides all EU member states with a framework for the introduction and use of AI systems in organizations and companies.
A major goal of the AI Act is the safe, traceable and responsible use of Artificial Intelligence to protect fundamental rights.
To minimize the risks for users and society, the regulation therefore specifies, among other things, the criteria according to which AI must be developed and used so that transparency and ethically acceptable use are ensured.
The AI Act has been in force since August 1, 2024, but it will really start making a difference – especially for personnel development – from February 2, 2025: the first measures apply from this date, including the requirement that employees have verifiable AI skills.
L&D Takeaway
The EU AI Act regulates the development and use of Artificial Intelligence to ensure the safe use of the technology. It affects companies in all EU member states. The first measures will take effect from February 2, 2025.
2. Relevance: These Companies Are Affected by the AI Act
In short, the law affects organizations and companies in all industries and of all sizes – provided they use, develop or sell AI in any form. It therefore already applies when employees use ChatGPT, for example.
The AI Act lists various players that are affected by the regulation. The most important group – because it is growing particularly quickly – is probably deployers, i.e. companies that use Artificial Intelligence in their day-to-day work under their own responsibility.
If we assume that the majority of companies are already deployers within the meaning of the AI Act (and that this number is set to increase), the much more interesting question seems to be:
How much is your own company affected by the AI Act?
This is where it gets exciting, because the AI Act follows a so-called risk-based approach. It categorizes the type of Artificial Intelligence in use and what it is used for:
The more risky or critical the AI use is, the stricter the requirements and regulations.
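To make this a little more tangible: the Act distinguishes, roughly speaking, between prohibited practices, high-risk, limited-risk and minimal-risk systems. The following minimal Python sketch maps a few commonly cited example use cases to these levels – the examples and the wording of the obligations are illustrative assumptions, not a legal classification.

```python
# Illustrative only: rough mapping of example use cases to the AI Act's
# risk levels. Examples reflect common interpretations, not legal advice.
RISK_LEVELS = {
    "prohibited":   ["social scoring of citizens"],
    "high-risk":    ["CV screening in recruitment", "credit scoring"],
    "limited-risk": ["customer service chatbot"],
    "minimal-risk": ["spam filter"],
}

OBLIGATIONS = {
    "prohibited":   "must not be used at all",
    "high-risk":    "strict duties: risk management, documentation, human oversight",
    "limited-risk": "transparency duties, e.g. disclose that users interact with an AI",
    "minimal-risk": "no specific obligations beyond existing law",
}

for level, examples in RISK_LEVELS.items():
    print(f"{level}: {', '.join(examples)} -> {OBLIGATIONS[level]}")
```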
L&D Takeaway
Quite simply, if your company uses Artificial Intelligence, you are also affected by the AI Act. How much depends on how you use the technology.
3. Requirements: How Companies Need to Respond to the AI Act
To be compliant with the AI Act, companies must meet various criteria, including:
- Identification and categorization of the AI systems in use (a simple inventory sketch follows below)
- Organization and definition of internal AI compliance requirements
- Introduction and implementation of control mechanisms and compliance checks
- Identification and closing of skill gaps in the area of AI
When working with high-risk AI, there are also documentation and reporting obligations.
Generally speaking, the more critical the use under the AI Act, the longer the to-do list for companies.
Unlike with the sensitive issue of data protection, companies are not obliged to appoint a dedicated AI officer. However, if personal data is processed with AI, the General Data Protection Regulation (GDPR) applies in addition.
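What the first point – identifying and categorizing the AI systems in use – can look like in practice is shown in the following minimal sketch. The structure and field names are our own illustrative assumptions, not a format prescribed by the AI Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in a hypothetical company-internal AI system register."""
    name: str                      # e.g. "ChatGPT" or an internal tool
    purpose: str                   # what the system is used for
    risk_category: str             # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    internal_owner: str            # who is responsible for compliant use
    processes_personal_data: bool  # if True, the GDPR applies in addition

inventory = [
    AISystemEntry("ChatGPT", "drafting texts and summaries", "limited-risk", "Marketing", False),
    AISystemEntry("CV pre-screening tool", "ranking job applications", "high-risk", "HR", True),
]

# High-risk entries trigger the additional documentation and reporting duties mentioned above.
print("Needs extended documentation:",
      [entry.name for entry in inventory if entry.risk_category == "high-risk"])
```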
L&D Takeaway
Scan, categorize, train – companies need to deal with the AI Act in order to derive and implement their individual measures.
4. Personnel Development: New To-Dos and Role Profiles
Article 4 of the AI Act is particularly important for personnel development:
{{blog-eu-ai-act-article-4-en="/custom-rich-text"}}
In less legal terms:
Companies that use Artificial Intelligence must ensure that their staff have sufficient AI skills.
Or more to the point: AI training is mandatory!
The responsibility of L&D departments is therefore growing, and they must adapt to it – be it with new training initiatives, a restructuring of the team or new roles such as a person responsible for AI compliance.
HR managers themselves are initially one of the most important target groups: on the one hand because AI can be just as relevant within personnel development itself, but above all because they need a good understanding of which legal requirements have to be covered by training.
L&D Takeaway
All employees must be verifiably trained in the use of Artificial Intelligence. To ensure this, new training courses should be designed and if necessary new roles introduced in L&D.
5. AI Literacy: Skills That Everyone Will Need in the Future
What exactly is the AI literacy that all employees need? The AI Act answers this big question as follows:
{{blog-eu-ai-act-article-3-en="/custom-rich-text"}}
So it's not about knowing exactly how the algorithm in the background generates the result. Rather, it is about basic knowledge in four areas:
- Technical: Understand how Artificial Intelligence works (especially in the case of high-risk AI) and correctly assess AI products and explain them in an understandable way.
- Legal: Weigh up which information may be entered into AI tools and handle it in accordance with data protection law.
- Ethical: Use Artificial Intelligence in accordance with defined ethical standards.
- Security-related: Use the technology with the necessary awareness of risks and potential harm.
Note that the regulation does not specify what level of AI skills qualifies as “sufficient”.
L&D Takeaway
From functionality to risks: Employees' AI literacy includes a basic technical, ethical, legal and security-related understanding of Artificial Intelligence.
6. Transparency: Understanding AI and Openness Are of Great Importance
When it comes to the basic technical understanding of Artificial Intelligence, the same principle applies as with other tools such as PowerPoint or Excel:
You have to master the commands, but not the algorithm behind them.
The difference from these programs lies mainly in the creative, analytical or generative contribution of AI systems. According to the AI Act, employees must therefore be particularly capable of
- understanding AI-supported processes,
- evaluating decisions made by Artificial Intelligence and
- explaining results in a comprehensible manner.
Again, the riskier the area of application, the more in-depth the knowledge must be – especially if the technology is not only used, but also (further) developed.
What the AI Act, with its combination of various recitals and articles, indirectly requires of companies is ultimately a culture of openness that is facilitated through transparency, risk management and ethical standards. Such a culture ensures that employees are encouraged to use the new technology and report potential risks or errors in the use of AI.
L&D Takeaway
Training courses must provide employees with sufficient information to build up basic AI knowledge. In addition, openness towards the use of Artificial Intelligence should be encouraged, for example through a practiced error culture.
7. Ethics: Important Principles for Dealing With AI
The ethical aspect of using Artificial Intelligence is a central point in the AI Act. In fact, according to the lawmakers, the protection of fundamental rights is one of the central goals of the regulation.
This includes, for example, the protection of human dignity and non-discrimination. The principles of fairness, responsibility and traceability also play an important role in AI decisions.
{{blog-eu-ai-act-high-recruiters-info-en="/custom-rich-text"}}
From the precise definition to the training format to compliance monitoring – raising awareness of these issues is part of the new tasks in HR development. Ideally, this promotes a corporate culture that supports and safeguards the ethical use of AI.
L&D Takeaway
Employees at all levels must be taught the ethical standards of AI use, internalize them and act accordingly.
8. Opportunities: Great Potential, Despite Regulation
It is true that a law like the EU AI Act implies regulation and restriction. Almost half of the decision-makers in a Deloitte survey even see the AI Act as an obstacle to AI applications in organizations. Only just under a quarter of respondents believe that the new rules will help.
Despite all the regulations, the Act does have positive effects – including for personnel development:
- For companies that are skeptical of AI in particular, the clear rules can boost confidence in the technology.
- The time pressure to implement measures can accelerate the introduction and use of AI-supported tools and processes.
- With trained employees and the responsible use of AI, internal processes can be optimized in a targeted manner.
Greater clarity and understanding about the use of AI can therefore increase a company's productivity and future viability. With its legal framework, the AI Act makes engaging with the technology obligatory.
L&D Takeaway
If personnel development successfully and compliantly trains all employees in Artificial Intelligence, it can make an important contribution to the company's long-term competitiveness and innovative capacity.
9. Penalties: The Threat of Non-compliance With the Regulations
As is often the case with laws, the AI Act also includes penalties for non-compliance with its provisions. The level of the penalty depends on the offense, its extent and the size of the company:
- The use of prohibited AI practices can result in fines of up to €35 million or 7% of the global turnover achieved in the previous year – whichever is higher.
- Violations of compliance or risk management obligations can result in fines of up to €15 million or 3% of the previous year's turnover.
- False, incomplete or misleading information provided to authorities can be penalized with fines of up to €7.5 million or 1% of the previous year's turnover.
To protect small and medium-sized enterprises and start-ups, the fines are capped at a lower level: for them, the lower of the two amounts applies in each case.
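To put these upper limits into perspective, here is a quick back-of-the-envelope calculation. It assumes that for large undertakings the higher of the two amounts applies and that for SMEs and start-ups the lower one does – a simplified reading of Article 99, not legal advice:

```python
def max_fine(prev_year_turnover_eur: float, fixed_cap_eur: float,
             turnover_share: float, is_sme: bool = False) -> float:
    """Rough upper bound of a fine under the simplified assumptions stated above."""
    turnover_based = turnover_share * prev_year_turnover_eur
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# Example: prohibited AI practice (up to €35 million or 7% of previous-year turnover)
print(max_fine(1_000_000_000, 35_000_000, 0.07))             # large company: 70,000,000.0
print(max_fine(20_000_000, 35_000_000, 0.07, is_sme=True))   # SME: 1,400,000.0
```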
The regulation does not specify the degree to which the failure to demonstrate AI skills will be sanctioned. However, it can be assumed that sanctions will be “effective, proportionate and dissuasive” (Article 99). The obvious tip: don't take chances.
L&D Takeaway
If a company cannot demonstrate in time that its employees have sufficient AI skills, it could face severe fines.
10. Further Development: How Employees Learn to Implement the Requirements of the AI Act
Although the AI Act explains what is meant by AI skills, it does not clearly define exactly which skills employees need, what level of these skills is sufficient and in what form they must be demonstrated.
This makes it difficult for companies, and more specifically for HR development, to offer suitable further training. Nevertheless, they have to react.
However, upon a close reading of the Act, the following set of rules can be derived.
Training that complies with the AI Act should:
- be available to all employees who work with or on Artificial Intelligence.
- convey basic technical knowledge about how AI systems work and how they can be used.
- highlight the risks and opportunities of the technology both from a social and a data protection angle.
- convey ethical standards for use and promote the ability to correctly evaluate AI results.
- be documented for the purpose of verifiability, for example through records of completed courses or certificates (see the sketch below).
The Masterplan AI fundamentals course meets precisely these criteria. From basic AI knowledge to examples in everyday working life, employees are given a comprehensive introduction to the use of Artificial Intelligence and receive a certificate upon successful completion.
Important: This set of rules is particularly suitable for companies in which Artificial Intelligence is used with limited risk. If, for example, employees work with high-risk AI, the learning content must be adapted and deepened accordingly.
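On the documentation point in particular, a simple machine-readable training log makes later verification much easier. The following Python sketch shows one possible minimal format – the field names and the CSV file are illustrative assumptions, not requirements of the AI Act:

```python
import csv
import os
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class TrainingRecord:
    """One completed AI training, kept for verifiability."""
    employee_id: str
    course: str
    completed_on: str   # ISO date, e.g. "2025-01-15"
    certificate_id: str

def append_record(path: str, record: TrainingRecord) -> None:
    """Append a completed training to a simple CSV log, writing a header for new files."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(record))

append_record("ai_training_log.csv",
              TrainingRecord("emp-0042", "AI Fundamentals", date.today().isoformat(), "CERT-2025-0042"))
```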
L&D Takeaway
AI training courses must provide a basic understanding of Artificial Intelligence, how it works and its possible risks, and should be documented, e.g. through a certificate. When working with high-risk AI, more in-depth learning content is required.
{{blog-lp-link-ai-course-en="/custom-rich-text"}}
Compliant in Time! Next Steps for AI Competence in the Company
The AI Act urges all organizations that already use Artificial Intelligence or want to do so in the future to take action. This is because all employees should have the necessary AI skills by February 2, 2025.
For this to work, HR developers need to take the following steps now:
- Identify training and knowledge gaps in the AI context: Knowing early on what technical, ethical and AI-specific knowledge is missing and needs to be built up will speed up the start of an AI training initiative.
- Adapt and execute the training strategy: Building AI skills requires new learning content and impetus. The sooner the plan is in place and implemented, the sooner all legal requirements are met.
- Promote continuous AI training: Those who offer employees a dynamic learning environment today enable further development in step with technology – and ensure the future viability of the company.
So, don't just meet the EU's requirements on time – set your teams up for a successful future. The AI Act clock is ticking!