1. Introduction
EU Regulation 2024/1689 on artificial intelligence (AI Act) entered into force on 1 August 2024. It will apply in full from 2 August 2026, although individual chapters apply earlier. The AI Act aims to strengthen the EU internal market, protect fundamental rights and foster innovation by promoting trustworthy artificial intelligence (AI).
The first provisions, namely the obligations regarding AI literacy (Chapter I, in particular Art 4) and the prohibitions of certain AI practices (Chapter II, Art 5), have applied since 2 February 2025. This article focuses on AI literacy.
2. Scope of application of the AI Act (Art 2)
The scope of application of the AI Act is very broad: it applies to all companies that place AI systems on the market, put them into service, operate (i.e. use), import, distribute, develop and/or have them developed.
The specific measures that need to be taken depend on the risk level of the AI system used. A distinction is made between ‘minimal’, ‘limited’, ‘high’ and ‘unacceptable’ risk levels.
3. AI literacy (Art 4)
The provisions on AI literacy that have been in force since 2 February 2025 are relevant for all companies.
Art 4 obliges providers and operators of AI systems to ensure that their personnel, and other persons involved in the operation and use of AI systems on their behalf, have a sufficient level of knowledge in the field of artificial intelligence (AI literacy). This competence includes technical, legal and ethical knowledge, risk awareness and practical application skills. Art 4 does not prescribe specific implementation measures; however, the following must be taken into account:
technical knowledge, experience, education and training,
the context in which the AI systems are used,
the persons or groups of persons on whom the AI systems are to be used.
Given the broad scope of this provision, the risk level of the AI system also plays a role. A company that merely uses an AI system (e.g. a chatbot) generally falls within a limited-risk area. Its training will therefore be organised differently from that of a company that, for example, develops AI systems and is therefore in a higher risk area.
Examples of AI literacy skills that should be provided:
4. To-Dos: what should companies do?
Companies subject to the AI literacy obligation should take the following measures:
Identify which AI systems are used in the company.
Note: since many software products embed AI systems, some may be in use without the company being aware of it!
Develop a strategy for which AI systems should be used and how.
Create an internal guideline.
Create and implement a training programme: it should take into account the specific needs of the company and its employees, and be as user-friendly as possible (e.g. online courses, avoidance of legal jargon).
Regularly update the training programme to ensure that the latest AI developments are taken into account.
Document the training measures.
The considerations behind and development of the AI strategy, as well as the details of the training courses (type, organiser, content and timing), must be documented in writing.
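The inventory and documentation steps above could, for instance, be kept in a simple machine-readable register. The following is a minimal sketch only; all field names, class names and example entries are illustrative assumptions, not anything prescribed by the AI Act:

```python
from dataclasses import dataclass, field
from datetime import date

# The four risk levels distinguished by the AI Act
RISK_LEVELS = {"minimal", "limited", "high", "unacceptable"}

@dataclass
class AISystemEntry:
    """One row of an internal AI-system register (illustrative fields)."""
    name: str
    purpose: str
    risk_level: str
    trainings: list = field(default_factory=list)  # recorded (date, topic) pairs

    def __post_init__(self):
        # Reject entries with a risk level the Act does not distinguish
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")

    def document_training(self, when: date, topic: str):
        """Record a training measure for this system in writing."""
        self.trainings.append((when, topic))

# Example: a customer-service chatbot, generally a limited-risk use case
register = [AISystemEntry("SupportBot", "customer service chatbot", "limited")]
register[0].document_training(date(2025, 2, 2), "Safe use of chatbots")
```

Such a register makes it straightforward to demonstrate, on request, which AI systems are in use, how their risk was assessed, and which training measures were carried out and when.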