Introduction

With the Artificial Intelligence Regulation (AI Regulation), the European Union has for the first time created a comprehensive legal framework for the use of artificial intelligence (AI). The AI Act, as the regulation is known in English, applies directly in all EU member states and takes effect in stages. The first provisions, including the ban on certain AI practices and the obligation to ensure AI competence, have been in force since 2 February 2025; further rules follow, including the provisions on penalties (from 2 August 2025) and the requirements for high-risk AI systems (from 2 August 2026).

The regulation is intended not only to create a uniform European standard for dealing with AI systems, but also to promote investment and innovation while ensuring a high level of protection for fundamental rights, safety and health.

What is artificial intelligence?

AI refers to systems that are able to analyse data, recognise patterns and draw conclusions independently. Such systems can generate text, images, videos, voices or computer code and are increasingly being used in companies. The possible areas of application are broad, ranging from automated customer communication, marketing and HR management to detecting irregularities in tax or accounting data.
While AI can make many processes more efficient, it is not error-free. Human control remains essential to avoid misinterpretations and incorrect results.

Mandatory AI expertise in companies from 2 February 2025

Since 2 February 2025, companies that develop or use AI systems have been obliged to ensure that their employees have sufficient AI skills (Article 4 of the AI Regulation). This applies not only to permanent employees, but also to external service providers or cooperation partners.

The necessary expertise can be imparted through internal guidelines, company policies or training courses. Companies should ensure that their employees have the necessary knowledge of:

  • the technical limits of AI,
  • data protection regulations (GDPR),
  • copyright law (UrhG),
  • the protection of trade and business secrets,
  • and possible liability issues.

In practice, it is advisable for internal guidelines on the use of AI to make clear that AI-supported processes are subject to human review. In companies with a works council, a formal works agreement on the use of AI systems can also be concluded. Employees who work with AI tools should be trained as soon as possible on the tools used, the technical limits of AI, data protection, copyright, security aspects and the associated liability issues.

Risk classes for AI systems: gradual regulation

The AI Regulation categorises AI systems into different risk levels, each of which entails specific requirements:

  • Prohibited AI practices (Article 5 AI Regulation)
  • High-risk AI systems (Article 6 and Annex III AI Regulation)
  • AI systems for direct interaction with humans (Article 50 AI Regulation)
  • AI systems with a general purpose (general-purpose AI - GPAI, Article 53 of the AI Regulation)

The respective regulations depend on the risk level of the application: While certain AI practices have already been prohibited since February 2025, stricter obligations will apply to high-risk AI systems from August 2026.

Prohibited AI practices since 2 February 2025

Since 2 February 2025, certain AI applications have been banned in the EU; however, penalties for violations will only apply from 2 August 2025. The prohibited practices include, among others:

  • Manipulative AI techniques that influence people's behaviour to their disadvantage.
  • Emotion recognition in the workplace, except for medical or safety-related purposes (e.g. concentration monitoring for flight personnel).
  • Biometric categorisation if it is used to collect sensitive data such as ethnicity, political views or sexual orientation.

Practical recommendation: Companies should check the AI tools they use to ensure that no prohibited technology is involved. If an application is affected, its use must be discontinued immediately.

AI systems for direct interaction with humans: Transparency obligations from 2 August 2026

From 2 August 2026, additional transparency requirements will apply to companies that use AI-supported systems in direct communication with people. This primarily concerns AI-generated content such as deepfakes, the use of which must be disclosed (Article 50 (4) of the AI Regulation).

High-risk AI systems: Strict requirements from 2 August 2026

Certain AI systems are considered high-risk technologies and will be subject to particularly strict requirements from 2 August 2026. The areas affected include critical infrastructure (e.g. hospitals, power grids), education and personnel management (e.g. application filtering, performance assessments, workplace monitoring) and automated decisions on employment conditions (e.g. promotions or dismissals).
Companies that use high-risk AI must:

  • Take technical and organisational measures for safe use.
  • Ensure human supervision of the AI systems.
  • Keep automatically generated logs for at least six months.
  • Inform employees and the works council about the use of these systems.

Sanctions for non-compliance with the AI Regulation

The AI Regulation provides for severe penalties for companies that violate its provisions. From 2 August 2025, violations can be punished with fines of up to 35 million euros or 7% of annual global turnover, whichever is higher (Article 99 of the AI Regulation). In addition, the member states are responsible for determining further sanctions and procedural rules. It remains to be seen which specific provisions will be enacted in Austria.
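The fine ceiling above can be sketched as a simple calculation. The snippet below is a hypothetical illustration only, not legal advice; it assumes that, for undertakings, the higher of the two amounts in Article 99 applies:

```python
# Hypothetical illustration of the Article 99 fine ceiling:
# up to EUR 35 million or 7% of annual global turnover,
# whichever is higher (assumption: the rule for undertakings).

def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Return the upper limit of the possible fine in euros."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# A company with EUR 1 billion global turnover:
print(max_fine_eur(1_000_000_000))  # 70000000.0, since 7% exceeds EUR 35 million
```

For smaller companies whose 7% share falls below 35 million euros, the fixed ceiling of 35 million euros is the relevant upper limit.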

Important note: The obligation to train employees in the use of AI systems is not itself subject to a fine, but a lack of training can lead to other legal violations being committed. Companies should therefore ensure that all relevant employees are informed and trained in good time.

What companies should do now

The EU's AI regulation brings far-reaching changes for companies that use AI technology. To comply with the new regulations, companies should:

  1. Check AI systems and ensure that no prohibited practices are used.
  2. Introduce training for employees to ensure the necessary AI expertise.
  3. Adapt transparency obligations and data protection guidelines.
  4. Consider strict requirements for high-risk AI systems.

Status: 03/03/2025
Source: Kraft & Kronberger specialised publications