Digital & Semiconductor: TÜV NORD subsidiary TÜVIT accompanies the transition to safe AI without unacceptable risks
Investments in AI development are increasing worldwide. From February 2025, new provisions of the EU AI Act come into force in the European Union to strengthen safety and trust in AI technologies. With its subsidiary TÜVIT, the TÜV NORD GROUP supports developers and operators in complying with these new rules.
From February 2025, the EU AI Act bans AI systems that pose an “unacceptable risk”. This includes systems that manipulate people using subliminal techniques, exploit the vulnerabilities of particular groups, or enable real-time biometric identification in public spaces, with an exception for law-enforcement applications.
The EU-wide AI regulation also stipulates that, from February 2025, certain organizations must ensure that their employees have an appropriate level of AI competence: the ability to understand, use, monitor and critically reflect on AI-based applications.
“These new regulations are a crucial step toward ensuring the integrity and security of AI applications,” explains Vasilios Danos, Head of AI Security and Trustworthiness at TÜVIT. “Our expertise helps companies comply with the new rules while developing innovative and secure AI solutions,” adds Thora Markert, Head of AI Research and Governance at TÜVIT. Among other things, TÜVIT offers training courses and workshops to strengthen AI competence in companies.
In addition to the “unacceptable risk” category, the EU AI Act distinguishes three further risk classes.

The lowest level is “low risk”, i.e. minimal risk. This covers applications such as spam filters or AI avatars in video games, where there is no danger of physically harming people, violating their rights or causing them financial damage. These applications are largely exempt from the requirements.

The next level is “limited risk”. It includes, for example, simple chatbots that interact with users. In future, such applications must make clear to users that they are dealing with an AI rather than a human, and deepfakes and other AI-generated content must be labeled as such.

The third level is “high risk”. This covers biometric access systems that identify people by facial recognition, for example, or AI applications that automatically screen job applications, where it must be ruled out that applicants are discriminated against, for instance because of their name. AI-controlled industrial robots, which can injure people in the event of a fault, and certain critical infrastructures such as telecommunications, water and electricity supply also fall into this high-risk category. For this risk level, the AI Act additionally provides for an independent third-party audit, a kind of AI TÜV, from August 2026.
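The tiered structure described above can be sketched as a small data model. This is a purely illustrative sketch, not an implementation of the AI Act: the example systems, the `RiskClass` names and the `transparency_label_required` helper are our own assumptions for illustration, and real classification requires a legal assessment of the individual system.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk tiers of the EU AI Act, as described above."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright, e.g. subliminal manipulation
    HIGH = "high risk"                  # e.g. biometric access, applicant screening
    LIMITED = "limited risk"            # e.g. chatbots; transparency duties apply
    LOW = "low risk"                    # e.g. spam filters; largely exempt

# Hypothetical example mapping, for illustration only.
EXAMPLES = {
    "subliminal manipulation": RiskClass.UNACCEPTABLE,
    "facial-recognition access control": RiskClass.HIGH,
    "customer-service chatbot": RiskClass.LIMITED,
    "spam filter": RiskClass.LOW,
}

def transparency_label_required(risk: RiskClass) -> bool:
    """Limited-risk systems must disclose to users that they are dealing with an AI."""
    return risk is RiskClass.LIMITED

print(transparency_label_required(EXAMPLES["customer-service chatbot"]))  # True
print(transparency_label_required(EXAMPLES["spam filter"]))               # False
```

The point of the sketch is only to show that each tier carries its own obligations, from none (low risk) through transparency duties (limited risk) up to independent third-party audits (high risk) and outright bans (unacceptable risk).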
The TÜV NORD GROUP is closely involved in developing the corresponding test criteria and methods, and TÜVIT is already carrying out the first projects and audits. Together with other leading TÜV companies, the TÜV NORD GROUP is also a partner in the TÜV AI.Lab, where TÜVIT and its TÜV AI.Lab colleagues support the development of standards and regulations for AI applications.
Founded over 150 years ago, we stand for security and trust worldwide. As a knowledge company, we have our sights firmly set on the digital future. Whether engineers, IT security experts or specialists for the mobility of the future: in more than 100 countries, we ensure that our customers become even more successful in the networked world.