Does the AI Act Adequately Allocate Responsibilities along the Value Chain for High-Risk Systems?
Friday, 7th February 2025
 

The European Union’s Artificial Intelligence (AI) Act regulates high-risk systems by allocating responsibilities to designated actors throughout the systems’ value chain.

In this blogpost, we discuss the allocation of these responsibilities and argue that while the Act’s linear approach promotes compliance and accountability at each stage of the systems’ design, development, and deployment, it also has notable limitations that could pose risks to individuals.

By Joanita Nagaba, Cristina Almaraz López, Nathalie Koubayova, Mathy Vandhana Sannasi, Prekshaa Arunachalam, Aleksandra Klosinska

Introduction

In 2024, the European Union adopted the AI Act to promote the uptake of human-centric and trustworthy AI while safeguarding people’s health, safety, and fundamental rights. The Act takes a risk-based approach that categorises AI systems as unacceptable-risk, high-risk, limited-risk, or low-risk, alongside specific provisions for general-purpose AI models.

This blogpost focuses on high-risk AI systems and examines whether the AI Act adequately allocates responsibilities throughout the systems’ life cycle. We begin by unpacking the definition of high-risk AI systems and identifying the key actors at each stage of the value chain.

Then, we analyse the adequacy of the responsibility allocation outlined in Chapter III of the Act. The Act allocates the actors’ roles and obligations linearly, within a flexible regulatory environment, to promote transparency, compliance, and accountability.

However, we argue that further refinement is necessary to better address the unique complexity, opacity and autonomy of AI systems, which introduce particular liability issues that the Act does not fully address. We conclude by emphasising the need to tighten this flexibility to ensure better protection of individuals’ safety, health and fundamental rights.

Decoding high-risk AI systems and their key actors

According to Article 6 of the AI Act, an AI system is classified as high-risk in two instances: (1) the system is intended to be used as a safety component of a product, or is itself a product, covered by the EU laws listed in Annex I of the Act and is required to undergo a third-party conformity assessment (e.g. in vitro diagnostic medical devices, lifts, or toys); or (2) the system falls within a use case listed in Annex III (which mainly concerns fundamental rights).

However, paragraph 3 of Article 6 provides an exemption from this categorisation. It clarifies that an AI system referred to in Annex III is not considered high-risk when it is intended to: (a) perform a narrow procedural task; (b) improve the result of a previously completed human activity; (c) detect decision-making patterns or deviations from them, without influencing a previously completed human assessment absent proper human review; or (d) perform a preparatory task to an assessment relevant to the use cases listed in Annex III.

The exemption applies where the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. In that case, the system’s provider must document its assessment before the system is placed on the market or put into service, and register both itself and the system in a new EU database.
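To make this decision logic concrete, the two grounds for classification and the exemption test can be sketched as a small Python function. This is a minimal, illustrative sketch of our reading of Article 6; every name and boolean flag below is our own assumption for readability, not terminology defined in the Act.

from dataclasses import dataclass

@dataclass
class AISystemProfile:
    # Art. 6(1): safety component of, or itself, a product covered by Annex I law
    annex_i_safety_component: bool
    # Art. 6(1): third-party conformity assessment required under that law
    third_party_assessment_required: bool
    # Art. 6(2): falls within a use case listed in Annex III
    annex_iii_use_case: bool
    # Art. 6(3)(a)-(d): the four exemption conditions
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_influencing_assessment: bool
    preparatory_task_only: bool

def is_high_risk(s: AISystemProfile) -> bool:
    # First ground: the product-safety route via Annex I.
    if s.annex_i_safety_component and s.third_party_assessment_required:
        return True
    # Second ground: Annex III use cases, unless an Art. 6(3) exemption applies.
    if s.annex_iii_use_case:
        exempt = (s.narrow_procedural_task
                  or s.improves_completed_human_activity
                  or s.detects_patterns_without_influencing_assessment
                  or s.preparatory_task_only)
        return not exempt
    return False

Note that an exempted system does not escape all duties: as described above, the provider must still document its assessment and register before placing the system on the market.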

Read the full article here.

About the Authors

Joanita is the founder of Uzawi Initiative, a nonprofit organization in Uganda that focuses on AI, Society and Democracy (LinkedIn).

Cristina is a PhD candidate at the University of Salamanca specialized in Social Studies of Science and Technology (LinkedIn).

Nathalie is a PhD candidate at the Institute of Communication Studies and Journalism (Charles University, Prague), researching how people form relationships with conversational agents (LinkedIn).

Mathy (PhD) is a Lecturer in Business Analytics at Royal Holloway, University of London, teaching business analytics, data analysis, and cloud computing (LinkedIn).

Prekshaa is working in the Product team at Wadhwani AI, driving Generative AI initiatives for the public health ecosystem in India (LinkedIn).

Aleksandra has over ten years of international experience in diverse fields of human rights, migration, digital rights, counter-terrorism and humanitarian programming (LinkedIn).

This article solely reflects the views of the authors and does not represent the position of the Faculty or the University.
