1. Introduction
The rapid development of artificial intelligence (AI) technology has prompted a sense of crisis around the globe about building adequate legal and regulatory systems. New legislation is needed to protect the basic rights and interests of citizens who have been harmed or are at risk. To date, the European Union's AI Act is the first comprehensive legal framework for artificial intelligence.
The phenomenon of regional legislation shaping global rules is relatively familiar in Latin America. Historical experience shows that, when new technologies emerge, the region has shown a strong inclination towards the systematic transplantation of European legal models, especially in data governance. As some comparative legal studies have indicated, the EU's General Data Protection Regulation (GDPR), adopted in 2016, produced a so-called "Brussels Effect", that is, "the unilateral regulatory power of Europe over global markets." As a result, many countries in the region launched waves of reform or enactment of data protection laws, which provided ready-made models and top-tier standards for regulating domestic high-tech industries.
Faced with the regulatory problems raised by AI, Latin American countries are once again transplanting EU law, this time the EU AI Act. Nations that have passed or are in the process of passing similar legislation, such as Brazil and Peru, are all influenced by the EU AI Act in their core definitions, risk-based classification systems, anti-discrimination principles and data governance requirements.
Such legal transplantation, however, carries a certain degree of risk. The EU's legal system is built on its particular socio-economic background, regulatory strength and technological advancement at the time of enactment. Latin American countries differ markedly from the EU in institutional enforcement capacity, economic development, the extent of the digital divide, and locally prioritised problems (such as the protection of certain indigenous peoples). Transplanting EU legislation may therefore disconnect the law from social reality, generating misalignment risks such as regulatory failure and weak enforcement.
This paper examines the emulation and adoption of the EU AI Act by Latin American countries that have begun to formulate AI regulatory laws, such as Brazil and Peru. It focuses on the potential misalignment problems caused by such legal transplants, namely ambiguous definitions, rigid risk classification, weak bias mitigation mechanisms, and difficulties in enforcement arising from differences in social reality, technological foundations and regulatory capacity between the two regions. The aim is to provide references for the future development of an AI governance framework that is more regionally localised, operationally practical and inclusive.
5. Comparison of AI Legislation Between the EU and Latin America
Peru's legislation distinguishes two concepts: the artificial-intelligence-based system and artificial intelligence itself. The former is an electro-mechanical system that, for human-defined objectives, generates predictions, recommendations or decisions affecting real or virtual environments, with varying degrees of autonomy and adaptability once deployed. The latter is recognised as a new type of general-purpose technology that can improve people's lives, promote sustainable economic development, and help solve some of the major problems currently facing humanity.
Brazil's bill defines an artificial-intelligence system as a computational system with varying degrees of autonomy that employs machine learning and/or logic- and knowledge-representation techniques to infer, from machine- or human-provided input data, how to achieve a given set of objectives, with the purpose of producing predictions, recommendations or decisions that influence the virtual or real environment.
Although these definitions share some commonalities, the existing laws show a significant deficiency in their treatment of rapidly evolving areas such as general-purpose AI (GPAI) and foundation models. While the EU AI Act attempts to address GPAI, Brazil and Peru have no explicit rules for such systems and therefore face a regulatory gap in this field. This also illustrates the difficulty of keeping the pace of legislation aligned with technological progress.
Considering that AI has a significant impact on society, Brazil and Peru have built their regulatory rules around risk levels, although the specific classification criteria and points of attention vary. Given the urgent need to establish a foundation of social confidence for the development of artificial intelligence, in line with the spirit embodied in Article 2 of the Treaty on European Union (TEU) [4], all three jurisdictions require fundamental guidance from these values, consistent with the fundamental rights and interests guaranteed by the relevant treaties and with the Charter referred to in Article 6 of the TEU. In other words, AI is understood here as technology that helps address people's problems or improve their quality of life, and thus has the characteristic of promoting human development and improvement.
The legislative logic of the EU AI Act is to build a reasonable and effective regulatory system for AI that balances regulation with development; its essence lies in an innovative, risk-oriented regulatory framework. In short, the number and content of obligations are adjusted according to the level of risk posed by different application scenarios of AI technology. Following this logic, the Act explicitly prohibits unacceptable AI practices, sets specific compliance requirements and operator obligations for high-risk AI systems, and also imposes transparency requirements on certain other AI systems.
For stand-alone AI systems (i.e., AI systems that are neither safety components of products nor products themselves), a system is deemed high-risk if, taking into account its intended purpose and the severity and probability of the potential harm it poses to human health, safety or fundamental rights, it is applied in one of several specified fields.
In addition to general principles already introduced by the GDPR and earlier legislation, the EU AI Act establishes a tiered risk-management mechanism for AI technology.
First, there are unacceptable risks, meaning AI practices that are expressly forbidden by law. These practices directly violate the values (such as human rights and respect for human dignity and personality) and the fundamental rights laid down in the EU Charter of Fundamental Rights, and admit no exception. Their core feature is that the design or application of the AI system involves "fundamental harm": severe violations of life, infringements of dignity, damage to freedom of choice, or significant harm to social justice. The prohibited scenarios and their core risks are the following:
(1) manipulative or deceptive techniques, i.e., technologies that materially distort people's behaviour through subliminal techniques beyond human awareness or through intentional deception (such as imperceptible audio-visual stimuli) in a way that may cause significant harm;
(2) exploitation of the vulnerabilities of disadvantaged groups, i.e., distorting the behaviour of persons who are vulnerable because of age, disability, low socio-economic status or other factors;
(3) social scoring, i.e., long-term scoring of natural persons or groups based on social behaviour or personality traits, leading to detrimental treatment in contexts unrelated to the one in which the data originated, or treatment disproportionate to the behaviour in question;
(4) crime-risk prediction based solely on profiling, i.e., assessing or predicting the criminal risk of a natural person solely from personal profiles or personality traits, excluding analysis that supports objective criminal evidence;
(5) untargeted building of facial-image databases, such as creating or expanding facial recognition databases through indiscriminate scraping of facial images from the Internet or from surveillance footage;
(6) emotion recognition in the workplace and in education, i.e., using AI to infer individuals' emotions (except for medical or safety purposes, such as monitoring driver fatigue);
(7) sensitive biometric categorisation, i.e., inferring sensitive attributes of natural persons (such as race, political opinions, religion or sexual orientation) from biometric data (including but not limited to facial features and fingerprints);
(8) real-time remote biometric identification for law enforcement in publicly accessible spaces, which is generally prohibited except in three specifically prescribed circumstances: the targeted search for victims of kidnapping, human trafficking or missing persons; the prevention of a specific and imminent threat to life or of a terrorist attack; and the localisation of a person suspected of a serious criminal offence;
(9) the management of sensitive personal data collection: where access to sensitive personal information is involved, detailed rules on the quality and security of the collected data must also apply.
Reliance on the law-enforcement exceptions requires a court order, is time-limited, and its results do not directly serve as evidence.
Second, there are high-risk AI systems, i.e., systems whose use may pose serious risks to people's life, health, safety or other fundamental interests. In addition to meeting the compliance requirements applicable to ordinary systems under the regulatory standards, such systems require risk assessment, the establishment and maintenance of a data governance system, human oversight, and increased transparency. Their classification is based on the severity of the risk combined with the probability of its occurrence, and they cover a number of high-impact fields. According to Annex III of the Act, the high-risk areas and particular systems include remote biometric identification systems; AI systems used in critical infrastructure such as power grids, water supply systems and traffic control; and AI systems associated with education and vocational training, employment and labour management, essential social services and welfare, law enforcement and judicial decision-making, and immigration, asylum and border control.
Third, there is a group of AI systems with moderate to low risk. Although the Act does not use the term "medium and low risk", an AI system that is neither prohibited nor high-risk is governed by transparency obligations or voluntary compliance. Such systems carry relatively small risks and affect a limited number of people. Medium-risk systems are subject to public disclosure obligations: their risk is limited, but they may affect users' right to know or the authenticity of information. The specific requirements are as follows: AI that interacts directly with natural persons must explicitly inform users that they are interacting with an AI system (except where this is obvious, as with smart speakers); artificially generated synthetic audio, images and text (such as deepfakes) must be labelled as artificially generated, and this information must be provided in a machine-readable format; and where an emotion recognition or biometric categorisation system is used (in non-high-risk scenarios), the affected individuals must be informed of its existence and purpose. Low-risk systems are AI with an extremely low risk factor (auxiliary tools), and the Act encourages their providers to voluntarily apply certain requirements designed for high-risk systems through codes of conduct.
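To make the tiering concrete, the following is a minimal, illustrative sketch of how a compliance team might encode an EU-style hierarchy of prohibited, high-risk, limited-risk (transparency) and minimal-risk systems. It is not an implementation of the Act itself: the constant names, the simplified practice and area lists, and the `classify` helper are assumptions introduced here purely for illustration; the authoritative lists are Article 5 and Annex III of the Act.

```python
from dataclasses import dataclass, field

# Illustrative, heavily simplified lists (assumption: not the Act's own wording).
PROHIBITED_PRACTICES = {
    "subliminal_manipulation", "exploitation_of_vulnerability",
    "social_scoring", "profiling_only_crime_prediction",
    "untargeted_facial_scraping", "workplace_emotion_recognition",
    "sensitive_biometric_categorisation",
}
HIGH_RISK_AREAS = {
    "remote_biometric_identification", "critical_infrastructure",
    "education", "employment", "essential_services",
    "law_enforcement", "migration_asylum_border", "justice",
}
TRANSPARENCY_TRIGGERS = {"chatbot", "synthetic_media", "emotion_recognition"}

@dataclass
class AISystem:
    practices: set = field(default_factory=set)   # what the system does
    areas: set = field(default_factory=set)       # where it is deployed
    features: set = field(default_factory=set)    # interaction/output traits

def classify(system: AISystem) -> str:
    """Return a simplified EU-style tier for an AI system."""
    if system.practices & PROHIBITED_PRACTICES:
        return "prohibited"        # unacceptable risk: banned outright
    if system.areas & HIGH_RISK_AREAS:
        return "high-risk"         # conformity assessment, data governance, oversight
    if system.features & TRANSPARENCY_TRIGGERS:
        return "limited-risk"      # disclosure / labelling obligations
    return "minimal-risk"          # voluntary codes of conduct

# Example: a CV-screening tool used in recruitment falls in a high-risk area.
print(classify(AISystem(areas={"employment"})))  # -> "high-risk"
```

The point of the sketch is that the tiers are mutually exclusive and checked in order of severity, which mirrors the Act's logic of prohibition first, then heightened obligations, then transparency duties.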
Brazil's draft rules classify AI risk primarily into two categories: excessive risk and high risk. The development and use of excessive-risk systems is prohibited, including: the use of subliminal techniques to induce harmful behaviour; the exploitation of the vulnerabilities of certain groups, such as the elderly and people who are weak, ill or disabled, in ways that cause harm; and systems through which public authorities unlawfully or disproportionately score people on the basis of social behaviour or personality traits in ways that affect their access to rights and interests. High-risk systems cover situations such as safety monitoring of critical infrastructure; assessment systems for education and worker training; recruitment and employee placement systems; evaluation systems for access to essential public and private services; credit scoring; prioritisation of emergency response services; tools assisting the judiciary; automated driving technologies; healthcare applications; biometric identification systems; evaluation systems used in criminal investigation and public security; criminal analytics; evidence assessment by administrative bodies; and border inspection and management. The relevant authorities may adjust the lists of high-risk or excessive-risk systems according to their scale and the extent of potential damage.
Peru classifies risks into four levels: unacceptable risk, high risk, medium risk and low risk. Systems with unacceptable risks are prohibited, such as those that modify people's behaviour through subliminal or deceptive methods in ways that can cause serious harm, and those that improperly evaluate individuals or groups on the basis of quantified social behaviour, producing adverse effects that are unrelated or disproportionate to that behaviour. High-risk systems include those used for biometric identification and categorisation; support for the security of critical infrastructure; access to and assessment in vocational education; management of employee recruitment and dismissal; assessments related to social programmes and emergency services; personal credit evaluation; support for judicial decisions; assessments and predictions related to medical services; and criminal risk assessment by law enforcement agencies. A system that lacks a mechanism for interpreting its results is also considered high-risk. The competent authority may work with relevant units and departments to update the risk list [3].
A critical analysis reveals the insufficiencies of these risk-based approaches. First, the criteria for moving systems between the low-, medium- and high-risk categories and for updating the high-risk list are unclear and lack specific trigger conditions or timeframes, potentially leading to regulatory stagnation. Second, the frameworks focus primarily on ex-ante conformity assessment of high-risk systems and remain underdeveloped with regard to post-market monitoring, incident reporting and the attribution of liability for emergent harm. Third, the categorisations in Brazil and Peru are unlikely to fully reflect context-dependent risks: the risk level of a system can vary significantly depending on its application scenario and deployment environment.
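One way to read this critique is that risk is a function of both the system and its deployment context, rather than a fixed property of a system type. The hypothetical sketch below illustrates that idea; every category name, numeric score and threshold is an assumption invented for illustration and does not come from the EU, Brazilian or Peruvian texts.

```python
# Hypothetical severity/probability scores per (system type, deployment context).
# All values are placeholders used only to show context dependence.
CONTEXT_RISK = {
    ("face_recognition", "phone_unlock"):        {"severity": 1, "probability": 2},
    ("face_recognition", "public_surveillance"): {"severity": 3, "probability": 3},
    ("chatbot", "customer_service"):             {"severity": 1, "probability": 1},
    ("chatbot", "emergency_triage"):             {"severity": 3, "probability": 2},
}

def tier(system_type: str, context: str) -> str:
    """Map a (system, context) pair to a tier via a simple severity x probability rule."""
    scores = CONTEXT_RISK.get((system_type, context))
    if scores is None:
        # This is exactly the gap the article criticises: no trigger rule for new contexts.
        return "unclassified"
    product = scores["severity"] * scores["probability"]
    if product >= 6:
        return "high-risk"
    if product >= 2:
        return "limited-risk"
    return "minimal-risk"

print(tier("face_recognition", "phone_unlock"))         # -> "limited-risk"
print(tier("face_recognition", "public_surveillance"))  # -> "high-risk"
```

The same system type lands in different tiers once context enters the rule, which is what a static, Annex-style list of high-risk areas cannot capture on its own.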
Another dimension of the risk-management approach concerns protecting justice and equal treatment for all individuals, without discrimination on grounds of ethnicity, religion, gender or other characteristics. The EU, Brazil and Peru all include the anti-discrimination principle in their AI legislation, but the implementation paths and effects differ, and each shows shortcomings in ensuring substantive fairness.
The EU AI Act incorporates the prohibition of discrimination into its core provisions. Within the category of high-risk AI systems, a system that tends to produce biased output under particular data conditions or environments, for example in education or job placement scenarios, is treated as high-risk and must comply with strict obligations such as data governance to ensure fair and representative outputs and enhanced human oversight to mitigate potential biases.
Under Brazil's draft AI legislation, individuals whose rights may be affected by decisions, predictions or other outputs of AI applications have the right to request fair treatment in accordance with the law. The draft prohibits the implementation and use of AI systems that cause direct, indirect, unlawful or abusive discrimination, including: (1) significant adverse effects arising from the use of sensitive personal information or personal attributes such as geographical origin, race, skin colour, ethnicity, gender, sexual orientation, socio-economic status, age, physical condition, disability, religion or political stance; and (2) placing members of certain groups at a disadvantage, or aggravating their vulnerability, through apparently neutral criteria.
Peru's AI law establishes the non-discrimination principle, requiring that discrimination or bias must not arise, be reinforced or persist at any stage of the lifecycle of AI-based systems, so as to avoid discrimination on grounds of race, gender, ethnic origin, economic situation or other factors affecting social position, as well as religious belief and disability. In addition, systems with clearly discriminatory effects, such as those that exploit the weakness of disadvantaged groups or make unreasonable classifications based on quantified social behaviour, are identified as posing unacceptable risks, and their use is forbidden.
The anti-discrimination principles and provisions in the AI legislation of the EU, Brazil and Peru all aim to protect people from discrimination on grounds such as gender, age and disability, but their functional emphases differ. Because biased algorithms in education and employment pose a high risk to social values, EU rules impose stricter requirements on systems used by businesses in order to prevent the automatic propagation of erroneous, biased predictions. Brazil intends to shield weaker parties, such as ethnic minorities, in this regulatory process and forbids any discriminatory action stemming from the use of sensitive information or personal traits. Peru proposes to regulate discrimination across the entire AI lifecycle and explicitly prohibits, as unacceptable risks, systems that exploit the vulnerability of disadvantaged groups. All three aim to avoid the potential discriminatory consequences of AI and to promote fairness by limiting and supervising its applications [7].
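As one illustration of what "biased output" could mean in practice for a regulator or auditor, the sketch below computes a basic group-fairness measure, the difference in selection rates between groups (often called the demographic-parity difference), for a hiring system's decisions. This is only one of many possible fairness metrics and is not prescribed by any of the three laws; the function names and the toy audit data are assumptions introduced for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool). Returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions):
    """Largest gap in selection rate between any two groups (0 means parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy audit data for a CV-screening system: (group, was the candidate shortlisted?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit))                  # ~ {'A': 0.67, 'B': 0.33}
print(demographic_parity_difference(audit))    # ~ 0.33
```

A regulator focused on substantive fairness would still have to decide which metric matters in which context, how large a gap is tolerable, and for which protected groups, which is precisely where the three laws remain underspecified.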
Contextual Legal Transplantation and Addressing Legislative Deficiencies
Comparing the AI regulatory systems of Brazil and Peru with that of the EU reveals a general pattern of legal transplantation: the EU framework has, to varying degrees, been introduced into these countries' regulatory systems. This process, however, frequently consists of copying the foreign legal structure without local adaptation. As noted earlier, design principles developed for a legal framework in one specific socio-technical context may fail to meet the needs that arise in other contexts.
The first deficiency lies in the implicit assumption of uniform regulatory capacity. The EU's GDPR and AI Act assume a sufficiently powerful national data protection authority and a well-established judiciary to enforce the rules. Brazil's ANPD and the corresponding authority in Peru face severe constraints in funding, technical capability and enforcement power; as a result, refined concepts such as proportionality or strict pseudonymisation standards may be of limited practicality. Transplanted laws can therefore produce a compliance gap in which obligations exist on paper yet are difficult to enforce in practice, undermining the overall implementation of supervision.
In addition, the transplanted frameworks usually do not adequately consider local demographic, economic and technological conditions.
Peru's special protection for "indigenous peoples in isolation" is an excellent example of local adaptation; other problems, however, remain to be addressed. For example, the digital divide and low levels of digital literacy in some areas make it harder for the law to give effect to concepts such as "meaningful consent". High compliance costs under strict requirements of the kind set by the European Union would make it difficult for developing countries to sustain development in this area, leaving them dependent on others' technologies. The regulatory systems currently lack graded compliance pathways and support mechanisms for smaller local enterprises, such as small and micro-enterprises and startups.
Although the risk classification systems share a basic structure, they do not directly reflect the most urgent local problems. A risk identified as high in Europe may not carry equal importance for Brazil or Peru. For instance, the application of AI in agriculture or in forest patrolling in Amazonian countries could warrant a different risk level from the one it would receive in Europe [9].
A further deficiency is the lack of adaptive, locally driven mechanisms for establishing and adjusting risk classification categories according to each country's own priority issues and public policy objectives. Mirroring the EU's Annex III list of high-risk systems without a substantive, continuous local risk assessment is a serious shortcoming [10].
Overall, although the European Union, Brazil and Peru have made considerable progress in establishing AI laws, their legal systems still exhibit key deficiencies relating to definitional ambiguity, enforcement capability, the adaptability of risk classification and the effectiveness of bias mitigation. Moreover, the process of legal transplantation shows that merely adopting advanced regulatory systems is not enough. Building legal systems for AI regulation requires deep contextualisation: institutional capacity, level of economic development, social environment, state of technology and public sentiment all need to be considered, otherwise the regulations will lack legitimacy. Future legislation should leave no technical gaps in the current legal system, while also accelerating the preparation of reasonable and adequately resourced implementation plans at the local level. Otherwise, these well-meaning laws risk being irrelevant to, or even debilitating for, the very ecosystems they intend to regulate.
The meaning of legal transplantation is generally: “the transplantation of a certain legal rule or institution from a specific country to another country (or region).” Terms equivalent to “transplantation” include borrowing, absorption, emulation, importation, influence, introduction, and so on. From the perspective of functional comparative law, the purpose of legal transplantation is to solve social problems. The transplantation of the EU AI Act by Latin America aims to address the question of how to regulate artificial intelligence as an emerging technology, so as to prevent this new technology—for which humanity has no prior experience of governance—from infringing upon basic human rights in its application, thereby striking a balance between technology and human rights.
The existing AI legislation of various Latin American countries, in adopting the EU’s risk-based AI regulatory framework, prohibits practices that may infringe upon vulnerable groups as unacceptable risks—such as the inference of sensitive attributes from sensitive data and social scoring systems—with the goal of preventing algorithmic violations of the fundamental interests of vulnerable groups and the occurrence of discrimination. In other words, Latin America has adopted Europe’s risk-prevention legislation to protect the basic rights of citizens. Yet beneath this similar social agenda, the two regions face problems with distinct concrete dimensions.
Take anti-racial discrimination as an example. The institutional demands of the EU and of Latin America in combating racial discrimination differ. In the European Union, racial discrimination is often closely linked to historical migration and to existing political and social problems, and it became more severe after the refugee crisis. In the post-Cold War context, immigrants and asylum seekers from countries portrayed as suffering from "overpopulation" and "socio-economic instability" have been constructed in mainstream discourse as a threat to economic prosperity and national identity, producing a new form of discrimination against specific impoverished groups: its form is xenophobia, but its essence is racism, that is, xeno-racism. Specific social anxieties after the 9/11 terrorist attacks have also contributed to this. At present, "Islamophobia", defined as "unfounded hostility towards Islam" and fear of or aversion to Muslims, has become a new type of biological and cultural racism. It is, in essence, a fear of and aversion to outsiders regarded as "carriers of different cultures" who might disrupt social integration and national identity in the host country.
Exclusionary sentiment has been further amplified and systematised around the issue of migration. Immigrants and asylum seekers have been positioned by mainstream discourse, particularly by rising right-wing political frameworks within the EU, as potential threats to socio-economic security and public welfare, and have been associated with criminal activity and terrorism. These narratives have been institutionalised in specific laws, policies and bodies (such as border management agencies), securitising migration and producing an oppositional and seemingly unresolvable gap between national and immigrant identities. Against this background, the EU's "global migration management" strategy and its internal policies have, in practice, tended to treat certain non-European migrant groups as a "suspicious population" requiring control and supervision. Muslim men, especially young people from Arab and African countries, are more likely than others to be stopped and searched by the police; this is institutional religious and racial discrimination. In the name of "national security", freedoms and rights guaranteed by law are in fact restricted. Such measures may also send a social signal that hatred is "permitted", thereby fuelling a cycle of hostility and discrimination against Muslims.
Therefore, the EU's AI legislation, in classifying algorithms with significant impacts on employment and other sectors as high-risk, with a specific focus on discrimination risks related to migration, is not an isolated, purely technical risk assessment. Rather, it forms part of a broader response to the core threats to fundamental rights posed by religion-based (Islamophobic) and culturally "foreign" racism and xenophobia in the post-9/11 era. In the current context of AI development, such discriminatory attitudes may undermine algorithmic fairness and adversely affect the welfare and employment opportunities of these groups. Accordingly, the EU AI Act's high-risk categorisation covers AI systems pertaining to recruitment, employment, worker management and access to self-employment opportunities, and requires conformity assessments before such systems are placed on the market.
Racial discrimination and structural inequality in Latin America are rooted in the colonial era and exhibit deeper cultural embedding and institutional rigidity. Although the region's emerging AI legislation, such as the relevant laws in Peru, explicitly mentions the protection of indigenous people, implementation is hindered by historical and social complexity. Espinoza et al. (2025) point out that many technical and social interventions in Peru over the years have failed for lack of cultural and ethical foresight, ignoring language barriers and the cultural significance of communities, and at times even infringing upon people's rights. Even in Latin America, an AI system that technically meets a "low-risk" standard may still lead to actual exclusion and discrimination if it fails to take account of indigenous languages such as Quechua and Aymara or of local cultural logic. Ensuring the accessibility and accountability of AI interventions therefore requires culturally specific system design rather than simple risk classification [13].
Aloisi and De Stefano (2023) point out that, in professional contexts, AI systems act on "groups, communities and populations"; a collective governance framework that goes beyond individuals' rights over their data is therefore required. When indigenous people encounter algorithmic discrimination, they face not only information asymmetry and technological obstacles but also a lack of culturally sensitive grievance channels. Merely ranking AI systems that affect the fundamental rights of such vulnerable groups by risk level is not enough; enforcement mechanisms that can accommodate collective claims and cultural differences need to be established, and indigenous communities need to be granted rights of consultation and supervision over AI applications.
The European Union and Latin America have both adopted a risk-based AI governance model, but the underlying problems differ: the EU seeks to close the implementation gap among its member states' systems, whereas Latin America must rebuild institutional legitimacy and effectiveness against a post-colonial background [10].