Research Article | Peer-Reviewed

Dilemmas and Reflections on the Legal Transplantation of the EU AI Act by Latin American Countries from a Functional Comparative Perspective

Received: 23 March 2026     Accepted: 21 April 2026     Published: 15 May 2026
Abstract

This paper examines the phenomenon of systematic legal transplantation of the European Union's Artificial Intelligence Act (EU AI Act) by Latin American countries such as Brazil and Peru, through the lens of function-oriented comparison, taking anti-discrimination rules as its example. It finds that, while these countries have broadly adopted the EU's pioneering, risk-oriented model for regulating new AI applications and safeguarding core interests, the resulting regulatory mismatch is substantial. The deficiencies of this transplantation include the absence of corresponding institutional enforcement capacity and inflexible risk classification systems that are not tailored to local priorities or technological conditions. Although the statutes address apparently identical themes, such as anti-discrimination, the underlying socio-cultural and historical backgrounds differ. In anti-discrimination, for example, the EU's focus falls largely on problems of immigration and religious discrimination, such as Islamophobia, whereas the deep-seated causes of discrimination in Latin America stem from colonial rule and chiefly affect indigenous peoples. The paper concludes that effective AI governance requires deep localisation of transplanted regulatory models, with rules and implementation systems developed for the specific cultural environment; only then can regulation appropriate to the digital age emerge.

Published in Humanities and Social Sciences (Volume 14, Issue 3)
DOI 10.11648/j.hss.20261403.11
Page(s) 200-208
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2026. Published by Science Publishing Group

Keywords

Artificial Intelligence, Latin America, Legal Transplant

1. Introduction
The rapid development of artificial intelligence (AI) technology has created a sense of urgency around the globe to build corresponding legal and regulatory systems: new legislation is needed to protect the basic rights and interests of citizens that have been harmed or are at risk. To date, the European Union's AI Act is the first comprehensive legal framework for artificial intelligence.
The phenomenon of regional legislation shaping global rules is familiar in Latin America. Historical experience shows that the region has a strong inclination to transplant European legal models systematically when new technologies emerge, especially in data governance. As comparative legal studies have indicated, since the adoption of the EU's General Data Protection Regulation (GDPR) in 2016, a so-called "Brussels Effect" has operated, that is, "the unilateral regulatory power of Europe over global markets." Many countries in the region have accordingly launched waves of reform or enactment of data protection laws, with the EU providing ready-made models and top-tier standards for regulating domestic high-tech industries.
Faced with the regulatory problems brought about by AI, Latin American countries are again carrying out legal transplants, this time of the EU AI Act. Nations that have either passed or are in the process of passing similar legislation, such as Brazil and Peru, follow the EU AI Act in their core definitions, risk-based classification systems, anti-discrimination principles and data governance requirements.
Such legal transplantation, however, carries a certain degree of risk. The EU's legal system is based on its particular socio-economic background, regulatory strength and technological maturity. Latin American countries differ substantially from the EU in institutional enforcement power, economic development, the depth of the digital divide, and locally prioritised problems (such as the protection of certain indigenous peoples). Transplanting EU legislation may therefore detach the law from reality, generating misalignment risks such as regulatory failure and weak enforcement.
This paper examines the emulation and adoption of the EU AI Act by Latin American countries that have begun to formulate AI regulatory laws, such as Brazil and Peru. It focuses on the potential misalignment problems caused by such legal transplants, namely ambiguous definitions, rigid risk classification, weak bias-mitigation mechanisms, and difficulties of enforcement arising from differences in social reality, technological foundations and regulatory capacity between the two regions, in order to provide references for the future development of an AI governance framework that is more localised, operationally practical and inclusive.
The paper takes anti-racial discrimination as its central example: although the EU and Latin America pursue the apparently identical goal of combating racial discrimination, the institutional demands behind that goal differ markedly between the two regions, as the analysis below shows.
2. Background
The rapid development of artificial intelligence (AI) technology has triggered an urgent global demand for the establishment of corresponding legal and regulatory frameworks. Against this backdrop, legislative regulation has emerged as a critical instrument for safeguarding fundamental rights and interests and mitigating potential risks. To date, the European Union’s AI Act represents the earliest comprehensive legal framework governing artificial intelligence.
This phenomenon of regional legislation spearheading the formation of global rules is not unprecedented in Latin America. Historical experience indicates that the region exhibits a pronounced tendency toward the systematic transplantation of European legal models in emerging technological fields, particularly in the realm of data governance. As comparative legal studies have shown, since its adoption in 2016, the EU’s General Data Protection Regulation (GDPR) has exerted a so-called Brussels Effect, which is defined as ‘Europe’s unilateral power to regulate global markets.’ This effect has spurred a wave of data protection law reforms or enactments across numerous countries in the region, providing ready-made templates and high-level standards for the regulation of local high-tech sectors.
In addressing the regulatory challenges posed by AI, Latin American countries are again undertaking legal transplants of the EU AI Act. Nations such as Brazil and Peru—which have either enacted or are drafting relevant legislation—clearly reflect the influence of the EU AI Act in key aspects, including core definitions, risk-based classification frameworks, anti-discrimination principles, and data governance requirements.
Nevertheless, while such legal transplantation offers a regulatory template, it may also entail hidden risks. The EU legal framework is rooted in its specific socio-economic context, regulatory capacity, and level of technological development. By contrast, Latin American countries differ substantially from the EU in terms of institutional enforcement capacity, economic development levels, the state of the digital divide, and local priority issues (e.g., the protection of specific indigenous groups). The transplantation of EU legislation may result in a disconnect between law and reality, giving rise to misalignment risks such as regulatory failure and weak enforcement.
This paper aims to examine the emulation and adoption of the EU AI Act by Latin American countries that have taken initial steps in AI regulatory legislation, including Brazil and Peru. It focuses on analyzing the potential misalignment issues arising from such legal transplants—including ambiguous definitions, rigid risk classification, ineffective bias mitigation mechanisms, and difficulties in law enforcement implementation—stemming from divergent social realities, technological foundations, and regulatory capacities between the two regions. The objective is to provide insights for the future development of an AI governance framework that is more tailored to regional realities, effective, and inclusive.
3. Materials and Methods
This article uses the method of doctrinal and functional comparison.
4. Definitions of AI in the Regulations Concerned
As the conceptual foundation of any regulatory system for artificial intelligence, the first task is to clarify what constitutes AI and how the rules apply. A comparison shows, however, that even the basic definitions contain ambiguities that can lead to regulatory vacuums and cross-regional legislative mismatches. Tracing its technical origins, artificial intelligence (AI) dates from the Dartmouth Conference of 1956 and refers to systems that simulate human intelligence on the basis of advances in mathematical logic and computer science. Although the current legal definitions of AI in different countries' regulations are broadly similar, they diverge in detail owing to differences inherent in the underlying standards.
The European definition of an AI system is a machine-based system that can operate with varying levels of autonomy and may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs, such as predictions, content, recommendations and decisions, that can influence physical or virtual environments.
5. Comparison of Legislation of AI Between EU and Latin America
Peru's legislation distinguishes two concepts: the artificial-intelligence-based system and artificial intelligence as such. The former is an electro-mechanical system that, for human-defined purposes, generates predictions, recommendations or decisions affecting the real or virtual environment, with varying degrees of autonomy and adaptability after deployment. The latter is recognised as a new type of general-purpose technology that can improve people's lives, promote sustainable economic development, and help solve some of the important problems currently facing humanity.
Brazil's draft defines an artificial intelligence system as a computational system with varying degrees of autonomy that employs machine learning and/or logic-based and knowledge-representation techniques to process input data obtained from machines or humans in pursuit of a given set of objectives, with the purpose of producing predictions, recommendations or decisions that influence the virtual or real environment.
Although these definitions share common elements, a significant deficiency lies in the treatment of rapidly changing areas such as general-purpose AI (GPAI) and foundation models. While the European Union's AI Act attempts to address GPAI, Brazil and Peru have no clear rules for such systems, leaving a regulatory gap in this field. This also illustrates the difficulty laws and regulations face in keeping pace with technological progress.
Considering AI's impact on society, Brazil and Peru have both built their regulatory rules around risk levels, although the specific classification criteria and points of emphasis vary. In the EU, the risk-based approach is anchored in fundamental values: in the spirit of Article 2 of the Treaty on European Union (TEU), AI regulation must establish a foundation of social trust, and it must respect the fundamental rights guaranteed by the relevant treaties and by the Charter, which has treaty status under Article 6 TEU. These values provide foundational guidance for all three jurisdictions. On this understanding, AI is conceived as a technology that addresses people's problems or improves the quality of life, and thus as one that should promote human development and improvement.
The legislative logic of the EU's AI Act is to build a reasonable and effective regulatory system for AI that balances regulation and development. Its essence lies in an innovative, risk-oriented framework: the number and content of applicable rules are adjusted according to the level of risk or the effects of AI technology in different application scenarios. Following this logic, the Act expressly prohibits unacceptable AI practices, sets specific compliance requirements and operator obligations for high-risk AI systems, and additionally imposes transparency norms on certain other AI systems.
For stand-alone AI systems (i.e., AI systems that are neither safety components of products nor products themselves), a system is deemed high-risk if, taking into account its intended application and the severity and probability of the potential harm it poses to human health, safety or fundamental rights, it is applied in one of several specified fields.
The EU's new AI Regulation, beyond the general principles already introduced by the GDPR and earlier laws, establishes a tiered risk management mechanism for AI technology.
First, there are unacceptable risks: AI practices expressly forbidden by law. These practices directly violate the values (such as human rights and respect for human dignity and personality) and the fundamental rights stipulated without exception in the EU Charter of Fundamental Rights. Their core feature is that the design or application of the AI system involves "fundamental harm": severe violations of life, infringements of human dignity, damage to freedom of choice, or significant harm to social justice. The prohibited scenarios and core risks are: (1) manipulative or deceptive techniques, i.e., techniques that seriously distort people's behaviour through subliminal methods beyond human consciousness or through intentional deception (such as imperceptible audio-visual stimuli) in ways that may cause significant harm; (2) exploitation of the vulnerabilities of disadvantaged groups, i.e., distorting the behaviour of persons who are vulnerable because of age, disability, low socio-economic status or other factors; (3) social scoring, i.e., the long-term scoring of natural persons or groups on the basis of social behaviour or personality traits, leading to adverse consequences in contexts unrelated to those in which the data originated, or to unfair treatment disproportionate to the gravity of the behaviour; (4) criminal-risk prediction based solely on profiling, i.e., assessing or predicting the risk of a natural person committing an offence solely on the basis of a personal profile or personality traits, excluding auxiliary analysis grounded in objective criminal evidence; (5) untargeted building of facial recognition databases, such as creating or expanding such databases through the unrestricted scraping of facial images from the internet or surveillance footage; (6) emotion recognition in the workplace and in education, i.e., using AI to infer individuals' emotions (except for medical and safety purposes, such as monitoring drivers' fatigue); (7) sensitive biometric categorisation, i.e., inferring sensitive attributes of natural persons (such as race, political views, religion or sexual orientation) from biometric data (including but not limited to facial features and fingerprints); (8) real-time remote biometric identification for law enforcement in public places, which is generally prohibited except in three specifically prescribed circumstances: the targeted search for victims of kidnapping, human trafficking or missing persons; the prevention of a specific and imminent threat to life or of a terrorist attack; and the localisation or identification of suspects of certain serious criminal offences. Such exceptional uses require prior authorisation by a court, are time-limited, and their results do not have the effect of serving directly as evidence; (9) management of sensitive personal information collection: where access rights over sensitive personal information are involved, detailed rules must govern the quality and security standards of the information collected.
Second, there are high-risk AI systems: those whose use may pose "serious risks" to people's lives and health, personal safety and other vital interests. Beyond the compliance requirements applicable to ordinary systems, such systems must undergo risk assessment, establish and maintain a data governance system, ensure human oversight, and increase transparency. Their classification is based on "severity of risk plus probability of occurrence" and covers high-impact fields. Under Annex III of the Act, the high-risk areas and systems include remote biometric identification systems; AI systems applied in critical infrastructure such as power grids, water supply systems and traffic control; and AI systems associated with education and vocational training, employment and labour management, essential social services and welfare, law enforcement and judicial decision-making, and immigration, asylum and border control.
Third, there are AI systems of moderate to low risk. Although the Act does not use the terms "medium risk" and "low risk", a system that is neither prohibited nor high-risk is governed by transparency obligations or voluntary compliance. Such systems carry relatively small risks affecting a limited range of people. Medium-risk systems are subject to public disclosure obligations: their risks are lower, but they may affect users' right to know or the authenticity of information. Specifically: AI that interacts directly with natural persons must explicitly inform users that they are interacting with an AI system (except where this is obvious, as with a smart speaker); artificially generated synthetic audio, images and text (such as deepfakes) must be labelled as artificially generated, and the label must be machine-readable; and where an emotion recognition or biometric categorisation system is used in a non-high-risk scenario, the affected individuals must be informed of its existence and purpose. Low-risk systems are AI with an extremely low risk factor (auxiliary tools); the Act encourages such entities to voluntarily apply certain requirements for high-risk systems through "codes of conduct".
Brazil's draft regulations classify AI risk primarily into two tiers: excessive risk and high risk. AI systems posing excessive risk may not be developed or deployed: the use of subliminal techniques to induce harmful behaviour is prohibited; so is exploiting the vulnerabilities of groups such as the elderly and persons who are frail, ill or disabled in ways that cause them harm; and public authorities may not establish systems that unlawfully or disproportionately score individuals on the basis of social behaviour or personality traits in ways that affect their access to rights and benefits. High-risk systems include those used for the safety monitoring of major infrastructure; assessment systems for education and worker training; recruitment and employee-placement systems; evaluation systems for essential public and private services; credit evaluation; the prioritisation of emergency rescue services; auxiliary use by the judiciary; automated driving technologies; healthcare applications; biometric identification systems; evaluation systems in criminal investigation and public security, criminal-analysis research, and evidence assessment by administrative bodies; and border inspection and management. The relevant authorities may adjust the lists of high-risk and excessive-risk systems according to their scale and extent of potential damage.
Peru classifies risks into four tiers: unacceptable risk, high risk, medium risk and low risk. Systems posing unacceptable risk are prohibited, for example those that modify people's behaviour through subliminal or deceptive methods in ways that can cause serious harm, and those that improperly evaluate individuals or groups on the basis of quantified social behaviour, producing adverse effects disproportionate to that behaviour. High-risk systems include those used for biometric identification and categorisation; the protection of critical infrastructure; access to and assessment in vocational education; the recruitment and dismissal of employees; assessments related to social programmes and emergency services; personal credit evaluation; support for judicial decisions; assessments and predictions in medical services; and criminal-risk assessment by law enforcement agencies. A system lacking a mechanism for interpreting its results is likewise considered high-risk. The competent authority may update the risk list in coordination with the relevant units and departments (3).
A critical analysis reveals the insufficiencies of these risk-based approaches. First, the criteria for moving systems from low-risk to medium-risk or high-risk categories, and for updating the high-risk lists, are unclear and lack specific trigger conditions or timeframes, potentially leading to regulatory stagnation. Second, the focus lies primarily on ex-ante conformity assessment of high-risk systems, while the frameworks for post-market monitoring, incident reporting and liability for sudden harm remain underdeveloped. Third, the categorisations in Brazil and Peru cannot fully capture context-dependent risks: the risk level of a system can vary significantly with its application scenario and implementation environment.
One strand of the risk-management approach focuses on protecting justice and equal treatment for all individuals, without discrimination on grounds of ethnicity, religion, gender or other characteristics. The EU, Brazil and Peru have all incorporated the anti-discrimination principle into their AI acts, but the implementation paths and effects differ, and all show gaps in ensuring substantive fairness.
The EU AI Act places the prohibition of discrimination among its core provisions. Within the category of high-risk AI systems, a system that tends to produce biased output under specific data conditions or environments (for example, in education and job-placement scenarios) is classified as high-risk and must comply with strict duties, such as ensuring the fairness of released information and strengthening human oversight, in order to mitigate potential bias.
Under Brazil's draft AI legislation, individuals whose rights may be infringed by the decisions, predictions or other outputs of AI applications have the right to request fair treatment in accordance with the law. The draft prohibits the implementation and application of AI systems capable of causing direct, indirect, unlawful or abusive discrimination, including: (1) significant adverse effects resulting from the use of sensitive personal information or personal attributes such as geographical origin, race, skin colour, ethnicity, gender, sexual orientation, socio-economic status, age, physical condition, disability, religion or political stance; (2) imposing disadvantages on members of certain groups, or aggravating their vulnerable condition, through the use of apparently neutral criteria.
Peru's AI law establishes a non-discrimination principle requiring that discrimination and bias must not arise, be reinforced or persist at any stage of the lifecycle of AI-based systems, so as to avoid discrimination on grounds of race, gender, ethnic origin, economic situation and other factors affecting social position, as well as religious belief and disability. In addition, systems with clearly discriminatory effects, such as those that exploit the weakness of disadvantaged groups or make unreasonable classifications based on quantified social behaviour, are designated "unacceptable risks" and their application is forbidden.
The anti-discrimination principles and provisions in the AI acts of the EU, Brazil and Peru all aim to protect people from discrimination on grounds such as gender, age and disability, but their functional emphases differ. Because algorithmic bias in education and employment carries a high risk of affecting social values, the EU imposes stricter requirements on the monitoring systems used by businesses, to prevent the automatic propagation of erroneous, biased predictions. Brazil seeks to shield weaker parties, such as ethnic minorities, and forbids any discriminatory action stemming from the use of sensitive information or personal traits. Peru proposes to regulate discrimination across the entire lifecycle of AI and explicitly prohibits, as "unacceptable risks", systems that exploit the vulnerability of disadvantaged groups. All three aim to avert the potential discriminatory consequences of AI and to promote fairness by limiting and supervising its applications.
6. Contextual Legal Transplantation and Addressing Legislative Deficiencies
A comparison of the AI regulatory regimes of Brazil and Peru with the EU framework reveals a general pattern of legal transplantation: elements of the EU system have to some extent been introduced into these countries' regulation. However, this process frequently amounts to copying the foreign legal structure without local adaptation. As noted earlier, a legal framework designed for a specific socio-technical context cannot meet the needs arising from a different background unchanged.
The first deficiency is the unwarranted assumption of uniform regulatory capacity. The EU's GDPR and AI Act presuppose sufficiently powerful national data protection authorities and a well-established judiciary to enforce the rules. Brazil's ANPD and the corresponding authority in Peru face severe constraints in funding, technical capability and enforcement power; consequently, demanding concepts such as proportionality assessments or strict pseudonymisation standards may prove impractical in application. Transplanted laws may thus produce a compliance gap, in which obligations exist on paper but are difficult to enforce in practice, undermining the overall implementation of supervision.
In addition, the transplanted frameworks usually fail to take adequate account of local demographic, economic and technological conditions.
Peru's special protection for "indigenous peoples in isolation" is an excellent example of local adaptation, but other problems remain unaddressed. The digital divide and low levels of digital literacy in some areas make it harder for the law to give effect to concepts such as "meaningful consent". Heavy compliance costs under strict EU-style requirements would be difficult for developing countries to sustain, eventually leaving them dependent on others' technologies. The current regulatory systems also lack graduated compliance pathways and support mechanisms for smaller local enterprises, such as small and micro-enterprises and startups.
Although the risk classification systems share a basic structure, they do not directly reflect the most urgent local problems. A risk identified as high in Europe may not carry equal weight in Brazil or Peru; for instance, the application of AI to agriculture or to forest patrolling in the Amazonian countries might warrant a different risk level than in Europe. The deficiency lies in the absence of adaptive, locally driven mechanisms for establishing and adjusting risk categories according to each country's own priorities and public-policy objectives. Mirroring the EU's Annex III list of high-risk systems without an essential, continuous local risk assessment is a serious shortcoming.
Overall, although the European Union, Brazil and Peru have made considerable progress in establishing AI laws, their legal systems exhibit key deficiencies concerning definitional ambiguity, enforcement capability, the adaptability of risk classification, and the effectiveness of bias mitigation. Moreover, the process of legal transplantation shows that merely adopting an advanced regulatory system is not enough. Deep contextualisation is required: when building legal systems for AI regulation, account must be taken of institutional capability, the level of economic development, the social environment, the state of technology and public sentiment; otherwise the regulations will lack legitimacy. Future legislation should leave no technical gaps in the legal system, while work also proceeds on reasonable, adequately budgeted implementation plans at the local level. Otherwise, these well-meaning laws will be either irrelevant to, or debilitating of, the very ecosystems they intend to regulate.
The meaning of legal transplantation is generally: “the transplantation of a certain legal rule or institution from a specific country to another country (or region).” Terms equivalent to “transplantation” include borrowing, absorption, emulation, importation, influence, introduction, and so on. From the perspective of functional comparative law, the purpose of legal transplantation is to solve social problems. The transplantation of the EU AI Act by Latin America aims to address the question of how to regulate artificial intelligence as an emerging technology, so as to prevent this new technology—for which humanity has no prior experience of governance—from infringing upon basic human rights in its application, thereby striking a balance between technology and human rights.
The existing AI legislation of various Latin American countries, in adopting the EU’s risk-based AI regulatory framework, prohibits as unacceptable risks practices that may infringe upon vulnerable groups—such as the inference of sensitive attributes from sensitive data and social scoring systems—with the goal of preventing algorithmic violations of the fundamental interests of vulnerable groups and the occurrence of discrimination. In other words, Latin America has adopted Europe’s risk-prevention legislation to protect the basic rights of citizens. Yet beneath this shared social agenda, the concrete problems the two regions face differ.
Take anti-racial discrimination as an example. The institutional demand to fight racial discrimination differs between the EU and Latin America. In the European Union, racial discrimination is often closely linked to historical migration and existing political and social problems, and it intensified after the refugee crisis. In the post-Cold War context, immigrants and asylum seekers from countries portrayed as suffering from "overpopulation" and "socio-economic instability" have been constructed in mainstream discourse as threats to economic prosperity and national identity, producing a new form of discrimination against specific impoverished groups: xenophobia in form, but racism in essence, that is, xeno-racism. Social anxieties following the 9/11 terrorist attacks have also contributed. At present, "Islamophobia", defined as "unfounded hostility towards Islam" together with fear or aversion towards Muslims, has become a new type of biological and cultural racism. It is, in effect, a fear and aversion towards outsiders regarded as "carriers of different cultures" who may disrupt social integration and national identity in the host country.
Exclusionary sentiment has been further amplified and systematised around the problem of migration. Immigrants and asylum seekers have been positioned by mainstream discourse, particularly by rising right-wing political frameworks within the EU, as potential threats to socio-economic security and public welfare, and have been associated with criminal activity and terrorism. These narratives have been institutionalised in specific laws, policies and bodies (such as border management agencies), thereby securitising migration and entrenching an oppositional, seemingly unbridgeable national-immigrant identity gap. Against this background, the EU's "global migration management" strategy and its internal policies have, in implementation, tended to treat certain non-European migrant groups as a "suspicious population" requiring control and supervision. Muslim men, especially young men from Arab and African countries, are more likely than others to be stopped and searched by the police; this is institutional religious and racial discrimination. In the name of "national security", freedoms and rights guaranteed by law are in fact restricted. Such practices may also send a social signal that hatred is "permitted", fuelling a cycle of hostility and discrimination against Muslims.
Therefore, the EU’s AI legislation classifying algorithms with significant impacts on employment and other sectors as high risk, with a specific focus on discrimination risks related to migration, is not an isolated, purely technical risk assessment. Rather, it forms part of a broader response to the core threats to fundamental rights posed by religion-based (anti-Islamic) and culturally “foreign” racism and xenophobia in the post-9/11 era. In the current context of AI development, such discriminatory attitudes may undermine algorithmic fairness and adversely affect the welfare and employment opportunities of these groups. Accordingly, the EU AI Act’s high-risk categorisation covers AI systems pertaining to race, employment, worker management, and access to self-employment opportunities, and requires conformity assessments before such systems are placed on the market.
Racial discrimination and structural inequality in Latin America are rooted in the colonial era and exhibit greater cultural embeddedness and institutional rigidity. Although the region's emerging artificial intelligence legislation, such as the relevant laws in Peru, explicitly mentions the protection of indigenous people, implementation is hampered by historical and social complexity. Espinoza et al. (2025) point out that many technical and social interventions in Peru over the years have failed for lack of cultural and ethical foresight, ignoring language barriers and the cultural significance of communities, and even infringing upon people's rights. Even in Latin America, an AI system that technically meets a "low-risk" standard may still produce actual exclusion and discrimination if it fails to account for indigenous languages such as Quechua and Aymara or for local cultural logic. To ensure the accessibility and accountability of AI interventions, culturally specific system design should be considered rather than simple risk classification.
Aloisi and De Stefano (2023) point out that AI systems act upon "groups, communities and populations" in professional contexts; a collective governance framework beyond individuals' rights over their data is therefore required. When indigenous people encounter algorithmic discrimination, they face not only information asymmetry and technological obstacles but also the absence of culturally sensitive grievance channels. Merely ranking AI systems that affect the fundamental rights of such vulnerable groups by risk level is not enough. Enforcement mechanisms that can accommodate collective claims and cultural differences need to be established, and indigenous communities need to be granted rights of consultation and supervision over AI applications.
The European Union and Latin America have both adopted a risk-based AI governance model, but their underlying problems differ: the EU seeks to close implementation gaps among its member states' systems, whereas in Latin America the need to rebuild institutional legitimacy and effectiveness arises from post-colonial background problems.
6. Results
Although both regions apply AI governance models based on risk classification, their core governance objectives diverge. The EU aims to bridge implementation gaps across its multinational system, while Latin America needs to rebuild institutional legitimacy and effectiveness against its post-colonial background, making cross-regional regulatory transplantation inevitably accompanied by functional divergence and practical implementation dilemmas.
7. Discussion
While both the European Union and Latin America have adopted risk-based AI governance models, their underlying challenges differ fundamentally: the EU must reconcile implementation gaps within a complex multinational framework, whereas Latin America needs to rebuild institutional legitimacy and effectiveness amid the legacies of colonial history.
The foregoing is just one instance of the functional divergences that may occur in the transplantation of an AI regulatory framework built on risk classification and risk prevention, and one problem that the regulation of AI might face. Generalising from this example, the legal regulation of artificial intelligence and other ground-breaking technologies in contemporary society must take account of local levels of technological development, social conditions and historical backgrounds. Although Latin America has transplanted the EU model, and the problems it aims to solve resemble those of the European Union at first glance, the causes of those problems, their specific manifestations and the corresponding solutions are not the same. Such problems cannot be solved simply by transplanting foreign law; instead, appropriate laws and related systems must be developed according to the local environment.
8. Conclusions
Latin American countries have systematically transplanted the EU AI Act’s risk-based regulatory framework as the core of their artificial intelligence legislation, aiming to draw on its high-standard model to govern emerging technologies and protect fundamental rights. However, transplanting such legal texts carries multiple risks of misalignment when profound differences in local context are ignored. These divergences appear not only in institutional enforcement capacity, economic development levels, and the digital divide but, more crucially, in the legislation’s inadequate consideration of local priority issues and socio-historical backgrounds. Although legal transplantation can provide a domestic template for governing new technologies, even under similar thematic concerns the concrete social problems and implementation pathways are shaped by local conditions. Effective AI governance cannot rely solely on the imitation of advanced legal texts. It must instead undertake deep contextual adaptation on the basis of transplantation, developing legal frameworks and supporting implementation mechanisms that truly fit local institutional capacity, social structure, and historical background.
Abbreviations

EU: European Union
AI: Artificial Intelligence
GDPR: General Data Protection Regulation
GPAI: General-Purpose Artificial Intelligence
TEU: Treaty on European Union
ANPD: National Data Protection Agency

Acknowledgments
The authors would like to express sincere gratitude to the reviewers for their valuable comments and suggestions on this manuscript.
Author Contributions
Yang Ximing: Conceptualization, Resources, Writing – original draft
Conflicts of Interest
The author declares no conflicts of interest.
Appendix
Appendix I: Core Terminology Definition
1. Legal Transplantation: The process of transferring a legal rule or institution from one jurisdiction to another, including borrowing, emulation, and introduction of regulatory frameworks.
2. Risk-Based Regulatory Framework: A regulatory approach that adjusts regulatory intensity and content according to the risk level of AI applications to balance innovation and security.
3. Regulatory Misalignment: The mismatch between transplanted legal rules and local institutional capacity, social context, technological level, and historical background, leading to regulatory ineffectiveness.
4. Xeno-racism: A form of discrimination that appears as xenophobia but is essentially racism, targeting immigrants and specific ethnic groups.
5. Islamophobia: Unfounded hostility toward Islam and fear or aversion toward Muslims, a typical religious and cultural racism in the EU.
6. General-Purpose AI (GPAI): A versatile AI system that can be applied to multiple scenarios and tasks, with wide adaptability and scalability.
Appendix II: Comparison of AI Risk Classification Systems (EU, Brazil, Peru)
Table 1. AI Risk Classification Systems of EU, Brazil, Peru.

Jurisdiction | Risk Categories | Core Prohibited/Regulated Behaviors
EU | Unacceptable Risk, High Risk, Medium-Low Risk | Prohibits social scoring, sensitive biometric classification, and real-time remote biometric identification (except in special cases); high-risk systems involve education, employment, infrastructure, etc.
Brazil (draft) | Excessive Risk, High Risk | Prohibits exploitation of vulnerable groups and illegal social scoring; high-risk systems cover infrastructure, employment, medical care, biometric identification, etc.
Peru | Unacceptable Risk, High Risk, Medium Risk, Low Risk | Prohibits deceptive behavior manipulation and unreasonable social evaluation; high-risk systems include biometric identification, vocational education, labor management, judicial assistance, etc.

References
[1] Alfiani FRN, Santiago F. A comparative analysis of artificial intelligence regulatory law in Asia, Europe, and America. Jusman Y, Mutiarin D, Paksie A, Saptutyningsih E, Pau Loke S, Khaliq A, et al., editors. SHS Web Conf. 2024; 204: 07006.
[2] The European Parliament, The Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (text with EEA relevance). Off J Eur Union [Internet]. 2024 Jul 12; L series(2024/1689). Available from:
[3] Authorize the travel of the Minister of Housing, Construction and Sanitation to France and entrust his official duties to the Minister of Production. Available from:
[4] The European Union. Consolidated version of the Treaty on European Union. Off J Eur Union. 2012 Oct 26; C 326(13): 1.
[5] Edwards L. The EU AI act: A summary of its significance and scope [Internet]. Ada Lovelace Institute; 2022 Apr. Report No. Available from:
[6] Congresso Nacional do Brasil. PROJETO DE LEI N°, DE 2023 (dispõe sobre o uso da inteligência artificial) [bill no., of 2023 (provides for the use of artificial intelligence)] [Internet]. 2023. Available from:
[7] National Council for Economic and Social Policy, National Department of Planning. National Policy on Digital Transformation and Artificial Intelligence. CONPES Document 3975. 2019.
[8] egulAite. AI Sovereignty in Latin America: Regional Digital Dependence and the Potential to Overcome It. Available from:
[9] Horna-Saldaña CJ, Perez Perez JE, Toro Galeano ML. Artificial intelligence in the preservation of native languages and bridging the information access gap for indigenous peoples. J Enabling Technol. 2025 Mar 5; 19(1): 63–75.
[10] Aloisi A, De Stefano V. Between risk mitigation and labour rights enforcement: Assessing the transatlantic race to govern AI-driven decision-making through a comparative lens. Eur Labour Law J. 2023 Jun; 14(2): 283–307. doi:
[11] Fekete L. The emergence of xeno-racism. Race Cl. 2001 Oct 1; 43(2): 23–40.
[12] Ghosh, D. The European Union’s Response to Islamophobia: An Assessment. Canadian Journal of European and Russian Studies. 2022, 15(1), 1-23.
[13] Espinosa Zárate Z, Camilli Trujillo C, Plaza-de-la-Hoz J. Digitalization in vulnerable populations: A systematic review in Latin America. Soc Indic Res. 2023 Dec; 170(3): 1183–207.