Review Article | Peer-Reviewed

Barriers and Enablers of AI Integration into Social Studies Education: A Systematic Review of University Lecturers' and Students' Experiences

Received: 15 April 2026     Accepted: 24 April 2026     Published: 16 May 2026
Abstract

This research explored facilitators of and barriers to the integration of AI in Social Studies education in the context of higher educational institutions (HEIs). Anchored in the Technology Acceptance Model (TAM) and directed by three research questions, the study assessed lecturers' and students' AI perceptions and AI competencies, as well as the various facilitators of and challenges to AI integration in Social Studies education, in order to inform policy and best practice. Employing a systematic review approach, the study reviewed 10 research papers published in peer-reviewed journals. The results show that the integration of AI in Social Studies education has potential, but that the level of preparedness is uneven. Although lecturers and students appreciate the transformative potential of AI for personalized instruction, increased engagement, and innovative teaching, these benefits are offset by the challenges of maintaining academic integrity, ethical concerns, bias, over-dependence on AI, and the loss of problem-solving and critical thinking skills. Digital literacy deficits, poor university governance, inadequate training, and a lack of AI policies were identified as the main barriers to the effective use of AI in universities. The evidence also points to a lack of educational equity, particularly in the Global South. The study suggests developing discipline-specific AI policy frameworks, strengthening AI literacy through continuous professional development programmes, establishing institutional policies on AI utilization, investing in digital infrastructure, integrating teaching and learning with AI into the curriculum, and incorporating the ethical use of AI into curriculum and assessment practices in universities.

Published in Higher Education Research (Volume 11, Issue 3)
DOI 10.11648/j.her.20261103.12
Page(s) 49-61
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2026. Published by Science Publishing Group

Keywords

AI Competencies, Barriers to AI Integration, Enablers of AI Integration, Social Studies Education, Universities

1. Introduction
The rapid expansion of AI is gradually transforming teaching, research, learning, administration, and assessment practices in most universities globally. The introduction of AI into education has marked a new era that signals a technological revolution in higher education. AI-supported chatbots, for instance, can quickly respond to students' questions, give detailed explanations of concepts, provide additional resources, offer individualized instruction, support students with their assignments, and help learners prepare for examinations. Moreover, AI platforms such as Grammarly, ChatGPT, QuillBot, and Copilot are now frequently employed by lecturers and students for personalized instruction, problem-solving activities, creative tasks, language editing, text paraphrasing, and immediate feedback to queries. In universities with large lecturer-to-student ratios (large class sizes), chatbots can mimic direct communication with teachers by posing questions and giving immediate feedback. AI also has the capacity to enhance university students' research and writing skills. Generative AI is now frequently used to support students to research and write like professionals without hindrance. GenAI platforms can perform functions such as reframing sentences, checking grammar, polishing drafts, and providing immediate feedback. In research settings, GenAI platforms are now employed by researchers to help frame research topics, design research objectives and research questions, formulate research hypotheses, review literature, generate new ideas, and facilitate the design of research projects. AI platforms are also used by some researchers to calculate sample sizes, propose statistical tools for data collection and analysis, transcribe data, and interpret research findings.
In Social Studies, AI, and especially generative AI, can enhance inquiry- and problem-based learning, critical thinking, and knowledge construction in a digital learning environment. AI algorithms can interpret students' learning patterns and give personalized recommendations and adaptive learning experiences. AI can also analyze large amounts of data to identify correlations, patterns, and trends relevant to Social Studies curriculum content. AI technology platforms like virtual reality (VR) and augmented reality (AR) can take students to real-world contexts where they can learn Social Studies concepts in situ. VR can be used to organize virtual excursions with the potential to reduce the time, risk, and resources associated with real excursions.
Despite these potentials, the use of AI in Social Studies education is uneven across most universities due to inadequate access to digital resources, limited IT readiness, and weak administrative support. More importantly, several empirical studies have demonstrated that, in educational settings, the perception of users is a key factor in determining the use of AI. University academic staff and students have mixed perceptions of AI, ranging from enthusiasm about how AI can improve productivity to concerns about its repercussions for academic honesty, discrimination, or a decline in problem-solving abilities. This perception gap, influenced by discipline, previous use of technology, and institutional readiness, offers a strong justification for a study designed to explain how practitioners of Social Studies view AI as a tool for teaching, research, administration, and assessment. From a socio-cognitive perspective, consolidating existing evidence on these perceptions would likely illuminate the socio-cognitive mechanisms involved in the use of AI in higher education.
Social Studies education, on the whole, has not yet fully benefitted from AI integration, owing to factors such as students' and lecturers' limited digital competencies, the multi-layered complexity of integrating AI at the lecturer and student levels, and the considerable changes with which the education system has had to cope following the advent of AI. Beyond attitudinal disposition, successful AI integration into teaching and learning requires that students possess a range of competencies that modern frameworks describe as cognitive, operational, pedagogical, and ethical. However, evidence from empirical research into the distribution and depth of these competencies within Social Studies education remains fragmented. Moreover, limited attention has been paid to how these competencies intersect with the development of 21st-century skills. Addressing these gaps is imperative for advancing a holistic understanding of AI literacy within discipline-specific teaching and learning contexts. Additionally, AI integration in Social Studies education is shaped by a constellation of enabling and constraining factors operating at the individual, institutional, and systemic levels. Enablers such as robust digital infrastructure, institutional policy support, and targeted professional development programmes have been demonstrated to be effective facilitators of meaningful AI integration, whereas barriers including digital divides, limited technical expertise, resistance to pedagogical change, and ethical concerns continue to impede its widespread implementation.
1.1. Problem Statement
The rapid development of AI in HEIs has brought many changes in how research, teaching, learning, administration, and student engagement occur in lecture halls, and in how students acquire 21st-century skills. Recent studies have demonstrated that AI technology, and especially generative AI, has the potential to personalize learning and teaching and to boost students' performance and engagement in different learning contexts. Notwithstanding these prospects, the integration of AI into Social Studies education remains uneven, complex, and fraught with multidimensional challenges, particularly within a discipline-specific context such as Social Studies, where interpretive reasoning, ethical reflection, and contextual analysis are central to the curriculum.
Emerging empirical evidence posits that effective AI adoption is impeded by a constellation of structural, pedagogical, and socio-cultural barriers. These include insufficient digital infrastructure, limited digital literacy among some lecturers and students, high implementation costs, and inadequate institutional support frameworks. Furthermore, resistance from academics, often rooted in concerns about academic integrity, epistemic authority, and the erosion of critical thinking, continues to limit AI adoption. In parallel, students' use of AI introduces tensions related to overreliance, superficial learning, and ethical ambiguity, thereby complicating its pedagogical value.
Within African higher education settings, these challenges are further exacerbated by systemic inequalities, policy incoherence, and limited institutional readiness for pedagogical and digital transformation. While some studies have examined AI adoption in STEM and technical disciplines, there remains a paucity of comprehensive, discipline-specific synthesis focusing on Social Studies, where pedagogical goals extend beyond technical proficiency to include civic competence, critical consciousness, environmental sustainability, and socio-political awareness. Moreover, Social Studies education is currently challenged by rigid curricula, large class sizes, the curriculum's inability to meet the diverse needs of learners, and the use of outdated teacher-centered instructional methods that facilitate rote memorization and stifle students' critical thinking and problem-solving skills.
Additionally, the existing literature focuses on either student or lecturer perspectives in isolation, thereby ignoring the dynamic interplay of experiences that shapes AI integration in authentic classroom settings. AI adoption is not only constrained by barriers; it is also facilitated by enabling factors such as institutional policy support, professional development opportunities, pedagogical innovation, and positive user perceptions. However, these enablers are inconsistently documented and insufficiently theorized, especially in relation to how they intersect with contextual constraints in developing educational systems. Consequently, there exists a critical gap in the literature for a systematic, integrative review that synthesizes both barriers to and facilitators of AI integration in Social Studies education, drawing on the lived experiences of university lecturers and Social Studies students. Addressing this gap is critical to informing evidence-based policy, guiding pedagogical practice, and advancing theoretical understanding of AI adoption in context-specific learning environments. Without such a synthesis, efforts to mainstream AI in Social Studies risk remaining fragmented, inequitable, and pedagogically misaligned.
1.2. Research Questions
The following questions provided a framework for the study.
1) What evidence exists concerning university students' and academic staff's perceptions of AI use as a tool for research, teaching and learning, administration, and assessment of learning in Social Studies?
2) What is the level of university students' and lecturers' AI cognitive, operational, and pedagogical competencies, and their competencies in the ethical and responsible use of AI, and how are these competencies measured?
3) What factors serve as enablers of and barriers to the integration of AI into research, teaching and learning, administration, and assessment of learning in Social Studies?
2. Literature Review
This review is grounded in the Technology Acceptance Model (TAM) propounded by Davis (1989). TAM theorizes that user acceptance of a new technology is influenced by perceived usefulness and perceived ease of use. Perceived usefulness is defined as the degree to which a user believes that employing a particular technology (in this case AI) would improve job performance. Perceived ease of use, on the other hand, is defined as the degree to which a person believes that using a particular technology (in this context AI) will be free of effort. Figure 1 depicts the concept map of the Technology Acceptance Model (TAM) adopted by the study.
Figure 1. Technology Acceptance Model (TAM) Adapted from Davis (1989) for the HEI Context.
The theory suggests that when an individual is presented with a new technology (in this case AI), a number of factors come into play. Among these, perceived usefulness and perceived ease of use determine how and when the individual will use the technology. TAM does not, however, consider external factors such as economic conditions, suppliers, customers, and competitors.
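The relationships described above are commonly summarized in TAM research through a simple structural formulation. The sketch below is illustrative only (it is not taken from the reviewed papers, and the attitude construct ATT follows the standard Davis formulation rather than anything specific to this review):

```latex
\begin{aligned}
\text{PU}  &= \beta_{1}\,\text{PEOU} + \varepsilon_{1} \\
\text{ATT} &= \beta_{2}\,\text{PU} + \beta_{3}\,\text{PEOU} + \varepsilon_{2} \\
\text{BI}  &= \beta_{4}\,\text{ATT} + \beta_{5}\,\text{PU} + \varepsilon_{3}
\end{aligned}
```

where PU is perceived usefulness, PEOU is perceived ease of use, ATT is attitude toward using AI, BI is behavioural intention to use AI, and the \(\beta\) coefficients capture the strength of each hypothesized influence.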
3. Methodology
3.1. Research Design
The study employed a Systematic Literature Review (SLR) design, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Systematic reviews are suitable for synthesizing empirical evidence, identifying research gaps, and generating insights on fast-evolving issues such as AI adoption in HEIs. Figure 2 presents a diagrammatic illustration of the PRISMA model used in the study.
Figure 2. PRISMA Flow Diagram for Study Selection (adapted from Page et al., 2021).
3.2. Search Strings/Strategies
Boolean search operators were adopted in this systematic review. Example search query: (“Artificial intelligence” OR “Generative AI” OR “Machine Learning”) AND (“Higher Education” OR “University” OR “Higher Educational Institutions”) AND (“Social Studies lecturers and students” OR “University lecturers and students”) AND (“Lecturers and students AI competences” OR “University lecturers and Social Studies students AI literacy” OR “University lecturers digital competencies”) AND (“How digital competences are operationalized and measured” OR “How digital literacy is measured” OR “How digital skills are assessed”) AND (“University students and lecturers’ perception of AI use in Social Studies education” OR “Lecturers and students perception of AI” OR “Students and lecturers attitude towards AI use”) AND (“Enablers of AI integration in Social Studies education” OR “Facilitators of AI utilization” OR “Promoters of AI uptake by universities” OR “Factors that promote AI integration in Universities”) AND (“Challenges impeding AI integration in Social Studies education in universities” OR “Barriers to AI adoption in Social Studies education in universities” OR “Factors impeding the adoption of AI by universities”).
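A search string of this kind, with synonyms joined by OR within concept groups and the groups joined by AND, can be assembled programmatically. The helper below is purely illustrative (the review itself was run through database interfaces; `build_query` and the shortened term list are hypothetical, not part of the study):

```python
def build_query(concept_groups):
    """Combine concept groups into a Boolean search string.

    Each inner list holds synonyms joined with OR; the groups
    themselves are joined with AND, mirroring the strategy above.
    """
    clauses = []
    for synonyms in concept_groups:
        quoted = " OR ".join(f'"{term}"' for term in synonyms)
        clauses.append(f"({quoted})")
    return " AND ".join(clauses)

# A shortened version of the review's search strategy:
groups = [
    ["Artificial intelligence", "Generative AI", "Machine Learning"],
    ["Higher Education", "University", "Higher Educational Institutions"],
    ["Enablers of AI integration in Social Studies education",
     "Facilitators of AI utilization"],
]
query = build_query(groups)
print(query)
```

Keeping the groups in a plain data structure makes it easy to rerun the same strategy across databases that share Boolean syntax, and to document exactly which synonym sets were used.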
3.3. Criteria for Inclusion and Exclusion
3.3.1. Criteria for Inclusion
Research papers were included in the review if they were peer-reviewed journal articles published between 2020 and 2026, indexed in Scopus or Web of Science, and focused on AI integration; the perceptions of lecturers and students towards AI as a tool for Social Studies education; AI competencies among Social Studies lecturers and students; how lecturers' and students' AI competencies are measured; and enablers of and barriers to AI integration in Social Studies education settings.
3.3.2. Criteria for Exclusion
Research articles were excluded from the systematic review if they were published earlier than 2020, or constituted conference abstracts, commentaries, editorial materials, book reviews, book chapters, or opinion-based publications. Furthermore, we excluded articles that focused primarily on the technical architecture of AI platforms without any mention of educational applications. In addition, all empirical studies concerning primary or secondary education were excluded. Moreover, to keep the quality of the work within an acceptable bound, we excluded publications not archived in reputable academic databases. Finally, all research articles published in languages other than English were excluded to avoid misinterpretation of findings and conclusions.
3.3.3. Data Extraction and Analysis
A well-designed data extraction matrix used in this review captured the research title, name(s) of author(s), type of study, key findings, and conclusions drawn by each study. The matrix was used to extract these data from each of the selected articles, which were then entered into Table 1 for further synthesis.
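By way of illustration, a matrix of this shape can be represented as one record per article and serialized for synthesis. This is a hypothetical sketch: the field names and the example record's wording are assumptions for demonstration, not the study's actual instrument.

```python
import csv
import io

# Columns of the data extraction matrix described above
FIELDS = ["title", "authors", "study_type", "key_findings", "conclusions"]

records = [
    {
        "title": "AI literacy in Social Studies education",
        "authors": "Yetişensoy and Rapoport (2023)",
        "study_type": "Case study",
        "key_findings": "Social Studies can play a key role in teaching AI literacy.",
        "conclusions": "More theoretical and applied studies are advocated.",
    },
]

# Serialize the matrix to CSV so entries can be collated into a summary table
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(records)
matrix_csv = buffer.getvalue()
print(matrix_csv)
```

A fixed schema like this keeps extraction consistent across reviewers and makes the resulting matrix directly importable into analysis tools.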
3.4. Data Quality Checks
Selected research articles were evaluated using the Mixed Methods Appraisal Tool (MMAT), risk-of-bias indicators, clarity of methodology, and the validity and reliability of the findings.
3.5. Characteristics of Included Journal Articles
The 10 included research articles differed in design and focus, ranging across AI adoption in higher education, AI adoption in Social Studies education, lecturers' and students' perceptions of AI, university students' AI competencies, the measurement of AI competencies, and enablers of and barriers to AI adoption. Table 1 presents a summary of the matrix used to extract the data from the 10 research articles reviewed in this study.
Table 1. Summary of Articles Findings.

Research Title

Author(s)

Type of Study

Key findings

Conclusions

Integrating AI in higher education: Perception, challenges and strategies for academic innovation.

Schmidt et al., (2025)

Exploratory mixed methods

While lecturers and students recognize AI's transformative pedagogical potential, they expressed concerns about its impact on critical thinking and academic integrity, and some students had misconceptions about AI. Lecturers and students had differing attitudes, perceptions, concerns, and AI competencies.

Students employ AI for academic tasks but struggle with reliability and ethical issues. The study advocated increased awareness, CPD programmes, and ethical guidelines to overcome the barriers to AI integration.

AI in higher education, opportunities and challenges: A review

AlBlooshi (2026).

An exploratory systematic review

AI platforms such as intelligent tutoring systems and GenAI create opportunities for individualized instruction, thereby increasing students' motivation and learning. However, concerns have been raised about the injection of fake narratives into generative AI tools, biases, privacy, and the impact of AI on the environment.

The study recommends capacity building for lecturers on the ethical use of AI, a clearly defined AI-use policy (including for postgraduate training) aligned with the university's vision, robust AI-resistant structures for the secure continuation of modern examinations, and ongoing investment in AI literacy projects for both lecturers and students.

AI and Social Studies education in Nigeria: A pathway to enhancing critical thinking in Nigeria

Bakare (2024)

Systematic review

AI has a transformative impact on Social Studies education, offering personalized learning, improved instructional pedagogies, and expanded access to educational resources. However, AI integration in Social Studies education is impeded by AI bias, the digital divide, privacy issues, and cost-related issues.

AI has potential to transform Social Studies education. However, to tap into these gains, institutions must put in place strategies to address the challenges that impede the digital innovation of AI in Social Studies education.

CIVIC: Five pillars for using AI in Social Studies education

Heafner and Maxwell (2025)

Desk review

Responsible and ethical use of AI in social studies requires a clear framework that aligns with the discipline’s emphasis on critical thinking, inclusivity, and civic engagement. However, challenges persist such as bias, data privacy, and ethical issues.

The integration of AI into Social Studies education has the potential to enhance inquiry-based learning, critical thinking, and civic engagement. However, the responsible use of AI requires policy guidelines that align with the values of Social Studies education.

Beyond the hype: Exploring faculty perception and acceptability of AI in teaching practices

Ofosu-Ampong (2024)

Cross-sectional survey

The majority of lecturers were willing to accept their students' use of AI. Teaching experience, institutional support for AI use, and attitude towards AI proved to be significant predictors of AI acceptance in education. Key factors influencing lecturers' acceptance of AI for their students include perceived pedagogical affordances, organizational policies and incentives, perceived complexity and usability, and socio-cultural context.

The majority of university lecturers support the use of AI by their students. However, challenges such as connectivity issues, lack of training on AI, the absence of university councils' approval of AI use, AI bias, and potential job displacement persist. Capacity building and policy guidelines should be designed to spearhead AI use in universities.

Barriers and Enablers to Artificial Intelligence (AI) Adoption in Administrative Functions in Public Universities in Ghana: A Case Study of the University of Education, Winneba

Boison (2025)

Convergent mixed methods

The study found inadequate digital infrastructure, limited AI literacy, ambiguous policy frameworks, and resistance to change as barriers to AI integration. However, strong perceived usefulness of AI tools, leadership support, and departmental readiness in selected units were identified as facilitators of AI integration.

Incorporating AI into university administration is an avenue for changing the core nature of the institution. To realize the promise of AI in the study university, there is a need for strategic thinking, capacity building, ethical safeguards, and a culture that encourages change based on the local context and needs.

AI literacy in Social Studies education

Yetişensoy and Rapoport (2023)

Case study

Social studies can play an important role in teaching AI literacy, which is part of digital literacy but is likely to become an independent and important citizenship qualification in the future.

It has been advocated that social studies scholars conduct theoretical and applied studies that point out the relationship between AI literacy and social studies. Such empirical studies will help develop new perspectives on the subject, revealing the potential role of social studies in teaching AI literacy.

Assessing teachers’ generative AI competencies: Instrument development and validation

Shi (2025)

Cross-sectional survey

The instrument for teacher AI competencies evaluation comprises five dimensions: (1) Technological proficiency of GenAI, (2) pedagogical competencies of AI in teaching, (3) preparing students with effective practices of AI, (4) AI-related professional development and communication, and (5) risk and ethical awareness of AI use in education.

The research addresses a gap in AI adoption in education by providing a comprehensive and validated tool for assessing teachers’ AI competencies required for technology integration. The instrument not only provides insights for targeted continuous professional development programmes but also supports efforts to align AI utilization with pedagogical, practical, and ethical considerations.

Artificial intelligence literacy among university students—a comparative transnational survey.

Mansoor et al., (2024)

Comparative transnational survey

The research found significant disparities in AI literacy levels among university students based on nationality, scientific specialization, and academic degree, while age and gender showed no notable impact. Malaysian participants scored higher on the AI literacy scale than participants from the African countries surveyed.

The study advocates assessing AI literacy levels across different societal segments and developing appropriate measurement scales for those competencies.

Drivers and barriers of AI adoption and use in scientific research

Bianchini et al., (2025)

Systematic review of large datasets

Social and collaborative teams are the strongest enablers of AI adoption. Institutional leverage matters most initially but declines over time. Access to high-performance computing (HPC) plays a limited role in determining AI adoption, especially in medical sciences and biology. This indicates that infrastructure deficits are less important than human and social factors in most disciplines.

The adoption of AI technology in scientific research is a socially driven process, informed more by collaborative networks and knowledge exchange than by institutional resources or infrastructure alone, especially as AI tools become increasingly democratized.

4. Discussion
The evidence emerging from the synthesis of the literature captured in Table 1 is discussed thematically in the following sections. Thematic analysis provides a rigorous framework for systematically identifying, interpreting, and reporting themes within the data, yielding a nuanced understanding of complex issues; it also facilitates the integration of findings across several studies and promotes transparency and rigor in data synthesis.
4.1. Lecturers' and Students' Perceptions of AI as a Tool for Social Studies Education
The reviewed literature revealed a complex and often ambivalent perception of AI among lecturers and students. Several empirical studies demonstrate that both groups acknowledge AI's transformative pedagogical potential, particularly in enhancing individualized instruction, research support, and instructional innovation. Others highlight AI's capacity to improve students' engagement, motivation, and access to knowledge, aligning with previous findings that AI enhances adaptive learning environments. However, this optimism is tempered by deep-seated concerns, especially around academic integrity, overreliance, and the erosion of critical thinking and problem-solving skills. This aligns with findings which advise that generative AI may encourage superficial learning if not pedagogically guided. Notably, the review identifies misconceptions about AI competencies among students, which contradicts the assumption in earlier studies that digitally native students possess a sophisticated understanding of emerging technologies. Furthermore, disciplinary considerations are significant: Social Studies, rooted in critical inquiry, civic reasoning, problem-solving, active citizenship, environmental consciousness, and ethical deliberation, requires context-sensitive AI integration frameworks, suggesting that generic AI adoption models may be insufficient.
The findings of this systematic review reveal a nuanced and dialectical landscape in which enthusiasm for artificial intelligence coexists with substantive pedagogical anxieties. This duality is neither incidental nor transitional; rather, it reflects a deeper epistemological tension between technological augmentation and the foundational aims of education. On one hand, the recognition of AI as a catalyst for instructional transformation underscores its capacity to reconfigure traditional pedagogical paradigms. Its affordances for adaptive learning, real-time feedback, and personalized knowledge pathways suggest a shift toward more learner-centered ecologies, where instructional delivery is increasingly responsive to individual cognitive profiles and learning trajectories. Within this framing, AI is not merely a supplementary tool but an infrastructural force capable of redefining how knowledge is curated, accessed, and internalized.
However, this optimism is circumscribed by critical concerns that interrogate the unintended consequences of such integration. The apprehension surrounding academic integrity signals more than a procedural challenge; it points to a fundamental disruption in how learning authenticity is conceptualized and assessed. The ease with which AI systems can generate coherent and contextually relevant outputs raises questions about authorship, originality, and the evaluative frameworks that underpin academic work. Consequently, the integrity debate must be situated within broader discussions about epistemic trust and the redefinition of intellectual labor in digitally mediated environments. Equally significant is the concern regarding cognitive atrophy, particularly the potential erosion of higher-order thinking skills. The delegation of analytical and problem-solving tasks to AI systems may inadvertently diminish students’ engagement in deep cognitive processing, thereby fostering a form of intellectual passivity. This risk is especially pronounced in contexts where pedagogical scaffolding is insufficient, allowing AI to function as a substitute rather than an enhancer of learning. The implication here is that the educational value of AI is contingent not on its capabilities per se, but on the intentionality and sophistication of its pedagogical deployment.
4.2. Lecturers' and Students' Digital Competency Readiness for AI Adoption
Evidence from the literature suggests uneven and inadequate digital competency readiness among some lecturers and students. For instance, one study discovered significant disparities in AI literacy across regions and disciplines, with students from developing countries exhibiting comparatively lower AI competencies. These findings concur with work identifying a global AI skills gap, particularly in the Global South, as a barrier to technology adoption in education. Similarly, limited AI literacy and low technical proficiency are repeatedly identified as critical barriers to effective AI adoption. In contrast, AI competency has also been conceptualized as a multidimensional construct, encompassing not only technical proficiency but also pedagogical alignment, ethical awareness, and communication skills. This broader framing corroborates digital competency frameworks that emphasize holistic educator preparedness. Interestingly, a social capital dimension has been introduced as well, suggesting that competency development is not purely individual but is influenced by collaborative networks and institutional ecosystems. This perspective extends beyond earlier competency models that focused primarily on individual skills.
The evidence synthesized in this review points to a structurally uneven landscape of digital competency readiness that complicates the effective integration of artificial intelligence in educational contexts. Rather than a uniform deficit, the pattern that emerges is one of stratified capability, shaped by geographical location, disciplinary orientation, and institutional capacity. The comparatively lower levels of AI literacy observed in developing contexts underscore a broader asymmetry in access to technological resources, training infrastructures, and epistemic exposure, thereby reinforcing existing global inequities in educational innovation. At the same time, the identification of limited technical proficiency among both lecturers and students suggests that the challenge is not confined to resource-poor settings but reflects a more pervasive misalignment between rapid technological advancement and the pace of pedagogical adaptation.
4.3. How Digital Competences Are Measured
The measurement of AI-related competencies in higher education is emerging but still fragmented. One validated multidimensional instrument identifies five main domains: technological proficiency, pedagogical integration, student preparation, professional engagement, and ethical awareness. This represents significant progress compared with earlier, less structured frameworks. However, cross-national measurement remains inconsistent, with variations in scales and contextual benchmarks limiting comparability. This supports the proposition that standardized global metrics for AI literacy are still underdeveloped. Moreover, positioning AI literacy as an emerging civic competency suggests that measurement should extend beyond technical skills to include critical AI awareness and societal implications, a perspective largely absent from earlier quantitative instruments.
4.4. Institutional Governance and Strategies for AI Integration in Higher Education
Institutional governance emerges as a decisive factor in AI integration. Specifically, the evidence identifies weak policy frameworks, infrastructure deficits, and leadership gaps as major constraints to AI adoption in Ghanaian universities. Conversely, the presence of leadership support and departmental readiness acts as a strong enabler of AI integration in HEIs. Moreover, several studies emphasize that strategic investments, institutional vision alignment, and collaborative ecosystems are vital for sustainable AI adoption in universities. This assertion is consistent with recommendations advocating whole-institution approaches to AI governance. However, the findings challenge earlier techno-deterministic assumptions by demonstrating that institutional context and governance structures often outweigh technological availability in determining successful AI integration.
4.5. Faculty Adoption and Training Needs
Lecturers' adoption of AI is informed by perceived usefulness, ease of use, and institutional encouragement and support, consistent with the Technology Acceptance Model (TAM). Similarly, research outlines teaching experience, attitudes, and organizational incentives as significant predictors of AI acceptance, aligning with previous studies. Nevertheless, substantial training gaps persist. Empirical studies identify the lack of CPD and structured training programmes as barriers to AI utilization in education. In agreement, scholars emphasize the need for targeted competency-based training, particularly in the ethical and pedagogical dimensions of AI use. These findings support recent empirical research arguing that faculty readiness, not technology preparedness, is the main barrier to successful adoption of AI in universities. However, the current review extends this by highlighting the discipline-specific needs of Social Studies educators, especially in fostering critical, ethical, and responsible use of AI for academic purposes.
The findings of this review indicate that lecturers’ adoption of AI is shaped by a complex interplay of cognitive, affective, and institutional determinants, reflecting established models of technology acceptance while also exposing their limitations in contemporary educational contexts. Perceptions of usefulness and ease of use remain central drivers, yet these factors are significantly mediated by organizational climates that either incentivize or constrain innovation. In this regard, institutional support emerges not merely as a facilitating condition but as a decisive force that legitimizes AI integration within pedagogical practice. At the individual level, teaching experience and professional attitudes further condition adoption patterns, suggesting that lecturers’ prior pedagogical orientations and openness to change critically influence their engagement with AI. Despite these enabling factors, the persistence of substantial training deficits reveals a structural disconnect between institutional aspirations for technological integration and the provision of adequate professional development pathways. The absence of continuous professional development frameworks and structured training programmes undermines lecturers’ capacity to translate theoretical acceptance into effective pedagogical application. This gap is particularly consequential given the increasingly complex demands associated with AI use, which extend beyond technical operation to encompass ethical judgment, instructional design, and critical evaluation of AI-generated outputs. Consequently, the emphasis on competency-based training reflects a necessary shift toward more holistic forms of professional preparation that integrate technical, pedagogical, and ethical dimensions.
4.6. Universities Policy Guidelines on AI Adoption
A recurring theme across the studies is the absence of formal AI policies. For instance, several studies report ambiguous or non-existent institutional policy guidelines, creating uncertainty among faculty and students. More importantly, some scholars strongly advocate comprehensive AI policies, including ethical usage frameworks, AI-resistant assessment systems, and institutional alignment with AI innovation goals. This aligns with global policy recommendations, yet contrasts with findings from earlier work suggesting that institutions were beginning to formalize AI governance structures. The present systematic review found that policy development remains uneven and context-dependent, especially in African higher education systems.
4.7. Enablers and Barriers to AI Integration in Social Studies Education in HEIs
The synthesis identifies a dual structure of enablers and barriers to AI adoption in Social Studies education. Perceived usefulness and pedagogical innovation, leadership support and institutional readiness, collaborative networks and social capital, and the expanding accessibility of AI tools were identified as enablers of AI adoption in Social Studies education in most universities. The barriers to AI integration, by contrast, were limited infrastructure and connectivity, low AI competencies and training gaps, the digital divide and cost constraints, and resistance to change rooted in socio-cultural factors. These findings are consistent with recent meta-analyses but extend them by highlighting contextual inequalities between Global North and Global South educational institutions.
The synthesis delineates a dynamic and interdependent configuration of enabling and constraining forces that collectively shape the trajectory of AI adoption in Social Studies education, revealing that technological integration is neither inherently progressive nor uniformly attainable. On the enabling side, the perceived pedagogical value of AI, particularly its capacity to stimulate instructional innovation and enhance teaching effectiveness, operates in tandem with institutional leadership, organizational readiness, and the diffusion of collaborative networks that facilitate knowledge exchange and peer-supported learning. The growing accessibility of AI tools further lowers entry barriers, creating new opportunities for experimentation and pedagogical transformation. However, these enabling conditions are persistently counterbalanced by deeply entrenched structural and contextual impediments. Limitations in infrastructure and unreliable connectivity significantly curtail the practical usability of AI systems, especially in resource-constrained environments, while deficiencies in AI competencies and the absence of systematic training frameworks inhibit meaningful engagement among educators and students alike. Additionally, economic constraints and the broader digital divide exacerbate disparities in access and participation, reinforcing patterns of exclusion that mirror global inequalities. Sociocultural resistance to change introduces another layer of complexity, as entrenched beliefs, institutional inertia, and apprehension toward technological disruption can impede adoption even in relatively well-resourced settings.
4.8. Ethical Considerations and Biases of AI
Generally, the integration of AI into higher education teaching, learning, research, administration, and assessment raises several ethical issues, including challenges connected to academic integrity, accuracy, AI bias, data privacy, transparency, intellectual property rights, and the environmental impact of this digital innovation. When AI is misused, it jeopardizes academic honesty and creates problems such as plagiarism, doubts about the authenticity of student submissions, and the uncritical use of AI-generated content in educational settings. Moreover, AI hallucination is another critical issue, wherein a system communicates misleading information as a result of its indiscriminate gathering of information from its sources. Generative AI platforms usually lack sufficient context, reliability, and the capacity to learn from experience, which can lead to the creation of false content, particularly when dubious sources are used. These tools provide answers that appear reasonable and confident, potentially misleading users who consume them uncritically. Similarly, researchers have observed the problem of fabricated citations, with one analysis finding that 69% of the references ChatGPT generated were fictitious, assembled from author names and journal titles taken from real publications and therefore difficult to detect. Likewise, another study discovered that a significant portion of the literature generated by ChatGPT across various disciplines did not exist upon verification. This exposes a concerning situation for educational and research environments in which the accuracy of information is highly valued. Bias is another deeply rooted problem when working with AI, stemming from the very nature of the training data: AI models can reproduce the biases already present in the data on which they are trained.
Ethical concerns are central and pervasive across all the articles reviewed. The key issues identified include algorithmic bias, data privacy and risks of surveillance, academic dishonesty and misuse, and the environmental cost of AI systems. In addition, one study proposes a discipline-specific ethical framework anchored in Social Studies attitudes and values such as equity, inclusivity, civic responsibility, respect, diversity, commitment to excellence, teamwork, truth, and integrity. This aligns with broader ethical scholarship but extends the discourse by embedding it within subject-specific pedagogy. However, the persistence of these ethical issues suggests that ethical governance has not kept pace with technological advancement, contradicting optimistic projections in the earlier literature on AI adoption in higher education.
4.9. Limitations of the Study
A major shortcoming of PRISMA-oriented reviews is their dependence on the methodological rigor of the included articles. Since we synthesize existing evidence rather than generate new primary data, any biases, inconsistencies, or methodological errors in primary studies can easily be transferred into the review. Moreover, high heterogeneity in research designs, populations, and outcome measures can complicate synthesis and limit comparability. Additionally, systematic reviews conducted under the PRISMA model are typically vulnerable to publication bias, whereby studies with statistically significant findings are more likely to be published and thus included. This may result in overestimation of effect sizes or skewed conclusions. More importantly, selective outcome reporting within studies can distort the evidence base. Researchers have observed that even comprehensive search strategies cannot completely eliminate this bias, especially when grey literature is underrepresented.
More importantly, undertaking a high-quality PRISMA-based systematic review is resource-intensive, demanding substantial time, expertise, and coordination. Research activities such as protocol design, database searches, screening, data extraction, and data quality checks require significant effort. Consequently, some reviews may become outdated by the time they are published, especially in rapidly evolving fields such as AI in higher education. In agreement, scholars highlight that the lag between evidence generation and synthesis can reduce the practical relevance of findings. Finally, while PRISMA enhances transparency and standardization, its structured approach can be overly rigid, particularly when dealing with complex, interdisciplinary, or qualitative research domains. Systematic reviews are primarily optimized for quantitative studies and may not sufficiently capture contextual nuances, theoretical insights, or emerging themes in qualitative research. It has been argued that overly standardized approaches may oversimplify complex phenomena and limit interpretive depth.
4.10. Areas for Further Research
The review identified several critical gaps that warrant future scholarly attention. First, there is a need for discipline-specific AI integration models in Social Studies education, given that existing frameworks remain largely generic and fail to capture the discipline's epistemological emphasis on critical inquiry, the building of desirable attitudes and values, civic competence, and ethical deliberation. Future researchers should therefore develop and empirically validate context-sensitive pedagogical frameworks geared towards Social Studies. Second, the persistent misconceptions about AI among university students challenge assumptions about digital-native competence. Further research should therefore examine students' conceptual understanding of AI, including how these misconceptions influence learning outcomes, critical thinking, problem-solving, creativity, and academic integrity. Third, the findings show a lack of standardized global metrics for measuring AI competencies. Comparative, cross-national research is required to develop validated, culturally responsive AI literacy frameworks that incorporate technical, pedagogical, and ethical dimensions. Fourth, limited empirical evidence exists on the long-term impact of AI use on higher-order thinking skills and the general academic performance of university students, particularly in the humanities and social sciences. Longitudinal studies should be conducted to explore whether AI enhances or undermines students' critical thinking, collaboration, creativity, problem-solving, civic engagement, and general academic performance over time. Moreover, while ethical issues are widely acknowledged, there is insufficient empirical research on operationalizing ethical AI frameworks in classroom practice. Future studies should go beyond conceptual discussions to test practical ethical guidelines and assessment methods within real university educational contexts.
Finally, the review exposes contextual inequalities between Global North and Global South institutions, particularly concerning infrastructure, access, and policy preparedness. Future research should prioritize equity-focused investigations that evaluate how socio-economic and institutional disparities shape AI adoption and outcomes.
5. Conclusions
The review demonstrates that AI integration in Social Studies education is marked by promise, complexity, and uneven preparedness. While lecturers and students recognize AI's transformative potential for enhancing individualized instruction, engagement, and pedagogical innovation, these gains are counterbalanced by significant challenges related to academic integrity, ethical issues, biases, over-reliance on AI, and the erosion of problem-solving, critical thinking, and creative thinking skills. The findings further reveal that digital competence gaps, weak institutional governance, insufficient training, and the absence of clear policy guidelines remain major obstacles to the effective utilization of AI in universities. More importantly, the study underscores that AI integration is not merely a technological issue but a socio-pedagogical and institutional challenge. Successful AI integration depends on the alignment of core competencies, governance structures, ethical frameworks, and discipline-specific pedagogies. Moreover, the evidence highlights stark inequalities between educational contexts, especially for disadvantaged institutions in the Global South. All in all, the adoption of AI in Social Studies education demands a holistic, context-aware, and ethically grounded approach, in which human-centered pedagogy remains central and technology serves as an enabler rather than a substitute for critical intellectual engagement.
6. Recommendations
Based on the evidence presented in this review, the following recommendations are made.
1) Universities and researchers need to create AI adoption models specific to Social Studies education that integrate 21st-century skills such as critical thinking, collaboration, civic engagement, problem-solving, and creativity.
2) Higher education institutions (HEIs) need to establish AI policy frameworks that are specific and actionable concerning academic integrity, ethical use, data privacy, and AI-resistant assessment strategies, in conjunction with international standards such as the UNESCO framework.
3) Universities need to provide continuous professional development (CPD) training for faculty members and implement AI literacy training for students, with an emphasis on the technical, pedagogical, and ethical aspects.
4) Social Studies curricula should incorporate ethical AI use, bias awareness, and critical AI literacy, alongside the redesign of assessment strategies to minimize misuse and promote authentic, higher-order learning outcomes in university students.
5) Governments and institutions of higher learning, particularly in the Global South, should prioritize investment in digital infrastructure, internet accessibility, and affordable AI tools to ensure equitable participation in AI-enabled higher education.
Abbreviations

AI: Artificial Intelligence
CPD: Continuous Professional Development
HEI: Higher Education Institution
MMAT: Mixed Methods Appraisal Tool
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
TAM: Technology Acceptance Model
VR: Virtual Reality

Author Contributions
Iddrisu Bariham: Data curation, Formal Analysis, Methodology, Project administration, Supervision, Validation, Writing – review & editing
Iddrisu Abdul-Hafiz: Investigation, Software, Visualization
Shakul Abdulai: Conceptualization, Resources, Writing – original draft
Conflicts of Interest
The authors declare no conflict of interest.
References
[1] Adiguzel, T., Kaya, M. H., and Cansu, F. K. (2023). Revolutionizing education with AI: exploring the transformative potential of ChatGPT. Contemp. Educ. Technol. 15: ep429.
[2] AlBlooshi, S. (2026). Artificial intelligence in higher education, opportunities, and challenges: a review. Front. Educ. 10: 1683968.
[3] Altman, D. G. (1994). The scandal of poor medical research. BMJ, 308(6924), 283–284.
[4] Alqahtani, T., Badreldin, H. A., Alrashed, M., Alshaya, A. I., Alghamdi, S. S., bin Saleh, K., et al. (2023). The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research. Res. Soc. Adm. Pharm. 19, 1236–1242.
[5] Atchley, P., Pannell, H., Wofford, K., Hopkins, M., and Atchley, R. A. (2024). Human and AI collaboration in the higher education environment: opportunities and concerns. Cogn. Res. 9: 20.
[6] Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electron. J.
[7] Bakare, M. I. (2024). Artificial Intelligence and Social Studies Education in Nigeria: A Pathway to Enhanced Learning and Critical Thinking in Nigeria. International Journal of Creative Research Thoughts, ISSN: 2320-2882.
[8] Bianchini, S., Müller, M., & Pelletie, P. (2025). Drivers and barriers of AI adoption and use in scientific research. Technological Forecasting & Social Change, 220 (2025) 124303.
[9] Bobula, M. (2024). Generative artificial intelligence (AI) in higher education: a comprehensive review of challenges, opportunities, and implications. J. Learn. Dev. High. Educ. 30.
[10] Boison, R. B. (2025). Barriers and Enablers to Artificial Intelligence (AI) Adoption in Administrative Functions in Public Universities in Ghana: A Case Study of the University of Education, Winneba. Canadian Journal of Educational and Social Studies, 5(4), pp.50- 80.
[11] Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21, Article 4.
[12] Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
[13] Castillo-Martínez, I. M., Flores-Bueno, D., Gómez-Puente, S. M., & Vite-León, V. O. (2024). AI in higher education: A systematic literature review. Frontiers in Education, 9, 1391485.
[14] Chan, C. K. Y., and Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20, 1–18.
[15] Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20, Article 22.
[16] Davis, F. D. (1989), "Perceived usefulness, perceived ease of use, and user acceptance of information technology", MIS Quarterly, 13(3): 319–340.
[17] Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M., Alalwan, A. A., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Wright, R. (2023). So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
[18] Falebita, O. S., & Kok, P. J. (2024). Strategic goals for artificial intelligence integration among STEM academics and undergraduates in African higher education: A systematic review. Discover Education, 3, 151.
[19] Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2022). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707.
[20] Francis, N. J., Jones, S., and Smith, D. P. (2025). Generative AI in higher education: balancing innovation and integrity. Br. J. Biomed. Sci. 81: 14048.
[21] Garzón, J., Patiño, E., & Marulanda, C. (2025). Systematic review of artificial intelligence in education: Trends, benefits, and challenges. Multimodal Technologies and Interaction, 9(8), 84.
[22] Greenhalgh, T. (2018). How to read a paper: The basics of evidence-based medicine and healthcare (6th ed.). Wiley-Blackwell.
[23] Heafner, T., & Maxwell, D., (2025). CIVIC: Five pillars for using artificial intelligence in social studies education. Contemporary Issues in Technology and Teacher Education, 25(4), 648-686.
[24] Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (2023). Cochrane handbook for systematic reviews of interventions (Version 6.4). Cochrane.
[25] Holmes, W., Bialik, M., & Fadel, C. (2022). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
[26] Joshi, B. M., & Khatiwada, S. P. (2024). Analyzing barriers to ICT integration in education: A systematic review. The Third Pole: Journal of Geography, 24, 25–45.
[27] Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., & Günnemann, S. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
[28] Labadze, L., Grigolia, M., and Machaidze, L. (2023). Role of AI chatbots in education: systematic literature review. Int. J. Educ. Technol. High. Educ. 20, 1–17.
[29] Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery.
[30] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2022). Intelligence Unleashed: An Argument for AI in Education. Pearson Education / UCL Knowledge Lab.
[31] Mansoor, H. M. H., Bawazir, A., Alsabri, M. A., Alharbi, A., & Okela, A. H. (2024). Artificial intelligence literacy among university students—a comparative transnational survey. Front. Commun. 9: 1478476.
[32] Mawunda, N. M. (2014). A framework for integration of ICTs in teaching and learning processes in secondary schools in Machakos sub-county. Unpublished Master’s Thesis, University of Nairobi, Kenya.
[33] Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2021). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097.
[34] Monzon, N., and Hays, F. A. (2025). Leveraging generative AI to improve motivation and retrieval in higher education learners. JMIR Med. Educ.
[35] Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.
[36] Organisation for Economic Co-operation and Development (OECD). (2023). OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem. Paris: OECD Publishing.
[37] Ofosu-Ampong, K. (2024). Beyond the hype: exploring faculty perceptions and acceptability of AI in teaching practices. Discover Education, (2024) 3: 38.
[38] Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71.
[39] Redecker, C., & Punie, Y. (2022). European framework for the digital competence of educators: DigCompEdu. Publications Office of the European Union.
[40] Schmidt, D. A., Alboloushi, B., Thomas, A., & Magalhaes, R. (2025). Integrating artificial intelligence in higher education: Perceptions, challenges, and strategies for academic innovation. Computers and Education Open, 9, 100274.
[41] Shi, L. (2025). Assessing teachers Generative AI Competencies: instrument development and validation. Education and Information Technologies (2025) 30: 23365–23384
[42] Sun, J. C., & Pratt, T. L. (2024). Navigating AI integration in career and technical education: Diffusion challenges, opportunities, and decisions. Education Sciences, 14(12), 1285.
[43] Tenakwah, E. S., Boadu, G., Tenakwah, E. J., Parzakonis, M., Brady, M., Kansiime, P., et al. (2023). Generative AI and higher education assessments: a competency-based analysis.
[44] Teo, T. (2023). Acceptance of technology in education: An updated review of the Technology Acceptance Model (TAM). Interactive Learning Environments.
[45] Thorne, S., Kirkham, S. R., & O’Flynn-Magee, K. (2004). The analytic challenge in interpretive description. International Journal of Qualitative Methods, 3(1), 1– 11.
[46] UNESCO. (2023). Guidance for generative AI in education and research. Paris: UNESCO.
[47] Vaismoradi, M., Turunen, H., & Sullivan, K. (2016). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing and Health Sciences, 18(1), 4-188.
[48] Williamson, B., & Eynon, R. (2022). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235.
[49] Yetişensoy, O., & Rapoport, A. (2023). Artificial intelligence literacy teaching in social studies education. Journal of Pedagogical Research, 7(3).
[50] Zawacki-Richter, O., Bond, M., Marin, V. I., & Gouverneur, F. (2023). Systematic review of artificial intelligence in higher education (updated trends and implications). International Journal of Educational Technology in Higher Education, 20, Article 10.
Cite This Article
  • APA Style

    Bariham, I., Abdul-Hafiz, I., Abdulai, S. (2026). Barriers and Enablers of AI Integration into Social Studies Education: A Systematic Review of University Lecturers and Students Experiences. Higher Education Research, 11(3), 49-61. https://doi.org/10.11648/j.her.20261103.12