Dinesh Deckker1, Subhashini Sumanasekara2
1(Wrexham University, United Kingdom), 2(University of Gloucestershire, United Kingdom)
This systematic review analyses peer-reviewed literature from 2010 to 2024, sourced from IEEE Xplore, Google Scholar, PubMed, and SpringerLink. Using targeted keywords such as "AI gender bias," "algorithmic fairness," and "bias mitigation," the review assesses empirical and theoretical studies that examine the causes of gender bias, its manifestations in AI-driven decision-making systems, and proposed strategies for detection and mitigation.
Findings reveal that biased training data, algorithm design flaws, and unacknowledged developer assumptions are primary sources of gender discrimination in AI systems. In education, these systems affect grading accuracy and learning outcomes; in workplaces, they influence hiring, evaluations, and promotions. Mitigation approaches fall into three main categories: data-centric (e.g., data augmentation and data balancing), algorithm-centric (e.g., fairness-aware learning and adversarial training), and post-processing techniques (e.g., output calibration). However, each approach faces implementation challenges, including trade-offs between fairness and accuracy, lack of transparency, and the absence of intersectional bias detection.
The review concludes that gender fairness in AI requires integrated strategies that combine technical solutions with ethical governance. Ethical AI deployment must be grounded in inclusive data practices, transparent protocols, and interdisciplinary collaboration. Policymakers and organizations must strengthen accountability frameworks, such as the EU AI Act and the U.S. AI Bill of Rights, to ensure that AI technologies support equitable outcomes in education and employment.
The integration of artificial intelligence (AI) into education and workplace systems has introduced both opportunities for efficiency and risks of perpetuating historical biases. Among these risks, gender bias remains a persistent and deeply rooted concern. AI tools used for student assessment, hiring, promotions, and performance evaluations have demonstrated tendencies to replicate and even intensify preexisting gender inequalities. These outcomes are often traced to biased training datasets, non-transparent algorithms, and the absence of fairness-focused design principles [1], [2].
Despite the growing attention to algorithmic fairness, the literature remains fragmented, with few studies providing an integrated view of how gender bias manifests differently across educational and professional AI applications. This review offers a novel contribution by systematically analysing peer-reviewed research across both sectors, categorising bias sources, synthesising detection and mitigation methods, and evaluating the real-world implications of ethical AI frameworks.
By critically examining empirical and theoretical works published between 2010 and 2024, this review aims to bridge disciplinary gaps, inform future AI design, and support policy interventions. It responds to a crucial research need: to develop unified strategies that address gender bias across multiple levels, namely data, algorithms, and institutional policy.
AI-driven recruitment systems often reflect historical hiring patterns that favoured men, leading to lower selection rates for equally qualified female candidates [3], [4]. Tools trained on male-dominated datasets have rejected resumes containing gender-coded language such as "women's chess club" [5].
Facial recognition systems exhibit significant accuracy disparities based on gender. Studies have shown lower recognition rates for female faces, particularly those with darker skin tones, due to biased training datasets [6], [7]. These errors not only affect identity verification but also have profound implications for security and law enforcement.
Educational technologies also demonstrate gender bias, particularly in automated grading and adaptive learning systems. Algorithms trained on biased data reflect gendered performance trends, resulting in skewed outcomes that disadvantage female students [8], [9]. Tutoring platforms may recommend more manageable tasks or offer less feedback to female learners, reinforcing gender-based learning disparities [10].
While some progress has been made through fairness-aware algorithms and explainable AI (XAI), implementation remains limited. Tools like Grad-CAM [11] and model cards [12] improve transparency but are rarely adopted in commercial settings [13]. Additionally, fairness frameworks often overlook intersectional dimensions such as race, class, and disability, narrowing their real-world effectiveness [14].
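To make the diagnostic, rather than corrective, role of such transparency tools concrete, the short sketch below uses scikit-learn's permutation importance as a simple stand-in for SHAP or Grad-CAM to check whether a gender-proxy feature dominates a hiring classifier's decisions. The data, feature names, and model are hypothetical illustrations, not artefacts of any reviewed study.

```python
# Minimal sketch: checking whether a gender-proxy feature drives a hiring model.
# Synthetic, hypothetical data; permutation importance stands in for SHAP/Grad-CAM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(size=n),          # years_experience (standardised)
    rng.normal(size=n),          # skills_score
    rng.integers(0, 2, size=n),  # gender_proxy (e.g., gender-coded club membership)
])
# Biased label: the outcome partly depends on the proxy, mimicking historical data.
y = ((X[:, 0] + X[:, 1] + 1.5 * X[:, 2]
      + rng.normal(scale=0.5, size=n)) > 1.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["years_experience", "skills_score", "gender_proxy"],
                     result.importances_mean):
    print(f"{name:<18} importance {imp:.3f}")
# A large importance for gender_proxy flags reliance on a protected attribute,
# but the diagnosis alone does not correct the model.
```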
This paper contributes to the field in three significant ways:
1. Cross-sector synthesis: Unlike prior studies focusing exclusively on either education or employment, this review unifies both domains under a single analytical framework.
2. Methodological rigour: The study employs a systematic approach to identify, categorise, and critically evaluate the most influential peer-reviewed research published between 2010 and 2024.
3. Policy relevance: The review incorporates a discussion of governance frameworks (e.g., EU AI Act, U.S. AI Bill of Rights), providing actionable insights for the implementation of ethical AI.
This study employed a systematic review methodology to evaluate peer-reviewed literature related to gender bias in artificial intelligence (AI) systems within educational and workplace contexts. The review followed structured protocols inspired by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [15] framework to ensure transparency and replicability.
A comprehensive search was conducted using four major academic databases: IEEE Xplore, Google Scholar, PubMed, and SpringerLink. The search covered studies published between January 2010 and March 2024, using combinations of the following keywords:
AI gender bias
Bias in AI hiring
Algorithmic fairness in education
Gender discrimination in AI
Bias mitigation in machine learning
Inclusion Criteria:
Peer-reviewed journal articles or conference papers.
Published between 2010 and 2024.
Focused on AI applications in education or workplace settings.
Discussed gender bias detection, impact, or mitigation.
Provided either empirical findings or theoretical frameworks.
Exclusion Criteria:
Non-peer-reviewed sources (e.g., blogs).
Studies unrelated to gender (e.g., focusing only on racial bias).
Technical papers without social or ethical context.
Non-English publications.
A PRISMA-style flow diagram [15] summarising the selection process is provided in Figure 1.
Fig. 1. PRISMA 2020 flow diagram outlining the study selection process.
To ensure systematic assessment, each selected study was evaluated based on:
Contextual domain: Education or workplace.
Bias category: Data-level, algorithm-level, or outcome-level bias.
Mitigation strategies: Data-centric, algorithm-centric, or post-processing methods.
Type of contribution: Empirical (e.g., experiments, case studies) or theoretical (e.g., frameworks, policy analysis).
The authors also recorded whether studies addressed intersectional bias, discussed ethical implications, and referenced existing governance policies such as the EU AI Act or the U.S. AI Bill of Rights.
D. Research Gap
While there is growing scholarly attention to the ethical and technical aspects of gender bias in AI systems, existing reviews often focus narrowly on either algorithmic fairness in general or gender discrimination in isolated contexts such as hiring or facial recognition. These studies typically overlook the combined impact of gender bias across both education and workplace environments, which are increasingly interconnected through AI-driven decision-making tools.
Furthermore, many prior reviews emphasize detection and mitigation strategies but fall short of integrating policy frameworks and ethical governance models into their analysis. The lack of attention to intersectional bias, where gender bias overlaps with other dimensions such as race, socioeconomic status, or disability, also leaves critical gaps in understanding how AI systems affect different groups simultaneously.
Our review addresses these deficiencies by:
Synthesising literature from both educational and employment contexts within a single framework.
Categorizing sources, impacts, and mitigation techniques of gender bias in a structured, comparative format.
Highlighting the role of recent policy developments (e.g., EU AI Act, U.S. AI Bill of Rights) in shaping ethical responses to gender bias in AI.
Calling for intersectional approaches to bias detection and mitigation.
By bridging disciplinary silos and connecting technical, ethical, and institutional perspectives, this review offers a more comprehensive understanding of gender bias in AI, an essential step toward the equitable and accountable deployment of AI in real-world settings.
This section synthesises findings from 11 representative studies selected for their detailed insights into bias types, mitigation strategies, intersectionality considerations, and policy frameworks relevant to AI applications in education and workplace settings.
Each study was evaluated across five key dimensions:
Domain: The primary focus area (Education, Workplace, or Both).
Bias Category: The level at which bias manifests (Data, Algorithmic, or Outcome).
Mitigation Strategy: The corrective or preventative approach (Data-centric, Algorithm-centric, Post-processing, or Policy-based).
Intersectionality: Whether intersecting axes of discrimination (e.g., gender + race) were considered.
Policy Framework: Whether the study aligned with or proposed formal governance strategies.
This evaluation matrix facilitated consistent classification across studies and provided a foundation for comparative analysis.
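As an illustration only, the coding scheme can be represented as a simple record structure; the field names and the sample entry below are hypothetical and do not reproduce the authors' actual extraction sheet.

```python
# Illustrative sketch of the study coding scheme used for comparative analysis.
from dataclasses import dataclass
from typing import Literal

@dataclass
class StudyRecord:
    citation: str
    domain: Literal["Education", "Workplace", "Both"]
    bias_category: Literal["Data", "Algorithmic", "Outcome"]
    mitigation: Literal["Data-centric", "Algorithm-centric",
                        "Post-processing", "Policy-based"]
    intersectional: bool     # considers intersecting axes of discrimination?
    policy_framework: bool   # aligns with or proposes formal governance?

# Hypothetical entry for a workplace recruitment study.
example = StudyRecord(citation="[5]", domain="Workplace", bias_category="Data",
                      mitigation="Data-centric", intersectional=False,
                      policy_framework=False)
print(example)
```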
Among the 11 analyzed studies:
6 studies focused on workplace bias, particularly algorithmic discrimination in recruitment systems (e.g., [5], [4]).
3 studies addressed educational bias, including grading algorithms and adaptive systems (e.g., [7], [10]).
2 studies spanned both domains, analysing systemic and multi-level biases (e.g., [2]).
These studies include both empirical (e.g., dataset evaluations, model testing) and theoretical contributions (e.g., policy reviews, fairness frameworks).
Biases were categorised and addressed as follows:
Bias Type:
Fig. 2. Distribution of bias types identified in the reviewed studies: algorithmic bias (n = 4), data-level bias (n = 3), outcome-level bias (n = 3), and systemic/intersectional bias (n = 1).
Mitigation Strategies:
Fig. 3. Distribution of included studies by mitigation approach category: data-centric (n = 4), algorithm-centric (n = 3), post-processing (n = 2), and policy-based (n = 4).
Some studies adopted hybrid approaches, addressing both technical and governance-level interventions.
Assessment was based on scope, methodological transparency, and practical relevance:
Table 1. Study Quality Assessment Based on Methodological Rigour and Scope
| Quality Tier | No. of Studies | Description |
| High | 4 | Multi-method, large datasets, applied policy frameworks |
| Medium | 5 | Methodologically sound but context-limited |
| Low | 2 | Conceptual only or lacked empirical grounding |
Workplace studies revealed predominant data and algorithmic biases affecting recruitment outcomes, e.g., [5], [17].
Education studies highlighted challenges in algorithm fairness and outcome disparities, e.g., [7], [8].
Policy-integrated research (e.g., [12], [16]) showcased frameworks such as model cards and fairness audits.
Intersectionality was explicitly addressed in only a few studies, pointing to a need for deeper multidimensional analyses.
While mitigation strategies are maturing, the field still lacks longitudinal evaluations of their effectiveness and scalability.
This review confirms that gender bias remains a persistent challenge in AI applications across both educational and workplace contexts. While the reviewed literature reflects growing awareness and sophistication in identifying and addressing bias, the effectiveness of proposed mitigation strategies varies significantly.
Data-centric approaches, such as data augmentation and rebalancing, are widely used (e.g., [4], [16]), but they rely heavily on the assumption that bias is primarily rooted in the dataset. This overlooks structural and historical inequalities that shape the data in the first place. Additionally, these methods can unintentionally oversample minority representations, leading to distorted distributions or performance trade-offs.
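The sketch below illustrates the rebalancing idea, and why it can distort distributions: the under-represented gender group is upsampled by duplication before training. The dataset and column names are hypothetical.

```python
# Minimal sketch of data-centric rebalancing: upsampling the under-represented group.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "skills_score": [0.9, 0.7, 0.8, 0.6, 0.85, 0.75, 0.95, 0.65],
    "gender":       ["M", "M", "M", "M", "M",  "M",  "F",  "F"],
    "hired":        [1, 0, 1, 0, 1, 0, 1, 0],
})

majority = df[df["gender"] == "M"]
minority = df[df["gender"] == "F"]

# Upsample the minority group (with replacement) to match the majority count.
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=0)

print(balanced["gender"].value_counts())  # equal group sizes after rebalancing
# Caveat: the added rows are duplicates, so the minority distribution is distorted
# and any historical bias encoded in the original labels is still present.
```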
Algorithm-centric methods, such as fairness-aware training and adversarial debiasing (e.g., [7], [3]), show promise in improving model behaviour during training. However, their implementation often requires advanced technical expertise and computational resources, which are not equally available across all organizations. Moreover, many of these models operate as black boxes, reducing interpretability and user trust [13], [23].
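For illustration, the sketch below applies one fairness-aware training approach, a demographic-parity constraint enforced via the reductions method in the open-source Fairlearn library; the synthetic data, feature construction, and choice of constraint are assumptions for exposition, not the method of any specific reviewed study.

```python
# Minimal sketch of fairness-aware (in-training) mitigation with Fairlearn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))              # applicant features (hypothetical)
gender = rng.integers(0, 2, size=n)      # sensitive attribute (0/1)
y = ((X[:, 0] + 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

# Reduction approach: wrap an ordinary classifier in a fairness constraint.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=gender)
mitigated_pred = mitigator.predict(X)

baseline_pred = LogisticRegression().fit(X, y).predict(X)
for name, pred in [("baseline", baseline_pred), ("mitigated", mitigated_pred)]:
    rates = [pred[gender == g].mean() for g in (0, 1)]
    print(f"{name}: selection rate group0={rates[0]:.2f}, group1={rates[1]:.2f}")
# The constrained model narrows the gap in selection rates, typically at some
# cost in raw accuracy, an instance of the fairness-accuracy trade-off noted in this review.
```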
Post-processing techniques, such as output calibration and ranking correction (e.g., [17]), are relatively straightforward to implement but are reactive rather than preventive. They treat the symptoms of bias after decisions are made rather than addressing underlying causes, and their effectiveness is typically limited to the specific application, with little generalisability.
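A minimal example of such a post-hoc correction is sketched below: group-specific decision thresholds are chosen so that both gender groups are selected at roughly the same rate. The scores are hypothetical model outputs; nothing about the model or its training data changes, which is precisely why the approach is reactive.

```python
# Minimal sketch of post-processing: per-group thresholds to equalise selection rates.
import numpy as np

rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(5, 2, 500),   # group "M": higher raw scores
                         rng.beta(3, 4, 500)])  # group "F": lower raw scores
group = np.array(["M"] * 500 + ["F"] * 500)

target_rate = 0.30  # desired selection rate for every group

decisions = np.zeros_like(scores, dtype=bool)
for g in ("M", "F"):
    mask = group == g
    # Per-group threshold: the (1 - target_rate) quantile of that group's scores.
    threshold = np.quantile(scores[mask], 1 - target_rate)
    decisions[mask] = scores[mask] >= threshold

for g in ("M", "F"):
    print(g, "selection rate:", round(decisions[group == g].mean(), 2))
# Both groups are now selected at ~30%, but the underlying score disparity,
# and whatever produced it, remains untouched.
```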
Policy-driven strategies such as model documentation [12] and fairness audits [32] are essential for accountability. However, uptake is inconsistent across sectors, and few policies are enforceable. Intersectional bias, addressed by only a minority of studies (e.g., [14]), remains a critical gap, especially when AI systems interact with overlapping axes of discrimination such as race, class, or disability.
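For illustration, a heavily simplified, hypothetical model card in the spirit of Mitchell et al. [12] is sketched below; the fields and values are assumptions for exposition rather than a prescribed schema or a documented real system.

```python
# Illustrative, hypothetical model card summarising a resume-screening model.
model_card = {
    "model_details": {"name": "resume-screener-v2 (hypothetical)",
                      "date": "2024-03", "type": "gradient-boosted trees"},
    "intended_use": "First-pass ranking of applications; not for final hiring decisions.",
    "factors": ["gender", "age band"],  # groups for which performance is reported
    "metrics": {"accuracy": 0.87, "selection_rate_gap_by_gender": 0.04},
    "training_data": "Historical applications, 2015-2022, rebalanced by gender.",
    "ethical_considerations": "Historical outcomes may encode discriminatory decisions.",
    "caveats": "Not evaluated for intersectional subgroups (e.g., gender and ethnicity).",
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```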
Table 2. Summary of Mitigation Strategies with Examples, Advantages, and Limitations
| Mitigation Strategy | Examples | Advantages | Limitations |
| Data-Centric | Data audits, rebalancing, augmentation [4], [16] | Addresses bias at the source | May reinforce structural inequalities; data availability |
| Algorithm-Centric | Fairness-aware training, adversarial debiasing [31], [2] | Tackles bias during model training | Requires technical expertise; interpretability issues |
| Post-Processing | Score calibration, fair ranking [17] | Easy to implement post hoc | Reactive, not preventive; limited scope |
| Policy-Based | Model cards, ethics audits, transparency tools [12], [14] | Enables accountability and governance | Enforcement is weak; adoption inconsistent |
| Explainability (XAI) | SHAP, LIME, Grad-CAM [18] | Enhances transparency and trust | Often only diagnostic, not corrective |
| Intersectional Analysis | Multi-dimensional bias evaluation [14], [8] | Reveals layered inequalities | Rarely applied; complex to operationalize |
Fig. 4. Conceptual framework illustrating the cycle of bias in AI systems. Data bias propagates into algorithmic bias, resulting in outcome bias. A feedback loop reinforces training data with biased outcomes. Interventions are categorised into policy-based (e.g., model cards, ethics audits, regulation) and technological solutions (e.g., fairness-aware algorithms, data rebalancing, XAI).
This systematic review analyzed 11 peer-reviewed studies spanning 2010 to 2024 to examine how gender bias manifests in AI systems and how such bias is detected and mitigated. The review encompassed applications in both education and the workplace, offering a comprehensive perspective across domains where AI-driven decisions can significantly impact individual opportunity and equity.
The findings show that:
Gender bias originates from biased training data, flawed algorithms, and a lack of ethical oversight.
Mitigation strategies fall into three main categories (data-centric, algorithm-centric, and post-processing), with emerging support for policy-level governance.
Many reviewed studies highlight the trade-off between fairness and performance, and a lack of intersectional bias detection persists.
Long-term, real-world evaluations of fairness interventions are notably absent, limiting the field's ability to gauge sustainable impact.
The most substantial contributions come from studies that integrate technical and ethical perspectives, such as those by Shrestha and Das [2], Mitchell et al. [12], and O'Connor and Liu [1]. These works advocate not only for improved models but also for structural changes in how AI is regulated, developed, and audited.
To move toward equitable AI systems, future work must:
Invest in explainable AI (XAI) tools that make fairness visible and actionable.
Mandate policy compliance mechanisms, such as those introduced in the EU AI Act and the U.S. AI Bill of Rights.
Expand the lens of analysis to include intersectionality, ensuring that AI systems do not disproportionately harm already marginalized communities.
Ultimately, fair AI is not only a technical challenge but a societal one, requiring collaboration among engineers, policymakers, educators, ethicists, and affected communities.
[1] S. O'Connor and H. K. Liu, "Gender bias perpetuation and mitigation in AI technologies: Challenges and opportunities," AI & Society, vol. 38, pp. 917–933, 2023. doi: 10.1007/s00146-023-01675-4.
[2] S. Shrestha and S. Das, "Exploring gender biases in ML and AI academic research through systematic literature review," Frontiers in Artificial Intelligence, vol. 5, 2022. doi: 10.3389/frai.2022.976838.
[3] A. L. Hunkenschroer and C. Luetge, "Ethics of AI-enabled recruiting and selection: A review and research agenda," Journal of Business Ethics, vol. 182, pp. 243–261, 2022. doi: 10.1007/s10551-022-05049-6.
[4] X. Ferrer, T. van Nuenen, J. M. Such, M. Coté, and N. Criado, "Bias and discrimination in AI: A cross-disciplinary perspective," IEEE Technology and Society Magazine, vol. 40, no. 1, pp. 72–80, 2021. doi: 10.1109/MTS.2021.3056293.
[5] J. Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, Oct. 10, 2018. [Online]. Available: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
[6] P. Terhörst et al., "A comprehensive study on face recognition biases beyond demographics," IEEE Transactions on Technology and Society, vol. 2, no. 4, pp. 199–212, 2021. doi: 10.1109/TTS.2021.3111823.
[7] H. Liu, J. Dacon, W. Fan, H. Liu, Z. Liu, and J. Tang, "Does gender matter? Towards fairness in dialogue systems," in Proc. Int. Conf. Computational Linguistics (COLING), Barcelona, Spain, Dec. 2020, pp. 4405–4415. doi: 10.18653/v1/2020.coling-main.390.
[8] Z. Slimi and B. Villarejo-Carballido, "Navigating the ethical challenges of artificial intelligence in higher education: An analysis of seven global AI ethics policies," TEM Journal, vol. 12, no. 2, pp. 548–554, 2023. doi: 10.18421/TEM122-02.
[9] L. Cheng, K. R. Varshney, and H. Liu, "Socially responsible AI algorithms: Issues, purposes, and challenges," Journal of Artificial Intelligence Research, vol. 71, pp. 1089–1121, 2021. doi: 10.1613/jair.1.12814.
[10] F. Kamalov, D. S. Calonge, and I. Gurrib, "New era of artificial intelligence in education: Towards a sustainable multifaceted revolution," Sustainability, vol. 15, no. 16, Art. no. 12451, 2023. doi: 10.3390/su151612451.
[11] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Venice, Italy, Oct. 2017, pp. 618–626. doi: 10.1109/ICCV.2017.74.
[12] M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, et al., "Model cards for model reporting," in Proc. Conf. Fairness, Accountability, and Transparency (FAT), Atlanta, GA, USA, Jan. 2019. doi: 10.1145/3287560.3287596.
[13] K. Holstein, J. W. Vaughan, H. Daumé III, M. Dudík, and H. Wallach, "Improving fairness in machine learning systems: What do industry practitioners need?" in Proc. 2019 CHI Conf. Human Factors Comput. Syst., Glasgow, Scotland, May 2019, pp. 1–16. doi: 10.1145/3290605.3300830.
[14] S. Guo, J. Wang, L. Lin, and R. Chen, "The impact of cognitive biases on decision-making processes in high-stress environments," Journal of Cognitive Psychology, vol. 33, no. 5, pp. 567–580, 2021.
[15] M. J. Page, J. E. McKenzie, P. M. Bossuyt, I. Boutron, T. C. Hoffmann, C. D. Mulrow, et al., "The PRISMA 2020 statement: An updated guideline for reporting systematic reviews," BMJ, vol. 372, no. n71, pp. 1–9, 2021. doi: 10.1136/bmj.n71.
[16] E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M. Vidal, et al., "Bias in data-driven artificial intelligence systems: An introductory survey," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, no. 3, Art. no. e1356, 2020. doi: 10.1002/widm.1356.
[17] M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy, "Mitigating bias in algorithmic hiring: Evaluating claims and practices," in Proc. Conf. Fairness, Accountability, and Transparency (FAT), Barcelona, Spain, Jan. 2020, pp. 469–481. doi: 10.1145/3351095.3372873.
[18] S. J. Yang, H. Ogata, T. Matsui, and N. Chen, "Human-centered artificial intelligence in education: Seeing the invisible through the visible," Computers and Education: Artificial Intelligence, vol. 2, pp. 1–14, 2021. doi: 10.1016/j.caeai.2021.100008.
[19] A. Köchling and M. C. Wehner, "Discriminated by an algorithm: A systematic review of discrimination and fairness in algorithmic decision-making in HR recruitment and development," AI and Ethics, vol. 1, pp. 1–17, 2020. doi: 10.1007/s40685-020-00134-w.
[20] A. Thieme, D. Belgrave, and G. Doherty, "Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems," ACM Trans. Comput.-Hum. Interact., vol. 27, no. 5, pp. 1–53, 2020. doi: 10.1145/3398069.
[21] A. Paullada, I. D. Raji, E. M. Bender, E. Denton, and A. Hanna, "Data and its (dis)contents: A survey of dataset development and use in machine learning research," Patterns, vol. 2, no. 11, Art. no. 100336, 2021. doi: 10.1016/j.patter.2021.100336.
[22] A. Asatiani, P. Malo, P. R. Nagbøl, E. Penttinen, T. Rinta-Kahila, and A. Salovaara, "Challenges of explaining the behavior of black-box AI systems," Journal of Management Science and Quantitative Economics, vol. 6, no. 1, pp. 1–23, 2020. doi: 10.17705/2msqe.00037.
[23] V. Hassija, V. Chamola, A. Mahapatra, A. Singal, D. Goel, K. Huang, et al., "Interpreting black-box models: A review on explainable artificial intelligence," Cognitive Computation, 2023. doi: 10.1007/s12559-023-10179-8.
[24] A. Nguyen, H. N. Ngo, Y. Hong, B. Dang, and B. T. Nguyen, "Ethical principles for artificial intelligence in education," Education and Information Technologies, vol. 27, pp. 13573–13593, 2022. doi: 10.1007/s10639-022-11316-w.
[25] M. Mirbabaie, F. Brünker, N. Frick, and S. Stieglitz, "The rise of artificial intelligence: Understanding the AI identity threat at the workplace," Electronic Markets, vol. 31, pp. 895–913, 2021. doi: 10.1007/s12525-021-00496-x.
[26] P. Budhwar, S. Chowdhury, G. Wood, H. Aguinis, G. J. Bamber, J. R. Beltran, et al., "Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT," Human Resource Management Journal, vol. 34, no. 1, 2023. doi: 10.1111/1748-8583.12524.
[27] A. Caliskan, P. A. Pimparkar, T. Charlesworth, R. Wolfe, and M. R. Banaji, "Gender bias in word embeddings: A comprehensive analysis of frequency, syntax, and semantics," in Proc. 2022 AAAI/ACM Conf. AI, Ethics, and Society (AIES '22), Oxford, UK, 2022, pp. 172–182.
[28] M. Roshanaei, "Cybersecurity preparedness of critical infrastructure: A national review," Journal of Critical Infrastructure Policy, vol. 4, no. 1, Article 4, 2023.
[29] S. Popenici, "The critique of AI as a foundation for judicious use in higher education," Journal of Applied Learning & Teaching, vol. 6, no. 2, pp. 378–384, 2023.
[30] N. Meade, E. Poole-Dayan, and S. Reddy, "An empirical survey of the effectiveness of debiasing techniques for pre-trained language models," in Proc. 60th Annu. Meeting Assoc. Comput. Linguistics (ACL), Dublin, Ireland, May 2022, pp. 1878–1898.
[31] B. Booth, L. Hickman, S. K. Subburaj, and S. K. D'Mello, "Bias and fairness in multimodal machine learning: A case study of automated video interviews," in Proc. 2021 ACM Conf. Fairness, Accountability, and Transparency (FAccT '21), Virtual Event, Mar. 2021, pp. 279–289.
[32] I. D. Raji, A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes, "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing," in Proc. 2020 Conf. Fairness, Accountability, and Transparency (FAT), Barcelona, Spain, Jan. 2020, pp. 33–44. doi: 10.1145/3351095.3372873.
[33] European Commission, "Proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act)," European Commission, Brussels, Belgium, 2021. [Online]. Available: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence.
[34] White House Office of Science and Technology Policy (OSTP), "Blueprint for an AI Bill of Rights: Making automated systems work for the American people," Washington, DC, USA, 2022. [Online]. Available: https://www.whitehouse.gov/ostp/ai-bill-of-rights.
[35] L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, and E. Vayena, "AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations," Minds and Machines, vol. 28, no. 4, pp. 689–707, 2018. doi: 10.1007/s11023-018-9482-5.
AUTHOR BIOGRAPHIES
Dinesh Deckker is a postgraduate researcher currently pursuing a PhD in Marketing. He holds a BA (Hons) in Business from Wrexham University, UK; an MBA from the University of Gloucestershire, UK; a BSc (Hons) in Computer Science from IIC University of Technology, Cambodia; and an MSc (Hons) in Computing from Wrexham University. His research interests include Artificial Intelligence, Social Sciences, and Linguistics. ORCID: https://orcid.org/0009-0003-9968-5934
Subhashini Sumanasekara is a postgraduate researcher with a strong interdisciplinary background in computing and education. She holds a BSc (Hons) in Computing from the University of Gloucestershire, UK; an MSc (Hons) in Strategic IT Management from the University of Wolverhampton, UK; a B.Ed (Hons) from IIC University of Technology, Cambodia; and an MA (Hons) in Education from Girne American University, Cyprus. Her research interests include Artificial Intelligence, Social Sciences, and Linguistics. ORCID: https://orcid.org/0009-0007-3495-7774