Attitudes, Knowledge, Skills, and Usage of ChatGPT in Educators and Healthcare Professionals. Health Educ Health Promot 2025; 13 (3) :579-586 URL: http://hehp.modares.ac.ir/article-4-81390-en.html
Introduction
The rapid evolution of artificial intelligence (AI), particularly through models like ChatGPT, has begun reshaping key sectors, especially education and healthcare. ChatGPT, a large language model (LLM) developed by OpenAI, has demonstrated significant potential in enhancing both fields by facilitating communication, improving learning processes, and supporting clinical decision-making [1, 2].
In education, ChatGPT has shown great promise in facilitating personalized learning experiences. It allows educators to generate tailored educational content, design interactive learning scenarios, create quizzes, and develop role-playing activities that enhance student engagement and critical thinking [3-5]. The technology’s ability to support self-directed learning by offering immediate feedback and personalized instruction is particularly valuable in remote learning environments, such as online courses or language education [6]. However, the adoption of ChatGPT in education is not without challenges. Concerns about its potential to replace educators, create dependency on AI, and generate inaccurate content highlight the importance of understanding educators’ perspectives on its integration [7, 8]. Research has shown that while educators acknowledge the potential benefits of AI, they also emphasize the need for cautious integration that ensures the preservation of human interaction and the accuracy of content [9].
Similarly, in healthcare, the application of ChatGPT holds great potential to transform clinical practices, streamline documentation processes, and support decision-making. The model has been evaluated for its ability to assist in generating clinical notes, providing differential diagnoses, and facilitating patient communication [10, 11]. Recent studies in low- and middle-income countries (LMICs) have provided further insight into how healthcare students and professionals perceive and engage with AI tools like ChatGPT.
In a cross-sectional study conducted in Iran, Mousavi Baigi et al. [12] found that while the majority of medical and paramedical students hold optimistic views about AI, a significant proportion demonstrates conceptual gaps in understanding its core definitions and functions, highlighting the need for targeted AI literacy initiatives in the healthcare curriculum. Furthermore, in a systematic review of AI-based automated clinical coding systems, the same group of researchers identified both the transformative potential and the practical challenges of integrating AI into medical documentation. Their analysis revealed that while AI models can enhance coding efficiency and accuracy, issues such as transparency, ethical concerns, and interpretability remain major barriers to adoption [13]. These findings underscore the complex interplay between opportunity and readiness in AI integration within healthcare, reinforcing the importance of assessing attitudes and competencies among stakeholders prior to implementation.
ChatGPT has also shown utility in medical education, where it can help students prepare for licensing exams, enhance their knowledge base, and facilitate self-learning through simulated case-based scenarios [14, 15]. Furthermore, ChatGPT’s ability to synthesize complex medical data and assist in patient education, particularly for chronic disease management, suggests that AI can play a pivotal role in improving patient outcomes [16]. However, healthcare professionals have expressed concerns regarding data privacy, the ethical implications of relying on AI for medical decisions, and the accuracy of AI-generated recommendations [17, 18]. Despite its promise, widespread adoption in healthcare requires careful consideration of these challenges and an understanding of healthcare professionals’ readiness to incorporate AI tools like ChatGPT into their practice [19, 20].
Despite a growing number of individual studies examining the application of ChatGPT in either education or healthcare, there is currently no comprehensive synthesis that evaluates the perceptions, skills, and preparedness of professionals across both sectors. This systematic review was therefore conducted to bridge this gap by aggregating and analyzing findings from diverse international studies. The aim of this review was to examine the existing evidence on attitudes, knowledge, skills, and usage of ChatGPT among educators and healthcare professionals worldwide. By identifying trends, gaps, and challenges, this study aimed to provide a foundational understanding of how ChatGPT is currently perceived and applied, particularly in light of socioeconomic disparities.

Information and Methods
Study design
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure an organized and transparent synthesis of the available evidence [21]. A comprehensive literature search was performed across PubMed, Embase, Scopus, and Web of Science, covering October 30, 2022, to June 13, 2024. The search was designed to identify studies that focused on the attitudes, knowledge, skills, or use of ChatGPT by teachers and healthcare professionals. The search terms combined relevant keywords and controlled vocabulary (e.g., MeSH and Emtree terms), tailored for each database. The following Boolean search strategy was applied across all databases: (“Attitude” OR “Knowledge” OR “Awareness” OR “Skill” OR “Usage” OR “Perceptions” OR “Acceptance” OR “Practices” OR “Intention” OR “Sentiment”) AND (“Chatbot” OR “ChatGPT” OR “Chat Generative Pre-Trained Transformer”). The search was restricted to English-language databases, but studies in all languages were considered for inclusion.
Only survey-based studies, including qualitative, quantitative, and mixed-methods research, that examined teachers’ and healthcare professionals’ attitudes, knowledge, skills, or use of ChatGPT were included. Exclusion criteria encompassed review articles, studies focusing on non-teacher or non-healthcare populations (e.g., the general public, university students), and studies for which full-text access was unavailable. A total of 14 eligible studies were included in the final synthesis (Figure 1).
Figure 1. Flowchart of the study selection
The quality of the included studies was assessed using the mixed methods appraisal tool (MMAT) [22], a validated tool for evaluating various study designs, including qualitative, quantitative, and mixed-methods research. Studies were initially screened against two preliminary questions to confirm their relevance. Subsequently, each study was assessed on five specific methodological criteria and rated on a scale from 1 to 5, with a maximum score of 5. Studies scoring less than 3/5 were excluded to ensure methodological rigor. This assessment was conducted independently by two reviewers, with discrepancies resolved through discussion and consensus.

Findings
A thorough systematic review was carried out, starting with an extensive literature search across four major databases: PubMed, Embase, Scopus, and Web of Science. This search initially retrieved a total of 15,148 records (PubMed: 1,717; Embase: 1,791; Scopus: 6,135; Web of Science: 5,505). After removing 4,423 duplicate entries, 10,725 distinct studies remained for further screening. These records underwent title and abstract screening, which led to the selection of 90 studies for full-text review. After a detailed eligibility assessment of the 90 full-text articles, 14 studies [23-33] were found to meet the inclusion criteria and were included in the final systematic review.
Of the 14 reviewed studies, 9 studies (64.3%) received the maximum quality score of 5 out of 5, indicating excellent methodological quality. The remaining 5 studies (35.7%) scored 4 out of 5, reflecting minor methodological limitations, primarily related to the partial fulfillment of one criterion. These scores confirm the overall robustness of the included research and support the reliability of the conclusions drawn (Table 1).
Table 1. Quality assessment of included studies using the mixed methods appraisal tool (MMAT) (2018)
Of the 14 studies included, 4 studies (28%) were conducted in high-income countries (HICs), namely the United States [32], Hong Kong [27], South Korea [26], and Tunisia [30], while 10 studies (72%) were carried out in LMICs, including India [23, 24], Indonesia [6, 7], Pakistan [31], Singapore [27], Saudi Arabia [20, 33], and South Africa [29]. The total sample size across all 14 studies was 4,343 participants. The study conducted by Temsah et al. [20] in Saudi Arabia reported the largest sample size, comprising 1,057 participants, while Iqbal et al. [31] in Pakistan reported the smallest, with just 20 participants.
Among the 14 studies included, 7 studies (50%) focused on teachers, including schoolteachers, university professors, and pre-service teachers [6, 7, 23, 24, 26, 30]. These studies examined teachers across various educational levels and disciplines, including management, linguistics, and computer science. The remaining 7 studies (50%) focused on healthcare professionals, including physicians, nurses, and other healthcare personnel at different career stages [20, 25, 28, 29, 31-33] (Table 2).
Table 2. Summary of the characteristics of studies
Among the 14 studies reviewed, 8 studies (57%) reported a positive attitude toward the use of ChatGPT [23-28, 33]. Educators and healthcare professionals generally perceive this tool as a valuable resource that can enhance teaching, learning, and healthcare tasks.
For instance, in India [23], educators acknowledge the potential of ChatGPT to assist in teaching but express caution regarding its limitations, particularly in handling complex tasks and its lack of emotional intelligence. Similarly, healthcare professionals in Taiwan [25] have a relatively positive outlook, appreciating ChatGPT’s role in knowledge acquisition, though they emphasize the need for specialized guidance to ensure effective learning. In South Korea [26], pre-service teachers are enthusiastic about using ChatGPT for gardening education, finding its immediate response capability useful. However, negative attitudes are observed in Pakistan [31], where 40% of teachers express concerns about cheating and plagiarism. Similarly, in South Africa [29], concerns are raised regarding technological dependency and its potential impact on academic integrity, with 68% of educators expressing reservations.
Six studies (43%) focus on the use of ChatGPT, reporting a range of positive and diverse applications of this tool [25-28, 33]. In Taiwan [25], medical students use ChatGPT for complex tasks, such as searching for medical codes and creating self-assessment quizzes. Similarly, educators in India [23] and Indonesia [7] utilize it to create case studies, quizzes, and role-play scenarios. The tool is particularly useful for content generation, although limitations remain in areas such as finance [24]. Pre-service teachers in South Korea [26] use it for lesson planning and material discovery, observing increased student engagement. However, 40% of teachers in Pakistan [31] report minimal use, noting that the tool has had little impact in their classrooms.
In terms of knowledge of ChatGPT, 8 studies (57%) address the awareness levels of participants, revealing varied levels of familiarity with this tool [20, 23, 24, 29-33]. For example, in Saudi Arabia [20], 50% of healthcare workers are unfamiliar with ChatGPT, indicating a gap in awareness.
In Tunisia [30] and South Africa [29], higher awareness is reported, with 91.7% of faculty and 88.9% of students having used the tool, though students exhibit lower levels of awareness than faculty. Healthcare professionals in India [28] are more familiar with the career implications of ChatGPT, largely through social media exposure.
In the domain of ChatGPT’s skills, 5 studies (36%) focus on the skills and effectiveness of this tool [6, 7, 20, 27, 33]. In Hong Kong [27], the natural language processing capabilities of ChatGPT are praised for helping English language learners with grammar correction and sentence structure. Similarly, EFL teachers in Indonesia [6] draw on ChatGPT to improve language learning and provide alternative expressions. However, concerns arise regarding complex or subject-specific topics, as students in South Africa [29] noted generic responses that lacked depth in advanced subjects. Healthcare professionals in Saudi Arabia [20] also point out the tool’s lack of accuracy in medical decision-making, raising concerns about its precision. Despite these concerns, 77.1% of participants in Saudi Arabia [33] believe that ChatGPT can have a positive impact on their career trajectory and enhance their skills.

Discussion
This review provided a comprehensive synthesis of existing evidence on the attitudes, knowledge, skills, and usage of ChatGPT among educators and healthcare professionals, highlighting both the enthusiasm for its applications and the concerns surrounding its use. The integration of ChatGPT in education and healthcare has gained momentum worldwide, with particularly dynamic adoption patterns across both HICs and LMICs. A notable finding was the parallel treatment of educators and healthcare professionals across studies, yet with diverging focal points in how ChatGPT is perceived.
Educators frequently emphasized its pedagogical value (particularly in content generation, student engagement, and personalization), whereas healthcare professionals demonstrated more cautious optimism, often linked to the potential for task automation, documentation, or support in decision-making. This divergence suggests that attitudes toward ChatGPT are shaped not only by professional roles but also by the risk sensitivity and ethical expectations inherent in each domain. Healthcare settings, where human error may have more serious consequences, appear to foster greater skepticism about the reliability and accountability of AI tools. A key finding was the divergence between LMICs and HICs in terms of perceptions and adoption. While educators and healthcare workers in LMICs frequently emphasized ChatGPT’s utility in resource-constrained settings to improve engagement and facilitate learning, participants in HICs tended to highlight its risks, including issues of academic integrity, ethical concerns, and reliance on technology. These findings are consistent with previous systematic reviews on AI in education and healthcare, reporting that technology adoption in LMICs is often driven by necessity and limited infrastructure, whereas in HICs, adoption is shaped more by concerns about governance and quality assurance [8, 34]. Interestingly, despite greater familiarity with ChatGPT in studies based in HICs, enthusiasm was more pronounced in LMICs. This contrast may indicate that relative benefit, rather than absolute knowledge or access, drives positive perception. In contexts where educational and healthcare systems face chronic limitations, even moderate AI assistance (such as ChatGPT’s language or feedback functions) can be perceived as transformative. In contrast, participants in HICs may benchmark ChatGPT against higher standards of precision and ethical compliance, contributing to a more skeptical stance. 
This discrepancy has important implications for implementation strategies, which should be tailored to local expectations, constraints, and comparative baselines. Recent high-quality systematic reviews further support and contextualize our findings. For instance, Lee et al. [35], in a comprehensive review of 92 studies using General System Theory (GST) and the Knowledge, Skills, and Attitudes (KSA) framework, report a significant gap in attitudinal research and a strong emphasis on higher education and STEM subjects. Their analysis aligns with our findings, especially regarding the disproportionate focus on knowledge and skills while attitudes remain underexplored, a pattern also reflected in several studies included in our review. Similarly, Adarkwah et al. [36], in a scoping review of ChatGPT in healthcare education, highlight global disparities in adoption and ethical concerns, particularly in the African region. This reinforces our observations regarding LMICs, where enthusiasm for ChatGPT often coexists with infrastructural and ethical limitations. Additionally, Sallam [37] emphasizes the dual nature of ChatGPT’s impact: while it offers immense potential for improving learning, research, and clinical workflows, it also raises critical concerns such as bias, misinformation, plagiarism, and lack of transparency. Our results support these concerns, particularly in studies where healthcare professionals expressed hesitation due to trust and reliability issues.
Our data also revealed a misalignment between awareness, attitudes, and actual usage. In several studies, positive attitudes do not necessarily translate into frequent or effective use, particularly where institutional support or technical training is lacking. This suggests that awareness alone is insufficient; without accompanying skill development and structural support, the potential of ChatGPT may remain unrealized.
Conversely, in some LMIC settings, creative and adaptive usage emerges even in the absence of deep technical knowledge, highlighting the role of user agency and contextual innovation as enablers of AI adoption. Together, these studies underscore the urgent need for global AI literacy programs, ethical frameworks, and context-sensitive guidelines that enable the safe and equitable integration of ChatGPT across education and healthcare. The results also align with those of Mousavi Baigi et al. [38], showing that AI applications in occupational therapy hold promise for improving personalization and efficiency but require clear protocols to avoid misuse. Similarly, Du [39] demonstrates that intelligent classroom designs can increase learning efficiency, supporting our observation that ChatGPT is perceived as a complementary tool for teaching rather than a replacement for human educators. In this context, the importance of structured evaluation frameworks for digital tools becomes evident, particularly as the use of AI-powered platforms such as ChatGPT parallels the rise of digital educational games. Several studies have emphasized the need for systematic evaluation criteria to ensure pedagogical effectiveness and user engagement in digital learning environments [40].
Regarding knowledge and skills, this review found variability in awareness levels and proficiency across contexts. In LMICs, limited exposure to AI technologies is often reported, reflecting a gap that has also been highlighted in prior studies on AI literacy among students and healthcare workers [8]. In contrast, participants based in HICs display higher familiarity but continue to express doubts about ChatGPT’s accuracy in clinical or advanced academic applications. This contrast suggests that improving AI literacy should be prioritized globally, though the specific training needs differ across economic contexts.
Taken together, ChatGPT is broadly recognized as a promising tool, but its integration depends on addressing two critical gaps: AI literacy and training, particularly in LMICs, and ethical and governance frameworks, particularly in HICs. Without addressing these issues, adoption may remain partial or problematic. This study’s strengths include its rigorous methodology and comprehensive coverage of studies across diverse contexts. However, limitations should be noted: the diversity of included studies precluded meta-analysis, and heterogeneity in survey instruments reduced comparability across findings. Future research should build on these insights through longitudinal and intervention-based studies to evaluate not only perceptions but also measurable impacts of ChatGPT on learning and healthcare outcomes.
The findings from this review suggest that while LMICs are generally more enthusiastic about adopting ChatGPT, given resource constraints and the potential to overcome educational and healthcare challenges, HICs are more cautious, focusing on ethical concerns, accuracy, and reliability. Most studies show positive attitudes toward ChatGPT, with educators and healthcare professionals acknowledging its potential to enhance learning and improve healthcare practices. However, concerns about knowledge limitations, misuse, and lack of emotional intelligence remain obstacles to wider adoption. Future efforts should focus on training, raising awareness, and addressing ethical concerns to ensure that ChatGPT is used effectively and responsibly across both fields.

Conclusion
ChatGPT holds significant promise for transforming education and healthcare, particularly in resource-constrained settings.

Acknowledgments: This study is part of project No. 4030734 conducted by the Student Research Committee at Mashhad University of Medical Sciences, Mashhad, Iran.
We extend our sincere gratitude to the Student Research Committee and the Research & Technology Chancellor of Mashhad University of Medical Sciences for their financial support of this study. We also wish to thank the medical and paramedical students of Mashhad University of Medical Sciences who participated in this study and contributed to the completion of the research. Additionally, the authors acknowledge the assistance of ChatGPT (OpenAI) in providing English translation and improving the linguistic clarity of the manuscript. The final responsibility for the content of the manuscript lies solely with the authors.
Ethical Permissions: This work was approved by the ethics committee of Mashhad University of Medical Sciences (IR.MUMS.REC.1403.258).