Validating an Instrument for Teachers’ Acceptance of Artificial Intelligence in Education
Abstract
As artificial intelligence (AI) receives wider attention in education, examining teachers’ acceptance of AI (TAAI) becomes essential. However, existing instruments measuring TAAI reported limited reliability and validity evidence and faced design challenges, such as not providing participants with an informed definition of AI. This study aimed to develop and validate a TAAI instrument and to provide sufficient evidence of its psychometric quality. Based on the literature, we first identified five dimensions of TAAI, including perceived usefulness, perceived ease of use, behavioral intention, self-efficacy, and anxiety, and then developed items to assess each dimension. We examined face and content validity through expert review and think-aloud sessions with pre-service teachers. Using the revised instrument, we collected responses from 274 pre-service teachers and examined item discrimination to identify outlier items. We employed confirmatory factor analysis and Cronbach’s alpha to examine construct validity, convergent validity, discriminant validity, and reliability. Results confirmed the dimensionality of the scale, resulting in 27 items distributed across five dimensions. The study exhibits robust validity and reliability evidence for the TAAI instrument, affirming its usefulness as a valid measurement tool.
Index Terms:
Artificial intelligence (AI), Teachers’ acceptance of AI (TAAI), Pre-service teacher, Instrument, Factor analysis
I Introduction
Artificial intelligence (AI) is showing increasing potential to reshape the landscape of future education, forming an emergent research frontier [1]. This new surge of research focuses on the substantial transformation that AI brings to teaching, learning, and assessment practices when a wide range of AI applications, such as chatbots, automated scoring systems, and intelligent tutoring systems, is integrated into education [2, 1, 3]. However, realizing this potential is not without challenges. Successful implementation of AI-based teaching is closely related to factors such as teachers’ perception of and attitudes toward AI, AI literacy, and ethical concerns. Among these factors, teachers’ willingness to use AI in their teaching is crucial [4]. If teachers are reluctant to use AI, this novel technology may be left unused without truly benefiting any students. This concern is not purely alarmism: “high access and low use” has repeatedly been seen in the history of educational technology due to teachers’ low acceptance [5, 6, 7]. Can AI be an exception?
We doubt it. In fact, AI drew substantial concerns when the concept was first introduced several decades ago. People worried that AI might substitute for humans and cause humans to lose control [8, 9]. As AI grows its autonomy in learning analytics and decision-making and increasingly takes on parts of teachers’ roles and responsibilities (e.g., tutors, teaching assistants), its applications in education draw significant, if not greater, concerns from stakeholders, including teachers [10, 8]. [11] reported teachers’ fear of AI due to the blurring boundaries between teachers’ roles and AI, which leads to criticism of AI for disrupting teachers’ expertise-based roles. Additionally, the complexity of AI applications and the additional effort needed to operate them could further reduce teachers’ willingness to use AI [3]. Beyond these known concerns, rumors of pseudo AI harms spread in the media further upset teachers and worsen their acceptance of AI [12]. Given these concerns, understanding teachers’ acceptance of AI (TAAI) in education has become one of the pivotal factors in this educational transformation process [13, 4, 14].
Some existing studies have assessed TAAI from different aspects, such as perceived ease of use, perceived usefulness, attitude toward AI, and self-efficacy [13, 4, 15]. However, the items used to assess TAAI were primarily limited to revised components of the technology acceptance model (TAM) [4, 15], without comprehensive validation in educational contexts, raising concerns about the psychometric quality and design of the items. Meanwhile, respondents’ lack of knowledge and awareness of AI can also bias the results, since most instruments have not provided participants with an introduction to AI and its use in education. Due to these constraints, previous studies are limited in understanding teachers’ perspectives on accepting AI in their teaching and in identifying the influential factors. Therefore, it is vital to develop a validated instrument for measuring teachers’ acceptance of AI [15].
To address this gap, we developed and validated an instrument to measure TAAI. We first conceptualized the construct and developed a theoretical framework, followed by developing items and conducting cognitive think-aloud interviews to revise the items. To validate the instrument, we collected data from 274 pre-service teachers with diverse disciplinary backgrounds. Using item and test analyses, we confirmed the TAAI instrument’s validity and reliability. The study answered three research questions: 1) What are the psychometric features of the TAAI items? 2) To what extent are the instrument items reliable in assessing teachers’ acceptance of AI? 3) How valid are the instrument items in assessing teachers’ acceptance of AI?
II Challenges of Measuring TAAI
II-A Psychometric Quality
Although measuring TAAI is an important topic and many studies have reported preliminary findings, the field is suffering from a lack of instruments with robust evidence of psychometric properties. Insufficient psychometric evidence will undermine the methodological rigor of the instrument, in turn significantly compromising the validity of the findings. To obtain a high-quality instrument, researchers must provide psychometric evidence in various aspects to ensure validity.
A valid instrument needs to be supported by validity evidence, including content, construct, convergent, and discriminant validity. To address content validity, a developed instrument needs to be put through expert review, pilot testing, and cognitive interviews; otherwise, the instrument lacks evidence of face and content validity [4, 16]. We have rarely seen research documenting sufficient content validity evidence. With regard to construct validity, studies generally use exploratory or confirmatory factor analysis, reporting fit indices and factor loadings to show the factor structure [15]. To maintain high construct validity, the number of items in a dimension should be no fewer than three; otherwise, the rigor of the measurement could be undermined [17]. Furthermore, items within the same dimension that carry very similar meanings warrant caution. Overly similar items may be redundant and should be reduced. For example, instruments measuring technology acceptance have integrated “anxiety” as one of the dimensions within TAM and related models [14, 16, 15]. However, the dimension used for measuring anxiety had a small number of items focused on similarly negative feelings that people have about AI, resulting in some redundancy in the instrument. Such item design may yield better reliability results, but it reduces the information that can be obtained from the instrument. Additionally, it can confuse respondents when they answer items with similar meanings.
Moreover, convergent and discriminant validity, as well as item discrimination, matter for an instrument’s psychometric quality. Convergent validity refers to whether items within the same dimension adequately represent their intended construct and converge on the same dimension, while discriminant validity means that each item should be strongly related to its own dimension and weakly related to other dimensions [18]. These two aspects of validity demand a clear conceptual framework for the instrument. Item discrimination indicates the power of items to distinguish between respondents with higher and lower scores. To maintain item discrimination, the developed items should be accurate and understandable to respondents, without confusing wording that could distort their responses [19].
However, few studies have taken all these aspects of validity into consideration, leaving the existing TAAI instruments without robust validity evidence. Other studies have developed instruments with sufficient validity evidence to assess teachers’ affective responses to AI, but these fall outside the scope of TAAI. For example, [20] developed an instrument to measure pre-service teachers’ AI perception based on the theory of planned behavior. They provided evidence of construct reliability, convergent validity, and discriminant validity. However, that instrument measures teachers’ perception of learning AI rather than TAAI, which is the focus of this study. To sum up, there are limited well-validated instruments to measure TAAI in education to date. The field needs instruments with psychometric quality evidence specifically designed to measure teachers’ acceptance of AI in education.
II-B Concerns related to participants’ knowledge of AI in daily life and education
Participants’ responses are shaped by their knowledge and understanding of the objects being measured. That is, teachers’ responses about AI acceptance can be affected by their knowledge and understanding of AI. TAAI instruments that overlook this issue may yield invalid conclusions.
As an emerging technology, AI has been widely used in a variety of applications in our daily lives. However, many people are unfamiliar with the concept of AI and unaware of AI applications, let alone its potential applications in education. A 2022 survey of 11,004 U.S. adults aimed at gauging people’s awareness of AI applications in daily life found that although 90% of the participants had heard of the term AI, merely one-third reported having extensive knowledge about AI and could correctly identify all the uses of AI provided in the survey (Pew Research Center, 2023). Similarly, the British Office for National Statistics released an article about public awareness, opinions, and expectations about AI and reported that only 17% of adults could identify AI usage in daily life [21]. These findings resonate with a recent study reporting a range of misunderstandings of AI [22]. Even though AI has been increasingly introduced into educational contexts, [23] found that many pre-service teachers were unaware of AI in their learning. Similarly, many teachers are unfamiliar with AI concepts and their various applications in daily life and education. Without knowing the concepts and applications of AI, teachers might provide inaccurate or biased responses when reporting their acceptance of and attitude toward AI in education.
The problem can be partly alleviated by offering stimuli, such as reading materials or video clips, at the very beginning of the survey. This approach is considered useful in helping participants recall their experiences of using a technology and gain a better understanding of it, thus eliciting more accurate responses during the test [4, 24, 25]. Studies on technology acceptance that involve people with limited knowledge of and familiarity with the specific technology have used this methodology [4]. For example, to evaluate children’s perceptions of conversational agents (CA), [25] allowed children to interact with the CA during the study, providing them with proximal, in-person experiences to better elicit their perceptions. Kim et al. [24] provided reading articles to students when investigating their perceptions of AI teaching assistants, given that these assistants are relatively new and have not yet been widely implemented. [4] provided reading material on educational artificial intelligence tools (EAIT) to support teachers’ understanding of the concept before evaluating their acceptance of EAIT. Additionally, [17] provided images of conversations with chatbots before surveying teachers’ attitudes towards chatbots in education. Notably, as an emerging technology, AI has not yet been widely integrated into classrooms, which limits teachers’ knowledge of AI in education. Therefore, providing stimuli is critical for obtaining reliable results when measuring TAAI. However, apart from the studies presented above, providing stimuli before the scale has rarely been seen in recent studies concerning TAAI [15].
III Materials & Methods
To address the issues mentioned above, and aided by guidelines for instrument development, we followed the steps below to develop and validate the TAAI instrument:
(1) Define the constructs to be measured based on a theoretical framework.
(2) Provide concise and clear stimuli at the beginning of the survey to assist participants in understanding AI and its applications in education.
(3) Develop and revise items through expert reviews and think-aloud interviews. Each dimension includes multiple items to capture variance. Item quality is also verified through item analysis.
(4) Provide reliability and validity evidence using various methods. Expert reviews and teacher think-aloud procedures are used to ensure face and content validity. CFA and item analysis provide evidence of factor structure, reliability, convergent validity, discriminant validity, and item discrimination.
III-A Conceptualizing teachers’ acceptance of AI in education (TAAI)
III-A1 Theories of technology acceptance
Teachers’ acceptance of novel technologies has been one of the most critical topics in science and technology research. Teachers are the first to be exposed to technologies when they come into play in education. Regardless of the novelty of the technologies, teachers, as knowledge facilitators and class activity coordinators, determine what technologies to adopt in classrooms, when, and how. Therefore, effective technology integration relies heavily on teachers’ willingness to use and acceptance of the technology. Given the importance of adopting novel technologies, uncovering the factors accounting for teachers’ acceptance of technology has become one of the crucial research topics in decades of literature. In this regard, Davis [26] proposed the technology acceptance model (TAM) to explain the factors influencing the acceptance of any novel technology, which has been the most influential model in the field.
TAM explains how users come to accept and use technology, including five factors that influence their decisions about how and when to use it. Davis underscored users’ perceptions, attitudes, behavioral intentions, and actual use as integral constructs. He believed that teachers’ perceived usefulness (PU) and perceived ease of use (PEU) are at the forefront of other factors in their decisions to adopt a technology. Technologies with lower PU and PEU lose the first opportunity to be integrated into educational settings. Davis further pointed out that teachers’ attitude towards use (AU) plays a significant role in their decision-making, as negative feelings during technology integration can discourage further usage. Moreover, he realized that teachers’ reluctance to use technology often stems from a lack of intention to use it; the more teachers are self-motivated, the more likely they are to accept a technology. Therefore, Davis also viewed behavioral intention (BI) as a critical factor. Ultimately, Davis suggested that teachers have to use a technology in person to realize its potential; the more frequently teachers actually use the technology, the more likely they are to accept and integrate it in their classrooms.
In their later work, Davis and colleagues removed the construct of AU because of its generality and added two additional constructs: social influence processes (e.g., subjective norm) and cognitive instrumental processes (e.g., job relevance) [27]. In addition, since the AU factor was found to have limited effect, many studies applying TAM have left AU out of the model [28, 29, 14, 30]. Researchers have proposed revised models without the AU factor, such as TAM 2 [27], TAM 3 [31], and UTAUT [32]. Despite the broad use of TAM, its direct application to teachers’ acceptance of AI remains an area with limited research.
III-B Constructs of TAAI
AI shares common features with prior learning technologies while maintaining distinct characteristics, such as simulating human-like cognitive functions [33], presenting teachers with excitement as well as unique challenges as they embrace AI in teaching and learning settings. These challenges, in particular, are stressed by the automaticity, accessibility, and functionality of AI [34], which substantially impact teachers’ willingness to adopt AI in teaching and learning.
AI seldom acts in isolation in the classroom; instead, it is integrated with other technologies to facilitate teaching and learning. For example, Gerard and Linn [35] included AI-based automatic scoring in web-based inquiry to facilitate science learning. Latif, Zhai, et al. [Latif2024] included an AI-Scorer in mobile learning to facilitate formative assessment practices in classrooms. These integrations increase the performance demands on teachers due to the complexity of the applications, thus raising teachers’ concerns about ease of use. What makes this more complex is the unique applicability of machine learning algorithms to specific purposes and populations based on the training dataset. Research has suggested that algorithms trained on given datasets should be applied to new datasets with similar features to maintain objectivity and accuracy; violating this rule may result in errors or bias [36]. This requirement is substantially demanding, as teachers are expected to understand the conditions of AI and its applicability. Will teachers be able to use AI in a feasible manner? To what degree does this additional complexity impact teachers’ willingness to use AI? These are questions that TAAI research has to address.
Although AI has recently received significant attention, many people are unaware of AI applications. According to the Gartner Research Circle and Gartner’s 2019 CIO Agenda survey, 42% of respondents did not fully understand the benefits of implementing AI in the workplace and daily life [37]. In the educational field, although much research has been conducted on integrating AI into teaching practice, teachers’ actual use of AI in classrooms is still very limited [5]. Both in-service and pre-service teachers barely have experience learning about AI or using AI technology [5]. Under these circumstances, teachers’ perceived ease of use and perceived usefulness of AI can be important variables that impact their intention to accept AI. Thus, we adopt perceived ease of use and perceived usefulness as internal factors to explain teachers’ behavioral intention. These three constructs have also been empirically verified to be powerful in predicting and explaining user behaviors toward new technologies in educational contexts [38, 14, 15]. As a result, our framework includes perceived ease of use of AI, perceived usefulness of AI, and behavioral intention of AI integration.
In addition, we expanded the TAM by adding two constructs, anxiety and self-efficacy, to assess teachers’ acceptance of AI in education. Teachers’ anxiety about AI is critical to assess and has garnered significant scholarly attention [39, 8, 40]. The characteristics of AI, notably its ability to learn, reason, solve problems, and make decisions by imitating human cognitive abilities, in contrast to other information technologies [33], raise critical concerns among teachers. One concern is that the decision process of AI is not transparent and can potentially be biased, which may lead to unfair treatment of students [41]. Some educators also worry that AI can cause job replacement, resulting in job loss and a decline in the quality of education [42]. At the same time, since the functioning of AI applications relies on collecting and processing large amounts of data, personal privacy also becomes an issue. With these concerns and a lack of trust in using AI in teaching, it will be hard for teachers to engage in the open and effective integration of AI in education [39].
Moreover, anxiety has been considered one of the important variables in predicting acceptance of AI. Many studies in educational contexts have explored the relationship of anxiety with other constructs, including attitudes and perceived ease of use, and have found significant negative correlations [43, 39, 14, 16]. At the same time, although teachers’ AI anxiety is increasingly investigated, existing research has uncovered limited evidence about the relationship between teachers’ anxiety and their acceptance of AI. In this study, we proposed teachers’ anxiety about using AI in teaching as a critical construct in the TAAI instrument.
Teachers’ self-efficacy refers to teachers’ belief in their ability to integrate new technologies into their professional practice to enhance students’ learning [14]. Previous studies have found both direct and indirect effects of self-efficacy on pre-service and in-service teachers’ acceptance of new technologies, including mobile devices [14], computers [44], and assistive technologies for special education [45]. Self-efficacy is thus regarded as one of the key drivers of AI adoption [18]. Due to the low actual use of AI in current teaching practice, teachers are largely unfamiliar with learning and teaching with AI applications. They may also have a low level of awareness about the use of AI. In fact, most teachers consider themselves unfamiliar with the general characteristics of AI and how to apply it in teaching [5, 46]. This uncertainty can make teachers feel overwhelmed about using AI [18] and undermine their perceived usefulness of adopting AI in teaching [18]. Low self-efficacy in using AI in teaching thus leads to unwillingness to integrate AI into classroom practices. In this study, we also examined teachers’ self-efficacy in using AI. Given the advantages of TAM and its wide application in related studies, this study proposed the constructs of interest for TAAI within a revised TAM framework.
Table I presents the definitions of the construct through its five sub-dimensions.
Table I. Constructs and descriptions of the instrument dimensions of teachers’ acceptance of AI in education
Dimension | Description |
---|---|
Behavioral Intention | The strength of teachers’ intention to use AI in their teaching practice |
Perceived Ease of Use | The degree to which teachers believe that AI can be used effortlessly |
Perceived Usefulness | The extent to which teachers feel that utilizing AI will improve their teaching performance |
Self-efficacy | The degree to which teachers believe that they have the ability to perform specific tasks using AI in their teaching to achieve better educational results |
Anxiety | Negative feelings and concerns that teachers can have when they use AI in their teaching practices |
III-C Development of the initial instrument
The initial version of the TAAI instrument includes three sections. The first section consists of reading material and 10 five-point scale questions. The reading material introduces the concept of AI, some daily applications, and how AI is integrated into educational settings. The questions ask participants to report their previous experiences of using AI applications in their personal and professional lives. The reading material and the questions serve as the stimulus in the instrument, helping participants familiarize themselves with AI applications in daily life and educational settings. The second section collects participants’ demographic information, including gender, grade, and major, to better contextualize their responses to the subsequent questions. The third section includes 32 survey items to assess the five dimensions of teachers’ acceptance of AI. The survey items use a five-point Likert scale, ranging from strongly disagree (1) to strongly agree (5).
The conceptualization of teachers’ AI acceptance guided the development of the item pool in the third section of the instrument. We developed the items based on previous technology acceptance instruments, mostly designed for teachers [17, 8, 47, 14, 40, 16]. Since some items focused on general AI acceptance rather than TAAI, we revised them to make the context more related to teachers’ use of AI in their teaching. For example, the original item, “AI treats different people differently, which makes me anxious” [8], was revised into “I am concerned that AI algorithms may be biased, leading to varying accessibility to students with diverse backgrounds” in the anxiety (AN) dimension. Meanwhile, we adapted some items from non-AI instruments to suit the TAAI context. For instance, the item “Using WBLS saves me time” [48] was modified into “Using AI technologies saves me time.” Additionally, we created new items focusing on teachers’ potential teaching practice with AI; for example, “I am willing to use AI tools to help prepare lessons” and “I am willing to use AI tools to assess students” were developed as original items in the behavioral intention (BI) dimension to focus more on using AI for different teaching purposes.
To establish the face and content validity of the initial instrument, a panel of four experts, comprising two professors specializing in AI in education, one with a focus on STEM education, and one with expertise in non-STEM education, conducted a comprehensive evaluation. They evaluated every part of the instrument with respect to its intended purpose, phrasing, and the defined scope of each item’s corresponding dimension. Furthermore, they provided comments and suggestions for instrument revision at the item and dimension levels. Based on the expert feedback, we revised the instrument. We shortened the reading material to make it more concise, clear, and readable. For the items on teachers’ experience of AI, we provided more examples of commonly used AI programs to make the items more familiar to respondents. For example, in the original item “I employ personal assistants in the mobile phone,” we added Siri as an example of an AI assistant that people commonly use. For items that measure TAAI, we revised the wording of some items and added concrete examples to make the items easier to read and understand. For example, we added “lesson preparation, grading, and other administrative tasks” as ways that teachers can use AI in teaching to save time. In total, 15 items in the instrument were modified.
III-D Think aloud
To further establish face and content validity, we conducted think-aloud sessions on the initial instrument with five pre-service teachers (four females and one male) randomly selected from a class. The think-aloud protocol requires participants to articulate their feelings, thoughts, actions, and other cognitive processes while engaging in a set of activities, making thought processes as explicit as possible throughout task performance [49]. This allows researchers to gain insights into participants’ cognitive processes rather than just their final responses, informing further instrument refinement [50]. Researchers provided guidance on the relevant concepts, functions, and procedures of think-aloud before the test, followed by a demonstration. The participants read the guidance and then completed a short task while thinking aloud to ensure they understood the procedure. Afterward, they were given the questionnaire and began to think aloud. Researchers videotaped the entire think-aloud process. Based on the data obtained, we modified or deleted items that did not function as expected. For example, for some original items worded as “I can use AI technologies to …,” participating teachers commented, “Technology seems difficult. I can use AI-based tools, but I don’t think I can use AI technology since I know nothing about that.” We thus rephrased “AI technologies” as “AI applications” or “AI tools.” Through the think-aloud procedure, six items were revised and one item was added. The added item, “I have seen others using AI tools for teaching,” was placed in the AI experience section because a participating teacher mentioned that although she had no experience using AI in teaching, she had seen other teachers use AI applications.
III-E Field test and data collection
Using a random sampling approach, we distributed the revised version of the TAAI instrument to 301 pre-service teachers for a field test through an online platform called “Wenjuanxing.” All participants were from a four-year university specializing in in-service and pre-service teacher education. To increase the diversity of participants, we recruited participants from various majors, including STEM majors such as mathematics, physics, biology, chemistry, geography, and technology, and non-STEM majors such as Chinese, English, history, and politics. Among the 301 participants, we collected 274 valid responses for data analysis. The demographic distribution of the sample is shown in Table II.
 | Gender | | Grade | | Subject domain |
---|---|---|---|---|---|---
 | Male | Female | Undergraduate | Graduate | STEM | Non-STEM major
n | 54 | 220 | 114 | 160 | 180 | 94
% | 19.7 | 80.3 | 41.6 | 58.4 | 65.7 | 34.3
III-F Data analysis
To examine individual items’ discrimination power, we first conducted a t-test using SPSS to compare teachers’ scores between the upper and lower 27%. Meanwhile, using SPSS, we also calculated Cronbach’s alpha for the whole instrument, each dimension, and each item, to indicate the internal consistency of the scale and identify items that need to be deleted.
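For readers who wish to reproduce this item analysis outside SPSS, the sketch below shows the same two computations in Python. It is illustrative only: the DataFrame `responses` (one row per respondent, columns Y1–Y27) and the file name are hypothetical stand-ins for our data, and grouping by total score for the upper/lower 27% split is a conventional assumption.

```python
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def upper_lower_discrimination(responses: pd.DataFrame, item: str, pct: float = 0.27):
    """t-test comparing the lower and upper pct groups (by total score) on one item."""
    total = responses.sum(axis=1)
    n_cut = int(np.ceil(len(responses) * pct))
    upper = responses.loc[total.nlargest(n_cut).index, item]
    lower = responses.loc[total.nsmallest(n_cut).index, item]
    # Order (lower, upper) yields a negative t when the upper group scores higher.
    return stats.ttest_ind(lower, upper)

# Example usage with a hypothetical 274 x 27 response matrix:
# responses = pd.read_csv("taai_responses.csv")
# print(cronbach_alpha(responses))                    # whole-instrument alpha
# print(upper_lower_discrimination(responses, "Y1"))  # discrimination of item Y1
```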
Then, we employed Mplus to conduct confirmatory factor analysis (CFA) to confirm the dimensionality of the instrument, providing evidence for construct, convergent, and discriminant validity. We used the weighted least squares means and variance adjusted (WLSMV) estimator, considering the categorical and non-normal data. The factor structure was specified based on a solid theoretical foundation and verified by conducting CFA with the empirical data, as CFA examines whether the data fit a theoretical structure. Based on the CFA results, most items loaded onto their intended factors; however, the fit of the overall model was not yet satisfactory. Thus, according to the CFA results, including factor loadings and modification indices (M.I.), we removed ill-fitting items with factor loadings less than 0.32 (Tabachnick & Fidell, 2001). Meanwhile, factor loadings of items belonging to the same dimension should ideally be close, so items with markedly lower factor loadings were also considered to perform relatively poorly. Furthermore, items with an M.I. larger than 10 were flagged as problematic [51], since the M.I. shows the degree to which model fit would improve if an item were allowed to correlate with another factor or were removed.
To select and exclude ill-fitting items, we also considered item complexity and content. For example, the item “Learning to use AI in teaching is difficult for me” showed a relatively low factor loading (0.44) and a high modification index (above 10). Although it represents “learning anxiety” within the anxiety dimension, the concept can easily be confused with self-efficacy, which was also evident in the think-aloud results; this would cause the item to load on multiple factors. Therefore, we removed the item. Another example is the item “I will actively learn to adopt AI tools to assist teaching.” Its factor loading was satisfactory (0.896), but its M.I. was above 10, and the meaning expressed in the item differs somewhat from the other items in the dimension, which concern teachers’ willingness to use AI in teaching practice. Therefore, we also removed this item. In contrast, for the item “I would like to use AI tools for student assessment” in the behavioral intention dimension, the factor loading (0.741) was acceptable although the M.I. was above 10. We decided to keep the item, since assessment is an important part of teaching practice. Item removal continued until the overall model fit and the statistics of each item reached satisfactory levels.
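As an illustration only (our analysis was conducted in Mplus with the WLSMV estimator), a comparable five-factor CFA can be specified in Python with the open-source semopy package, using its lavaan-style model syntax. The item labels follow Tables III and IV, the file name is hypothetical, and the DWLS objective is assumed to be available in the installed semopy version as an approximation of WLSMV.

```python
import pandas as pd
import semopy

# Hypothesized five-factor structure of the final 27-item TAAI instrument.
MODEL_DESC = """
PU  =~ Y1 + Y2 + Y3 + Y4 + Y5 + Y6
PEU =~ Y7 + Y8 + Y9 + Y10 + Y11
BI  =~ Y12 + Y13 + Y14 + Y15 + Y16
SE  =~ Y17 + Y18 + Y19 + Y20 + Y21 + Y22
AN  =~ Y23 + Y24 + Y25 + Y26 + Y27
"""

responses = pd.read_csv("taai_responses.csv")  # hypothetical 274 x 27 response matrix

model = semopy.Model(MODEL_DESC)
model.fit(responses, obj="DWLS")               # DWLS assumed available as a WLSMV stand-in
print(semopy.calc_stats(model).T)              # chi-square, df, CFI, TLI, RMSEA, etc.
print(model.inspect(std_est=True))             # standardized loadings (argument name per recent semopy versions)
```

Fit indices and standardized loadings obtained this way will not match the Mplus output exactly, since the estimators and the handling of categorical item thresholds differ between packages.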
IV Results
The final instrument contains five sub-dimensions and 27 items in total. We reported the psychometric quality results of the final instrument as follows.
IV-A Item analysis
To examine item discrimination, we conducted a two-sample t-test to compare scores between the upper 27% and the lower 27% of participants for each item [52]. We found that all items showed significant discrimination between the upper and lower groups of participating teachers (see Table III). The findings suggest the robustness of individual items in discriminating participants with varying levels of acceptance of AI.
Item | Cronbach’s alpha if item deleted | Mean (SD) Upper 27% | Mean (SD) Lower 27% | t
---|---|---|---|---
PU | | | |
Y1 | 0.91 | 4.33 (0.54) | 3.59 (0.58) | -8.996*
Y2 | 0.92 | 4.29 (0.58) | 3.43 (0.77) | -8.488*
Y3 | 0.92 | 4.33 (0.58) | 3.54 (0.69) | -8.379*
Y4 | 0.91 | 4.28 (0.63) | 3.38 (0.77) | -8.685*
Y5 | 0.92 | 4.36 (0.59) | 3.72 (0.75) | -6.487*
Y6 | 0.92 | 4.40 (0.49) | 3.75 (0.60) | -8.019*
PEU | | | |
Y7 | 0.92 | 3.88 (0.71) | 2.78 (0.75) | -10.253*
Y8 | 0.92 | 3.70 (0.77) | 2.49 (0.67) | -11.462*
Y9 | 0.91 | 3.64 (0.70) | 2.50 (0.69) | -11.228*
Y10 | 0.92 | 3.85 (0.65) | 2.73 (0.87) | -10.194*
Y11 | 0.91 | 3.90 (0.63) | 2.48 (0.70) | -14.469*
BI | | | |
Y12 | 0.91 | 4.36 (0.53) | 3.53 (0.78) | -8.447*
Y13 | 0.91 | 4.35 (0.52) | 3.18 (0.89) | -10.820*
Y14 | 0.92 | 4.14 (0.72) | 3.12 (1.04) | -7.768*
Y15 | 0.91 | 4.38 (0.49) | 3.27 (0.92) | -10.253*
Y16 | 0.91 | 4.37 (0.53) | 3.51 (0.79) | -8.654*
SE | | | |
Y17 | 0.91 | 4.21 (0.50) | 3.39 (0.76) | -8.487*
Y18 | 0.91 | 4.18 (0.53) | 3.11 (0.80) | -10.508*
Y19 | 0.91 | 4.23 (0.49) | 3.27 (0.71) | -10.581*
Y20 | 0.91 | 4.15 (0.53) | 3.10 (0.76) | -10.924*
Y21 | 0.91 | 4.05 (0.62) | 2.92 (0.82) | -10.656*
Y22 | 0.91 | 4.32 (0.54) | 3.21 (0.67) | -11.228*
AN | | | |
Y23 | 0.92 | 2.72 (1.05) | 2.03 (0.70) | -4.964*
Y24 | 0.92 | 2.80 (1.02) | 2.26 (0.89) | -3.857*
Y25 | 0.92 | 2.78 (1.16) | 2.10 (0.85) | -4.572*
Y26 | 0.92 | 2.50 (1.00) | 2.12 (0.71) | -2.977*
Y27 | 0.92 | 2.52 (1.02) | 1.93 (0.63) | -4.700*
*p < 0.01
IV-B Reliability
To examine the reliability of the instrument, we calculated Cronbach’s alpha for the whole instrument, for each dimension, and for each item if deleted (see Table III). Results indicate that the reliability coefficient of the instrument is 0.92, and those of the sub-dimensions PU, PEU, BI, SE, and AN are 0.88, 0.91, 0.91, 0.91, and 0.77, respectively. All values above 0.7 indicate high internal consistency within each dimension and across the instrument (Nunnally, 1978). Meanwhile, we found that deleting any item would either lower the Cronbach’s alpha or leave it close to the original value, indicating that no item needed to be deleted.
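For completeness, the coefficient reported here is the standard Cronbach’s alpha; for a (sub)scale with k items Y_1, …, Y_k and total score X = ΣY_i, it is

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right), \]

and the “Cronbach’s alpha if item deleted” column in Table III simply re-applies this formula after removing one item at a time.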
IV-C Validity
IV-C1 Construct validity
To examine construct validity, CFA was utilized to evaluate how well the measurement model fit the data, reporting the root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI), standardized root mean square residual (SRMSR), and the ratio of chi-square to degrees of freedom (χ²/df). RMSEA is an absolute fit measure and one of the most widely used indices in SEM [53], with a value less than 0.08 considered acceptable [54]. CFI and TLI are incremental fit measures, ranging from 0 to 1, with a value greater than 0.90 suggesting a good fit [55]. SRMSR is an absolute fit measure, ranging from 0 to 1.0, with a value less than 0.07 being acceptable [56]. As for χ²/df, it is preferred over the chi-square statistic because it is less affected by large samples; a value less than 3 indicates a good fit between the hypothesized model and the sample data. The CFA showed a good fit, with an RMSEA of 0.061, a CFI of 0.981, a TLI of 0.979, an SRMSR of 0.051, a chi-square of 629.186 with df = 314, and a χ²/df of 2.00. These results indicate that the instrument’s items clustered around the factors, or sub-dimensions, that represent the main components constituting the theoretical framework. The standardized item loading ranges within the five factors are perceived usefulness (0.70–0.94), perceived ease of use (0.83–0.94), behavioral intention (0.75–0.95), self-efficacy (0.81–0.92), and anxiety (0.60–0.81) (see Table IV), all above the benchmark of 0.5 [18].
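As a quick consistency check on the reported absolute fit (N = 274), the chi-square-based indices can be recovered directly from the values above, using the conventional formula for RMSEA (the WLSMV-based value reported by Mplus is computed analogously):

\[ \chi^2/df = \frac{629.186}{314} \approx 2.00, \qquad \mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2 - df}{df\,(N-1)},\, 0\right)} = \sqrt{\frac{629.186 - 314}{314 \times 273}} \approx 0.061, \]

both consistent with the figures reported in the text.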
Item | PU | EU | BI | SE | AN |
---|---|---|---|---|---|
Y1 | 0.94 | ||||
Y2 | 0.81 | ||||
Y3 | 0.80 | ||||
Y4 | 0.88 | ||||
Y5 | 0.70 | ||||
Y6 | 0.89 | ||||
Y7 | 0.83 | ||||
Y8 | 0.88 | ||||
Y9 | 0.89 | ||||
Y10 | 0.86 | ||||
Y11 | 0.94 | ||||
Y12 | 0.90 | ||||
Y13 | 0.95 | ||||
Y14 | 0.75 | ||||
Y15 | 0.91 | ||||
Y16 | 0.93 | ||||
Y17 | 0.81 | ||||
Y18 | 0.86 | ||||
Y19 | 0.91 | ||||
Y20 | 0.89 | ||||
Y21 | 0.81 | ||||
Y22 | 0.92 | ||||
Y23 | 0.75 | ||||
Y24 | 0.69 | ||||
Y25 | 0.69 | ||||
Y26 | 0.60 | ||||
Y27 | 0.81 |
IV-C2 Convergent validity
Convergent validity was tested using factor loadings, average variance extracted (AVE), and composite reliability (CR). Based on previous research, each factor loading should exceed 0.60 to demonstrate that the item sufficiently represents its construct [57]. The findings suggest that our item loadings meet this benchmark (see Table IV). Furthermore, Table V shows the AVE and CR of each factor. AVE measures the amount of variance captured by a latent variable from its indicators relative to the variance due to measurement error [58], while CR assesses how well a construct is measured by the indicators assigned to it. The AVE of all factors is above 0.5, indicating that the latent variables account for at least 50% of the indicator variance, which signals satisfactory convergent validity [58]. The CR of each factor is above 0.8, indicating robust composite reliability [59].
Indicators | Factors | ||||
---|---|---|---|---|---|
PU | EU | BI | SE | AN | |
AVE | 0.70 | 0.78 | 0.66 | 0.76 | 0.51 |
CR | 0.93 | 0.95 | 0.91 | 0.95 | 0.83 |
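These indicators follow the standard definitions based on the standardized loadings λ_i of the n items in a factor [58]. As a worked check using the anxiety loadings in Table IV (0.75, 0.69, 0.69, 0.60, 0.81):

\[ \mathrm{AVE} = \frac{\sum_{i=1}^{n}\lambda_i^{2}}{n} = \frac{2.53}{5} \approx 0.51, \qquad \mathrm{CR} = \frac{\left(\sum_i \lambda_i\right)^{2}}{\left(\sum_i \lambda_i\right)^{2} + \sum_i\left(1-\lambda_i^{2}\right)} = \frac{3.54^{2}}{3.54^{2} + 2.47} \approx 0.84, \]

which reproduces the AVE for AN in Table V; the small gap from the reported CR of 0.83 reflects the two-decimal rounding of the loadings in Table IV.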
IV-C3 Discriminant validity
To examine discriminant validity, we calculated the correlations between the dimensions (see Table VI). According to the correlation results, most of the dimensions were statistically significantly correlated with each other, with coefficients higher than 0.4, indicating that they reflect related aspects of a common overarching construct. However, the anxiety dimension was not significantly correlated with perceived usefulness or perceived ease of use (p > 0.05), but it was significantly, though weakly, correlated with behavioral intention and self-efficacy.
Additionally, we compared the correlation coefficients between factors with the average variance (AV) of each construct, where AV is the square root of the corresponding AVE. When the AV of each factor is larger than the correlation coefficients of that dimension with the other constructs, discriminant validity is supported [60]. Table VI shows that the correlations between constructs were smaller than the corresponding AV values, indicating robust discriminant validity.
Factors | PU | PEU | BI | SE | AN |
---|---|---|---|---|---|
PU | 0.838 | — | — | — | — |
PEU | 0.430* | 0.884 | — | — | — |
BI | 0.835* | 0.430* | 0.815 | — | — |
SE | 0.652* | 0.722* | 0.621* | 0.870 | — |
AN | -0.106 | -0.082 | -0.146* | -0.222* | 0.711 |
*Statistically significant correlation; the AV of each dimension is shown on the diagonal.
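The diagonal entries in Table VI are the square roots of the AVE values in Table V, for example

\[ \sqrt{\mathrm{AVE}_{\mathrm{PU}}} = \sqrt{0.70} \approx 0.84, \qquad \sqrt{\mathrm{AVE}_{\mathrm{SE}}} = \sqrt{0.76} \approx 0.87, \]

matching the reported 0.838 and 0.870 up to rounding of the AVEs. The Fornell-Larcker criterion is satisfied because each diagonal value exceeds every correlation in its row and column, with the tightest margin between PU and BI (0.838 vs. 0.835).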
V Discussion and Conclusion
Promoting teachers’ integration of AI applications in classrooms is vital in the AI era, and this classroom innovation depends on teachers’ acceptance of AI in education. Since teachers’ acceptance will shape their willingness to adopt AI in teaching and the ways they do so, it is critical to understand teachers’ acceptance of AI. Given the lack of well-validated instruments measuring teachers’ acceptance of AI, this study developed and validated a high-quality instrument to bridge this gap in the literature. The development of the instrument addresses concerns identified in previous instruments. We used multiple methods to provide evidence of the instrument’s psychometric quality: expert review and think-aloud interviews provided evidence of face and content validity, and robust statistical measures (i.e., CFA) were used to explore the underlying psychometric properties and structure of the instrument, including construct validity, convergent validity, discriminant validity, and reliability. The final instrument has five dimensions with 27 items, consistent with the proposed theoretical framework, with satisfactory model fit, reliability, and validity. According to Potvin and Hasni [61], such empirical alignment between the factor structure and item loadings of the instrument and its theoretical basis is also important evidence of the instrument’s construct validity. The final instrument items are in the Appendix, organized into the sub-dimensions Behavioral Intention of AI (five items), Perceived Ease of Use of AI (five items), Perceived Usefulness of AI (six items), Self-efficacy of AI (six items), and AI Anxiety (five items). The various pieces of evidence suggest that the instrument is valid and reliable for measuring teachers’ acceptance of AI in education.
The TAAI instrument has significant implications for promoting AI use in classrooms and teacher professional development. For example, future research can use the instrument to evaluate pre-service teachers’ acceptance of using AI in teaching and provide evidence for relevant interventions in teacher education and professional development. Furthermore, it can be used to explore additional influential factors correlated with the constructs in this instrument, such as gender, AI-related experience, and cultural background, to deepen the understanding of teachers’ AI acceptance in education. Some of these variables, such as gender, have received attention in research on teachers’ attitudes toward and application of AI [62, 39, 15], while others have been explored in users’ acceptance of other new technologies in other contexts [63]. However, little research has investigated the influence of these external factors on teachers’ acceptance of AI. Moreover, future research can explore the causal relationships between behavioral intention and the other constructs of the TAAI instrument to explain how these factors could impact teachers’ decision-making and behavior in using AI in teaching.
We acknowledge some limitations for future studies to address. First, although the participating pre-service teachers came from diverse disciplines, they were all from one university, which could limit the generalizability of the instrument. Future studies should recruit more participants in other settings to further validate the instrument, which will contribute to refining the instrument with data obtained from more teacher groups. Another limitation is the country-specific context. While AI in education is popular around the world, the specific applications in classrooms may differ. To cater to the Chinese context, the examples of AI tools provided in the TAAI stimulus were customized for China. To use the instrument in other cultural contexts, researchers should revise the AI application examples according to their specific contexts. The revised instrument can then enable comparative studies to explore the characteristics of, and cross-cultural differences in, teachers’ acceptance of AI worldwide.
Conflict of Interest
No potential conflict of interest was reported by the authors.
Acknowledgment
The authors thank the China Scholarship Council (CSC) for supporting the study. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the CSC.
Ethics approval statement
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
References
- [1] S. Guo, Y. Zheng, and X. Zhai, “Artificial intelligence in education research during 2013–2023: A review based on bibliometric analysis,” Education and Information Technologies, 2024.
- [2] X. Chen, D. Zou, H. Xie, G. Cheng, and C. Liu, “Two decades of artificial intelligence in education: Contributors, collaborations, research topics, challenges, and future directions,” Educational Technology and Society, vol. 25, no. 1, pp. 28–47, 2022.
- [3] X. Zhai, “Practices and theories: How can machine learning assist in innovative assessment practices in science education,” Journal of Science Education and Technology, vol. 30, no. 2, pp. 139–149, 2021.
- [4] S. Choi, Y. Jang, and H. Kim, “Influence of pedagogical beliefs and perceived trust on teachers’ acceptance of educational artificial intelligence tools,” International Journal of Human–Computer Interaction, vol. 39, no. 4, pp. 910–922, 2023.
- [5] H. M. N. AlKanaan, “Awareness regarding the implication of artificial intelligence in science education among pre-service science teachers,” International Journal of Instruction, vol. 15, no. 3, pp. 895–912, 2022.
- [6] A. Al-Subhy, “The reality of ai applications in education by faculty members at najran university,” Journal of the Faculty of Education, Ain Shams University, vol. 4, no. 44, pp. 319–368, 2020.
- [7] T. Nazaretsky, M. Cukurova, M. Ariely, and G. Alexandron, “Confirmation bias and trust: Human factors that influence teachers’ attitudes towards ai-based educational technology,” in CEUR Workshop Proceedings, vol. 3042, October 2021.
- [8] J. Li and J. Huang, “Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory,” Technology in Society, vol. 63, p. 101410, September 2020.
- [9] Y. Wang and Y. Wang, “Development and validation of an artificial intelligence anxiety scale: an initial application in predicting motivated learning behavior,” Interactive Learning Environments, vol. 0, no. 0, pp. 1–16, 2019.
- [10] Y. Jang, S. Choi, and H. Kim, “Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (at-eai) and analysis of its difference by gender and experience of ai education,” Education and Information Technologies, vol. 27, 2022.
- [11] S. Serholt, W. Barendregt, A. Vasalou, P. Alves-Oliveira, A. Jones, S. Petisca, and A. Paiva, “The case of classroom robots: teachers’ deliberations on the ethical tensions,” AI and Society, vol. 32, no. 4, pp. 613–631, 2017.
- [12] X. Zhai and J. Krajcik, “Pseudo ai bias,” arXiv preprint arXiv:2210.08141, 2024.
- [13] A. A. Darayseh, “Acceptance of artificial intelligence in teaching science: Science teachers’ perspective,” Computers and Education: Artificial Intelligence, vol. 4, Feb 2023.
- [14] J. C. Sánchez-Prieto, S. Olmos-Migueláñez, and F. J. García-Peñalvo, “Mlearning and pre-service teachers: An assessment of the behavioral intention using an expanded tam model,” Computers in Human Behavior, vol. 72, pp. 644–654, 2017.
- [15] C. Zhang, J. Schießl, L. Plößl, F. Hofmann, and M. Gläser-Zikuda, “Acceptance of artificial intelligence among pre-service teachers: a multigroup analysis,” International Journal of Educational Technology in Higher Education, vol. 20, no. 1, 2023.
- [16] Y. Wang, C. Liu, and Y. Tu, “Factors affecting the adoption of ai based applications in higher education: An analysis of teachers perspectives using structural equation modeling,” Educational Technology and Society, vol. 24, no. 3, pp. 116–129, 2021.
- [17] R. Chocarro, M. Cortiñas, and G. Marcos-Matás, “Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics,” Educational Studies, vol. 49, no. 2, pp. 295–313, 2023.
- [18] Y.-Y. Wang and Y.-W. Chuang, “Artificial intelligence self-efficacy: Scale development and validation,” Education and Information Technologies, vol. 29, no. 4, pp. 4785–4808, 2024.
- [19] S. Chatterjee and K. K. Bhattacharjee, “Adoption of artificial intelligence in higher education: a quantitative analysis using structural equation modelling,” Education and Information Technologies, vol. 25, no. 5, pp. 3443–3463, 2020.
- [20] I. T. Sanusi, M. A. Ayanwale, and A. E. Tolorunleke, “Investigating pre-service teachers’ artificial intelligence perception from the perspective of planned behavior theory,” Computers and Education: Artificial Intelligence, vol. 6, no. November 2023, 2024.
- [21] Office for National Statistics (ONS), “Public awareness, opinions and expectations about ai: July to october 2023,” 2023.
- [22] A. Bewersdorff, X. Zhai, J. Roberts, and C. Nerdel, “Myths, mis-and preconceptions of artificial intelligence: A review of the literature,” Computers and Education: Artificial Intelligence, vol. 4, June 2023.
- [23] A. E. Alimi, O. F. Buraimoh, G. A. Aladesusi, and E. O. Babalola, “University students’ awareness of, access to, and use of artificial intelligence for learning in kwara state,” Indonesian Journal of Teaching in Science, vol. 1, no. 2, pp. 91–104, 2021.
- [24] M. Kim, S. Yi, and D. Lee, “Between living and nonliving: Young children’s animacy judgments and reasoning about humanoid robots,” PLoS ONE, vol. 14, no. 6, pp. 1–19, 2018.
- [25] Y. Xu and M. Warschauer, “What are you talking to?: Understanding children’s perceptions of conversational agents,” in Conference on Human Factors in Computing Systems - Proceedings, pp. 1–13, 2020.
- [26] F. D. Davis, A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, Massachusetts Institute of Technology, 1985.
- [27] V. Venkatesh and F. D. Davis, “A theoretical extension of the technology acceptance model: Four longitudinal field studies,” Management science, vol. 46, no. 2, pp. 186–204, 2000.
- [28] P. J.-H. Hu, T. H. Clark, and W. W. Ma, “Examining technology acceptance by school teachers: A longitudinal study,” Information & management, vol. 41, no. 2, pp. 227–241, 2003.
- [29] S. Y. Park, “An analysis of the technology acceptance model in understanding university students’ behavioral intention to use e-learning,” Journal of Educational Technology & Society, vol. 12, no. 3, pp. 150–162, 2009.
- [30] G. W.-H. Tan, K.-B. Ooi, L.-Y. Leong, and B. Lin, “Predicting the drivers of behavioral intention to use mobile learning: A hybrid sem-neural networks approach,” Computers in Human Behavior, vol. 36, pp. 198–213, 2014.
- [31] V. Venkatesh and H. Bala, “Technology acceptance model 3 and a research agenda on interventions,” Decision sciences, vol. 39, no. 2, pp. 273–315, 2008.
- [32] V. Venkatesh, M. G. Morris, G. B. Davis, and F. D. Davis, “User acceptance of information technology: Toward a unified view,” MIS quarterly, pp. 425–478, 2003.
- [33] B. C. Stahl and D. Wright, “Ethics and privacy in ai and big data: Implementing responsible research and innovation,” IEEE Security and Privacy, vol. 16, no. 3, pp. 26–33, 2018.
- [34] E. A. Alasadi and C. R. Baiz, “Generative ai in education and research: Opportunities, concerns, and solutions,” Journal of Chemical Education, vol. 100, no. 8, pp. 2965–2971, 2023.
- [35] L. Gerard, M. C. Linn, and U. C. Berkeley, “Computer-based guidance to support students’ revision of their science explanations,” Computers & Education, vol. 176, 2022.
- [36] X. Zhai and R. H. Nehm, “Ai and formative assessment: The train has left the station,” Journal of Research in Science Teaching, vol. 60, no. June, pp. 1390–1398, 2023.
- [37] Gartner, “The present and future of ai.” https://www.gartner.com/en/webinars/25341/the-present-and-future-of-ai, 2019. [webinar pdf file].
- [38] F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, “User acceptance of computer technology: A comparison of two theoretical models,” Management Science, vol. 35, no. 8, pp. 982–1003, 1989.
- [39] S. Hopcan, G. Türkmen, and E. Polat, “Exploring the artificial intelligence anxiety and machine learning attitudes of teacher candidates,” Education and Information Technologies, pp. 1–21, 2023.
- [40] Y.-Y. Wang and Y.-S. Wang, “Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior,” Interactive Learning Environments, vol. 30, no. 4, pp. 619–634, 2022.
- [41] A. Nguyen, H. N. Ngo, Y. Hong, B. Dang, and B.-P. T. Nguyen, “Ethical principles for artificial intelligence in education,” Education and Information Technologies, vol. 28, no. 4, pp. 4221–4241, 2023.
- [42] R. Terzi, “An adaptation of artificial intelligence anxiety scale into turkish: Reliability and validity study.,” International Online Journal of Education and Teaching, vol. 7, no. 4, pp. 1501–1515, 2020.
- [43] H. R. Chen and H. F. Tseng, “Factors that influence acceptance of web-based e-learning systems for the in-service education of junior high school teachers in taiwan,” Evaluation and Program Planning, vol. 35, no. 3, pp. 398–406, 2012.
- [44] K.-T. Wong, T. Teo, and S. Russo, “Influence of gender and computer teaching efficacy on computer acceptance among malaysian student teachers: An extended technology acceptance model,” Australasian Journal of Educational Technology, vol. 28, no. 7, 2012.
- [45] C. S. Nam, S. Bahn, and R. Lee, “Acceptance of assistive technology by special education teachers: A structural equation model approach,” International Journal of Human-Computer Interaction, vol. 29, no. 5, pp. 365–377, 2013.
- [46] F. Incerti, Preservice Teachers’ Perceptions of Artificial Intelligence Tutors for Learning. Ohio University, 2020.
- [47] J. C. Sánchez-Prieto, S. Olmos-Migueláñez, and F. J. García-Peñalvo, “Informal tools in formal contexts: Development of a model to assess the acceptance of mobile technologies among teachers,” Computers in Human Behavior, vol. 55, pp. 519–528, 2016.
- [48] W.-T. Wang and C.-C. Wang, “An empirical study of instructor adoption of web-based learning systems,” Computers & Education, vol. 53, no. 3, pp. 761–774, 2009.
- [49] M. D. Wolcott and N. G. Lobczowski, “Using cognitive interviews and think-aloud protocols to understand thought processes,” Currents in Pharmacy Teaching and Learning, vol. 13, no. 2, pp. 181–188, 2021.
- [50] D. Pepper, J. Hodgen, K. Lamesoo, P. Kõiv, and J. Tolboom, “Think aloud: using cognitive interviewing to validate the pisa assessment of student self-efficacy in mathematics,” International Journal of Research & Method in Education, vol. 41, no. 1, pp. 3–16, 2018.
- [51] L. K. Muthén and B. O. Muthén, Mplus User’s Guide. Los Angeles, CA: Muthén & Muthén, 1998–2015.
- [52] F. G. K. Yilmaz, R. Yilmaz, and M. Ceylan, “Generative artificial intelligence acceptance scale: A validity and reliability study,” International Journal of Human-Computer Interaction, 2023.
- [53] R. B. Kline, Principles and practice of structural equation modeling. Guilford publications, 2015.
- [54] B. M. Byrne, Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming (Multivariate Applications Series). New York, NY: Taylor and Francis Group, 2010.
- [55] J. F. Hair, W. C. Black, B. J. Babin, R. E. Anderson, and R. L. Tatham, Multivariate Data Analysis. Upper Saddle River, NJ: Prentice-Hall, 1998.
- [56] L.-t. Hu and P. M. Bentler, “Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives,” Structural equation modeling: a multidisciplinary journal, vol. 6, no. 1, pp. 1–55, 1999.
- [57] R. R. Molefi and M. A. Ayanwale, “Using composite structural equation modeling to examine high school teachers’ acceptance of e-learning after covid-19,” New Trends and Issues Proceedings on Humanities and Social Sciences, vol. 10, no. 1, pp. 1–11, 2023.
- [58] C. Fornell and D. F. Larcker, “Evaluating structural equation models with unobservable variables and measurement error,” Journal of Marketing Research, vol. 18, no. 1, pp. 39–50, 1981.
- [59] J. F. Hair, C. M. Ringle, and M. Sarstedt, “Pls-sem: Indeed a silver bullet,” Journal of Marketing Theory and Practice, vol. 19, no. 2, pp. 139–152, 2011.
- [60] W. W. Chin et al., “The partial least squares approach to structural equation modeling,” Modern methods for business research, vol. 295, no. 2, pp. 295–336, 1998.
- [61] P. Potvin and A. Hasni, “Interest, motivation and attitude towards science and technology at k-12 levels: a systematic review of 12 years of educational research,” Studies in science education, vol. 50, no. 1, pp. 85–129, 2014.
- [62] R. A. S. Alissa and M. A. Hamadneh, “The level of science and mathematics teachers’ employment of artificial intelligence applications in the educational process,” International Journal of Education in Mathematics, Science and Technology, vol. 11, no. 6, pp. 1597–1608, 2023.
- [63] S. Kelly, S.-A. Kaye, and O. Oviedo-Trespalacios, “What factors contribute to the acceptance of artificial intelligence? a systematic review,” Telematics and Informatics, vol. 77, p. 101925, 2023.