Abstract

This paper proposes a nonelliptic extended English teaching ability evaluation algorithm based on an adaptive random matrix model. The algorithm models a nonelliptical extended target as multiple elliptical sub-objects, and the extension state of each sub-object is described by an inverse Wishart distribution. The method is further combined with an improved initialization strategy to address the robustness problem that initialization causes after the algorithm is extended. This research uses a smart teacher-education platform to evaluate the teaching practice ability of intern normal students. From the perspective of data evaluation, by analyzing the correlation between normal students’ course grades and practice grades, this paper explores the influence of normal students’ prepractice course training on their teaching practice ability. The results show that the learning level of normal students’ professional courses has a significant impact on their practice performance and the development of their teaching practice ability, while the impact of their pedagogical course level on practice performance and teaching practice ability is relatively low.

1. Introduction

As the “basic work” of developing education, the construction of the teaching staff is related to the national plan. The “Opinions on Comprehensively Deepening the Reform of Teaching Team Construction in the New Era” pointed out that it is necessary to strengthen support for normal colleges and universities to ensure the overall improvement of the quality of the teaching team [1]. Education informatization is currently transitioning from 1.0 to 2.0, from “three links and two platforms” to “three highs, two comprehensives, and one large,” and from the construction of educational infrastructure and environmental informatization to the comprehensive improvement of the information literacy of teachers and students. As early as 2017, the “13th Five-Year Plan for the Development of National Education” issued by the State Council pointed out that schools should use data mining technology to analyze students’ behavior data and provide data support for teaching decision-making [2]. Teaching ability, as one of the core competencies of teachers, reflects their competence and is related to training talents for the needs of social development [3]. As future teachers, normal students should have their teaching ability evaluated scientifically and systematically; this is a key step in improving the teaching ability of normal students and a necessary measure for improving the quality of the teaching staff.

The teaching ability model of normal students constructed in this research is mainly based on policy documents related to normal student education issued at home and abroad, as well as typical teaching ability standards at home and abroad [4]. It therefore has a strong theoretical basis, and the model is comprehensive: it can serve as a reference for other researchers concerned with the teaching ability of normal students, and it can also provide a theoretical basis for normal colleges and universities to reconstruct teacher education courses and improve their systems for training or evaluating normal students’ teaching ability. The model includes the various teaching ability elements that normal students should possess and their weights, providing a relatively complete and systematic reference standard for the training and evaluation of normal students’ teaching ability [5, 6]. Based on the constructed model, this study also takes the normal students of a school’s teacher education college as the research sample, obtains data items that reflect their various teaching abilities, and builds a personalized teaching ability portfolio for each normal student. After the data are processed and analyzed, an individualized teaching ability radar map of each normal student can be obtained, which enables accurate evaluation of the various teaching abilities of normal students and an understanding of the current development status of their teaching ability, so as to guide students’ subsequent teaching and practical activities.

In order to accurately estimate the target shape, it is necessary to establish a suitable extended-state dynamic model and measurement noise model [7]. The model parameters adapt according to the extended-state estimate at the previous moment and the measurement data at the current moment. The course study of normal students is the basic link in their training, and it is also the basic link in the development of their education and teaching ability. Normal students must have a reasonably structured body of knowledge and skills in order to attain higher teaching ability and then apply it in practice.

Normal students’ course learning affects their internship performance and ability development during the internship. Since course grades represent the knowledge level of normal students, the existence and degree of correlation between normal students’ course grades and their practice appraisal scores also represent the influence of their course learning level on their educational practice performance and the development of their teaching practice ability. This paper uses the overall data samples of normal students for data analysis, collects and organizes the course academic performance of normal students before their internship, and explores the relationship between course academic performance and internship performance. Through this exploration, we analyze whether, and to what degree, the course study of normal students affects their practice performance, and further verify and explain the data analysis results in combination with the interview survey.

The French Ministry of National Education has described and stipulated the professional competence standards for primary and secondary school teachers through circulars [8]. The new standard unifies the professional competence requirements for primary and secondary school teachers and expresses 10 essential professional competences from three aspects: knowledge, skills, and attitudes, including the professional ethics, teaching, and communication processes that teachers should master. Compared with the old standard, the new standard is clearer and more systematic, emphasizing the importance of teachers’ “reform and innovation ability,” information technology ability, and collaboration ability to the teaching profession, which helps teachers in the new era develop knowledge and skills in a more targeted manner [9, 10]. It can be seen that, starting from the teaching organization process, the structural elements of normal students’ teaching ability mainly include professional concepts and knowledge; basic abilities such as oral language, written expression, and blackboard writing; and teaching design, teaching implementation, teaching management, and evaluation as the main body [11, 12].

From the perspective of cognitive psychology, researchers believe that teaching ability combines general ability and special ability [13–15]. General ability refers to the four abilities of teaching design, teaching implementation and regulation, evaluation and reflection, and teaching research; special ability refers to the cognition and comprehension of teaching, experimental skills, and inquiry skills [16, 17].

Relevant scholars believe that teaching ability consists of three aspects: the intellectual foundation, general teaching ability, and specific subject teaching ability [18]. The intellectual foundation includes the analytical, creative, and practical thinking that teachers show in the teaching process; general teaching ability includes teaching cognitive ability, teaching operation ability, and teaching supervision ability, which are involved in all teaching activities [19]. Specific subject teaching ability refers to the ability displayed in the teaching of particular subjects, including special abilities such as Chinese teaching ability and mathematics teaching ability [20].

Relevant scholars divide teaching ability into teaching design ability, language expression ability, teaching supervision ability, teaching evaluation ability, and educational wit [21, 22]. Teaching design ability refers to the ability to systematically plan and design each element of the teaching process before teaching activities; language expression ability refers to the ability to express one’s own ideas and the teaching content through speech and writing [23]. Teaching supervision ability refers to the ability to actively plan, check, evaluate, and adjust teaching activities during the teaching process in order to achieve the expected teaching goals. Educational wit refers to teachers’ sensitivity to teaching activities and their ability to cope with unexpected teaching events [24]. It can be seen that, without considering the dimension of personality traits, teaching design ability, teaching implementation ability, and teaching evaluation ability also point to the organizational process dimension of classroom teaching [25].

3. Methods

3.1. Adaptive Random Matrix Model

The random matrix model (RMM) approximates the target shape as an ellipse, preserving the two basic extended states of size and orientation; the major and minor semi-axes of the ellipse are a and b, respectively. When the motion law of the extended target is complex, the shape estimation error of RMM is large because the shape parameters of the ellipse must be estimated at the same time. An adaptive random matrix model (ARMM) is therefore proposed, which achieves more accurate estimation by adjusting the parameters of the extended-state dynamic model. The target posterior probability factorizes into a Gaussian for the motion state and an inverse Wishart distribution for the extended state:

p(xk, Xk | Zk) = N(xk; mk, Pk) · IW(Xk; νk, Vk).

The dynamic model of the extended state is

p(Xk | Xk−1) = W(Xk; δk, Ak Xk−1 AkT / δk),

where T represents the transpose of the matrix, W represents the Wishart distribution, δk is the degrees-of-freedom parameter, and Ak is the extension transition matrix.

The measurement model is

zk = Hk xk + vk, vk ∼ N(0, Bk Xk BkT),

where Hk is the measurement matrix, vk is the measurement noise, N(m, P) represents a Gaussian distribution with mean m and covariance P, and Bk is a symmetric positive definite matrix of order d. The measurement zk is related to the extended state Xk because the measurement noise depends on the spatial shape of the target.
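
As a concrete illustration, the following minimal sketch simulates one step of the dynamic and measurement models reconstructed above; the parameter values (δ, A, B, H) and the position-plus-velocity motion state are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
delta = 20.0                                   # Wishart degrees of freedom (assumed)
A = np.eye(2)                                  # extension transition matrix (assumed)
B = np.eye(2)                                  # noise shaping matrix (assumed)
H = np.hstack([np.eye(2), np.zeros((2, 2))])   # measure position components only

X_prev = np.diag([16.0, 4.0])                  # previous extension (ellipse a^2, b^2)
x_prev = np.array([0.0, 0.0, 1.0, 0.5])        # position and velocity

# Dynamic model: X_k ~ W(delta, A X_{k-1} A^T / delta)
X_k = wishart.rvs(df=delta, scale=A @ X_prev @ A.T / delta, random_state=0)

# Measurement model: z_k = H x_k + v_k,  v_k ~ N(0, B X_k B^T)
v_k = rng.multivariate_normal(np.zeros(2), B @ X_k @ B.T)
z_k = H @ x_prev + v_k
```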

ARMM mainly focuses on two parameters: the extended state dynamic model parameter and the measurement noise model parameter. Taking a two-dimensional space as an example, the motion state is xk, the expansion state is Xk, and the target shape is modeled as an ellipse.

The symmetric positive definite random matrix Xk can only describe an ellipse whose center is at the origin; combined with the motion state xk, it can describe an ellipse at any position in space. According to the above analysis, the random matrix corresponding to the ellipse can be expressed as

Xk = R(θ) diag(a², b²) R(θ)T,

with size tr(Xk) = a² + b², where R(θ) is the rotation matrix and tr denotes the trace of the matrix. To decompose a symmetric positive definite matrix X, there are many theoretical tools to choose from, such as singular value decomposition or eigenvalue decomposition. However, the orthogonal matrices obtained by these two methods mix rotation and symmetry (reflection) transformations, so the decomposition is not unique. In this paper, the Givens rotation is used for matrix decomposition, and the following lemma is given for it.

If X is a symmetric positive definite matrix, it can be diagonalized by a Givens rotation G(θ):

G(θ)T X G(θ) = diag(λ1, λ2), with θ = (1/2) atan(2x12 / (x11 − x22)),

where atan represents the arctangent function and xij denotes the (i, j) entry of X.
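
A minimal numerical sketch of this lemma is given below; it builds the random matrix of a known ellipse and recovers its orientation and squared semi-axes with a single Givens rotation (the function and variable names are illustrative).

```python
import numpy as np

def givens_diagonalize(X):
    """Diagonalize a 2x2 symmetric positive definite matrix with a single
    Givens rotation, per the lemma above; returns the rotation angle and
    the resulting eigenvalues."""
    theta = 0.5 * np.arctan2(2.0 * X[0, 1], X[0, 0] - X[1, 1])
    c, s = np.cos(theta), np.sin(theta)
    G = np.array([[c, -s],
                  [s,  c]])                 # Givens rotation matrix G(theta)
    return theta, np.diag(G.T @ X @ G)      # G^T X G is (numerically) diagonal

# Example: ellipse with semi-axes a = 4, b = 2, rotated by 30 degrees
a, b, phi = 4.0, 2.0, np.deg2rad(30)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
X = R @ np.diag([a**2, b**2]) @ R.T         # random-matrix form of the ellipse
theta, eigs = givens_diagonalize(X)
print(np.rad2deg(theta), eigs)              # recovers 30.0 and [16., 4.]
```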

For the extended target in 3D space, the parameter adaptation method is similar to that in 2D space. When the movement law of the extended target is complex, if a fixed model parameter value is used, poor tracking performance may be obtained. ARMM adjusts the model parameters according to the state estimate value at the previous moment to achieve a more accurate state estimate.

3.2. Improved Algorithm for K-Means Initialization

Data cleaning is a commonly used technical means of data preprocessing; it is the process of identifying and eliminating data noise in “dirty” data. Given a database instance with schema R and a set of data quality requirements, data cleaning refers to finding a repaired database instance that satisfies those requirements, and it usually goes through five steps: preparation, detection, location, correction, and verification. The data cleaning flowchart is shown in Figure 1.
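
As an illustration of the five steps, the following sketch cleans a hypothetical table of grade records with pandas; the column name and validity range are assumptions for illustration only.

```python
import pandas as pd

def clean_grades(df: pd.DataFrame) -> pd.DataFrame:
    # Preparation: normalize types and drop exact duplicate records
    df = df.drop_duplicates().copy()
    df["score"] = pd.to_numeric(df["score"], errors="coerce")

    # Detection: flag noise -- missing values and out-of-range scores
    noisy = df["score"].isna() | ~df["score"].between(0, 100)

    # Location: record the offending rows for auditing
    noisy_rows = df.index[noisy]

    # Correction: drop the noisy rows (imputation is an alternative
    # when losing records matters)
    df = df.drop(noisy_rows)

    # Verification: confirm the cleaned data meet the quality requirement
    assert df["score"].between(0, 100).all()
    return df
```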

The operation steps of k-means++ are inherently sequential: the entire sample set must be scanned multiple times, and each subsequent operation depends on the previous results, which prevents the algorithm from scaling and makes it unsuitable for large-scale data sets.

To address the fact that k-means++ cannot be parallelized, the k-means∥ algorithm was proposed; it is simple, highly parallel, and easy to implement on any parallel computing model. Theoretically, it can be proved that k-means∥ approximates the optimal solution within a constant factor, and experiments also show that the algorithm can effectively reduce the number of sample scans and algorithm iterations while maintaining accuracy. At present, these two algorithms have been widely recognized and adopted in a large number of well-known open-source projects.

3.3. K-Means++

K-means initially selects a group of cluster centers uniformly at random, while k-means++ provides a method for selecting better centers. K-means++ is a fast and simple cluster initialization technique that yields an O(log k)-approximation to the optimal clustering; theoretically, the expected cost of a k-means++ seeding is at most 8(ln k + 2) times the optimal cost. Compared with the original k-means, feeding Lloyd’s iterations with the cluster centers selected by k-means++ instead of randomly chosen ones greatly improves the accuracy of k-means, reduces the number of iterations, and reduces the sensitivity and uncertainty caused by cluster initialization.
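
A minimal sketch of the k-means++ seeding rule described above (the D²-weighted sampling is the standard Arthur–Vassilvitskii procedure; the implementation details are our own):

```python
import numpy as np

def kmeans_pp_init(data, k, rng=np.random.default_rng(0)):
    """k-means++ seeding: each new center is drawn with probability
    proportional to the squared distance to the nearest chosen center
    (D^2 weighting)."""
    n = len(data)
    centers = [data[rng.integers(n)]]          # first center: uniform at random
    for _ in range(k - 1):                     # k sequential passes over the data
        d2 = np.min([np.sum((data - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(data[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)
```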

The main disadvantage of k-means++ lies in its inherently sequential execution: to obtain k cluster centers, the data set must be traversed k times, and the calculation of the current cluster center depends on all previously obtained centers, which makes the algorithm impossible to parallelize and greatly limits its application on large-scale data sets.

The main idea of k-means∥ is to change the sampling strategy of each traversal. Instead of sampling only one point per traversal as in k-means++, O(k) points are sampled per traversal, and the sampling process is repeated about O(log n) times; after repeated sampling, a set of O(k log n) sample points is obtained, which approximates the optimal solution within a constant factor. The O(k log n) points are then clustered into k points, which are fed into the Lloyd iterations as the initial cluster centers.
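
The oversampling loop can be sketched as follows; this is a single-machine illustration of the idea (in the distributed version each pass is a parallel map over the data blocks), with `ell` and `rounds` standing in for O(k) and O(log n):

```python
import numpy as np

def kmeans_parallel_init(data, k, ell, rounds, rng=np.random.default_rng(0)):
    """k-means|| oversampling: each pass draws about `ell` = O(k) points
    with probability proportional to their current cost; after `rounds`
    ~ O(log n) passes the candidate set has roughly O(k log n) points."""
    C = [data[rng.integers(len(data))]]            # first center: uniform at random
    for _ in range(rounds):
        d2 = np.min([np.sum((data - x) ** 2, axis=1) for x in C], axis=0)
        p = np.minimum(1.0, ell * d2 / d2.sum())   # per-point oversampling probability
        C.extend(data[rng.random(len(data)) < p])  # one parallel-friendly scan
    # the candidates are then reduced to k centers by a weighted clustering
    # on the driver (e.g., weighted k-means++) before the Lloyd iterations
    return np.array(C)
```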

k-means∥ is directly inspired by k-means++: it first selects a point uniformly at random as the initial center and computes the cost ψ of the sample set with respect to the current center set; in each pass, points are sampled with probability proportional to their cost, the sampled points are added to the candidate set C, and the sampling cycle continues. In general, the number of samples in C is significantly lower than the total number of samples.

3.4. Scalable Parallel Fuzzy C-Means

This section presents a parallel scalable FCM algorithm that combines the Spark programming model and special initialization methods. Although Hadoop is the most popular distributed processing framework, Spark provides a richer programming model and more efficient support for iterative, interactive tasks than Hadoop.

In order to obtain a better initialization process and better algorithm performance, this paper introduces a specific initialization method adapted from k-means++ and k-means∥. A series of RDD operations provided by Spark, i.e., a series of transformation and action functions, are used in our implementation of the parallel scalable fuzzy c-means algorithm.

Considering the interactive characteristics of data analysis tasks and the architectural characteristics of Spark MLBase, in the experiment, we compiled the algorithm implementation directly into the Spark MLlib library, so that the program can be run interactively directly in the Spark shell.

Since many RDD operation functions are involved in the algorithm pseudo-code, the important RDD operations are listed first, as sketched below.
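
The following PySpark fragment illustrates the main operations (the paper’s implementation lives in Spark MLlib/Scala; the file path and the helpers `cost` and `sample_block` are hypothetical stand-ins for the initialization steps):

```python
from pyspark import SparkContext

def cost(p, centers):
    # squared distance of a point to its nearest current center
    return min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)

def sample_block(block, centers, total_cost, factor=2.0):
    # oversample points with probability proportional to their cost
    import random
    for p in block:
        if random.random() < factor * cost(p, centers) / total_cost:
            yield p

sc = SparkContext(appName="scalable-fcm")

# textFile / map: load the data set as an RDD and parse each record
points = sc.textFile("hdfs:///data/points.txt") \
           .map(lambda line: [float(v) for v in line.split(",")])

# takeSample: draw one point uniformly at random as the first center
centers = points.takeSample(False, 1)

# broadcast: ship the current center set to every executor once
centers_bc = sc.broadcast(centers)

# map + reduce: per-sample cost, then the total cost over the whole RDD
total_cost = points.map(lambda p: cost(p, centers_bc.value)) \
                   .reduce(lambda a, b: a + b)

# mapPartitions + collect: each block samples candidates independently in
# parallel, and the small candidate set is gathered back to the driver
candidates = points.mapPartitions(
    lambda part: sample_block(part, centers_bc.value, total_cost)).collect()
```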

FCM is sensitive to initialization, which affects the iterative process: poor initialization leads to excessive iterations and local optima, and random initialization does not guarantee the stability or performance of the iterations. In large-scale data analysis scenarios, using a cluster for distributed computing raises the cost of each analysis task, so the algorithms used in big data analysis should provide relatively stable quality; in a distributed environment, one typically cannot afford to run the algorithm many times and average the results. Introducing the k-means∥-style initialization allows us to approximate the optimal initial cluster centers, which speeds up convergence, reduces the number of iterations, improves the quality of the results, and stabilizes the iterative process, avoiding repeated re-runs of the algorithm in a distributed environment.

Sub-sampling means that, during initialization, a number of samples are first drawn within each block according to probability, and the obtained samples are then sampled again locally according to probability to obtain c samples as the initial cluster centers.

The algorithm can be roughly divided into three stages. First, samples are drawn with probability proportional to each sample’s cost relative to the total cost. Then, the resulting smaller set of samples is weighted by the number of samples assigned to each candidate’s category, and an even smaller set is obtained by sampling according to the weighted probabilities. Finally, an FCM clustering is performed locally on the driver side, and the remaining few samples are clustered into c cluster centers as the output of the initialization process.

The detailed initialization process extends the k-means∥ scheme to FCM, and the pseudo-code directly uses the names of the Spark functions. The inputs of the algorithm include the data file on the distributed file system, the number of clusters c, the oversampling factor used during initialization, and the fuzziness index m of FCM; the output is a set of cluster centers, which is fed into the subsequent iterations. Each step of the algorithm corresponds to an RDD operation. In the first phase, the target file is loaded as an RDD, and in the second phase, a data point is selected as the first initial center of the cluster. Phases 3–13 are the iterative core of the initialization process adapted from k-means∥. A constant number of sampling rounds is considered more efficient than O(log n) rounds, so the number of initialization iterations is fixed at 6. Each point in the candidate set C is then weighted by the number of samples assigned to its category when the points of C act as cluster centers. Taking the obtained candidate set C as the intermediate result, a local FCM is executed on the driver side, and the O(c log n) center points obtained in the previous steps are quickly clustered into c points; that is, the final initial cluster centers are obtained. The schematic diagram of the scalable FCM initialization process is shown in Figure 2.

After explaining the meaning of each step in the algorithm initialization process, we describe the algorithm idea of the entire initialization process.

The first stage performs distributed probability sampling on all samples. The diagram of distributed processing is simplified to three blocks operating in parallel, but in practice it can be scaled horizontally to any number of blocks. Each data record is first flattened into a one-dimensional vector, the sample cost of each sample is computed one by one, and a reduce-sum yields the total cost of the sample set. Probability-based sampling is then carried out independently in each block, the sampled points from all blocks are collected, and a small number of samples whose costs account for a large proportion of the total cost are obtained.

The second stage weights the small number of candidates obtained and samples according to the weighted probability: the remaining samples are assigned to the candidates’ categories, the number of samples falling into each category is computed in a distributed manner, and this count is used as the weight of each candidate center, i.e., the proportion of the sample cost to the overall cost when these few samples serve as cluster centers; a smaller number of candidates with larger proportions are then selected.

In the third stage, a fast local FCM clustering is performed: the smaller number of candidates is quickly clustered into c points, and these c points are used as the initial cluster center set.
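
Putting the three stages together, the following single-machine sketch mirrors the initialization (in the Spark version, stages one and two run as map/reduce steps over the data blocks; the parameter defaults and the simple weighted FCM in stage three are our own illustrative choices):

```python
import numpy as np

def fcm_scalable_init(data, c, ell=None, rounds=6, m=2.0,
                      rng=np.random.default_rng(0)):
    """Three-stage scalable FCM initialization sketch (single machine)."""
    ell = ell or 2 * c
    # Stage 1: oversample candidates with probability proportional to cost
    C = [data[rng.integers(len(data))]]
    for _ in range(rounds):
        d2 = np.min([np.sum((data - x) ** 2, axis=1) for x in C], axis=0)
        p = np.minimum(1.0, ell * d2 / d2.sum())
        C.extend(data[rng.random(len(data)) < p])
    C = np.array(C)

    # Stage 2: weight each candidate by the number of samples in its category
    owner = np.argmin(((data[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
    w = np.bincount(owner, minlength=len(C)).astype(float)

    # Stage 3: fast local weighted FCM on the driver, clustering the
    # candidates down to c initial centers (assumes len(C) >= c)
    centers = C[rng.choice(len(C), size=c, replace=False)]
    for _ in range(20):
        d = np.linalg.norm(C[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships of candidates
        um = (u ** m) * w[:, None]                 # weighted fuzzified memberships
        centers = (um.T @ C) / um.sum(axis=0)[:, None]
    return centers
```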

The algorithm finally outputs a set of cluster centers, according to which all samples can be classified by the principle of maximum membership. The iterative part of the algorithm is processed in a distributed manner: the partial sums for the cluster-center update are calculated on each block in parallel and then reduced to solve for the new cluster centers.
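
The per-block partial-sum update can be sketched as follows (a local illustration of the map/reduce pattern described above; the block partitioning is simulated with a list of arrays):

```python
import numpy as np

def fcm_block_partials(block, centers, m=2.0):
    """Map step: one block's partial numerator/denominator sums for the
    FCM center update."""
    d = np.linalg.norm(block[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
    u = 1.0 / d ** (2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)     # memberships of this block's samples
    um = u ** m
    return um.T @ block, um.sum(axis=0)   # partial numerator and denominator

def fcm_update(blocks, centers, m=2.0):
    """Reduce step: sum the partials over all blocks and solve for the
    new cluster centers."""
    num, den = zip(*(fcm_block_partials(b, centers, m) for b in blocks))
    return sum(num) / sum(den)[:, None]
```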

The scalability of the processing scale mainly refers to the scalability of the algorithm in the scale of the data that can be processed. Both vertical and horizontal scalability are related to the distributed architecture selected by the algorithm. In terms of vertical scalability, if the cluster is upgraded by adding computing resources such as processors, the algorithm can effectively utilize the application performance improvement brought by the cluster upgrade without modification.

In terms of horizontal scalability, if more servers are added to the cluster to enhance the computing power of the cluster, there is no need to modify the algorithm, and larger-scale cluster analysis support can be provided immediately and effectively.

4. Results and Analysis

4.1. Overall Description of the Academic Performance of Normal Students

This study divides the intern normal students into a middle school group, a primary school group, and a preschool group according to the schools in which they practice and the students they teach. The primary school group consists of intern normal students from education majors, music education, art education, and some Chinese language majors who practice in primary schools; the preschool group is mainly composed of preschool education majors. At the same time, according to the different types of courses, the courses taken by normal students are divided into two categories: education courses (mainly teacher education courses such as pedagogy and educational psychology) and professional courses. We calculate the average scores of each normal student in the two types of courses, summarize the averages for each group, and use them to explore the relationship between the normal students’ course academic performance and practice grades.

Firstly, the overall data description and analysis are carried out for the two types of course academic performance and internship results of normal students in each group, which lays the foundation for further exploration of the relationship between the course academic performance and internship appraisal results of normal students in each group. The results of education courses of normal students in each group are summarized and counted, and the statistical results are shown in Figure 3.

Through the statistics and description of the grades of normal students in educational courses, we can have a more intuitive understanding of the learning level of normal students in each group of educational courses. The average grades of the three groups in the education courses of normal students are relatively similar, the average grades of the education courses of normal students in the primary school group are slightly higher than those of the other two groups, and the standard deviation and variance are relatively small. The average score of normal students in the preschool group is relatively low among the three groups, while the standard deviation and variance are slightly higher than those of the middle school group and the primary school group, and the degree of dispersion of scores is relatively large. The results of professional courses of normal students in each group are summarized and counted, and the statistical results are shown in Figure 4.

4.2. Overall Description of Normal Students’ Internship Achievements

By describing the internship results of the three groups of normal students, we can have a basic understanding of the status of the normal students’ internship performance and make statistics and descriptions on the basic status of the three groups of normal students’ internship results. The statistical results are shown in Figure 5.

The average practice scores of the three groups of normal students were all higher than 90 points, and the average values of the three groups of normal students were relatively close. The average practice score of the preschool group was the highest among the three groups, reaching 92.5 points. And the standard deviation of the three groups of normal students’ practice scores is about 4. It can be seen that the overall scores of normal students’ practice scores are relatively concentrated, and the score value is relatively high.

In order to further explore the distribution of the practice scores of the three groups of normal students, a normality test was carried out: the practice scores of the three groups were input into SPSS 24. In the normal probability plot, the degree to which the plotted values form a straight line indicates the normality of the data: when the points lie almost on a straight line, the input values satisfy a normal distribution; otherwise, they do not.
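
For readers without SPSS, the same checks can be reproduced as follows (a sketch; the file name is a hypothetical placeholder for one group’s internship scores):

```python
import numpy as np
from scipy import stats

# Hypothetical file holding one group's internship scores, one per line
scores = np.loadtxt("middle_school_internship_scores.txt")

# Kolmogorov-Smirnov test against a normal with the sample's mean and std
ks_stat, ks_p = stats.kstest(scores, "norm",
                             args=(scores.mean(), scores.std(ddof=1)))

# Shapiro-Wilk test, generally preferred at moderate sample sizes
sw_stat, sw_p = stats.shapiro(scores)

# p < 0.05 on either test rejects the null hypothesis of normality
print(f"K-S p = {ks_p:.3f}, S-W p = {sw_p:.3f}")
```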

The Kolmogorov–Smirnov statistic of the internship scores of the normal students in the middle school group is 0.087, and the corresponding significance level is 0.001, which is less than 0.05. Therefore, the null hypothesis is rejected, and it is concluded that the internship scores of the normal students in the middle school group are not normally distributed. The Shapiro–Wilk statistic of the same scores is 0.974, with a significance level of 0.002 (less than 0.05). Combining the two tests, the practice scores of normal students in the middle school group do not conform to a normal distribution.

The skewness of the distribution of practice grades in the primary school group is −0.271, so the distribution of the practice grades of normal students in the primary school group is negatively skewed. The Kolmogorov–Smirnov statistic of the practice scores of the normal students in the primary school group is 0.106, and its significance level is 0.001, which is less than the 0.05 level. The null hypothesis is rejected, and it is believed that the practice scores of the normal students in the primary school group do not show a normal distribution. The Shapiro–Wilk statistic of the practice scores of the normal students in the primary school group is 0.975, and its significance level is 0.002 (less than 0.05). Combining the above two judgments, it is concluded that the practice scores of normal students in the primary school group do not conform to the normal distribution.

The Kolmogorov–Smirnov statistic of the practice scores of the normal students in the preschool group is 0.136, with a significance level of 0.001 (less than 0.05), so the null hypothesis is rejected. The Shapiro–Wilk statistic of the same scores is 0.935, with a significance level of 0.001 (less than 0.05), which also rejects the null hypothesis. Combining the two tests, the practice scores of the normal students in the preschool group do not conform to a normal distribution.

4.3. Correlation Analysis of Course Grades and Internship Grades for Normal Students in the Middle School Group

In order to further clarify whether, and to what degree, the learning level of the normal students’ curriculum affects their educational practice, this study analyzes the degree of correlation between the normal students’ course grades and their practice appraisal results. Through correlation analysis, it explores the relationship between curriculum learning level, normal students’ educational practice performance, and the development of their teaching practice ability, and the data analysis results are further verified and explained in combination with the interview survey.

The previous normality tests showed that the practice scores of all three groups of normal students are skewed, so the two types of course grades and the practice scores of the three groups were converted into standard Z scores to meet the requirements for further correlation analysis. The standardized conversion of each group’s scores was performed with SPSS.

The Pearson correlation analysis method was used to analyze the pedagogical course grades, professional course grades, and internship appraisal grades of normal students in the middle school group. The analysis results are shown in Figure 6.
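
Outside SPSS, the standardization and Pearson analysis can be reproduced with the following sketch (the file names are hypothetical placeholders for one group’s scores):

```python
import numpy as np
from scipy import stats

# Hypothetical files holding one group's course and internship scores
course = np.loadtxt("pedagogy_course_scores.txt")
intern = np.loadtxt("internship_scores.txt")

# Standard Z-score conversion, as applied to each group's grades
z_course = (course - course.mean()) / course.std(ddof=1)
z_intern = (intern - intern.mean()) / intern.std(ddof=1)

# Pearson product-moment correlation and its significance level
r, p = stats.pearsonr(z_course, z_intern)
print(f"r = {r:.3f}, p = {p:.3f}")
```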

In the middle school group, educational course grades and practice appraisal grades are significantly positively correlated at the 0.01 level, with a product-moment correlation coefficient of 0.301 < 0.4; professional course grades and practice appraisal grades are also significantly positively correlated at the 0.01 level, with a correlation coefficient of 0.412. It can be seen that there is a significant but low positive correlation between the educational course grades of normal students in the middle school group and their practice appraisal results, and a significant positive correlation between professional course grades and practice appraisal results; that is, students with higher grades in professional courses tend to have relatively better practice appraisal grades.

4.4. Correlation Analysis between the Course Grades and the Practice Grades of Normal Students in the Primary School Group

The Pearson correlation was carried out on the grades of pedagogical courses, professional course grades, and internship appraisal grades of normal students in the primary school group, and the statistical results are shown in Figure 7.

The significance test of the correlation coefficient between the education course grades and the practice grades of the normal students in the primary school group gave a value of 0.092 > 0.05, which does not reach the significance level, while the significance test of the correlation coefficient between the professional course grades and the practice grades gave a value less than 0.01, with a product-moment correlation coefficient of 0.392, showing a significant positive correlation. It can be seen that there is no significant correlation between the educational course grades and the practice grades of the normal students in the primary school group, the correlation coefficient being only around 0.1; it can therefore be judged that the educational course grades of the primary school group do not have a significant effect on practice grades. However, there is a significant positive correlation between professional course grades and practice grades; that is, primary school group students with higher professional course grades tend to have relatively higher practice grades.

4.5. Correlation Analysis between the Course Grades and Internship Grades of Normal Students in the Preschool Group

The Pearson correlation analysis method was used to analyze the pedagogical course grades, professional course grades, and internship appraisal grades of the normal students in the preschool group. The analysis results are shown in Figure 8.

The product-moment correlation coefficient between the educational course grades and the practice appraisal grades of the normal students in the preschool group is 0.210 < 0.4, and the significance value of the correlation coefficient test is 0.042 < 0.05, reaching the 0.05 significance level. The product-moment correlation coefficient between professional course grades and practice appraisal grades is 0.323 < 0.4, and the significance value of the test is 0.001 < 0.01, reaching the 0.01 significance level and showing a significant positive correlation. It can be seen that there is a significant but low positive correlation between the preschool group’s educational course grades and internship grades, and an extremely significant but low positive correlation between professional course grades and internship grades; that is, students with higher professional course grades tend to have relatively higher practice grades.

5. Conclusion

The parallel expansion of fuzzy c-means is designed on the Spark programming model, but fuzzy c-means adopts a strategy of randomly selecting cluster centers during initialization, which brings great uncertainty to the iteration of the algorithm and the accuracy of its results, and in large-scale scenarios the cost of this uncertainty is very high. In order to enhance the performance of fuzzy c-means after parallel expansion, the improved initialization strategy of k-means∥ is extended to fuzzy c-means to obtain better clustering performance. This study concludes that, in the course training of normal students, the influence of pedagogical knowledge level on teaching practice ability is lower than that of professional knowledge level. The reasons are mainly analyzed in two aspects: first, normal students cannot correctly understand and apply pedagogical knowledge in the practice process; second, there are deficiencies in the setting and arrangement of some pedagogical courses, which therefore cannot effectively guide the teaching practice of normal students. In the educational practice guidance link, the teaching practice ability of normal students is divided into three levels, excellent, medium, and poor, based on the comments of the instructors, and the problems in teaching practice ability are analyzed. The correlation analysis of professional course grades and practice grades shows a significant positive correlation for all three groups of normal students: the level of professional subject knowledge has a significant impact on practice performance and a certain impact on the development of teaching practice ability.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest or personal relationships that could have appeared to influence the work reported in this paper.