Abstract
Experimental instructional design is an important pedagogical component of university teaching and learning and an important means of cultivating students' innovative spirit and practical skills; it plays a role that no other means of teaching can replace. Assessment of learning, assessment for learning, and assessment as learning are three paradigms of educational assessment that complement each other in achieving curricular and pedagogical goals and together form learning-based assessment. As an important component of national science and technology development, measuring the effectiveness of laboratory instructional design in universities and research institutions is of special significance. This paper presents the authors' research on the background, evaluation characteristics, evaluation content, and methods of experimental teaching evaluation in the information technology environment, with examples of their application.
1. Introduction
Experimental teaching is an important part of science and technology education and an important means of cultivating students' innovative spirit and practical ability; it has a status and role that cannot be replaced by any other teaching method or means [1].
Informatization is a hallmark of the 21st century. As informatization advances, modern computer-based educational technology is developing rapidly; laboratory teaching equipment is gradually becoming digitalized, computerized, and networked; and the era of informatized laboratory teaching has arrived. Making full use of modern information technology as an educational tool can significantly improve teaching methods, teaching efficiency, and teaching quality [2]. The informatization of laboratory teaching improves experimental teaching in terms of teaching methods [3], teaching effectiveness, and teaching quality, and raises experimental management to a new level [4]. Clearly, the gradual informatization of education, classroom teaching, and experimental teaching in the 21st century is both inevitable and necessary.
Effective teaching in information-based teaching differs from traditional effective teaching in that it refers to effective teaching in a teaching environment supported by information technology. IT-supported teaching breaks the limitations of time, space, and resources in traditional teaching and can make full use of the advantages of information technology to carry out various IT-based teaching modes, such as project-based teaching, problem-based teaching, network collaborative learning, and case-based learning, which are conducive to improving teaching quality and the full development of students. The application of information technology in teaching does not by itself make teaching effective. Some studies have shown that the success of IT-supported teaching should be attributed more to good instructional design and adequate preparation. Therefore, to examine whether informatized teaching is effective, it is still necessary to start from whether the teaching accomplishes its objectives and promotes students' learning, and to synthesize factors such as the teaching purpose, the informatized teaching mode applied, the use of information technology, and the teaching process, so as to explore the objective laws governing the effectiveness of informatized teaching.
Educational assessment is the weather vane and baton of educational reform and development [5]. Because education is long-term, generative, and delayed in its effects, and because assessment is indirect, implicit, and subjective, the scientific and effective organization and implementation of educational assessment has long been a difficult problem in educational practice [6]. Educational evaluation shapes the direction of educational development, and it is gradually shifting in the following directions: improving result evaluation, strengthening process evaluation, exploring value-added evaluation, improving comprehensive evaluation, and establishing a scientific educational evaluation system and mechanism suited to different subjects and to the characteristics of different levels and types of education.
Evaluation of laboratory teaching is a necessary method and tool for analyzing and recognizing the pedagogical quality and efficiency of laboratory teaching. It is also often assumed that the act of measuring memory does not change memory [7], and most educational practices focus on strengthening students' processing of knowledge, that is, on getting it into their heads. The purpose of evaluating experimental teaching in the informatized environment is to recognize its laws, problems, and shortcomings, and to provide a basis for improving experimental teaching quality and for experimental teaching improvement, research, and development, so as to meet the needs of talent training in universities in the information era. Studies of how students learn [8] make no mention of a retrieval-based method [9], and retrieval of memory remains controversial in specific teaching settings [10]: it has been reported as effective [11], ineffective [12], or ambiguous [13]. For a long time, universities across the country have invested substantial human and financial resources in experimental teaching, accumulated rich experience, and cultivated a large number of talents; the evaluation system for classroom theory teaching is relatively mature, but the system for evaluating the quality of experimental teaching has been little studied [14]. With the involvement of information technology in experimental teaching and the year-by-year increase in requirements for evaluating scientific teaching concepts [15], it is urgent to study and develop an evaluation system for experimental teaching under the new situation to evaluate and monitor experimental teaching.
Whenever new media technologies emerge, some researchers are eager to introduce them into teaching, expecting to use their advantages to improve teaching or solve teaching problems [16]. However, as the cult of modern media cools down [17], people gradually shift from the blind pursuit of media technology to research on the effectiveness of information technology in teaching, yet always fall into the awkward pattern of "a new technology is introduced, the experiment succeeds, and the promotion fails." In this era of information technology, what is wrong with research on the effectiveness of experimental technology in teaching, and why can the research results not be generalized to common practice? Now that smart classrooms integrating a variety of high-end information technologies have entered the vision of educational technology researchers, how should we sensibly configure the various technologies in the smart classroom and pragmatically introduce it into teaching practice? There is an urgent need for a scientific rationale for the pedagogical application of experimental technologies.
Research on the effectiveness of experimental instructional designs often uses a simple two-class comparison experiment, but such comparison experiments can hardly prove the instructional effectiveness of experimental designs, because empirical research of this kind has many gaps.
1.1. Misplacement of the Research Question
Comparative experiments on technology application generally explore which is superior: teaching with or without the involvement of a particular technology, or teaching involving different experimental technologies, and they attribute the advantage to the application of a particular technology. In pedagogical practice, however, the primary task is to apply technology to improve instruction (a practical goal) rather than to demonstrate the superiority of a particular technology (a theoretical goal). A technology has specific functions, and these objective functions do not need to be tested repeatedly in pedagogical research. In reality, when different people use the same technology, there will certainly be differences in the extent to which its functions are used, but this difference is not caused by the technology itself; it arises from multiple factors outside the technology. For an emerging technology, acceptance or rejection should not be decided by the merits of its performance alone. Therefore, the basic question of technology application research is not to prove the superiority of a particular technology, but to explore which particular experimental techniques are most applicable and how they can be combined with other pedagogical elements to achieve optimal results. The study of the instructional system associated with a technology is more important than the study of its advantageous functions.
1.2. Poor Definition of the Comparison Item
Comparison experiments often judge the merits of a technology by whether the teaching effect is good or bad, and inadvertently take the technology as a whole (with or without it, this one or that one) as the comparison item. However, a technology and its products often embody multiple attributes and functions with completely different effects on teaching and learning, so it is difficult to say what is actually being compared when the technology as a whole serves as the comparison item. Imagine trying to compare visual media and auditory media in terms of the intuitiveness of content presentation, or directly comparing a computer plus projector with a blackboard. When technologies are compared in such general terms, the conclusions obtained can hardly indicate anything. That is, technology products are meaningfully comparable only with respect to the same type of information and the same pedagogical function. Moreover, comparisons of the role of the same technology product in different instructional contexts would be interesting, but are unfortunately seldom done.
1.3. Evidence of Ineffectiveness
The general idea of validity testing of technology instructional applications is to compare the effectiveness of experimental and control classes and to attribute any improvement in instructional effectiveness to the application of the technology. There are two loopholes here. First, effectiveness here usually refers to the effect of an educational intervention in a specific context, which is in fact based on "client satisfaction," such as improved academic performance, increased motivation, a positive learning attitude, and a good experience with a technology product [18]. However, "client satisfaction" is not an "objective" effect. Second, the effect of teaching is the result of complex interactions among the elements of teaching activities; it reflects the overall operation of those activities and cannot be attributed to any local element. Therefore, we cannot infer the effectiveness of a technology application from teaching effectiveness [19]. To take a step back, even if a technology is effective, it is effective only in a specific teaching context and has no universal applicability out of context. Due to the non-reproducibility of teaching activities, we simply cannot prove the pedagogical validity of a technology in principle. In fact, all tests are only tests of feasibility in a particular context.
1.4. Defective Experimental Design
This phenomenon can be considered widespread both at home and abroad. Single-factor equivalent-group experiments are the simplest experimental teaching comparison studies, and even these designs are flawed; multifactor experimental teaching comparison experiments are even more seriously flawed in their design. Although this phenomenon is directly related to designers' literacy, it is not essentially caused by the experimental designer but by the faulty rationale of experimental teaching comparison experiments. Unlike other scientific experiments, teaching experiments that seek evidence of teaching effectiveness must consider the individual person as a variable, because the individual as a whole is involved in the experimental process [20]. However, the individual is in an open process of self-creation and cannot be objectified or conceptualized, so we cannot treat the individual in a teaching experiment as a variable (operationally speaking, the individual is not controllable). Since the teaching effect is inseparable from the unique contribution of each individual student, a comparison experiment that takes the teaching effect as its handle can hardly show that the achievement of the teaching effect is directly related to the use of technology, no matter how strictly the experimental environment is controlled. Therefore, no matter how a comparison experiment is designed, it will have flaws in principle, and no researcher can escape this flawed research rationale. It is for this reason that the field of education refers to these types of teaching experiments as "quasi-experiments." Such quasi-experiments have some exploratory research value, but they can only test the feasibility of local methodological elements, and such feasibility is often obvious and does not require empirical study.
1.5. Treating Quasi-Conclusions as Conclusive
The conclusions drawn from quasi-experiments in teaching should be "quasi-conclusions." However, both researchers and practitioners have "inadvertently" promoted quasi-conclusions into generalized conclusions, which has led to nothing but confusion and dogma in teaching practice. When people experiment with technology applications, they generally delve into the functional characteristics of a particular technology, customize the content for that technology, and provide as much support as possible in terms of resources, funding, and policy, so that the functionality of the technology is brought into full or even extreme play, with satisfactory results. One can imagine how costly such a teaching experiment is. In fact, although this kind of teaching experiment takes place in a "real situation," it is not a teaching experiment in a "natural situation," because it is a nonstandard teaching practice. The teaching application of media eventually returns to routine work at regular cost, and most of the support provided for the experiment is withdrawn. So, is the specific functionality of the technology really needed in regular teaching? If so, will its actual utility be as good as it was during the experiment? These are all uncertainties. If we cannot use the quasi-conclusions of the experiment to address these issues in regular teaching, the quasi-conclusions are naturally useless. At best, they tell us that someone has made it work before.
We can think about this issue from a different perspective: the pedagogical use of technology may indirectly affect the effectiveness of teaching and learning by increasing the “goal-means” coherence of the teaching system, student engagement, or the adaptability of the teaching system under certain conditions. Therefore, research on the use of technology should examine more the actual role and contribution of technology products in conventional teaching and learning rather than proving its pedagogical effectiveness in isolation [21]. Such empirical studies, while not getting bogged down in quasi-experiments, require an information flow-based approach to instructional systems analysis in order to explore the details of the role of technology products in teaching and learning.
Experimental teaching design validity analysis is a very important experimental design issue [22]. Studying the relative validity of each basic unit facilitates improvement of the experimental management model and the development of rational planning, thus maximizing the effectiveness of experimental design [23]. Scientific measurement of experimental design validity can reflect the degree of effective subjective effort put into improving experimental design, which can motivate each unit [24]. In the past, hierarchical analysis has been used to measure the effectiveness of teaching experimental design and judge how well each unit has designed its experiments: the weight parameters of each research outcome indicator are determined by hierarchical analysis, each indicator is multiplied by its weight, and the products are summed, with the result serving as the final comparative score of each unit. Such a measurement method hardly motivates the assessed units. Units with good basic conditions tend to score higher on the indicators, so their weighted sums naturally rank near the top, while units with poor basic conditions, however hard they work, achieve limited results and are unlikely to rank high. This greatly dampens the motivation of units with poor basic conditions; worse, some units with good basic conditions rest on their laurels, and their assessment scores remain at the top even though they have actually regressed.
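As a hypothetical illustration of the traditional weighted-sum method described above (the indicator names and weights are our own assumptions, not the paper's), the scoring step can be sketched as:

```python
# Traditional weighted-sum scoring of outcome indicators (AHP baseline).
# Indicator names and weights below are illustrative assumptions.

indicator_weights = {
    "experiment_completion": 0.35,
    "report_quality": 0.25,
    "innovation": 0.20,
    "equipment_utilization": 0.20,
}

def composite_score(unit_scores):
    """Multiply each indicator by its weight and sum the products."""
    return sum(w * unit_scores[name] for name, w in indicator_weights.items())

unit_a = {"experiment_completion": 90, "report_quality": 85,
          "innovation": 70, "equipment_utilization": 80}
print(round(composite_score(unit_a), 2))
```

As the text notes, such a score largely tracks each unit's basic conditions, which is precisely the motivational problem that a relative evaluation tries to remove.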
In this paper, the idea of binary relative effectiveness used to measure the economic efficiency of enterprises is transferred to measuring the effectiveness of teaching experiment design in colleges and universities. The effectiveness of each university's teaching experiment design in the previous period, measured by the hierarchical analysis method, is taken as a measure of its basic conditions and regarded as an input, while the corresponding current result, also measured by the hierarchical analysis method, is regarded as an output; the C2GS2 model of the Data Envelopment Analysis (DEA) method [25] is then used to calculate the relative evaluation results among the evaluation units. This relative evaluation can eliminate the influence of each university's objective base conditions, giving decision units with different base conditions the same "benchmark" and thereby achieving fair and objective evaluation that truly reflects the validity of the experimental design. We call this the binary relative evaluation method. Using it to evaluate the effectiveness of teaching experiment design can motivate universities with different basic conditions. Against this background, this study focuses on problems related to the experimental teaching evaluation system in the information technology environment.
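Under the simplifying assumption of a single input (the previous composite index) and a single output (the current composite index) per unit, the relative-evaluation idea can be sketched as below. Note this ratio form is only an illustration with made-up data; the C2GS2 model proper solves a linear program for each decision unit.

```python
# Simplified relative evaluation: one input (previous composite index) and
# one output (current composite index) per university. With a single input
# and output, a DEA-style efficiency reduces to each unit's output/input
# ratio scaled by the best ratio observed. Data are hypothetical.

def relative_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

prev_index = [78.0, 60.0, 85.0]   # base conditions (input)
curr_index = [82.0, 70.0, 86.0]   # current results (output)
effs = relative_efficiency(prev_index, curr_index)
print([round(e, 3) for e in effs])
```

Here the university with the weakest base conditions but the largest relative improvement (the second one) receives the top score of 1.0, which is exactly the fairness property the binary relative evaluation method aims for.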
2. Establish the General Index System and Participation Parameters
Research on the teaching application of technology must go beyond the fruitless comparative-experiment approach discussed above and start from teaching system analysis to study the teaching function of technology products in normal teaching. At present, there are two main perspectives on teaching system analysis: one views the teaching system as a human behavioral system and analyzes the behavior of students and the relationships between them; the other views the teaching system as an information system with specific functions, inferring the overall properties of teaching from the local characteristics of information flow and revealing the relationship between information flow and teaching functions [26]. Analyzing the teaching system from the behavioral perspective suffers from a complicated and confusing coding system, mechanical and arbitrary cut scores, and weak interpretation of results. Moreover, it mostly analyzes students' external behavior and rarely involves technical elements; even when the technical dimension is included in an improved scale, it only judges at the operational level whether a medium is used, without revealing what specific role the media information plays in teaching. In addition, understanding or describing the teaching system through the external verbal behaviors of teachers and students ignores the flow of knowledge and information behind those behaviors and fails to establish a link between teaching behaviors and teaching effects, so the research findings lack both support and real teaching guidance.
The IIS (Instructional Information Set) diagram analysis method focuses on the relationship between the teaching function and the marked IIS information output by three types of information processing subjects in the teaching system, namely, teachers, students, and information media, and infers the overall properties of teaching from the local characteristics of the information flow in these teaching processes. Here, the teaching system refers to a system of information flow among these three types of information processing subjects; it is essentially an information system composed of students, information media, and teachers, together with their input and output information, plus the IIS expressing the socially shared knowledge set. The conceptual model of the teaching system is shown in Figure 1.

Information processing of teacher (IPT), Information processing of learner (IPL), and Information processing of media (IPM) represent the processing of information by teachers, learners, and information media, respectively. {X} and {Y} represent the input and output of their information processing processes. In the conceptual model, the information output Y from the three information processing subjects is extracted and structured as "information items" with the representation format "<contributor> <operation> <information type> <representation form> <IIS subgraph> [<information quality>] [<content annotation>]" (as shown in Figure 2), and the set of information outputs Y is abstractly summarized as the Instructional Information Set IIS, which is used to characterize the shared nature of knowledge. Other elements that are not directly related to information processing are categorized as environmental elements of the instructional system, such as students' prerequisite knowledge and skills, teacher-student relationships, and students' motivation levels. Environmental factors affect the instructional system, but these effects are ultimately expressed through externalized information output. The conceptual model of the instructional system reflects not only the flow of information among teachers, students, and information media, but also the contribution of this multi-subject information flow to social knowledge construction, reflected in the amount of activation of knowledge points by the information flow. The IIS diagram analysis method specifies that only the information flows of "answer," "knowledge semantics," and "factual example" correspond to IIS knowledge subgraphs, and only information flows containing IIS knowledge subgraphs can contribute to the activation of knowledge points.
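To make the representation format concrete, an information item could be encoded as a small record. The field names follow the format above, while the example values and the activation helper are our own illustrative assumptions.

```python
# A possible encoding of the IIS "information item"
# "<contributor> <operation> <information type> <representation form>
#  <IIS subgraph> [<information quality>] [<content annotation>]".
# Example values and the activation rule are illustrative.

from dataclasses import dataclass
from typing import FrozenSet, Optional

# Per the IIS method, only these three flow types activate knowledge points.
ACTIVATING_TYPES = {"answer", "knowledge semantics", "factual example"}

@dataclass
class InformationItem:
    contributor: str                  # "teacher", "student", or "media"
    operation: str                    # e.g. "present", "ask", "answer"
    info_type: str                    # e.g. "knowledge semantics"
    representation: str               # e.g. "verbal", "text", "diagram"
    iis_subgraph: FrozenSet[str]      # knowledge points touched by this flow
    quality: Optional[float] = None   # optional information quality
    annotation: Optional[str] = None  # optional content annotation

def activated_points(items):
    """Collect knowledge points from flows whose type can activate them."""
    points = set()
    for it in items:
        if it.info_type in ACTIVATING_TYPES:
            points |= it.iis_subgraph
    return points

items = [
    InformationItem("teacher", "present", "knowledge semantics", "diagram",
                    frozenset({"normalization", "functional dependency"})),
    InformationItem("student", "ask", "question", "verbal", frozenset()),
]
print(sorted(activated_points(items)))
```

In this sketch, the student's question activates nothing, because "question" is not one of the three activating flow types; only the teacher's knowledge-semantics flow contributes its subgraph.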
Although the specific externalized behaviors or verbal information of teachers and students in the teaching process cannot be reproduced, the IIS knowledge subgraph behind the specific behaviors or information flow is an objective graph, and only the information flow that contains the IIS knowledge subgraph has value, and the specific expressions of the information flow have no essential influence on the operation results of the teaching system, so the teaching system in the sense of this information flow is reproducible. This reproducibility of the research object ensures the reproducibility and scientificity of the whole empirical study.

Research on the application of media technologies based on information flow analysis opposes the verification of the pedagogical effectiveness of technologies through comparative experiments and advocates the analysis of the actual pedagogical functions of specific technological products in the context of regular teaching [27]. Therefore, instead of interfering with teachers and students to deliberately use a certain technology in the teaching process, the researcher provides a variety of media technology choices and allows them to make their own trade-offs according to their needs, and then conducts an information flow analysis of the teaching activity process to determine in detail the real role of the selected technology product in the teaching process and the actual dependence of teaching on it [28].
Traditional experimental teaching methods are mostly validation experiments: students rely on detailed laboratory handouts to guide each step of the experiment, operate carefully and cautiously, and inevitably obtain the expected experimental data, thus verifying what they have learned in theory [29]. However, due to limitations in experimental conditions, time, equipment consumption, equipment integrity, laboratory management, and so on, students rely on the teacher's guidance and lack active thinking, creative thinking, and the habit of questioning experimental results, so the experimental abilities they acquire are limited. The introduction of computer-assisted teaching, with its multimedia, interactivity, and simulation, makes the whole teaching process more active and efficient [30]. Computer simulation allows students' design ideas to be examined, repeatedly modified, and optimized, and enables many experiments that could not previously be realized in the laboratory to be simulated. At the same time, laboratory local area networks and the intelligence, interactivity, and reliability of experimental and virtual instruments enable student-oriented personalized teaching, reduce teachers' repetitive work and the laboratory management workload, make open laboratories possible, and create conditions for remote experimental teaching in distance education [31].
The key to the reform of experimental teaching is the reform of the experimental teaching mode: taking students as the main body and designing the experimental process around comprehensive experiments. The design of comprehensive experiments requires students to have solid basic knowledge and broad knowledge, together with certain innovative abilities. An informatized experimental environment offers large-capacity storage of experimental data, selectable experimental programs, high comparability, and easy analysis and calculation of experimental data; digitalized equipment makes data sampling easy, and computers in the experimental system enable rapid processing [32]. Such an information environment fully mobilizes students' enthusiasm, initiative, and creativity in experiments, while also providing technical support for student-centered, teacher-guided experimental teaching design. Of course, changing the teaching mode also requires practice and scientific evaluation of the experimental teaching process, guided by learning-based instructional design theory and other educational technologies.
3. Comprehensive Algorithm for Measuring the Validity of Experimental Instructional Design
Before teaching, interviews and questionnaires were conducted with database-related experts, teachers, and former students to understand the learning needs of the database course and to analyze those needs in preparation for the effectiveness study. On this basis, we designed the teaching program and prepared the teaching materials according to the available technology and equipment. Then the first phase of teaching, "case study and collaborative web-based learning," i.e., "instructional design," was conducted. During the teaching process, changes and problems were recorded, and after the "instructional design" phase was completed, interviews and questionnaires were used to understand the effectiveness of the teaching at this stage. After analyzing the effectiveness, the teaching plan was revised and adjusted, and the next stage, the "multimedia courseware production" stage, was carried out until its end. After all the teaching was finished, we summarized the whole teaching and research process and proposed a model for analyzing the effectiveness of information technology teaching for undergraduates. The model is shown in Figure 3.

In the study, the effectiveness of stage-specific teaching is mainly understood by means of questionnaires to find out the existing problems and analyze the effectiveness. The analysis of the effectiveness of informatization teaching should comprehensively examine the implementation effect of informatization teaching mode, the application of information technology, and the teaching process to see whether the teaching effect meets the purpose of teaching and whether it effectively promotes students’ learning. Based on such considerations, this paper proposes the analysis model of information-based teaching effectiveness as shown in Figure 4 and uses it to analyze the effectiveness of information-based teaching on the basis of questionnaire survey. First, the purpose of teaching is determined according to the learning needs, and the teaching process is analyzed, mainly from examining two aspects: the information technology teaching model and the application of information technology. Then, we analyze what kind of teaching effect is produced after the teaching process and whether it conforms to the teaching purpose. There are many cases between the two extremes of fully conforming and not conforming at all, as shown by the double arrows in the figure. The author divides the teaching effect into five levels, and there are several cases between very effective and completely ineffective such as relatively effective, generally effective, and less effective.
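The five-level scale can be implemented as a simple mapping from a questionnaire-derived score to a level label. The cut points below are our own illustrative assumptions, not values given by the authors.

```python
# Mapping a questionnaire-derived effectiveness score (0-100) to the five
# levels named in the text. The cut points are illustrative assumptions.

def effectiveness_level(score):
    if score >= 90:
        return "very effective"
    if score >= 75:
        return "relatively effective"
    if score >= 60:
        return "generally effective"
    if score >= 40:
        return "less effective"
    return "completely ineffective"

print(effectiveness_level(82))
```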

Through the analysis of the effectiveness of experimental teaching, several main factors affecting the teaching effect were summarized: the knowledge and skill reserve before the experiment, the expected results and innovation of the experiment, the content of the experiment, the experiment management, the information environment, etc. The specific contents and the results summarized by the questionnaire research are shown in Table 1:
At this point, traditional information technology effectiveness research comes to an end. However, innovative effectiveness measurement must be supported by data or models. Therefore, we reanalyze the data summarized above to arrive at a more effective evaluation method. As discussed in the introduction, in this paper we analyze and summarize the survey data again using the binary relative evaluation method based on DEA.
The binary relative evaluation method for the analysis of the effectiveness of experimental design teaching in colleges and universities was carried out in two stages. First, the previous and current composite indices of the effectiveness of experimental design teaching in each university are measured by using the hierarchical analysis method, and then they are regarded as input and output, respectively, and their binary relative evaluations are measured by using the data envelopment analysis method. When using the hierarchical analysis method to measure the composite index of the management effectiveness of each university, a system of indicators of the effectiveness of experimental design teaching in universities and selected weight parameters are established. The intent of this method is to establish the system of measuring the effectiveness of experimental teaching design in universities using the principles of system engineering and hierarchical analysis.
After analyzing the above survey results, some relatively important evaluation factors were identified. However, because the survey sample was unstable and limited, we weighted the summarized influencing factors more rigorously through the hierarchical analysis method, as follows. The maximum eigenvalue is computed as in equation (1), the consistency index as in equation (2), and the consistency ratio as in equation (3):

$$\lambda_{\max} = \frac{1}{n}\sum_{i=1}^{n}\frac{(AW)_i}{w_i}, \quad (1)$$

$$CI = \frac{\lambda_{\max} - n}{n - 1}, \quad (2)$$

$$CR = \frac{CI}{RI}, \quad (3)$$

where $A = (a_{ij})_{n \times n}$ is the judgment matrix, $W = (w_1, \dots, w_n)^{T}$ is the weight vector, and $w_i$ is the i-th weighting factor. The judgment matrices of the A–B layer and the B–C layer, built from pairwise comparisons of the survey factors, are given in equations (4) and (5); each is a positive reciprocal matrix with $a_{ii} = 1$ and $a_{ji} = 1/a_{ij}$, where $a_{ij}$ records the relative importance of the i-th factor over the j-th factor.
The average random consistency index, RI, is obtained by averaging the consistency indices of a large number of randomly generated judgment matrices of the same order.
When $CR = CI/RI < 0.1$, the consistency of the judgment matrix A is generally considered acceptable.
The weighting factors are obtained by normalizing the row geometric means of the judgment matrix:

$$w_i = \frac{\bar{w}_i}{\sum_{j=1}^{n} \bar{w}_j}, \quad \bar{w}_i = \Big(\prod_{j=1}^{n} a_{ij}\Big)^{1/n}, \quad i = 1, \dots, n,$$

so that $\sum_{i=1}^{n} w_i = 1$.
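As a concrete illustration of how equations (1)–(3) and the weighting step are applied, the following sketch computes the weights and the consistency check for a small hypothetical 3×3 judgment matrix. The matrix entries and the value RI = 0.58 (Saaty’s average random index for order 3) are illustrative assumptions, not the paper’s survey-derived values; NumPy is assumed available.

```python
import numpy as np

# Illustrative pairwise judgment matrix (hypothetical entries, not the
# paper's actual survey-derived A-B or B-C matrix).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])
n = A.shape[0]

# Weights via the row geometric-mean method, then normalized to sum to 1.
w = np.prod(A, axis=1) ** (1.0 / n)
w /= w.sum()

# Maximum eigenvalue estimate, consistency index, and consistency ratio,
# following equations (1)-(3).
lam_max = np.mean((A @ w) / w)   # equation (1)
CI = (lam_max - n) / (n - 1)     # equation (2)
RI = 0.58                        # Saaty's average random index for n = 3
CR = CI / RI                     # equation (3)
```

If CR < 0.1, the pairwise comparisons are sufficiently consistent and the weights w can be adopted; otherwise the judgment matrix would have to be revised.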
The final structural model and weight parameters of the index system for the effectiveness of experimental teaching design in universities using hierarchical analysis are shown in Figure 5.

In order to truly reflect the improvement in each university’s management level achieved through subjective effort, universities with different basic conditions should be judged against different reference standards. The index of the effectiveness of experimental design teaching measured by hierarchical analysis reflects, to some extent, these different basic conditions, so it can serve as a reference standard for the basic conditions of different universities; we call it the reference index. The current index of the effectiveness of experimental design teaching at each university can also be measured by hierarchical analysis; we call it the current index. Since the level of experimental design teaching at each university can be fairly measured only through its dynamic change, we introduce the concepts of the index state and the possible set of index states.
Let $x_j$ and $y_j$ be the reference index and the current index of the j-th university, respectively, and call the pair $(x_j, y_j)$ the index state of the j-th university. The possible set of index states is the convex set T, as in (6):

$$T = \Big\{ (x, y) \;\Big|\; x \ge \sum_{j=1}^{n} \lambda_j x_j, \; y \le \sum_{j=1}^{n} \lambda_j y_j, \; \lambda_j \ge 0, \; j = 1, \dots, n \Big\}, \quad (6)$$

where n is the number of universities evaluated.
From the DEA model [25], we obtain the output-oriented programming problem (7), solved once for each university $j_0$:

$$\theta^{*} = \max \theta \quad \text{s.t.} \quad \sum_{j=1}^{n} \lambda_j x_j \le x_{j_0}, \quad \sum_{j=1}^{n} \lambda_j y_j \ge \theta\, y_{j_0}, \quad \lambda_j \ge 0, \; j = 1, \dots, n. \quad (7)$$
If the optimal value $\theta^{*} = 1$, the university is said to lie on the frontier of the possible set of index states T. The binary relative evaluation N of the university can then be derived from (8):

$$N = \frac{1}{\theta^{*}}. \quad (8)$$
Since $\theta^{*} \ge 1$, we have $0 < N \le 1$; the binary relative evaluation represents the percentage of each university’s current index relative to the maximum current index achievable under the same reference conditions.
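The binary relative evaluation above can be sketched as one small linear program per university. The following is a minimal illustration, assuming a single reference-index input and a single current-index output for each university, and assuming SciPy’s `linprog` is available; the data in the usage note are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def binary_relative_evaluation(x, y):
    """For each unit j0, find the largest factor theta by which its current
    index y[j0] could be expanded while staying inside the possible set
    spanned by all units, and return N = 1/theta for every unit."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    scores = []
    for j0 in range(n):
        # Decision variables: z = [theta, lambda_1, ..., lambda_n].
        # Maximize theta  ==  minimize -theta.
        c = np.concatenate(([-1.0], np.zeros(n)))
        # Reference-index constraint: sum_j lambda_j * x_j <= x[j0].
        row1 = np.concatenate(([0.0], x))
        # Current-index constraint: theta*y[j0] - sum_j lambda_j * y_j <= 0.
        row2 = np.concatenate(([y[j0]], -y))
        res = linprog(c,
                      A_ub=np.vstack([row1, row2]),
                      b_ub=np.array([x[j0], 0.0]),
                      bounds=[(0.0, None)] * (n + 1))
        theta_star = res.x[0]
        scores.append(1.0 / theta_star)
    return scores
```

For example, with hypothetical reference indices [1, 2, 3] and current indices [2, 2, 6], the first and third units attain the best current-to-reference ratio and score N = 1.0, while the second unit scores N = 0.5, since its current index is only half of what the frontier allows under its reference conditions.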
4. Results
In this paper, the effectiveness of experimental design teaching in 15 universities from 2014 to 2017 was measured using the above-mentioned binary relative evaluation method. The selected data sources were the “2014 National Compendium of Science and Technology Statistics of Higher Education Institutions” and the “2018 National Compendium of Science and Technology Statistics of Higher Education Institutions,” prepared by the Department of Science and Technology, Ministry of Education, and published by Higher Education Press [33]. In order to protect the privacy of the schools involved and to comply with the relevant protocols, the names of these universities are anonymized in this paper; however, because of the specificity of the professions involved, the nature of the terms in the school names is retained. For example, the abbreviation of Harbin Institute of Technology, one of the leading industrial institutions in China, is coded as H Technical University. The results of the binary relative evaluation of the effectiveness of experimental design instruction in the 15 target universities are shown in Tables 2–4.
From the results, we can see that the schools with high binary relative evaluation values can be divided into two cases; one is schools with both a high reference index and a high current index, such as H Technical University in 2015 and A University in 2016. The schools with a lower binary relative evaluation can also be divided into two cases: one is schools with a larger decrease in the current index, such as A University of Agriculture in 2017, and the other is schools with both a lower reference index and a lower current index, such as M School of Medicine and Q School of Medicine.
5. Conclusion
A teaching process without an evaluation system is unscientific, and experimental teaching conducted without one is unlikely to succeed. Developing a more complete, comprehensive, and operable evaluation system is an urgent need for experimental teaching in the information technology environment and for improving the standards by which teaching quality is tested. A scientific evaluation system must both meet certain standard requirements and fit the actual teaching situation; establishing and perfecting such a system takes time and requires continuous improvement and development in practice. As the evaluation system is applied, it will continue to be revised and refined. Because the experimental content of higher-education laboratory courses differs across professional disciplines, the above evaluation system can be applied and adapted according to the characteristics of each discipline.
By using the design-based research paradigm to study the effectiveness of informatization teaching, researchers and teachers gain a deeper understanding of the factors that affect it. This helps them grasp the laws of informatization teaching, make better use of information technology in teaching practice, and promote the development of informatization teaching and educational informatization. Information flow analysis is itself a foundation of teaching analysis research; if it is combined with teaching behavior analysis and social network analysis, teaching activities can be analyzed in an all-round way.
The strengths of the assessment method described in this paper are: first, it gives students with various characteristics the opportunity to be recognized and encouraged, because human intelligence is diverse and each student has his or her own superior intelligence; second, it conveys the important idea that learning is complex and that key learning outcomes usually have multiple manifestations and require different skills to be fully demonstrated; and finally, when performance-based assessment and authentic assessment are used, it helps to stimulate students’ interest and engagement in learning. Improving outcome assessment, strengthening process assessment, exploring value-added assessment, and building sound comprehensive assessment necessarily rely on multidimensional assessment methods. It should be clear that the most important concern is the quality, not the quantity, of assessment: it is not better to apply more assessment methods to a given concept, but rather to choose the methods that best match the purpose of the assessment. This principle needs to be strictly observed when selecting assessment methods, taking into account the type of learning objectives and their characteristics.
To make the evaluation results highly reliable and comparable, the key issue is to develop a scientific and feasible quantitative index system. When constructing the evaluation index system for information-based teaching, we can draw on the more mature evaluation index systems of other disciplines, then combine the characteristics of computer-assisted teaching to select evaluation indexes and determine the weight of each index according to its role in teaching, so that the indexes play an objective and comparable role in the quantification process. At the same time, when selecting indicators, we should ensure that there are neither too many nor too few: too many evaluation indicators make the evaluation hard to operate, while too few do not differentiate enough. Therefore, the index system should be continuously improved in teaching practice to avoid overlapping index factors and repeated weighting, so that it better suits the needs of teaching evaluation.
In summary, the reference index is an appropriate reference standard for the effectiveness of experimental design teaching in colleges and universities, and the binary relative evaluation value is an appropriate indicator of that effectiveness. This approach eliminates the unfairness introduced by differences in objective basic conditions when evaluating the effectiveness of experimental design teaching, and thus truly reflects the management effect produced by people’s subjective efforts. In addition, the binary relative evaluation method can be used to calculate the effectiveness of experimental design teaching across all colleges and universities.
Data Availability
The dataset can be accessed upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest.