Abstract

With the rapid development of artificial intelligence technologies, and of big data in particular, an intelligent world is approaching. In this era, the traditional teaching model can no longer keep pace; to survive the new wave of technological development, it must undergo its own technological transformation. This article studies how artificial intelligence equipment, supported by big data technology, can improve and optimize the current college curriculum system. To this end, the paper proposes a clustering algorithm for data analysis. By improving the clustering algorithm and applying it to the reform of the education system in colleges and universities, the relevant education data can be processed with high performance and fed back to teachers to improve their teaching methods. Experiments are also designed to analyze the performance of the algorithm and the feasibility of the teaching mode. The experimental results show that the improved clustering algorithm raises data analysis ability in the teaching process by 37%, and that the use of big data increases the teaching quality score of colleges and universities by nearly 1 point. These results can help promote the popularization of education informatization and the improvement of teaching quality nationwide.

1. Introduction

In today's world, big data has penetrated all aspects of human society. It has not only changed people's ways of thinking, working, and living but also transformed productivity and production relations, and it has become the "new oil", "new gold mine", "new resource", and "new engine" of future innovation. After years of deliberation, the Fifth Plenary Session of the 18th Central Committee, held in China in 2015, explicitly called for the "implementation of the national big data strategy". To address the challenges of the big data era, the government must cooperate with enterprises, universities, and scientific research institutions, and universities are undoubtedly participants in and promoters of this wave of big data. Although the field of domestic education big data still faces practical difficulties, universities enjoy unique conditions for studying big data, and the research and application of big data in education management have broad prospects.

Big data education management is a new stage in the development of university education management; everything before it is a prelude. With big data, universities can use smarter methods to stimulate and generate new wisdom. There is a fundamental difference between the use of big data in university education management and its use in the business field: university big data will ultimately reveal relationships specific to education. Using big data, cloud computing, the Internet of Things, and other technologies to optimize the structure of school operating elements and improve management is an important tool and foundation for universities to raise operating efficiency and promote the transformation of higher education institutions. At present, there are few academic results on the integration of big data and university education management, their depth and scope are insufficient, and empirical research in particular is lacking. Research in this area is therefore urgent.

In the context of big data, people can increasingly improve on existing problems through large-scale data acquisition and analysis, and the use of artificial intelligence equipment has accelerated this pace. As these technologies are used more frequently, more and more researchers have become involved. Rongpeng emphasizes one of the most basic characteristics of the revolutionary technology of the 5G era; however, in the face of increasingly complex configuration problems and emerging business needs, 5G cellular networks remain insufficient if they lack complete artificial intelligence functions [1]. Lu H. noted that underwater cameras are widely used to observe the seabed; they are usually carried by autonomous underwater vehicles and unmanned underwater vehicles and included in in situ ocean sensor networks. Although they are important sensors for monitoring underwater scenes, current underwater camera sensors still have many problems [2]. Hassabis D. believes that a better understanding of biological brains can play a vital role in building intelligent machines. He surveyed the historical interaction between AI and neuroscience and highlighted current advances in AI inspired by the study of neural computation in humans and other animals, before emphasizing common themes that may be crucial to advancing future research in both fields [3]. Raedt L. D. studied the foundations of combining logic and probability into so-called relational probabilistic models, introducing the representation, reasoning, and learning techniques of probability, logic, and their combinations. Two representations are examined in detail: Markov logic networks, which extend undirected graphical models with weighted first-order formulas, and probabilistic extensions of logic programs, which can also be regarded as Turing-complete extensions of relational Bayesian networks [4]. Goyache F. developed a method of using artificial intelligence to improve the design and implementation of linear morphological systems for beef cattle. The proposed process involves an iterative mechanism in which type traits are continuously defined and computed using knowledge-engineering methods, scored by a group of well-trained human experts, and finally analyzed by four well-known machine learning algorithms; the results can then be fed back into the next iteration to improve the accuracy and effectiveness of the proposed evaluation system [5]. Makridakis S., by studying comparable inventions in the industrial, digital, and artificial intelligence revolutions, argues that the AI revolution will likewise bring about extensive changes affecting all aspects of our society and life. Its impact on enterprises and employment will be considerable, leading to highly interconnected organizations that make decisions based on the analysis and use of "big" data and to intensified global competition among enterprises [6]. Liu R. conducted a comprehensive review of artificial intelligence algorithms for fault diagnosis of rotating machinery from the perspectives of theoretical background and industrial application. The different artificial intelligence algorithms are first briefly introduced, and the advantages, limitations, and practical significance of each are then discussed, along with some new research trends [7]. Price S. observed that state-of-the-art tools from machine learning and artificial intelligence are automating parts of the peer-review process, yet many opportunities for further improvement remain. Such tools also suggest how the peer-review process itself might be improved; in particular, an analytical view naturally leads to a perspective on peer review aimed at finding the best publishing venue for each submitted paper [8]. The works above explain big data, artificial intelligence, and related technologies, and they apply these technologies well, but they remain somewhat flawed in that they do not make good use of experiments to verify their own conclusions.

The innovation of this article lies in studying and analyzing the intelligent education curriculum system of colleges and universities from the perspective of data analysis. By improving the big data clustering algorithm for data analysis, the analytical capability of the algorithm is greatly increased, and it is effectively applied to new teaching methods, addressing the current problem of insufficient informatization in teaching. The experimental part uses three different data sets to verify the performance of the algorithm and analyzes the results to ensure the stable operation of the big data analysis capability in later stages.

2. Methods of Optimizing the Education System

2.1. Conception of “Artificial Intelligence + Education”
2.1.1. Changes in the Connotation of Higher Education

The changes brought about by various technologies of artificial intelligence to the connotation of higher education will be described in detail from the following aspects: the purpose of the university, the reform of the education system, the change of teacher training, and student training.

First of all, with regard to the purpose of university education, that is, what kind of people to train, opinions vary from person to person. We must recognize that future education in an artificial intelligence environment should cultivate students' ability to create and disrupt knowledge. How should university education respond to the challenges of the future? The first question is what kind of person to train. It is certainly necessary to cultivate research and design skills, and the humanities will return strongly [9].

Second, regarding the reform of the education system, some believe that the application of artificial intelligence technology will have a greater impact on the education system. After extensive discussion, Tsinghua University arrived at the so-called "Trinity" teaching model: the school's mission is not only "transmitting knowledge" or "cultivating competence" but also the "formation of values". The formation of values and the cultivation of abilities often cannot be achieved in the classroom alone. The purpose of education is to cultivate a new generation of talents with sound personalities, innovative thinking, a global outlook, and a sense of social responsibility. With respect to such a purpose, AI need not stand in opposition but can provide services and support, so its positioning must be correct. Against the background of artificial intelligence, the Bayesian model is strengthened. The Bayesian prediction model is a kind of prediction based on Bayesian statistics, which differs from general statistical methods in that it uses not only model information and data information but also makes full use of prior information. Through empirical analysis, the prediction results of the Bayesian prediction model and the ordinary regression prediction model are compared, so that course evaluations can be obtained, students' learning styles can be inferred, and the accuracy of data prediction can be improved [10].
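As a minimal sketch of the comparison described above, the snippet below fits a Bayesian regression model and an ordinary least-squares model on synthetic data and compares their prediction errors. The simulated "course evaluation" scores, feature count, and coefficients are illustrative assumptions, not the paper's data.

```python
# Compare a Bayesian prediction model with ordinary regression (illustrative sketch).
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # e.g., study time, attendance, quiz score, forum activity
true_w = np.array([1.5, -0.7, 2.0, 0.3])            # assumed ground-truth coefficients
y = X @ true_w + rng.normal(scale=0.5, size=200)    # simulated course-evaluation score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

bayes = BayesianRidge().fit(X_tr, y_tr)             # exploits prior information via hyperparameter estimation
ols = LinearRegression().fit(X_tr, y_tr)            # ordinary regression baseline

print("Bayesian ridge MSE:", mean_squared_error(y_te, bayes.predict(X_te)))
print("OLS MSE:           ", mean_squared_error(y_te, ols.predict(X_te)))
```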

An intelligent decision support system is a decision system that combines artificial intelligence and education proposed by American scholars. It is mainly composed of a database, model library, method library, and intelligent components. Its structural composition is shown in Figure 1.

At present, intelligent decision support has become the development direction of the decision support system (DSS), and it has strong potential and prospects in online education. The Intelligent Decision Support System (IDSS), for example, is widely used with digital databases. The system provides decision-makers with the data, information, and background needed for decision-making, clarifies decision-making goals, and identifies problems; it helps to establish or modify decision models, offers multiple options, and supports their evaluation and selection. Through human-computer interaction (multitouch, gesture sensing, voice recognition, visual tracking, and other interactive functions), it provides the support needed for analysis, comparison, judgment, and correct, effective decision-making. Such systems can also help teacher training adapt to the times by supporting both pre-service and in-service teachers. Teachers should continue to learn, master the use of artificial intelligence equipment, and apply it in actual teaching to improve the quality of education and their own development; they should reform the role of the teacher, change the methods of instruction and evaluation, and learn to cooperate [11], using AI to integrate professional dignity with the professional role of the teacher so as to meet the educational requirements of the AI era [12]. The professional development model of future teachers in the "AI" era is shown in Figure 2.

2.1.2. The Formation Process of “Smart Education”

In 2008, the then President of IBM promoted the concept of an intelligent platform for the first time in his report “Smart Planet: Next Generation Leadership Agenda” [13]. With the strong support of a new generation of information technology, almost everything on the Earth can be identified, connected to each other, and made intelligent [14].

The concept of “smart Earth” continues to penetrate into society, resulting in many new concepts such as smart medical care, smart elderly care, smart transportation, and smart cities [15]. In September 2009, Dubuque in the Midwestern United States and IBM jointly announced the construction of the first “smart city” in the United States. With the widespread use of smart devices, smart education has emerged. At the same time, IBM promotes five main ways of smart education: students’ technological immersion; individualized and diverse learning paths; service-oriented economic knowledge and skills; the global integration of systems, culture, and resources; and the role of education in the 21st-century economy [16]. The research framework of smart education is shown in Figure 3.

Intelligent education is a new educational concept. To realize it, information technology must be used to build an intelligent learning environment (technical innovation: mainly through big data, data mining, clustering algorithms, neural network learning, and similar technologies). Within this environment, intelligent education methods and intelligent evaluation must be constructed (method innovation, referring mainly to innovations in teaching models, teaching methods, and students' learning methods), learners must be guided to carry out intelligent learning (practical innovation), and latent abilities and potential must be cultivated and intelligently developed. Learners must be good at learning, cooperation, communication, research, and judgment, possess excellent creativity, and be good at solving complex problems [17]. In the new era, smart education is delivered to the public in the form of flipped teaching. Flipped (inverted) teaching, as the name suggests, transforms the traditional model of "teaching first and then learning" into "learning first and then teaching": the main body of the classroom shifts from teachers to students, who first work through a worksheet and watch short microvideos before class and then take part in targeted questions and answers in class. With ample classroom time, students can concentrate on exercises, projects, and discussions, and teachers can concentrate on explaining the knowledge structure [18]. Only by answering learners' questions and resolving their difficulties can this inversion of the education process be implemented.

As shown in Figure 4, among the professional courses studied by most interviewees, courses related to artificial intelligence are concentrated mainly in science and engineering, while liberal arts majors offer relatively few or none [19]. Liberal arts majors should embrace technological development through dialogue with advanced science and technology (such as artificial intelligence) while preserving their traditions.

2.2. Cluster Analysis under Big Data

In cluster analysis, how to define the similarity between samples is very important and largely determines the performance of the clustering algorithm. The similarity between samples is generally expressed by a distance function (or similarity function). Distance here is not only spatial distance but also the gap caused by time, state, semantics, or density. So far, no single distance function can be applied to all clustering tasks, and different similarity measures should be designed for different clustering problems [20]. Here, we consider two samples

$$x_i = (x_{i1}, x_{i2}, \ldots, x_{in}), \qquad x_j = (x_{j1}, x_{j2}, \ldots, x_{jn}).$$

Based on these two samples, several common distance functions are introduced.

2.2.1. Euclidean Distance

Euclidean distance is the most popular distance measurement function, derived from the distance formula between two points in Euclidean space. It is defined as follows:

$$d(x_i, x_j) = \sqrt{\sum_{k=1}^{n} (x_{ik} - x_{jk})^2}.$$

It can also be represented by vector operations:

$$d(x_i, x_j) = \sqrt{(x_i - x_j)(x_i - x_j)^T}.$$

2.2.2. Manhattan Distance

The Manhattan distance is defined as follows:

$$d(x_i, x_j) = \sum_{k=1}^{n} |x_{ik} - x_{jk}|.$$

2.2.3. Chebyshev Distance

The Chebyshev distance is defined as follows:

$$d(x_i, x_j) = \max_{1 \le k \le n} |x_{ik} - x_{jk}|.$$

There is another equivalent expression for the Chebyshev distance:

$$d(x_i, x_j) = \lim_{p \to \infty} \left( \sum_{k=1}^{n} |x_{ik} - x_{jk}|^p \right)^{1/p}.$$

2.2.4. Minkowski Distance

The Minkowski distance is a family of distance functions defined as

$$d(x_i, x_j) = \left( \sum_{k=1}^{n} |x_{ik} - x_{jk}|^p \right)^{1/p},$$

where $p$ is a parameter. When $p = 1$, the formula reduces to the Manhattan distance; when $p = 2$, it reduces to the Euclidean distance.

2.2.5. Mahalanobis Distance

The Mahalanobis distance was proposed by the Indian statistician P. C. Mahalanobis and is defined as

$$d(x_i, x_j) = \sqrt{(x_i - x_j)\,\Sigma^{-1}\,(x_i - x_j)^T},$$

where $\Sigma$ is the covariance matrix of the sample set.

2.2.6. Cosine Distance of Included Angle

The included-angle cosine is defined as

$$\cos\theta = \frac{\sum_{k=1}^{n} x_{ik} x_{jk}}{\sqrt{\sum_{k=1}^{n} x_{ik}^2}\,\sqrt{\sum_{k=1}^{n} x_{jk}^2}}.$$
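The short NumPy sketch below evaluates the distance functions defined above so the formulas can be checked numerically; the two sample vectors are arbitrary illustrations, not data from the paper.

```python
# Numerical check of the common distance functions (illustrative sketch).
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.5])

euclidean = np.sqrt(np.sum((x - y) ** 2))
manhattan = np.sum(np.abs(x - y))
chebyshev = np.max(np.abs(x - y))
minkowski = lambda p: np.sum(np.abs(x - y) ** p) ** (1.0 / p)   # p=1 Manhattan, p=2 Euclidean

# The Mahalanobis distance needs a covariance matrix estimated from a sample set.
data = np.random.default_rng(0).normal(size=(100, 3))
inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
mahalanobis = np.sqrt((x - y) @ inv_cov @ (x - y))

cosine_similarity = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(euclidean, manhattan, chebyshev, minkowski(3), mahalanobis, cosine_similarity)
```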

2.2.7. One-Time Sample Weighting Method

Boosting is a supervised technique that uses multiple weak learners to get one strong learner. Adaptive Boosting is the most popular method of boosting. It iteratively generates a distribution of data and uses it to train the next weak classifier. In each iteration, the samples that are difficult to divide (misclassified) get more weight, while the weights of the samples that are easy to divide are reduced. The new classifier pays more attention to those samples that have significant weight. When the algorithm stops, Adaboost combines all the weak classifiers in the iterative process to get a strong classifier.
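For illustration only, the sketch below runs scikit-learn's AdaBoost on synthetic data; the data set and parameters are assumptions chosen to show the sample-reweighting idea described above, not the paper's experiment.

```python
# Minimal AdaBoost sketch: each round reweights the training samples so that
# misclassified ones get larger weights and the next weak learner focuses on them.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```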

The effectiveness of the boosting technique has been proven both theoretically and experimentally; it uses the sample information of the training data to iteratively update the sample weights. To a certain extent, this also motivates applying sample weight information to the ensemble clustering problem, which has not yet been studied. This chapter proposes a one-time sample weighting method to fill this gap; its purpose is to let samples that are difficult to divide play a more important role in the ensemble clustering process. The method uses the base clustering results to construct a co-join matrix and assigns higher weights to difficult-to-divide samples in a single pass. The difference from boosting is that the method in this chapter is one-off, whereas boosting is an iterative process. The details are described below.

First, the co-join matrix A is constructed from the set of base clustering results C as follows:

$$A_{ij} = \sum_{r=1}^{R} \mathbb{1}\left[c_r(x_i) = c_r(x_j)\right],$$

where $c_r(\cdot)$ denotes the cluster label assigned by the $r$-th base clustering result.

Here, $A_{ij}$ is the number of times that samples $x_i$ and $x_j$ appear in the same cluster, and $R$ is the total number of base clustering results. Then, the uncertainty of the sample pair $x_i$ and $x_j$ is defined as

Next, we use the indicator confusion to define the weight of each sample:

In order to avoid the instability caused by a weight of 0, we add a smoothing term, where $e$ represents a small positive number. This is similar to the idea of the boosting algorithm: larger weights are assigned to samples that are difficult to cluster, and smaller weights to samples that are easy to cluster.
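A minimal sketch of the one-time weighting idea follows: build the co-join (co-association) matrix from R base clusterings, measure how "confusing" each sample is, and give confusing samples larger weights with a smoothing term e. The confusion measure p(1-p) and the averaging over sample pairs are illustrative assumptions standing in for the paper's exact formulas, which are not reproduced here.

```python
# One-time sample weighting from base clusterings (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def sample_weights(X, R=10, k=3, e=1e-3, seed=0):
    n = X.shape[0]
    labels = [KMeans(n_clusters=k, n_init=10, random_state=seed + r).fit_predict(X)
              for r in range(R)]                             # R base clustering results
    A = np.zeros((n, n))
    for lab in labels:
        A += (lab[:, None] == lab[None, :]).astype(float)    # co-occurrence counts A_ij
    p = A / R                                                 # fraction of runs in which i, j co-cluster
    confusion = p * (1.0 - p)                                 # high when co-assignment is inconsistent
    w = confusion.mean(axis=1) + e                            # per-sample weight with smoothing term e
    return w / w.sum(), labels

X = np.random.default_rng(0).normal(size=(150, 2))
w, base_labels = sample_weights(X)
print(w[:5])
```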

2.2.8. Sample-Weighted Graph Division

Existing sample-weighted clustering algorithms aim to find a better single clustering result, whereas here we aim to find a more effective consensus clustering result. To achieve this goal, we first calculate the corresponding weighted class center for each cluster in each base clustering.

Given a base clustering result $C_r = \{c_{r1}, c_{r2}, \ldots, c_{rK_r}\}$, we calculate the weighted class center of each cluster $c_{rk}$ as shown in formula (15):

$$\mu_{rk} = \frac{\sum_{x_i \in c_{rk}} w_i x_i}{\sum_{x_i \in c_{rk}} w_i}.$$

At the same time, we use an exponential function to measure the similarity of a sample to the weighted class center:

$$s(x_i, \mu_{rk}) = \exp\!\left(-\frac{\|x_i - \mu_{rk}\|^2}{t}\right).$$

Among them, $t > 0$ is a parameter. Then, the probability that sample $x_i$ belongs to cluster $c_{rk}$ is given by

$$p_{rk}(x_i) = \frac{s(x_i, \mu_{rk})}{\sum_{k'=1}^{K_r} s(x_i, \mu_{rk'})}.$$

Now, we can define the posterior probability vector of sample $x_i$ in base clustering $C_r$ as

$$p_r(x_i) = \left(p_{r1}(x_i), p_{r2}(x_i), \ldots, p_{rK_r}(x_i)\right).$$

This process uses both the raw data and the ensemble clustering information. Based on this new representation of samples in each base clustering, we use cosine similarity to define the similarity of two samples $x_i$ and $x_j$:

$$S_r(i, j) = \frac{p_r(x_i)\, p_r(x_j)^T}{\|p_r(x_i)\|\,\|p_r(x_j)\|}.$$

Here, T denotes the transpose of a matrix or vector. For the R base clusterings, R similarity matrices can be generated. We merge all the similarity matrices to obtain a final similarity matrix S:

$$S = \frac{1}{R} \sum_{r=1}^{R} S_r.$$

The weight of the edge $E_{ij}$ connecting nodes $v_i$ and $v_j$ is defined as follows:

The similarity matrix S can then be rewritten in terms of a matrix B, which is defined as

The graph segmentation algorithm can be used to segment the resulting mixed graph, where the division of samples is the final clustering result.
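The sketch below strings the steps above together under the standard forms assumed in this subsection: weighted class centers, an exponential similarity with parameter t, per-base-clustering probability vectors, cosine similarities averaged into S, and a graph partition of S. Uniform weights and scikit-learn's spectral clustering are stand-ins for the one-time weights and the paper's graph segmentation algorithm.

```python
# Sample-weighted consensus clustering pipeline (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 2))
R, k, t = 10, 3, 1.0
base_labels = [KMeans(n_clusters=k, n_init=10, random_state=r).fit_predict(X) for r in range(R)]
w = np.ones(len(X)) / len(X)          # uniform weights; the one-time weights above would be used instead

sims = []
for lab in base_labels:
    centers = np.stack([np.average(X[lab == c], axis=0, weights=w[lab == c])
                        for c in np.unique(lab)])             # weighted class centers (formula (15))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    s = np.exp(-d2 / t)                                        # exponential similarity with parameter t
    P = s / s.sum(axis=1, keepdims=True)                       # probability of belonging to each cluster
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    sims.append(P @ P.T)                                       # cosine similarity between samples
S = np.mean(sims, axis=0)                                      # merged similarity matrix S

final_labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                  random_state=0).fit_predict(S)
print(final_labels[:10])
```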

2.3. Artificial Intelligence

Artificial intelligence has risen again since 2006, driven by the success of deep learning algorithms. According to its strength, artificial intelligence can be divided into three categories: weak, strong, and super artificial intelligence. Weak artificial intelligence is good at only one particular aspect, mainly explicit human intelligence, and current research is concentrated on this type. Strong artificial intelligence is human-level intelligence that can be compared with humans, possessing and demonstrating explicit wisdom. Super artificial intelligence surpasses humans, with a degree of intelligence covering almost all fields. Figure 5 shows the development history of artificial intelligence.

In the education field, besides assisting teachers through traditional image recognition, speech recognition, and semantic analysis technologies, artificial intelligence has unique applications and promising prospects, the most representative being intelligent knowledge graph analysis and virtual learning assistants. The teaching objects often form a heterogeneous collection with obvious differences between samples. Intelligent knowledge graph analysis can independently extract the characteristics of each sample's knowledge graph, together with its objective and subjective characteristics, formulate the most reasonable teaching content, and plan the most effective learning path for each sample. The virtual learning assistant not only reduces teachers' workload in teaching activities but also realizes one-to-one assisted learning and guidance. The biggest difference between virtual learning assistants and traditional computer-assisted teaching is that virtual learning assistants can learn flexibly and independently: through constant contact with the teaching object, they adjust their own parameters based on the object's feedback and improve their ability to assist learning.

Some scholars divide artificial intelligence in education into four application forms: intelligent tutoring systems, automated evaluation systems, educational games, and educational robots. Combining this classification with the way artificial intelligence technology integrates with education and the forms of technical support within the instructional design framework, the application of artificial intelligence in education can be divided into a subjective form and an auxiliary form, as shown in Figure 6.

The subjective application form refers to artificial intelligence technology entering traditional teaching in the role of a teaching subject or object, as with educational robots and intelligent tutoring systems, which mainly take on the roles of tutor, student, assistant, and learning companion. The auxiliary application form refers to artificial intelligence being integrated into teaching content, the teaching environment, and teaching evaluation, or being transformed into teaching media and tools that affect the teaching system, taking on the roles of teaching aids, learning tools, resources, and scenarios.

Although the application of artificial intelligence technology in education is still at an early stage, artificial intelligence is the most revolutionary technology of this stage. Technology is a tool of education and teaching, and it can also be a revolutionary factor that disrupts them; however, not every technological revolution completely changes education, and only certain revolutionary technologies have such an impact. As artificial intelligence technology matures, machines' learning ability grows ever stronger, and its application in the field of education will become deeper and deeper. Humanity's long-standing educational ideals will be realized on the basis of artificial intelligence together with big data, cloud computing, deep learning, and adaptive technology.

The cornerstone of artificial intelligence’s rapid and prosperous development again is cloud computing, new algorithms, and big data. Only the comprehensive application of revolutionary technologies will have an impact on the teaching system. Therefore, this article will explore the paradigm shift of instructional design in the context of the era of comprehensive technical support such as artificial intelligence, cloud computing, big data, learning analysis, and machine learning.

3. Simulation Data and Experiments

3.1. Experimental Results of Algorithm Simulation Data

The experiment uses three simulation data sets, One, Two, and Three, which contain 3, 4, and 5 circular clusters, respectively. The center points and radii are shown in Tables 1–3. Each data cluster is generated essentially from its center point and radius.

The three data sets One, Two, and Three are randomly generated according to the simulation tables. One is divided into 3 clusters containing 300, 100, and 200 data points, respectively. Two is divided into 4 clusters containing 400, 100, 300, and 200 data points, respectively. Three is divided into 5 clusters containing 80, 40, 150, 80, and 120 data points, respectively. From these three data sets, we can see that the first two are basically round, while the third is different: its data are elliptical in shape. For each data set, we therefore calculate the accuracy of each method and the within-cluster sum of squared errors; the calculated values are shown in Table 4.
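The sketch below shows one way circular clusters like those in data set One can be generated from a center point and a radius. The centers and radii used here are placeholders; the actual values are those listed in Tables 1–3.

```python
# Generate circular clusters from a center point and radius (illustrative sketch).
import numpy as np

def circular_cluster(center, radius, n, seed=0):
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(n))        # sqrt keeps points uniform over the disc
    theta = rng.random(n) * 2 * np.pi
    return np.column_stack([center[0] + r * np.cos(theta),
                            center[1] + r * np.sin(theta)])

# Data set One: 3 clusters with 300, 100, and 200 points (centers/radii assumed)
one = np.vstack([circular_cluster((0, 0), 2.0, 300, seed=1),
                 circular_cluster((6, 6), 1.5, 100, seed=2),
                 circular_cluster((0, 7), 1.8, 200, seed=3)])
print(one.shape)   # (600, 2)
```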

It can be seen from the table that the accuracy of the TR-KMC method and the ID-KMC method on the One, Two, and Three data sets does not differ much. In contrast, however, the clustering effect of the latter is more pronounced, and it can better analyze intelligent education data in colleges and universities.

3.2. Personalized Service Teaching Experiment Based on Big Data
3.2.1. Personalized Learning Module Based on Big Data Mining, Analysis, and Aggregation

Identity authentication and a unique student number are needed to achieve lifelong learning tracking. In the personalized teaching mode, students first register and establish their own dynamic electronic teaching file. Future education will be supported by Internet education, using a new big-data-based system to certify personal qualifications, starting from a unique student number that accompanies the individual for life. The dynamic electronic file records basic personal information, learning experience, educational level, and other necessary information; this basic information is the fundamental guarantee for realizing personal lifelong learning.

3.2.2. Use the Scale to Conduct Personalized Tests and Character Characteristics Analysis

When a learner's learning characteristics match the learning environment, a benign interaction is realized and the best learning effect is produced. For this reason, before starting personalized learning, it is necessary to understand the individual characteristics and differences of each learner. Before implementing personalized, comprehensive teaching, the learner's previous experience, learning motivation, metacognitive ability, learning style, and personality characteristics should all be considered. Internet education can be used to administer occupational personality tests, psychological tests, and ability tests.

3.2.3. Establish Electronic Files for Dynamic Learning

An electronic file of student growth is constructed, with students' quality and literacy, study foundation, study habits, academic achievement, physical and mental health, learning difficulties, and task goals as its main contents. Learner information big data are accumulated and refined to build a comprehensive learning electronic file centered on learner personality analysis, changing the static, single evaluation of traditional education into multiple, dynamic evaluation.
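As an illustration only, the dynamic electronic file described above could be represented as a simple record type; the field names below are assumptions, not a prescribed schema.

```python
# Sketch of a dynamic learning electronic file as a record type.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningFile:
    student_id: str                                           # unique lifelong student number
    basic_info: dict = field(default_factory=dict)            # name, major, contact, ...
    study_foundation: List[str] = field(default_factory=list)
    study_habits: List[str] = field(default_factory=list)
    academic_records: List[dict] = field(default_factory=list)  # course, score, term
    learning_difficulties: List[str] = field(default_factory=list)
    task_goals: List[str] = field(default_factory=list)

    def add_record(self, course: str, score: float, term: str) -> None:
        """Append one academic result, keeping the file dynamic over time."""
        self.academic_records.append({"course": course, "score": score, "term": term})

profile = LearningFile(student_id="2020-000123")
profile.add_record("Data Analysis", 88.0, "2020-autumn")
```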

3.2.4. Using Big Data Mining Technology to Realize the Aggregation and Generation of Personalized Information

From the perspective of Internet thinking, learning pays more attention to providing personalized learning support services, and such services are based on big data mining. The core link of personalized learning services in the context of Internet education is data. Using big data to achieve personalized learning involves data collection, data generation, data mining, data analysis, data aggregation, and the other links of the data-processing cycle. For example, a batch of data can be collected from the learner's dynamic electronic file, covering learning trajectory, learning difficulties, and learning habits; on the basis of data collection, generation, and mining, a database of the learning object is built that contains all of its important information. Teachers can extract and analyze information along any dimension of this database and then use statistical analysis software for further analysis. A model of the learning object can also be built to realize personalized content recommendation based on the learning task and the learner's basic information, and the system can automatically generate a dynamic learning path map for the learner.
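The sketch below is a minimal, hypothetical illustration of personalized content recommendation from such a learner database: the tag set, resource vectors, and learner profile are invented, and cosine similarity stands in for whatever recommendation model the system would actually use.

```python
# Content recommendation from a learner profile (illustrative sketch).
import numpy as np

tags = ["algebra", "statistics", "programming", "writing"]
resources = {
    "Linear Algebra micro-video": np.array([1, 0, 0, 0]),
    "Intro to Python exercises":  np.array([0, 0, 1, 0]),
    "Statistics case project":    np.array([0, 1, 0, 0]),
}
# Learner profile aggregated from the electronic file (e.g., time spent per tag)
learner = np.array([0.2, 0.7, 0.9, 0.1])

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

ranked = sorted(resources, key=lambda name: cosine(learner, resources[name]), reverse=True)
print("recommended order:", ranked)
```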

4. Data Analysis Ability and Teaching Improvement Analysis

4.1. Model Analysis Based on BIC Criteria

In clustering, there are many cluster variables to choose from, and the process of choosing them is called model selection. The criteria generally used for model selection are the AIC, BIC, and HQ criteria; this chapter adopts the BIC criterion. Figure 7 shows the BIC curves for different numbers of variables (models) and numbers of clusters.

From Figure 7 we can observe the following. (1) As the number of clusters increases, BIC increases monotonically, with no obvious peak. This indicates that, for this problem, the BIC criterion alone does not single out a number of clusters. (2) When the number of clusters exceeds 4, BIC increases more slowly; in other words, further increases in the number of clusters contribute little to explaining the model. The number of clusters should therefore be close to 4, but this method cannot give an exact value.

At the same time, in computing the prediction strength, the training set and the test set are split randomly, so accidental factors may strongly influence the result. To reduce this influence, this paper adopts an improved method for computing the prediction strength. Specifically, the data set is first divided at random into several equal parts; each part is used in turn as the test set while the remaining parts form the training set, and the prediction strength is computed for each split. Finally, the average of these values is taken as the prediction strength for that number of clusters. Figure 8 shows the prediction strength curves for various numbers of variables and clusters.
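As a minimal sketch of BIC-based model selection, the snippet below fits Gaussian mixture models with different numbers of clusters on synthetic data and reports the BIC for each; the data, candidate range, and use of a Gaussian mixture are assumptions, and note that scikit-learn reports BIC on a scale where lower values are better, which may differ from the sign convention used in Figure 7.

```python
# Choosing the number of clusters with the BIC criterion (illustrative sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(150, 2))
               for c in [(0, 0), (5, 0), (0, 5), (5, 5)]])     # 4 true clusters

for k in range(2, 9):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))     # in scikit-learn, lower BIC = better fit/complexity trade-off
```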

4.2. Simulation Experiment Based on Clustering Analysis Algorithm

In order to verify the effectiveness and feasibility of the clustering analysis algorithm, simulation experiments are used to verify the effectiveness of the algorithm.

4.2.1. Simulation Experiment Data Set

The first simulation experiment is based on an unbalanced data set: the simulation sample set is composed of two two-dimensional Gaussian random distributions. The class centers of the two types are (5, 5) and (10, 10), respectively. The first type has 300 samples with covariance matrix [6 0; 0 6]; the second type has 50 samples with covariance matrix [1 0; 0 1]. The distribution of the samples is shown in Figure 9.
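The snippet below generates a sample set with exactly the specification given above (only the random seed is an arbitrary choice), which is useful for reproducing the unbalanced setting.

```python
# Generate the unbalanced two-class Gaussian sample set described above.
import numpy as np

rng = np.random.default_rng(0)
class1 = rng.multivariate_normal(mean=[5, 5],   cov=[[6, 0], [0, 6]], size=300)
class2 = rng.multivariate_normal(mean=[10, 10], cov=[[1, 0], [0, 1]], size=50)
X = np.vstack([class1, class2])
y = np.array([0] * 300 + [1] * 50)      # 1 marks the small (positive) class
print(X.shape, np.bincount(y))
```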

In actual calculations, particle swarm optimization tends to fall into local optima, resulting in a poor clustering effect. The FCM algorithm is therefore used to cluster the data, and the geometric mean of the per-class accuracies is taken as the comparison standard to examine the effectiveness of each algorithm. The test results based on simulation data sets 1 and 2 are shown in Table 5.
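The sketch below is a from-scratch fuzzy c-means (FCM) run on the same unbalanced sample set, with a crude majority-vote evaluation of per-class accuracy and its geometric mean. The update rules are the textbook FCM equations; the fuzzifier m, the tolerance, and the evaluation mapping are illustrative assumptions rather than the paper's exact protocol.

```python
# Minimal fuzzy c-means and geometric-mean evaluation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.multivariate_normal([5, 5], [[6, 0], [0, 6]], 300),
               rng.multivariate_normal([10, 10], [[1, 0], [0, 1]], 50)])
y = np.array([0] * 300 + [1] * 50)

def fcm(X, c, m=2.0, max_iter=200, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                          # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]         # fuzzy cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

centers, U = fcm(X, c=2)
hard = U.argmax(axis=1)                                        # hard assignment for evaluation
acc = [np.mean(hard[y == k] == np.bincount(hard[y == k]).argmax()) for k in (0, 1)]
print("per-class accuracy:", acc, "geometric mean:", float(np.sqrt(acc[0] * acc[1])))
```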

When PCM is tested on an unbalanced data set, it very easily produces coincident clusters, so the test result tends toward 0 and the test becomes invalid.

The results of simulation experiments 1 and 2 show that the sample sizes of the data set affect the clustering results of the algorithm. The results on balanced data are better than those on unbalanced data sets, which shows that sample size interferes with the performance of the algorithm. Moreover, in the tests on unbalanced data sets, there are almost no errors for the positive class (the small-sample class); the clustering errors basically come from the negative class. This reflects that fuzzy clustering tends to favor the positive class, the opposite of supervised classification, which tends to favor the negative class. It also shows that, for balanced and unbalanced data sets, supervised classification and fuzzy clustering are two separate research problems.

The results of the two simulation experiments show that, compared with PCM, EPCM is not only effective on balanced data sets but also maintains good classification performance on unbalanced data sets. In contrast, because PCM does not incorporate sample size information, it performs well only on balanced data and very poorly on unbalanced data sets.

4.3. The Degree of Education Informatization and Teaching Development in Colleges and Universities

As a special form of education, ideological and political education in colleges and universities has seen its informatization level improve significantly with the help of big data. Most obviously, the construction of smart campuses driven by big data provides important support and a platform for the informatization of ideological and political education in colleges and universities. Smart campuses have demonstrated clear advantages in providing fast, massive, multisource heterogeneous information. However, domestic attention to this aspect is still far from sufficient. This article therefore selects a mature foreign university teaching system based on big data and compares it with a current domestic university; the statistical data are shown in Figure 10.

From the figure we can see that foreign institutions pay more attention to informatization: the use of artificial intelligence equipment in their colleges and universities is very common, their degree of informatization is far ahead of the domestic teaching model, and their teaching quality is also higher than in China. This is achieved mainly through big data: school data are statistically analyzed, students' habits and other relevant factors are profiled, and the results are fed back to teachers, helping them teach according to each student's situation and address the problems students encounter in the learning process.

5. Conclusions

The main research content of this paper is the improvement of the intelligent education curriculum system of colleges and universities by artificial intelligence equipment against the background of big data. Through the study of the big data clustering algorithm for data analysis, the data processing ability under the new teaching mode is improved, and the contribution of artificial intelligence equipment to improving the teaching system is discussed. At the same time, experiments are designed to investigate the improved clustering algorithm, with the following results: in the teaching process, the improved algorithm increases data analysis ability by 37%, and the teaching quality score of colleges and universities rises by about 1 point under the influence of big data, which effectively improves the degree of informatization of colleges and universities while improving the quality of teaching.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Special Scientific Research Project for On-the-Job Doctoral Students of Chengdu Normal University in 2020, "Research on the Framework of Teaching Knowledge System under the Background of Artificial Intelligence" (ZZBS2020-10); the Innovation Training Program for College Students, "Research on the Development of Characteristic Curriculum of Sex Education in Primary School: Based on the Investigation and Analysis of Several Primary Schools in Chengdu"; the Key Research Base of Philosophy and Social Sciences in Sichuan Province, key project of the Sichuan Primary and Secondary School Teachers' Professional Development Research Center, "Research on the Social Support System for Rural Teachers' Professional Development from the Perspective of Excellent Teachers" (PDTR2020-02); and the Key Research Base of Humanities and Social Sciences in Colleges and Universities in Sichuan Province, key project of the Sichuan Primary and Secondary School Teachers' Ethics Research Center, "Research on the Realization of Primary and Secondary School Teachers' Fair Virtue from the Perspective of Socialist Core Values" (CJSD20-28).