Abstract

Current data is characterized by complexity and low information density, and can therefore be called information-sparse data. The sheer volume of such data makes it difficult for traditional collaborative filtering recommendation algorithms to analyse sparse data, which may lead to low accuracy. Meanwhile, the complexity of the data means that the recommendation environment is affected by factors from multiple dimensions. In order to solve these problems efficiently, this paper proposes a multidimensional collaborative filtering algorithm based on improved item rating prediction. The algorithm considers a variety of factors that affect user ratings: it penalizes popular items when calculating the degree of similarity between users, applies cross-iterative bi-clustering to the user scoring matrix to account for changes in users' preferences, and improves the traditional item rating prediction algorithm so that user ratings are considered according to multidimensional factors. The introduction of a systematic error factor based on statistical learning improves the accuracy of rating prediction, and the multidimensional method alleviates the data sparsity problem by finding the most relevant dimensional influencing factors with association rules. The experiment results show that the proposed algorithm achieves smaller recommendation error and higher recommendation accuracy.

1. Introduction

Recommendation algorithms are mainly divided into six categories: content-based filtering, collaborative filtering, recommendation based on association rules, recommendation based on utility, recommendation based on knowledge, and mixed recommendation [1, 2]. Collaborative filtering (CF) algorithms are the most widely used and classic because of their easy implementation, high accuracy, and high recommendation efficiency. However, a typical feature of the big data era is that the amount of data is huge but the information density is low; such data can also be called information-sparse data. Collaborative filtering algorithms are often ineffective when dealing with large amounts of sparse data. Furthermore, the complex data environment means that many factors affect the recommendation. With the development of the mobile Internet, mobile devices can easily obtain information about more dimensions, such as location, weather, and social relationships, and the recommendation results can change greatly under different external influences. However, most current collaborative filtering algorithms make recommendations based on a single dimension.

In order to improve the performance of collaborative filtering recommendation algorithms, researchers have approached these problems from different perspectives and proposed a variety of recommendation algorithms. Some researchers optimized the user scoring matrix using different methods [3–6], and others used fuzzy sets to efficiently represent user features [7, 8]; these methods all effectively alleviate the problem of sparse data. In order to find a neighbor set that is more similar to the target user's interest and improve the accuracy of recommendations, some researchers improved the similarity calculation method [7, 9, 10], and others used location information and trust relationship information [11, 12]. Mining the potential relationships between users from this information provides new ideas for finding neighbors. Researchers have also used demographic knowledge [13, 14] to achieve major breakthroughs, while some scholars used score ranking prediction methods to enhance recommendation performance [15–17], and others applied genetic algorithms in the prediction process to improve recommendation performance [18, 19]. Context, as a dynamic description of an item's and a user's situation, affects the user's decision-making process; hence, it is essential for any recommendation system in a big data environment [20–22].

These algorithms alleviate the problems caused by data sparsity to some extent and improve the accuracy of similarity calculation and the quality of recommendations in different ways. However, their implementation depends on large amounts of user information and computation, which leads to high complexity and makes them hard to implement.

In this paper, we focus on recommendation algorithms for data in complex environments. Firstly, we study the traditional item rating prediction algorithm and improve it by weighting the user's own score. We introduce a system error factor based on statistical learning for each user to develop a personalized rating prediction algorithm. We then combine this personalized rating prediction method with the classical collaborative filtering algorithm User-Inverse Item Frequency (User-IIF) [23] to develop a novel collaborative filtering algorithm based on both User-IIF and personalized rating prediction. Secondly, we focus on the impact of multidimensional factors and propose a novel multidimensional method that separates user groups based on context-aware dimensions and combines user clustering with item clustering. Finally, we conduct a series of experiments to show that our algorithms and methods are effective and efficient. The experiment results show that our algorithms are easy to implement with low computational overhead; in addition, they can handle sparse data and improve the accuracy of recommendations.

The rest of the paper is organized as follows. The works related to our research and our novel methods to deal with multidimensional recommendation are proposed in Section 2, and the experiment results and discussion are presented in Section 3. Finally, the conclusion and suggestions for future work are given in Section 4.

2. Methodology

2.1. Proposed Item Rating Prediction Method

Traditional item rating prediction algorithms take the target user's average historical item score as the reference center and then use the similarity between similar neighbors to perform item rating prediction. When user data is sparse, the error rate of traditional item rating prediction algorithms increases and their accuracy decreases. The method proposed here weights the user's own ratings against the public average and introduces a systematic error factor based on statistical learning to improve the traditional item rating prediction algorithm.

2.1.1. User Rating Weighting Factor

Traditional item rating prediction algorithms take the historical average score of the target user as the central value and rely on the neighbors' scores to correct it. They depend too heavily on the user's own scores, and their robustness is poor in the face of data sparseness. For example, when a user has rated only a few items, using the user's historical average score as the central value may produce inaccurate recommendations even if that average is close to 0. The item rating prediction method proposed in this work therefore takes the public's scores into account: it improves the algorithm as shown in (1) by introducing a weighting factor α for the user's own score and assigning the complementary weight (1 − α) to the average score of the item.
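As an illustration, the following minimal sketch shows how such a weighted reference center could be computed, assuming (1) simply blends the user's historical mean with the item's public mean; the function name and the default value α = 0.8 are illustrative and not taken from the paper.

```python
import numpy as np

def weighted_center(user_ratings, item_ratings, alpha=0.8):
    """Blend the user's own historical mean with the public (item) mean.

    user_ratings : 1-D array of the target user's known ratings.
    item_ratings : 1-D array of all users' ratings for the target item.
    alpha        : weight given to the user's own score history.
    """
    user_mean = np.mean(user_ratings) if len(user_ratings) > 0 else 0.0
    item_mean = np.mean(item_ratings) if len(item_ratings) > 0 else 0.0
    # When the user has rated few items, the (1 - alpha) term lets the
    # public average stabilise the reference center.
    return alpha * user_mean + (1 - alpha) * item_mean
```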

2.1.2. Systematic Error Factor Based on Statistical Learning

A large number of studies show that item rating prediction contains errors, yet only a few algorithms address this by performing a statistical analysis of each recommendation result. In order to achieve more personalized recommendations, it is necessary to establish, for each user, a factor describing the systematic error generated by the recommendation system.

The system error generated by the recommendations for target user u is calculated by (2), where the actual score of user u on item i is denoted r_{u,i}, p_{u,i} denotes the predicted rating for user u generated by the system, and n_u denotes the number of items that target user u adopts from the recommendation results. Through statistical learning, the system establishes an error factor ε_u for each user, which is then applied in the collaborative filtering algorithm to correct the item rating prediction, as shown in (3), yielding a more accurate personalized recommendation for the target user.
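The sketch below illustrates one plausible reading of (2) and (3), in which the error factor is the mean signed deviation on previously adopted items and is simply added back to new predictions; the exact forms are given by the referenced equations, and the function names here are ours.

```python
def system_error_factor(actual, predicted):
    """Mean signed deviation between actual and predicted ratings over the
    items the target user adopted from past recommendations
    (one plausible reading of equation (2))."""
    n = len(actual)
    if n == 0:
        return 0.0
    return sum(a - p for a, p in zip(actual, predicted)) / n

def corrected_prediction(raw_prediction, error_factor):
    """Apply the per-user error factor to a new prediction,
    in the spirit of equation (3)."""
    return raw_prediction + error_factor
```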

Based on the above calculations, this paper proposes a novel algorithm, namely, improved item-rating prediction (IIP) for user scoring. The main steps of IIP are shown in Figure 1(a). The basic idea is to form a set of error factors for each user through statistical learning, and then apply it to the collaborative filtering algorithm to correct the item rating prediction.

2.2. User Scoring Based on Multidimensional Context
2.2.1. User Similarity Calculation

The first step of our method is to obtain the user's neighbor cluster from the user scoring matrix. Users in the user group whose interests are similar to each other can be selected as neighbor users. This paper uses Pearson's similarity [24] to measure the distance between users, as shown in (4). Pearson's similarity is similar in form to cosine similarity, but the user's average rating is subtracted during the calculation, which normalizes the cosine similarity and unifies the users' scoring standards. The range of Pearson's similarity is [−1, 1], and it is more accurate than Jaccard's correlation coefficient and cosine similarity.

Here sim(a, b) represents the similarity between target user a and neighbor cluster user b, I_a represents the set of items that target user a has scored, and I_b represents the set of items scored by neighbor cluster user b. i denotes an item that both the target user and the neighbor cluster user have scored. r_{a,i} is the rating of item i by target user a, and r̄_a is the average rating of target user a. Likewise, r_{b,i} is the score of neighbor cluster user b for item i, and r̄_b is the average rating of neighbor cluster user b. The traditional collaborative filtering algorithm uses this formula to calculate the similarity between users.
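For reference, the standard Pearson similarity over co-rated items, which matches the description of (4), can be written as follows (I_{ab} denotes the set of items rated by both a and b):

```latex
\mathrm{sim}(a,b) =
  \frac{\sum_{i \in I_{ab}} \bigl(r_{a,i} - \bar{r}_a\bigr)\bigl(r_{b,i} - \bar{r}_b\bigr)}
       {\sqrt{\sum_{i \in I_{ab}} \bigl(r_{a,i} - \bar{r}_a\bigr)^{2}}\,
        \sqrt{\sum_{i \in I_{ab}} \bigl(r_{b,i} - \bar{r}_b\bigr)^{2}}}
```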

A user will have different scoring standards in different contexts; for example, a user rates a hotel differently when traveling on business than when traveling privately. Therefore, once the context is taken into account, the global average ratings are replaced by context-conditioned ones: we use r̄_{a,c} instead of r̄_a and r̄_{b,c} instead of r̄_b, where r̄_{a,c} represents the average rating of user a under context condition c. The range of c can be generalized or filtered as needed. The improved method can be described as follows:

The improved user neighbor cluster similarity calculation takes into account the influence of context factors on the basic rating, making the formula closer to the context-aware recommendation environment. After considering the context, the user similarity calculation (4) is improved to (5), and the influence of context on the user's rating is taken into account when the mean is used to calibrate the rating deviations.
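Under the assumption that (5) differs from (4) only by this substitution of context-conditioned means, the improved similarity can be sketched as:

```latex
\mathrm{sim}_c(a,b) =
  \frac{\sum_{i \in I_{ab}} \bigl(r_{a,i} - \bar{r}_{a,c}\bigr)\bigl(r_{b,i} - \bar{r}_{b,c}\bigr)}
       {\sqrt{\sum_{i \in I_{ab}} \bigl(r_{a,i} - \bar{r}_{a,c}\bigr)^{2}}\,
        \sqrt{\sum_{i \in I_{ab}} \bigl(r_{b,i} - \bar{r}_{b,c}\bigr)^{2}}}
```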

2.2.2. User and Item Cross-Iterative Bi-Clustering

The cross-iterative bi-clustering method is used to cluster users and items separately. Because the user-item matrix is sparse, the initial clustering is not accurate enough; therefore, we use the cross-iterative method to adjust user clustering and item clustering jointly.

User clustering adjustment is calculated by (6), and item clustering adjustment is calculated by (7).

For (6), r_{u,i} is the score of user u for each item and r_{v,i} is the score of user v for each item, I_{uv} is the collection of items that u and v have scored together, and sim(u, v) is the similarity between users u and v, which we again calculate with Pearson's similarity. If two users have rated many items in common, they can be considered users with similar interests. If the similarity between a user and the center of its cluster is greater than a certain threshold, the user is kept in the cluster; otherwise, the user is separated from the current cluster, its similarity to the other cluster centers is calculated, and it is added to the most similar cluster, completing the adjustment of the user clusters.

For (7), r_{u,i} is the score of each user u on item i and r_{u,j} is the score of each user u on item j, U_{ij} is the set of users that have scored both i and j, and sim(i, j) is the similarity between items i and j, again computed with Pearson's similarity. If the similarity between an item and the center of its cluster is greater than a certain threshold, the item is kept in the cluster; otherwise, it is separated from the current cluster, its similarity to the other cluster centers is calculated, and it is added to the most similar cluster, completing the adjustment of the item clusters. Algorithm 2, shown in Figure 1(b), is proposed for this cross-iterative bi-clustering.
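The following Python sketch illustrates one adjustment pass of this procedure; the function and parameter names are illustrative, and the Pearson similarity routine is assumed to be supplied elsewhere.

```python
import numpy as np

def adjust_clusters(rating_matrix, labels, centers, sim, threshold):
    """One adjustment pass over either user clusters or item clusters.

    rating_matrix : 2-D array whose rows are the objects being clustered
                    (users for (6); items for (7), e.g. the transposed matrix).
    labels        : current cluster index of each row.
    centers       : list of representative rating vectors (cluster centers).
    sim           : similarity function on two rating vectors
                    (e.g. Pearson over co-rated entries).
    threshold     : rows less similar than this to their own center are
                    reassigned to the most similar other center.
    """
    new_labels = labels.copy()
    for idx, row in enumerate(rating_matrix):
        # Similarity between this row and the center of its current cluster.
        own = sim(row, centers[labels[idx]])
        if own < threshold:
            # Reassign to the cluster whose center is most similar.
            new_labels[idx] = int(np.argmax([sim(row, c) for c in centers]))
    return new_labels

# Cross-iteration: alternate user and item adjustment until labels stabilise.
# user_labels = adjust_clusters(R, user_labels, user_centers, pearson, t_user)
# item_labels = adjust_clusters(R.T, item_labels, item_centers, pearson, t_item)
```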

2.2.3. Context Similarity Calculation

When the scope of the context is very large, there are many different dimensions, such as time, place, surrounding people, etc. According to the characteristics of the dataset and the environment collection ability, the context dimension selected by the recommendation system will be different. As far as the time dimension is concerned, it can also be specifically subdivided into seasons, weeks, moments, holidays, and so on.

Assume that the system has z different context dimensions, denoted C = (c_1, c_2, ..., c_z), where each c_t is a contextual dimension (such as time, location, or weather). The similarity between the contexts of two score records x and y on dimension t is denoted sim_t(x, y). We use the degree of influence of the context dimension on the score to measure the similarity between the two context variables as follows.

Here u is the user, r_{u,i,x} denotes the rating of item i given by user u under context x, and r̄_i is the average score of item i. Similarly, r_{u,i,y} is user u's rating of item i under context y, σ_x is the standard deviation of the ratings under context x on dimension t, and σ_y is the standard deviation of the ratings under context y. This paper thus proposes a novel method that measures the similarity of contexts x and y according to the degree to which the different contexts influence the score of the same commodity i in dimension t. Algorithm 3, shown in Figure 1(c), calculates context similarity efficiently.
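The sketch below shows one plausible reading of this measure, in which ratings are centered on the item average and the products of the deviations under the two contexts are normalized by their standard deviations; the data structures and function name are illustrative only.

```python
import numpy as np

def context_similarity(ratings_x, ratings_y, item_means):
    """One plausible reading of (8): correlate how ratings deviate from the
    item average under context x versus context y on the same dimension.

    ratings_x, ratings_y : dicts mapping (user, item) -> rating given under
                           context value x / y of dimension t.
    item_means           : dict mapping item -> average rating of that item.
    """
    common = set(ratings_x) & set(ratings_y)          # same user, same item
    if not common:
        return 0.0
    dev_x = np.array([ratings_x[k] - item_means[k[1]] for k in common])
    dev_y = np.array([ratings_y[k] - item_means[k[1]] for k in common])
    sigma_x, sigma_y = dev_x.std(), dev_y.std()
    if sigma_x == 0 or sigma_y == 0:
        return 0.0
    return float(np.mean(dev_x * dev_y) / (sigma_x * sigma_y))
```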

2.2.4. The Proposed Multidimensional Context-Aware Based Method

In multidimensional recommendation, the addition of context results in many interesting rules, and mining high-frequency patterns between contexts and items can help discover the impact of different contexts on user decisions. In this paper, we select the multidimensional context from strong association rules mined with the FP-growth algorithm, as sketched below.
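A brief sketch of this mining step, using the mlxtend library's FP-growth implementation as a stand-in (the paper does not specify its implementation); the transactions, column names, and thresholds are illustrative only.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Each transaction mixes an item with its context values, so that rules such
# as {Weekend, Cinema} -> {movie_42} can surface strong context-item patterns.
transactions = [
    ["movie_42", "Weekend", "Cinema", "Family"],
    ["movie_42", "Weekend", "Home", "Partner"],
    ["movie_17", "Weekday", "Home", "Alone"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

frequent = fpgrowth(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```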

Generally, when determining the neighbor user group, the N users with the largest similarity can be selected as the cluster neighbors of the target user according to the similarity calculation formula. The context can further help filter out score records whose context differs greatly from the current recommendation environment. When a purchase decision is closely tied to a particular context factor, that context is called a hard context and must be considered and satisfied in the recommendation; score records that do not satisfy the current hard context can be filtered out first and are not considered when calculating the similarity of neighbor clusters.

Because of the influence of context, each of a user's rating records carries its own context background, which may differ from the target user's current context; hence rating records made in different contexts are of differing relevance to the user in the current context. In order to weight the relevance of a rating record with respect to the current context, we use the contextual similarity calculation method to compute sim_t(x, y), the similarity between context x and context y in dimension t. The user rating prediction in a multidimensional context can then be described as follows, where c is the context in which the target user is located and ε_a is the system error factor of the target user established in Section 2.1.2 (the other symbols are as defined in the previous formulas). Contexts can have many specific dimensions, depending on the data collected, such as time, location, and related personnel; the time dimension, for instance, can be divided into seasons, weeks, moments, holidays, and so on.
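Under these definitions, one plausible shape of the multidimensional prediction in (10) is sketched below; the exact combination used in the paper is given in the referenced equation, and the weighting by the product of per-dimension context similarities is our assumption (c_b denotes the context attached to neighbor b's rating record):

```latex
\hat{r}_{a,i,c} \;=\; \bar{r}_{a,c}
  + \frac{\displaystyle\sum_{b \in N(a)}
      \mathrm{sim}_c(a,b)\,\prod_{t=1}^{z}\mathrm{sim}_t(c, c_b)\,
      \bigl(r_{b,i} - \bar{r}_{b,c_b}\bigr)}
         {\displaystyle\sum_{b \in N(a)}
      \Bigl|\mathrm{sim}_c(a,b)\,\prod_{t=1}^{z}\mathrm{sim}_t(c, c_b)\Bigr|}
  + \varepsilon_a
```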

After comprehensively considering the influence of context on the recommendation system, (10) can be replaced with (11). The basic clustering rating prediction formula is modified as follows:

Algorithm 4 shown in Figure 1(d) is proposed as the multidimensional context-aware based method. Using this algorithm, item scores can be obtained under multidimensional conditions.

3. Experiments and Results

3.1. Experimental Datasets and Environment

In order to verify the impact of a user’s scoring weight on the recommendation results and to prove that the recommendation accuracy of the collaborative filtering algorithm based on the user and improved item scoring is more accurate, it is necessary to compare our proposed algorithm with traditional algorithms that are based on classical item scoring prediction methods.

These experiments were conducted under the following conditions: (1) an Intel Core i7-8750H CPU at 2.5 GHz; (2) 8 GB of main memory; (3) the Windows 10 64-bit operating system; and (4) MySQL 5.7 as the database software. The proposed algorithms are implemented in Python, an object-oriented, dynamically typed, interpreted scripting language, together with third-party libraries for scientific computing and database access, such as NumPy and pandas. The integrated development environment is JetBrains PyCharm Professional 2018.2.5.

The experiments on improved user rating prediction use two datasets, MovieLens and Jester. The Jester dataset was developed by Ken Goldberg and his team at the University of California, Berkeley; its ratings lie in [−10, 10]. Jester-dataset-1 contains data from 24,983 users who have each rated 36 or more jokes, forming a 24983 × 101 matrix, and jester-dataset-3 contains data from 24,938 users who have each rated between 15 and 35 jokes, forming a 24938 × 101 matrix. The MovieLens dataset was compiled by the GroupLens team at the University of Minnesota. The MovieLens 100K dataset comprises 100,000 ratings, on a scale of 1 to 5, from 943 users for 1,682 movies; its sparsity is about 93.7%, so the data sparsity problem is evident. In order to simulate datasets of different scales and sparsity levels, the existing datasets were processed to generate four datasets, as shown in Table 1. The datasets were randomly divided into training and test sets, with the training set accounting for 80% of the data and the test set for the remaining 20%. When the Jester data were processed, the ratings were mapped to the range 0 to 5 by r′ = (r + 10)/4.
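A minimal preprocessing sketch for this step; the file name, column name, and the use of scikit-learn's splitter are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumes a long-format ratings table with a "rating" column on the original
# Jester scale of [-10, 10]; the file name is hypothetical.
ratings = pd.read_csv("jester_ratings.csv")
ratings["rating"] = (ratings["rating"] + 10) / 4   # map [-10, 10] -> [0, 5]

# 80/20 random split into training and test sets.
train, test = train_test_split(ratings, test_size=0.2, random_state=42)
```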

The source of the experimental datasets for testing the multidimensional collaborative filtering algorithm is CARSKit (https://github.com/irecsys/CARSKit/), an open-source Java-based context-aware recommendation engine. We used two datasets: DePaulMovie [25] and TripAdvisor_v1 [26]. In the experiments, DePaulMovie was used in its original form, and TripAdvisor_v1 was filtered and adjusted. We used 70% of each dataset as the training set and 30% as the test set.

3.2. Evaluation Indicators of the Experiment Results

In order to study the performance of the improved recommendation algorithm, the experiment used four indicators, namely, precision, recall, mean absolute error (MAE), and root mean square error (RMSE).

Precision is an important indicator for evaluating the performance of a recommendation algorithm. It describes the proportion of the items recommended to a user that the user is actually interested in; the larger its value, the more accurate the system and the better its recommendations. Precision is computed as shown in the following formula, where u is a target user of the system, R(u) is the set of items recommended to the user, and T(u) is the set of items in which the user is actually interested.

Recall describes how many of the items the user is interested in are actually recommended to the user by the system. Recall is computed as shown in the following formula:

The numerator of recall is the same as that of precision, namely, the intersection of R(u) and T(u); however, their denominators differ. The denominator of precision is R(u), the set of all items recommended to the user, whereas the denominator of recall is T(u), the set of all items the user is interested in. A larger recall corresponds to better performance.

MAE avoids the problem of errors canceling each other out and accurately reflects the actual prediction error. It is calculated as shown in formula (14), which averages the absolute value of the difference between the actual score and the predicted score, where r_i and p_i denote the original (actual) data and the predicted data, respectively.

RMSE is used to measure the deviation between the predicted values and the true values. It is calculated as shown in formula (15), that is, the square root of the mean of the squared differences between the predicted scores and the actual scores over the m observations.

The smaller the MAE and RMSE values, the better the recommended performance of the algorithm.
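For completeness, the four indicators can be computed as follows; these are the standard definitions matching the descriptions above, with function names of our choosing.

```python
import numpy as np

def precision_recall(recommended, relevant):
    """Top-N precision and recall for one user.
    recommended : set of items recommended to the user, R(u).
    relevant    : set of items the user is actually interested in, T(u).
    """
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def mae(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean(np.abs(actual - predicted)))

def rmse(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))
```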

3.3. Experiment Results
3.3.1. Choosing the Best Value for the User Score Weighting Factor

Firstly, the optimal value of the user's score weighting factor α is determined for the collaborative filtering algorithm based on the user and the improved item rating prediction (U&IPRP-CF). To ensure the accuracy of the experiment, the number of recommended items N for the Jester-500-100, Jester-1000-100, and Jester-1000-200 datasets is fixed at N = 10, meaning that 10 items are recommended to each target user; because the MovieLens dataset contains many more items, its number of recommended items is fixed at N = 30. The number K of most similar neighbors selected for each target user is a variable, taking values from 50 to 100 in increments of 10, that is, [50, 60, 70, 80, 90, 100].

Each dataset was tested with α set to 1, 0.9, 0.8, 0.7, 0.6, and 0.5. The experiment results for the Jester-500-100 dataset are shown in Tables 2 and 3; the best value of α is around 0.9. The results for the Jester-1000-100 dataset are shown in Tables 4 and 5; the best value of α is between 0.8 and 0.9. The results for the Jester-1000-200 dataset are shown in Tables 6 and 7; the best value of α is around 0.7. The results for the MovieLens dataset are shown in Tables 8 and 9; the best value of α is around 0.7. In summary, as the size and sparsity of the dataset increase, the optimal value of the user's score weighting factor α decreases.

After determining the best value of the user's score weighting factor, a comparative experiment is carried out; the algorithms compared are first sorted and named as shown in Table 10.

In order to ensure the accuracy of the experiment, the recommended number of items N for the Jester-500-100, Jester-1000-100, and Jester-1000-200 datasets is fixed at N = 10, and for the MovieLens dataset at N = 30. The number K of most similar neighbors selected for each target user is a variable ranging from 50 to 100 with an interval of 10, that is, [50, 60, 70, 80, 90, 100].

The comparison results of the performance on the different datasets are shown in Figure 2. The experiment results for the Jester-500-100 dataset are shown in Figures 2(a) and 2(b), for the Jester-1000-100 dataset in Figures 2(c) and 2(d), for the Jester-1000-200 dataset in Figures 2(e) and 2(f), and for the MovieLens dataset in Figures 2(g) and 2(h). It can be seen that, across the different datasets, the error generated by the U&IPRP-CF algorithm is smaller than that of the traditional algorithm, and it has obvious advantages in terms of recommendation accuracy.

On the three datasets Jester-500-100, Jester-1000-100, and Jester-1000-200, the U&URWFRP-CF algorithm does not reduce the recommendation error, whereas it does on the MovieLens dataset. This shows that when the dataset is small, the user's own score is very reliable; however, when the dataset is large and information-sparse, the user's own score provides a weaker reference over such a large base, and the weighting of user ratings needs to be taken into account.

From these figures, it can be seen that as the scale and sparsity of the dataset gradually increase, the recommendation error of the U&IPRP-CF algorithm is further reduced and its performance becomes increasingly better, which alleviates the data sparsity problem to some extent.

The experiment results show that using the public's average score to complement the user's historical average score enables the system to predict the user's score for unrated items with less error, thus producing more accurate recommendations. To further reduce the prediction error, a systematic error factor is established for each user and refined through statistical learning after each recommendation, which effectively corrects the error and improves the accuracy of recommendations.

In addition, in combination with the results in Section 3.3.1, it can be seen that choosing the correct parameters is also key to improving the accuracy of recommendations. In practical applications, the number of recommended items N, the number of neighbors K of the target user, and the user's score weighting factor α need to be compared experimentally so that appropriate values can be selected.

3.3.2. Performance Comparison Experiments Using Different Algorithms

The experimental results were verified on the DePaulMovie and TripAdvisor_v1 datasets described in Section 3.1. Our paper compares six recommendation algorithms: CF, CF-AR, Multi-CF, Multi-CF-AR, CF-AR-IIP, and Multi-CF-AR-IIP, the latter two being the algorithms proposed in this paper. The algorithms are explained as follows:
(a) CF: classical user-based collaborative filtering recommendation algorithm
(b) CF-AR: user-based collaborative filtering recommendation algorithm combined with association rule mining
(c) CF-AR-IIP: user-based collaborative filtering recommendation algorithm combined with association rule mining and improved item-rating prediction, which is proposed in this paper
(d) Multi-CF: classical multidimensional context-aware user-based collaborative filtering algorithm
(e) Multi-CF-AR: multidimensional context-aware user-based collaborative filtering algorithm combined with association rule mining
(f) Multi-CF-AR-IIP: multidimensional context-aware user-based collaborative filtering algorithm combined with association rule mining and improved item-rating prediction, which is proposed in this paper

In the experiments based on the DePaulMovie datasets, the top-K (K = 10, 15, 20, 25) neighbors with the highest similarity for each user and the top-N (N = 10, 15, 20) recommended items with the highest predicted rating for the target users are selected. The system has three context dimensions, namely, C =(C1, C2, C3)=(Time, Location, Companion), where C1 (Time)=(Weekday, Weekend), C2 (Location)=(Cinema, Home), and C3 (Companion)=(Alone, Family, Partner).

In the experiments based on the TripAdvisor_v1 dataset, the top-K (K = 20, 30, 40, 50) neighbors with the highest similarity for each user and the top-N (N = 10, 15, 20) recommended items with the highest predicted rating for the target users are selected. The system has three context dimensions as follows: C =(C1, C2, C3)=(USER_TIMEZONE, HOTEL_TIMEZONE, Trip Type), where C1 (USER_TIMEZONE)=(Eastern, Central, Pacific, Mountain, HI, AK), C2 (HOTEL_TIMEZONE)=(Eastern, Central, Pacific, Mountain), and C3 (Trip Type)=(1, 2, 3, 4, 5). Table 11 shows information on the related context dimensions selected for the datasets.

Figures 3(a) and 3(b) show the precision of the algorithms. Multi-CF-AR-IIP achieves the best precision, and Multi-CF-AR is the second best. Particularly when N is small, Multi-CF-AR-IIP has obvious advantages. As N increases, the precision of all the algorithms decreases, and the difference between the algorithms becomes increasingly smaller. This shows that the increase in the number of recommendations reduces the accuracy of the recommendation. Different algorithms will exhibit different characteristics in different datasets. Multi-CF has an advantage in DePaulMovie, but it does not work well in TripAdvisor_v1.

Figures 3(c) and 3(d) show the recall of the algorithms. The result is the same as for precision, where Multi-CF-AR-IIP achieves the best recall, and Multi-CF-AR is the second best. However, recall increases significantly as N increases. This is because the denominator of recall is the number of items in which the user is actually interested and its value is small.

Figures 3(e) and 3(f) show the MAE of the algorithms, and Figures 3(g) and 3(h) show the RMSE. On DePaulMovie, apart from CF, the errors of the other five algorithms are very similar, but on TripAdvisor_v1 the error gap is obvious and the multidimensional algorithms produce larger errors. MAE and RMSE measure the difference between the predicted rating and the user's true rating, that is, the difference between the recommended result and the user's true preference; the smaller the MAE and RMSE, the more the user likes the recommended items. Association rule mining (ARM) improves the precision and recall of the recommendation, which means that it improves click volume and purchase amount. At the same time, it makes it harder for the recommender system to predict ratings, so the probability of recommending items the user does not like increases. The experiment results show that the improved item-rating prediction (IIP) method can reduce MAE and RMSE.

In general, the fusion algorithm Multi-CF-AR-IIP has better recommendation performance than the others. It recommends more diversified items to users by using the multidimensional context and AR, and it recommends items that users may prefer by using IIP.

4. Conclusion

In recent years, recommendation systems have been widely used in various fields, and their accuracy and applicability are very important. In this paper, we proposed a novel recommendation algorithm based on improved collaborative filtering with a multidimensional context and association rules. Firstly, an improved user scoring prediction method based on cross-iterative bi-clustering is proposed. Then, the multidimensional context-aware method is introduced into the traditional user-based collaborative filtering algorithm by using context-aware information. In this method, the multidimensional context is used to filter the original data, the excess data is filtered out to adjust the recommendation results, and the context data is integrated into the similarity calculation between users and products to obtain more accurate recommendation results. In addition, in order to better compensate for the impact of data sparseness and increase user satisfaction, association rules can be used to find similar preferences between users with low similarity. By mining the relevance between the context and the items users select, we can find popular items with a high degree of contextual relevance, which enhances the novelty and reliability of the algorithm. The algorithm proposed in this paper can enhance the user's experience on the recommendation platform and strengthen the connection between context and recommendation results. The algorithms we propose provide recommendations for users in a multidimensional context environment, which not only compensates for the shortcomings of the collaborative filtering algorithm but also improves the accuracy and efficiency of the recommendation results.

In the future, we will study high-dimensional clustering algorithms, which will help solve the problem of data sparsity and model the decision-making of social groups. To establish a more personalized recommendation system, effective recommendation methods must be developed from multiple perspectives. Another new research direction is how to use recursive neural networks to provide personalized recommendations.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by Fund Item of the China Scholarship Council (CSC) and Key Laboratory Project of Sichuan University (QXXCSYS201705).