Abstract

To better support music education and improve the accuracy of music recommendation, a multidimensional analysis method for a music education system based on multi-intelligent recommendation is proposed. When recommending music to users with this method, music features are extracted and music data are obtained from MIDI files, and three collaborative filtering algorithms are used jointly, namely user-based, content-based, and model-based collaborative filtering. The simulation results show that the proposed method can provide users with an intelligent music recommendation scheme based on their basic information and operation information. Compared with a single user-based, content-based, or model-based recommendation, the proposed method offers a certain degree of novelty and accuracy: its recommendation accuracy reaches 94.8%, higher than that of the other recommendation algorithms, showing clear advantages.

1. Introduction

Music, as a common language of mankind, satisfies people's spiritual needs. With the development of the Internet, the amount of online music has kept growing, providing a brand-new avenue for online music education. However, because of the huge amount of music stored on the network, screening music for education is difficult, which brings great inconvenience to users. To solve this problem, researchers have proposed recommendation algorithms that analyze music along multiple dimensions to recommend music to users intelligently. Mohammad Tabrez Quasim et al. put forward an emotion-based music recommendation and classification framework (EMRCF) using correlation analysis and a support neural network, achieving high-precision classification of songs by following individuals' memory- and emotion-associated songs; accurately predicting and identifying most emotional responses makes the recommendation and classification of songs effective [1]. Kim Youngjun et al. used a deep neural network to propose a multidirectional fuzzy concept measurement method, realizing recommendation of soundscape music and effectively improving users' experience of appreciating visual artworks [2]. Mohamadreza Sheikh Fathollahi et al. used a convolutional neural network to design a music genre classification method in which advanced features are extracted from intermediate network layers [3]; cosine similarity and Euclidean distance are then used as similarity measures, realizing automatic music recommendation with significant accuracy in the 10-best results. Bruna Wundervald proposed a new method to predict music recommendation.
By assuming that an artist's popularity distribution has a latent variable estimated by a Gaussian mixture, the underlying popularity clusters are found and used for prediction via quotas related to each cluster's mixing ratio, yielding the popularity distribution and making music recommendation more accurate [4]. Saba Yousefian Jazi et al. proposed a music recommendation system that finds users' favorite music through emotional perception, achieving high-precision recommendation [5]. Polignano Marco et al. established a general computational model of emotional perception based on the characteristics of emotional users; by considering the emotional-consistency scores between emotional users and nonemotional items, whether an unseen item suits a user's current emotional state is estimated, achieving more accurate recommendation [6]. Hsin Chang Yang and Xi Shao et al. used common linked-open-data (LOD) methods to propose an approach that measures the semantic distance between resources and applied it to music recommendation [7, 8]; the influence of different weights and levels is considered to improve recommendation accuracy. Xia Ning et al. proposed a recommendation method combining user attributes and item characteristics, which greatly improved recommendation accuracy [9]. Zhang Tingting et al. proposed a music recommendation method based on music genes and a knowledge graph, which has certain advantages over traditional methods [10]. Shi Juanjuan proposed predicting and recommending with an LSTM according to the temporal evolution of music lovers' tastes [11]. Yang Zhou et al. proposed a recommendation method based on contextual semantics, which can capture semantics and improve recommendation accuracy [12].
According to the above research results, recommendation algorithms have achieved good effects on music recommendation, but their accuracy still needs improvement. Meanwhile, this study holds that increasing the dimensionality of user characteristics can improve recommendation accuracy. Therefore, this study attempts to recommend music to users from a multidimensional perspective, which is the innovation of this work. In addition, to further improve the accuracy of music recommendation, a joint recommendation method based on multi-intelligent recommendation is proposed to realize a multidimensional analysis of the music education system.

2. Introduction to the Collaborative Filtering Algorithm

The collaborative filtering algorithm, a personalized recommendation method commonly used in recommendation systems, recommends items to users by predicting users' preferences for items, as shown in Figure 1 [13]. In the figure, the nodes represent users and items, and the links represent users' preferences for the corresponding items.

Three methods, namely user-based, content-based, and model-based collaborative filtering, are adopted in recommendation systems. The user-based method is calculated by formula (1) [14]. Content-based methods compute similarity through formula (2) and make recommendations to users through formula (3) [15]. The model-based method offers some novelty. To better realize recommendation, the three methods are combined to recommend music to users.
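As a concrete illustration of the user-based branch, the following is a minimal sketch of neighborhood-based preference prediction in the spirit of formula (1), assuming a simple dict-of-dicts rating store and cosine interest similarity; all names and the data layout are illustrative, not the paper's implementation.

```python
import math

def similarity(ratings, u, v):
    """Cosine interest similarity between users u and v over co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    du = math.sqrt(sum(r * r for r in ratings[u].values()))
    dv = math.sqrt(sum(r * r for r in ratings[v].values()))
    return num / (du * dv) if du and dv else 0.0

def predict(ratings, u, item, k=2):
    """Predict u's interest in item from the k most similar users who rated it."""
    rated = [v for v in ratings if v != u and item in ratings[v]]
    neighbors = sorted(rated, key=lambda v: similarity(ratings, u, v), reverse=True)[:k]
    return sum(similarity(ratings, u, v) * ratings[v][item] for v in neighbors)
```

The predicted score is a similarity-weighted sum over the neighborhood, so a user's unheard songs inherit interest from like-minded listeners.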

In formula (1), the neighborhood set consists of the users closest to the target user; the behavior set consists of all users who have interacted with the item; the similarity term is the interest similarity between two users; and the rating term is a user's interest in the item.

In general, collaborative filtering recommendation results can be evaluated by recall and precision [16].
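Both measures can be computed directly from a recommendation list and the set of items the user actually engaged with; a minimal sketch (names illustrative):

```python
def precision_recall(recommended, relevant):
    """Precision = hits / |recommended|; recall = hits / |relevant|."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```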

3. Multidimensional Analysis of Music Education System Based on Multi-Intelligent Recommendation

Music formats are complex and diverse, including MIDI, MP3, WAV, and so on. Among them, MIDI, the most common music format at present, solves the problem of electronic music exchange through a series of instructions controlling an instrument's playing mode and playing time. Therefore, MIDI music is taken as the research object, and the multidimensional analysis of the music education system based on multi-intelligent recommendation is carried out through music feature extraction and feature-based collaborative filtering.

3.1. MIDI Music
3.1.1. Feature Extraction

MIDI music contains a variety of features. According to the literature [17], the following five features are selected as the basic structural features of MIDI music:

(1) Balance degree of the left and right channels. The relative loudness of the left and right channels is expressed by the balance value of a sound track, which can be obtained by analyzing the corresponding channel message [18]. When the balance value is at its midpoint, the two channels are in equilibrium. The left-right balance degree of a sound track is defined on this basis.

(2) Average strength. The average strength of a track is defined as the mean press (velocity) strength over its notes [19], where the track is indexed by its serial number and N is the number of notes in the track.

(3) Master volume. The master volume can be obtained as the maximum volume by analyzing the "0xBn 07 size" message; its value range is [0, 127] [20].

(4) Pronunciation time. The pronunciation time is calculated from the note-off time extracted from "0x8n Note Velocity" and the note-on time extracted from "0x9n Note Velocity" [21]. For notes with overlapping (cross) pronunciation, it can be calculated by the corresponding formula, where k is the serial number of the pronunciation track and n is the number of notes.

(5) Pronunciation area. The pronunciation area is calculated from the notes of the track [22], where note(i) represents the note numbered i, together with that note's length and pitch value.
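Assuming the note events of one track have already been parsed from the MIDI file into (start, end, velocity) tuples, two of the features above can be sketched as follows. The interval merging stands in for the paper's cross-pronunciation formula, whose exact form is not reproduced here; all names are illustrative.

```python
def average_strength(notes):
    """Mean press (velocity) strength over the notes of one track."""
    return sum(v for _, _, v in notes) / len(notes) if notes else 0.0

def pronunciation_time(notes):
    """Total sounding time of a track, merging overlapping note intervals."""
    intervals = sorted((s, e) for s, e, _ in notes)
    total, cur_start, cur_end = 0.0, None, None
    for s, e in intervals:
        if cur_end is None or s > cur_end:   # disjoint note: close previous run
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                # cross pronunciation: extend the run
            cur_end = max(cur_end, e)
    if cur_end is not None:
        total += cur_end - cur_start
    return total
```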

3.1.2. Calculation of Feature Similarity

The feature similarity can be calculated by Pearson's method, cosine similarity, the Jaccard coefficient, and Euclidean distance, as shown in the following formulas [23–26]:

In formula (9), Sx and Sy are the sample standard deviations of x and y. In formulas (10) and (11), x and y represent two different documents. In formula (13), n represents the spatial dimension; when n = 2, the space is a two-dimensional plane, in which the distance can be calculated by the curve function formula. The similarity is inversely proportional to the distance, which is given as follows:

Among these, the Euclidean distance method can calculate the distance between any two points [22] and is therefore used to calculate music feature similarity. Considering the significant differences in scale among the extracted music feature data, and to avoid their influence, the data are normalized before the similarity calculation, as shown in formula (14), where X_norm is the normalized datum, X is the original datum, and X_max and X_min are the maximum and minimum values of X. The relative distance between features is then calculated, the comprehensive distance is obtained by weighting, and the similarity between features follows from the transformation in formula (13).
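The normalization and distance-to-similarity steps above can be sketched as follows. The transformation sim = 1 / (1 + d) is a common choice consistent with "similarity inversely proportional to distance", assumed here in place of the paper's exact formula (13); the weights are illustrative.

```python
import math

def min_max(values):
    """Min-max normalization of one feature column to [0, 1] (formula (14))."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_similarity(x, y, weights):
    """Similarity from a weighted Euclidean distance: sim = 1 / (1 + d)."""
    d = math.sqrt(sum(w * (a - b) ** 2 for a, b, w in zip(x, y, weights)))
    return 1.0 / (1.0 + d)
```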

3.2. Feature-Based Collaborative Filtering Music Recommendation
3.2.1. Recommendation System

The MIDI music recommendation system includes three categories of data, namely acoustic underlying structure data, structured data, and label data. These data are integrated to recommend music to users.

(1) Acoustic Underlying Structure Data. Acoustic underlying structure data include frequency center (FC), zero crossing rate (ZCR), short-term average energy (avSTEm), and MFCC coefficient (Ci), and the calculation formulas are as follows [27, 28]:

In formula (15), F(w) is the spectrum of a frame after the Fourier transform. In formula (16), fs represents the sampling frequency. In formula (17), Si is a discrete-time audio signal sequence and sgn(·) is the sign function. In formula (18), the window function has length n; the summand is the nth sample value of the ith frame; and N represents the number of sampling points. In formula (19), N represents the number of filters; Xk represents the kth filter output; Ci represents the MFCC parameters; and P represents the order.
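Two of these acoustic features, the zero crossing rate and the short-time average energy, can be sketched for a single frame as follows; these are common textbook definitions and may differ in detail from the paper's formulas (17) and (18).

```python
def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ (common ZCR definition)."""
    def sgn(x):
        return 1 if x >= 0 else -1
    crossings = sum(abs(sgn(frame[i]) - sgn(frame[i - 1])) for i in range(1, len(frame)))
    return crossings / (2 * (len(frame) - 1))

def short_time_energy(frame):
    """Average of squared sample values within one frame."""
    return sum(s * s for s in frame) / len(frame)
```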

(2) Structured Data. Structured data are the information related to the composer during music creation, including the singer, songwriter, creation time, creation style, creation background, and so on. Commonly used sources of structured data include the AMG and CDDB databases; this paper uses the CDDB database to process structured data.

(3) Label Data. A large amount of music data can be extracted as acoustic underlying structure data, but features consistent with the melody contour are not guaranteed. In addition, the extraction cost of structured data is high, and label data differ greatly across groups and environments. To avoid these influences on the final recommendation result, the underlying acoustic data, structured data, and label data are integrated by allocating weights and calculating similarity, and music is recommended to users accordingly. According to the literature, the weights are allocated as follows:

The weight of the left-right channel balance is 4/15; creation time, 2/20; pronunciation area, 2/15; style, 1/5; average intensity, 14/60; song name and pronunciation time, 1/10 each; singer and label, 2/5. The ratio of acoustic underlying data to structured plus label data is 2:3. The Euclidean method is again used to calculate the similarity.
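Fusing the three data layers then amounts to a weighted combination of per-layer similarity scores. A minimal sketch using the 2:3 acoustic-to-(structured + label) split stated above; the even split between structured and label similarity within the 3/5 share is an assumption for illustration:

```python
def fuse_similarities(acoustic_sim, structured_sim, label_sim):
    """Combine layer similarities; acoustic vs. (structured + label) weighted 2:3."""
    acoustic_w = 2 / 5
    other_w = 3 / 5
    # Assumption: structured and label data share the non-acoustic weight equally.
    return acoustic_w * acoustic_sim + other_w * (structured_sim + label_sim) / 2
```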

3.2.2. Recommendation Process

The feature-based collaborative filtering music recommendation process is as follows:

(1) User Management. Users register and log in to the music software, entering their user name, nickname, gender, and other information. The system then selects users who are similar to the newly registered user in age and gender and checks the music interests of those similar users.

(2) Music Management. System administrators add the MIDI songs to the system and input the structured information, including song name, release time, singer, label, and genre.

(3) User Feedback. The system records user feedback and analyzes it, and users can also view the feedback.

(4) Joint Recommendation. Combine music data for joint recommendation.

4. Simulation Experiment

4.1. Construction of the Experimental Environment

This experiment is carried out on a 64-bit Windows 10 operating system; the development tool is MyEclipse 10; the database is MySQL 5.5; the browser is Firefox 4.5; and the recommendation engine is Apache Mahout.

4.2. Data Sources and Preprocessing

In this experiment, the newly registered user "A" in a music software is selected as the research object, and three similar users "B," "C," and "D" are selected for analysis. The user information and similar-user information are shown in Table 1, where the serial number is a song's serial number in the music software and the preference is the user's preference for that piece of music, with a value range of [0, 1].

4.3. Experimental Results
4.3.1. Method Verification

To verify the effectiveness, the songs recommended to user "A" using only music feature data are compared with those produced by the joint recommendation that also incorporates structured data. Nine of the top 10 songs recommended are identical between the two schemes; the only differing songs are "Spring Story" and "I Love You, China," and both belong to the same red-song folk style. This indicates that both schemes achieve a good overall effect and can recommend songs within the user's range of interest. However, the rankings of the two lists differ, and judging from A's music preferences and those of similar users, the joint recommendation list incorporating structured data is closer to A's preferences. In conclusion, compared with the joint recommendation using only feature data, the joint recommendation with structured data has a better effect.

4.3.2. Comparison of Methods

To verify the multidimensional analysis and recommendation effect of the proposed method on the music education system, "A" is again selected as the research object, and the proposed method is used to recommend music to this user. The results are compared with the user-based, content-based, and model-based recommendation results, with the first ten recommendations of each selected for analysis, as shown in Table 2. The proposed method shares 6 recommended songs with the user-based method (song numbers 299, 259, 107, 132, 207, and 125), 3 with the content-based method (299, 107, and 132), and 5 with the model-based method (259, 113, 269, 105, and 134). In contrast, the user-based and content-based methods share 3 songs, the user-based and model-based methods share 2, and the content-based and model-based methods share none. Therefore, the proposed joint recommendation method can not only recommend accurate personalized music based on content and users, but also recommend novel music drawn from the model-based method, which gives it certain advantages.

To further verify the effectiveness of the proposed method, the experiment compares the recommendation lists of the different methods before and after "A" provides feedback on the recommended songs. Table 3 shows the recommendation results of the different methods before user feedback, that is, before "A" adds "Good Day," "Red Dragonflies," "A Wolf in Sheep's Clothing," and "Want to Love You Too Much" as favorite songs; Table 4 shows the recommendation lists after "A" has added these favorites. Comparing Tables 3 and 4 shows that every method's recommendation list changes after user feedback. The user-based method's list changes the most, with only two of the original recommended songs retained: when analyzed at the user level, the preference degree of the added music changes significantly, producing large changes in the recommendation results and indicating that the user-based method is noticeably unstable. The joint recommendation method retains 3 original recommended songs; the causes of its changes are the same as those of the user-based method, but the new results it generates also improve the ranking of the originally recommended music. The content-based and model-based lists change little, retaining 7 and 6 original recommendations, respectively. In summary, the proposed joint recommendation method offers both recommendation accuracy and novelty, and its recommendation list can be updated according to user feedback to better match users' music preferences.

4.3.3. Comparison with Other Algorithms

In this experiment, data from a music website are selected as the research object. The data set consists of two parts: music information data and music scoring data. The music data set includes 1589 currently popular songs, with music name, type, tone, and other information; the scoring data set includes 1 million ratings of the 1589 songs by 6,040 users, each record containing the user ID, product ID, user score, and scoring time, with scores ranging from 0 to 5.

To verify the effectiveness of the proposed algorithm, it is compared with a traditional collaborative recommendation algorithm; the results are shown in Figure 2. As can be seen, the music recommendation algorithm proposed in this study achieves higher accuracy and a higher F1 value than the comparison algorithm, indicating a better recommendation effect.
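For reference, the F1 value reported here is the standard harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```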

5. Conclusion

In summary, the proposed multidimensional analysis of the music education system based on multi-intelligent recommendation takes MIDI music as the research object and extracts five basic features to characterize it. Combining the feature data of different music, the user-based, content-based, and model-based algorithms are joined to provide an intelligent music recommendation scheme. Compared with a single user-based, content-based, or model-based recommendation, it offers certain novelty and accuracy. However, owing to practical limitations, some deficiencies remain. In particular, the weight allocation of the melody feature data follows the existing literature, which may lack rigor. To address this, future work will attempt to allocate the feature weights of different melodies according to the actual situation and to adopt new intelligent recommendation algorithms.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding this work.