Abstract

With the booming development of intelligent manufacturing in modern industry, intelligent fault diagnosis systems have become a necessity for equipment and machines and have attracted the attention of many researchers. However, because most current approaches require sufficient labeled data, they are difficult to implement in real industrial scenarios. In this paper, an unsupervised intelligent fault diagnosis system based on feature transfer is constructed, which exploits the historical labeled data of the source domain and uses feature transfer to facilitate fault diagnosis in the target domain. The original feature set is acquired by EEMD time-frequency analysis. Then, the transfer component analysis algorithm is adopted to minimize the distance between the marginal distributions of the source and target domains, which reduces the discrepancy of features between the two domains. Finally, SVM is used for multiclass classification to identify the different categories of faults. The performance of the fault diagnosis system under different loads is tested on the CWRU bearing data set, and the experiments show that the proposed system can effectively improve the recognition ability of unsupervised fault diagnosis.

1. Introduction

Rotating machinery is a crucial part of mechanical systems in industrial manufacturing. Its health condition seriously affects the safe and stable operation of equipment. It has been demonstrated that 30% of rotating machinery faults are caused by bearing faults [1]. Recently, bearing fault diagnosis has become a hot research topic aiming at intelligent surveillance and fault recognition.

The fault diagnosis methods of rotating machinery can be divided into model-based methods and data-driven methods [2]. Model-based fault diagnosis establishes a mathematical model and analyzes the residual error between the model output and the actual signal. Because of noise and other random factors in the working environment of the equipment, the performance of model-based rolling bearing fault diagnosis is seriously affected. Data-driven methods, in contrast, collect representative data from signals and train comparatively simple models on these data until they fit well. Data-driven methods have therefore become more popular in recent years, owing to the large amounts of data available from sensors.

Data-driven fault diagnosis methods consist of signal processing, feature extraction, and fault mode recognition [3]. Signal processing aims to obtain the original features through signal transformation. However, different transformations may introduce redundant information, which degrades the diagnosis accuracy and complicates the calculation, so feature extraction is necessary to remove this redundancy. Finally, machine learning methods such as Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), or Fuzzy Logic (FL) are used to construct recognition models for fault diagnosis [4–6].

Fourier transformation was initially used to analyze rolling bearing signals [7]. Yet, since these signals are nonstationary and nonlinear, it cannot achieve acceptable performance. Time-frequency analysis methods such as the short-time Fourier transform (STFT), wavelet transform (WT), empirical mode decomposition (EMD), and ensemble empirical mode decomposition (EEMD) can explore the information hidden in the frequency domain [8]. Because of its fixed window size, the resolution of STFT is determined by the window, so the frequency and time resolution cannot be optimized at the same time [9]. WT tends to lose the high-frequency components of the signal [10], and EMD suffers from mode mixing [11]. EEMD, however, remedies this defect of EMD when decomposing vibration signals. Then, features from the time domain, frequency domain, and time-frequency domain of the vibration signals are extracted and taken as the input of a classifier for training and fault diagnosis. The classifiers are usually traditional statistical machine learning approaches [12]. However, statistical learning is based on mathematical statistics and requires that the learned knowledge have the same statistical characteristics as the application [13]. Therefore, traditional statistical machine learning assumes that the training and testing data come from the same distribution, whereas in practice most cases do not obey this assumption. Transfer learning relaxes the constraint that both training and testing data must obey the same distribution [14]. It can learn domain-invariant features or structures between different but related domains, so as to realize knowledge transfer and reuse across domains [15]. Moreover, when the training and testing data do not satisfy the same-distribution hypothesis, the training data become outdated; transfer learning can improve the learning ability of traditional statistical machine learning and greatly reduce the cost of labeling data [16].

Transfer learning utilizes the knowledge learned in one domain to facilitate learning tasks in new domains [17]. Therefore, using transfer learning, we can learn new knowledge more easily from previous experience. Figure 1 shows the signals generated by a sphere fault (SF) and an inner race fault (IF), respectively. Due to the different fault locations, the distributions are obviously different from each other. Nevertheless, there are still similarities in the conditions under which the faults occur, such as the bearing speed and the fault diameter. Thus, from the diagnosis of the SF, we can learn to recognize the IF.

Therefore, the distinctive characteristic of transfer learning is that it does not require identical distributions of the training and testing data, which makes it more suitable for rapidly varying sensor data [18–20]. Inspired by transfer learning, we construct an unsupervised intelligent fault diagnosis system for the real scenario in which the domains have different distributions and the target domain has no labeled data. In this system, a domain-invariant feature representation must be learned from the extracted features. Unlike the costly feature learning of deep neural networks, we utilize EEMD to decompose the original signals and extract statistical features, from which a common feature space between the source and target domains is learned by reducing the marginal distribution discrepancy. In this way, the proposed intelligent fault diagnosis system can uncover the hidden information in the signals while focusing on learning a transferable mapping of the statistical features. Herein, we select transfer component analysis (TCA) [21] to transform the source-domain and target-domain features into a unified feature space, in which the maximum mean discrepancy (MMD) is used to minimize the distance between the source and target domains, so as to accurately diagnose the target without any labeled data. Then, a multiclass SVM is used to identify the unseen faults of the target domain.

The rest of the paper is organized as follows. Section 2 reviews the related works. Section 3 introduces the proposed intelligent fault diagnosis system from signal processing, feature transferring, and classification. Section 4 describes the experiments, which mainly introduce the selected data set and show the experimental results and analysis. The conclusions are given in Section 5.

2. Related Works

Rotating machinery often runs at high speed and under high pressure, so the rolling bearings of mechanical equipment are easily damaged and faults occur. Mechanical faults are a serious obstacle to the development of intelligent manufacturing in modern industry. In order to exactly identify the various categories of rotating machinery faults, many researchers have proposed approaches to improve the performance of intelligent fault diagnosis systems. Liu et al. [22] proposed an intelligent fault diagnosis model based on variational mode decomposition (VMD) and singular value decomposition. Yu et al. [23] proposed a deep inception net with atrous convolution (ACDIN) to realize bearing fault diagnosis. Besides, Chen et al. [24] proposed an integrated anomaly detection approach for seeded bearing faults, which uses EMD and the Hilbert transformation to extract the feature set.

All the above studies utilize traditional machine learning approaches to implement intelligent fault diagnosis systems. However, once the training and testing data do not obey the same distribution, the performance declines significantly. In real scenarios, most bearing faults happen randomly, and it is impossible to label enough samples for training a new model. Therefore, transfer learning is necessary to bring intelligent fault diagnosis systems into real industrial scenarios. Among the current research on transfer learning, Xu et al. [25] used TrAdaBoost to transfer knowledge from the source domain to the target domain after extracting features with WT. TrAdaBoost assumes that there are a few labeled samples in the target domain and constructs a mixed training set containing the labeled data from both the source and target domains [26]. More distinctively, the algorithm uses the weight adjustment of AdaBoost, which determines the sample weights from the feedback of the classification performance on the labeled target data. Thus, the method can learn an effective model for the source domain, while it might not obtain acceptable performance on the target task. Considering that data may be corrupted during collection, there exists some uncertainty in both the source and target domains; Xiao et al. [27] therefore proposed to learn the transfer proportions when transferring knowledge from source to target. With the explosive increase of data, transfer learning has also been combined with deep neural networks to improve recognition performance. Prieto et al. [28] proposed a bearing fault diagnosis model based on statistical time features and neural networks. Shao et al. [29] utilized the scaled exponential linear unit to improve the quality of the feature mapping, which compensated for the lack of labeled samples in the target domain. The good performance of these deep transfer models benefits from the outstanding feature extraction ability of deep neural networks.

However, training a deep transfer learning model needs enough samples. Therefore, the study of shallow machine learning methods is still necessary for some real industrial scenarios. Unlike feature learning with deep neural networks, the proposed intelligent fault diagnosis framework utilizes statistical features and a shallow transfer learning algorithm to learn a feature mapping that reduces the marginal distribution discrepancy between the source and target domains. In this way, the proposed system offers another way to cope with the data deficiency that may exist in real industrial scenarios. Herein, TCA is used to transform the source-domain and target-domain features into a unified feature space, in which the maximum mean discrepancy (MMD) is used to minimize the distance between the source and target domains, so as to achieve accurate diagnosis without labeled data [30, 31]. As to the features, we first use EEMD to process the signal and extract the feature set and then transfer the features through TCA to establish the unsupervised fault diagnosis model named EEMD-TCA-SVM. The model is verified on the public data set of Case Western Reserve University (CWRU), and the results show that the proposed system obtains acceptable performance.

3. Transfer Learning-Based Intelligent Fault Diagnosis

In this paper, EEMD is used to decompose the vibration signals into multiple IMFs. Then, Hilbert envelope spectra (HES) and Hilbert marginal spectra (HMS) are calculated to acquire time and frequency features. After that, the unified feature space is learned by TCA to realize feature transfer from the source domain to the target domain. Finally, the various faults are identified by a multiclass classifier based on SVM. The specific procedure of the proposed transfer learning-based intelligent fault diagnosis system is described in Figure 2.

3.1. Fault Feature Extraction from Vibration Signals by EEMD

The data used here to extract features are vibration signals collected from accelerometers mounted on the rolling bearing. The signals are then segmented into short waves spanning several periods, which is useful for extracting time- and frequency-domain features. We select EEMD to decompose the original signals into different IMF components; EEMD improves EMD by adding white Gaussian noise to the signal to eliminate mode mixing [32].

Before signal decomposition, white Gaussian noise is added to the original signal $x(t)$:

$$x_i(t) = x(t) + n_i(t), \quad i = 1, 2, \ldots, N, \qquad (1)$$

where $n_i(t)$ is the $i$th added white Gaussian noise, and $x_i(t)$ is the corresponding noisy signal to be decomposed later. By subtracting the mean value $m(t)$ of the upper and lower envelopes from $x_i(t)$, the signal component $h(t) = x_i(t) - m(t)$ is obtained. Then, $h(t)$ is taken as a new signal to be decomposed, and the above operations are repeated until the termination criterion of equation (2) is satisfied:

$$SD = \sum_{t=0}^{T} \frac{\left|h_{k-1}(t) - h_k(t)\right|^2}{h_{k-1}^2(t)}, \qquad (2)$$

where $T$ denotes the length of the signal. Usually, the range of $SD$ is [0.2, 0.3]. When the requirements of an IMF are satisfied, $h_k(t)$ is the IMF component $c_1(t)$ we would like to obtain. We can then get the remaining subsequence $r_1(t) = x_i(t) - c_1(t)$, which is the residual component. Repeating the above process, the ultimate residual component is obtained by

$$r_n(t) = r_{n-1}(t) - c_n(t). \qquad (3)$$

Next, equation (3) can be rewritten as equation (4); obviously, the original signal can be decomposed into the IMF components $c_{ij}(t)$ and the residual subsequence $r_{in}(t)$:

$$x_i(t) = \sum_{j=1}^{n} c_{ij}(t) + r_{in}(t). \qquad (4)$$

The $j$th IMF component of the ensemble of $N$ decompositions is defined as

$$c_j(t) = \frac{1}{N} \sum_{i=1}^{N} c_{ij}(t). \qquad (5)$$

The implementation of EEMD is concretely described in Table 1.
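To make the procedure of Table 1 concrete, the following minimal Python sketch shows only the ensemble-averaging step of EEMD; the sifting itself is delegated to a hypothetical routine `emd_decompose` (not from the paper) that returns the IMFs of one noisy realization, and the noise amplitude and ensemble size are illustrative defaults.

```python
import numpy as np

def eemd(signal, emd_decompose, n_ensembles=100, noise_std=0.2, max_imfs=4):
    """Ensemble EMD sketch: average the IMFs of noisy copies of the signal.

    emd_decompose(x, max_imfs) is assumed to return an array of shape
    (max_imfs, len(x)) holding the IMFs of one realization (equations (1)-(4)).
    """
    rng = np.random.default_rng(0)
    imf_sum = np.zeros((max_imfs, len(signal)))
    for _ in range(n_ensembles):
        noisy = signal + noise_std * np.std(signal) * rng.standard_normal(len(signal))
        imf_sum += emd_decompose(noisy, max_imfs)   # IMFs of one noisy copy
    return imf_sum / n_ensembles                    # ensemble mean, equation (5)
```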

Additionally, an example is given in Figure 3 to show the decomposition performance of EEMD. The blue waveform is the original vibration signal, and the red ones are IMF1, IMF2, IMF3, IMF4, and residual component, respectively. Figure 3 shows that the original signal can be decomposed into IMF components with different frequencies and amplitudes, which efficiently extract features from the original signal. Through the decomposition, the redundant components can be removed, while preserving signal features.

However, not every IMF component can exactly represent the information of the original signal, so a selection of IMF components is necessary after EEMD decomposition. In order to simplify the calculation, the first four IMF components are empirically used for feature extraction. After that, 9 statistical parameters are computed to represent the original signal and the HES and HMS of the EEMD decomposition. Table 2 shows the detailed formulas of the 9 statistical parameters.
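As an illustration of this step, the sketch below computes one plausible set of nine statistical parameters (mean, standard deviation, RMS, peak, skewness, kurtosis, crest factor, shape factor, impulse factor); the exact parameter set of Table 2 may differ, so this set is an assumption.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_features(x):
    """Nine illustrative statistical parameters of a 1-D sequence."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    mean_abs = np.mean(np.abs(x))
    return np.array([
        np.mean(x),        # mean value
        np.std(x),         # standard deviation
        rms,               # root mean square
        peak,              # peak value
        skew(x),           # skewness
        kurtosis(x),       # kurtosis
        peak / rms,        # crest factor
        rms / mean_abs,    # shape factor
        peak / mean_abs,   # impulse factor
    ])
```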

In order to extract the features of the time-frequency domain, the Hilbert transformation is used to capture how the vibration signal varies with time and frequency. At first, each IMF component $c_j(t)$ is transformed to $\hat{c}_j(t)$ by the Hilbert transformation of the following equation:

$$\hat{c}_j(t) = \frac{1}{\pi} P \int_{-\infty}^{+\infty} \frac{c_j(\tau)}{t - \tau}\, d\tau, \qquad (6)$$

where $P$ denotes the Cauchy principal value.

Then, each IMF component is further analyzed to obtain an analytical signal by the following equation:

$$z_j(t) = c_j(t) + i\hat{c}_j(t) = a_j(t) e^{i\varphi_j(t)}, \qquad (7)$$

where $a_j(t) = \sqrt{c_j^2(t) + \hat{c}_j^2(t)}$ is the amplitude function, which is actually the spectral envelope, and $\varphi_j(t) = \arctan\left(\hat{c}_j(t)/c_j(t)\right)$ is the phase function. The Fourier transformation of $a_j(t)$ is the HES of the corresponding IMF component. Based on equation (7), the Hilbert spectrum is calculated by

$$H(\omega, t) = \operatorname{Re} \sum_{j} a_j(t)\, e^{i \int \omega_j(t)\, dt}, \qquad (8)$$

where $\omega_j(t) = d\varphi_j(t)/dt$ is the instantaneous frequency. After that, HMS can be obtained from the Hilbert spectrum, which is specifically shown in equation (9):

$$h(\omega) = \int_{0}^{T} H(\omega, t)\, dt, \qquad (9)$$

where $T$ is the length of the whole sequence. The pseudocode of the HES and HMS calculation is shown in Table 3.
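A hedged sketch of the HES and HMS computation for one IMF is given below, using the analytic signal from `scipy.signal.hilbert`; the HMS is approximated here by accumulating the instantaneous amplitude into instantaneous-frequency bins, which is a simple discretization of equations (8) and (9) rather than the authors' exact routine of Table 3.

```python
import numpy as np
from scipy.signal import hilbert

def hes(imf):
    """Hilbert envelope spectrum: FFT magnitude of the envelope a_j(t) of equation (7)."""
    envelope = np.abs(hilbert(imf))
    return np.abs(np.fft.rfft(envelope - envelope.mean()))

def hms(imf, fs, n_bins=256):
    """Approximate Hilbert marginal spectrum: amplitude accumulated per frequency bin."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)      # instantaneous frequency in Hz
    spectrum, edges = np.histogram(inst_freq, bins=n_bins, range=(0.0, fs / 2.0),
                                   weights=amplitude[1:])
    return spectrum, edges
```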

Figure 4 shows the HES and HMS of a randomly selected vibration signal generated by an outer race fault (OF) at a motor speed of 1797 rpm.

3.2. Unified Feature Space Learning between the Source and Target Domains

Different from traditional machine learning approaches, we consider the real scenario in which the training and testing data come from different distributions, $P(X_S) \neq P(X_T)$. If the training data is directly used to train a model for testing, the trained model will perform badly on the testing data. It is assumed that there exists a feature mapping $\phi$ that lets the distributions of the training and testing data approximate each other, $P(\phi(X_S)) \approx P(\phi(X_T))$. TCA is a classical transfer learning approach proposed by Pan et al. [31], which realizes transfer learning by mapping the data of the source and target domains into a Reproducing Kernel Hilbert Space (RKHS). It utilizes the feature mapping to reduce the distribution discrepancy between the different data sets, under the assumption that the conditional distributions can be made to approximate each other by adjusting the marginal distributions. Specifically, when $P(\phi(X_S)) \approx P(\phi(X_T))$ is satisfied, we assume $P(Y_S \mid \phi(X_S)) \approx P(Y_T \mid \phi(X_T))$. Here, the maximum mean discrepancy (MMD) is used to estimate the discrepancy between the training and testing data in the mapped feature space:

$$\mathrm{MMD}(X_S, X_T) = \left\| \frac{1}{n_1} \sum_{i=1}^{n_1} \phi\!\left(x_{S_i}\right) - \frac{1}{n_2} \sum_{j=1}^{n_2} \phi\!\left(x_{T_j}\right) \right\|_{\mathcal{H}}^{2}, \qquad (10)$$

where $n_1$ and $n_2$ are the numbers of samples in the training and testing sets, respectively, and $\|\cdot\|_{\mathcal{H}}$ is the RKHS norm. Equation (10) cannot be calculated directly; the samples must be transformed into the mapping space by a kernel method. In order to embed both the training and testing data into a shared low-dimensional latent space, TCA introduces a kernel matrix $K$ and a distribution discrepancy matrix $L$. The kernel matrix contains the elements defined on the source-domain, target-domain, and cross-domain data in the feature mapping space, as detailed in equation (11), and the elements of $L$ are calculated by equation (12):

$$K = \begin{bmatrix} K_{S,S} & K_{S,T} \\ K_{T,S} & K_{T,T} \end{bmatrix}, \qquad (11)$$

$$L_{ij} = \begin{cases} \dfrac{1}{n_1^2}, & x_i, x_j \in X_S, \\[4pt] \dfrac{1}{n_2^2}, & x_i, x_j \in X_T, \\[4pt] -\dfrac{1}{n_1 n_2}, & \text{otherwise}. \end{cases} \qquad (12)$$

Then, by introducing a transformation matrix $W$, the distance of equation (10) can be rewritten as

$$\min_{W} \ \operatorname{tr}\!\left(W^{\top} K L K W\right) + \mu \operatorname{tr}\!\left(W^{\top} W\right), \quad \text{s.t. } W^{\top} K H K W = I,$$

where the first term minimizes the distance between the distributions, the second term is a regularization term, and the constraint preserves (maximizes) the variance in the feature space. $\mu$ ($\mu > 0$) is a tradeoff parameter, and $I$ is an identity matrix. $H = I_n - (1/n)\mathbf{1}\mathbf{1}^{\top}$ is the centering matrix, where $n = n_1 + n_2$ is the total number of samples in the training and testing sets. The solution $W$ consists of the leading eigenvectors of $(KLK + \mu I)^{-1} K H K$, and the values after dimension reduction, $KW$, are the mapped features.
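For clarity, a compact NumPy sketch of TCA under these definitions is given below; the RBF kernel, the subspace dimension, and the parameter values are illustrative assumptions rather than the settings used in the experiments.

```python
import numpy as np

def tca(Xs, Xt, dim=10, mu=1.0, gamma=1.0):
    """TCA sketch: learn a mapping that reduces the MMD between Xs (n1 x d) and Xt (n2 x d)."""
    X = np.vstack([Xs, Xt])
    n1, n2 = len(Xs), len(Xt)
    n = n1 + n2
    # RBF kernel matrix K over source, target, and cross-domain data (equation (11))
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # MMD matrix L (equation (12)): L = e e^T with e_i = 1/n1 (source), -1/n2 (target)
    e = np.vstack([np.full((n1, 1), 1.0 / n1), np.full((n2, 1), -1.0 / n2)])
    L = e @ e.T
    # Centering matrix H = I - (1/n) 11^T
    H = np.eye(n) - np.full((n, n), 1.0 / n)
    # Leading eigenvectors of (KLK + mu*I)^{-1} KHK give the transformation W
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    eigvals, eigvecs = np.linalg.eig(A)
    W = eigvecs[:, np.argsort(-eigvals.real)[:dim]].real
    Z = K @ W                                   # mapped features
    return Z[:n1], Z[n1:]
```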

3.3. Multicategory Fault Diagnosis

To allow for possibly misclassified samples, slack variables $\xi_i$ and a penalty term are introduced, and the following relation is obtained:

$$y_i\left(w^{\top} x_i + b\right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad i = 1, \ldots, l.$$

The objective function of the optimal hyperplane is then replaced by $\frac{1}{2}\|w\|^2 + C \sum_{i=1}^{l} \xi_i$. In general, the penalty factor $C$ is a nonnegative real number, and the solution of the optimal hyperplane can be expressed as follows:

$$\min_{w,\, b,\, \xi} \ \frac{1}{2}\|w\|^{2} + C \sum_{i=1}^{l} \xi_i, \quad \text{s.t. } y_i\left(w^{\top} x_i + b\right) \geq 1 - \xi_i, \ \xi_i \geq 0.$$

The optimal hyperplane can be obtained by solving the above objective. To sum up, the decision function of SVM is composed of inner products with the support vectors and their summation; therefore, the decision function of SVM is similar in form to a neural network. Each intermediate node corresponds to the inner product of the input sample and a support vector computed by the kernel function, and the output is a linear combination of the intermediate nodes.

The fault diagnosis studied in this paper is a ten-class classification problem, but SVM is usually used for binary classification. Thus, we combine multiple SVMs to construct a multiclass classifier in a one-vs-rest manner: one SVM is used to separate category 1 from categories 2 to 10, and the other nine categories are likewise separated by their own binary classifiers, as sketched below.
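A minimal sketch of this one-vs-rest combination with scikit-learn is shown below; `Zs`, `ys`, and `Zt` stand for the TCA-mapped source features, the source labels, and the mapped target features from the previous steps, and the kernel and hyperparameters are illustrative.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Ten binary SVMs, one per fault category, combined in a one-vs-rest scheme.
clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(Zs, ys)             # train on the mapped source-domain features
y_pred = clf.predict(Zt)    # diagnose the unlabeled target-domain samples
```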

4. Experimental Analysis

4.1. Data Set

In this paper, the vibration signals of bearing faults are collected from the test platform of Case Western Reserve University (CWRU) [33]. The bearing test rig is shown in Figure 5 and is composed of a three-phase induction motor, a torque sensor, and a dynamometer. Four motor loads of 0, 1, 2, and 3 HP are given in the database, corresponding to different categories of vibration signals. The sampling frequency is 12 kHz. The experimental data used in the following come from the drive end of the motor, where the accelerometer collects the vibration signals under different fault conditions. Moreover, SVM, TCA-SVM, and EEMD-SVM are compared with our EEMD-TCA-SVM, which further demonstrates the feasibility of the proposed intelligent fault diagnosis system.

In the experiments, four data sets are prepared, corresponding to the different motor loads shown in Table 4. A, B, C, and D are the bearing fault data sets under motor loads of 0, 1, 2, and 3 HP, respectively. There are ten classes in total, including IF, SF, and OF with three fault diameters each, plus healthy samples. For each class, a vibration signal of length 120,000 is selected and segmented with a window of 2000 and a step size of 1000, as sketched below. Each experimental data set (such as A) thus has 10 classes and 1200 samples, of which 960 samples are taken as the training set and the other 240 samples as the testing set.
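The segmentation described above can be sketched as follows; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def segment(signal, window=2000, step=1000):
    """Slice a vibration record into overlapping segments (window 2000, step 1000)."""
    n_segments = (len(signal) - window) // step + 1
    return np.stack([signal[i * step: i * step + window] for i in range(n_segments)])
```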

4.2. Experimental Steps and Result Analysis

In order to verify the performance of the proposed method, we compare the classification performance of SVM, TCA-SVM, EEMD-SVM, and EEMD-TCA-SVM on the different transfer pairs among A, B, C, and D. In total, 12 groups of experiments are set up, as shown in Table 4.

Once the data sets are set up, the training and testing sets correspond to the source-domain and target-domain data in transfer learning. SVM is trained on the training set, and the testing set is then used to check the classification performance. For TCA-SVM, both the training and testing sets are used to learn the unified feature space by minimizing the distribution distance between them with TCA; the training set is then used to train the SVM, and the testing set is mapped into the unified feature space and classified by the SVM. For EEMD-SVM and EEMD-TCA-SVM, the data sets are first processed by EEMD. The first four IMFs are used to calculate the HMS and HES, from which 9 statistical features each are computed. Together with the 9 statistical features of the original signal, there are 81 features in total. The following procedures are the same as for SVM and TCA-SVM, as sketched in the pipeline below.
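Putting the steps together, a hedged end-to-end sketch of the EEMD-TCA-SVM procedure is given below; it reuses the helper functions sketched in Section 3 (`eemd`, `statistical_features`, `hes`, `hms`, `tca`), which are assumptions rather than the authors' exact implementation, and `emd_routine`, `source_segments`, `target_segments`, `ys`, and `yt` are placeholder names.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def build_features(segments, emd_routine, fs=12000):
    """81-D feature vector per segment: 9 statistics of the raw segment plus
    9 statistics of the HES and HMS of each of the first four IMFs."""
    rows = []
    for seg in segments:
        imfs = eemd(seg, emd_routine, max_imfs=4)
        feats = [statistical_features(seg)]
        for imf in imfs:
            feats.append(statistical_features(hes(imf)))
            feats.append(statistical_features(hms(imf, fs)[0]))
        rows.append(np.concatenate(feats))          # 9 + 4 * (9 + 9) = 81 features
    return np.array(rows)

Xs = build_features(source_segments, emd_routine)   # labeled source domain
Xt = build_features(target_segments, emd_routine)   # unlabeled target domain
Zs, Zt = tca(Xs, Xt, dim=10)                        # unified feature space
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(Zs, ys)
y_pred = clf.predict(Zt)                            # target-domain predictions
```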

In order to verify the superiority of the EEMD-TCA-SVM model over the other methods, we give the accuracy, ROC curve, AUC value, and confusion matrix in the following.

4.2.1. Accuracy

Accuracy is an important standard to measure fault diagnosis systems, which denotes the ratio of correctly predicted samples to the total samples. Through accuracy, we can easily evaluate the diagnosis performance as a whole. Table 5 shows the accuracies of the methods on the different source-target pairs.

From the results of the first four groups of Table 5, EEMD-TCA-SVM obtains a relatively higher average accuracy than the other methods, and TCA shows its transferability in the average accuracy. In particular, for some cases such as C ⟶ D, it achieves an increase of almost 20%. Compared with TCA-SVM, EEMD-TCA-SVM shows obviously improved performance both on average and in each case. Thus, processing the original signals by EEMD is necessary for the fault diagnosis system, since the hidden information of different resolutions in the time and frequency domains can be extracted by EEMD. In order to verify the reliability of the experiment, Random Forest (RF) is taken as an additional classifier to test the diagnosis performance on the transfer tasks. EEMD-TCA-RF also obtains a higher average accuracy than the other methods. Comparing the results of TCA-RF with RF, TCA effectively minimizes the distribution discrepancy between the source and target domains, and the recognition accuracy is improved by about 16%. Comparing the results of EEMD-TCA-RF with TCA-RF, the accuracy is improved by about 30%, showing that EEMD can effectively extract the important information from the original signal. Comparing the results of EEMD-TCA-RF with EEMD-RF, the accuracy is improved by about 5%. The reason is that the decomposition by EEMD and the calculation of the components' statistical features may already alleviate the distribution discrepancy of the original signals to some extent, so the additional improvement from TCA is smaller. Overall, the RF classifier leads to conclusions identical to those of the SVM classifier on the different tasks of Table 5.

4.2.2. Confusion Matrix

The confusion matrix records how many samples are classified into each category and displays the results in matrix form [34]. It is mostly used to judge the quality of a classifier and is applicable to classification methods in general. It is a basic, intuitive, and simple way to further examine the accuracy of classification methods or systems.

The fault diagnosis here is a multiclass classification problem, so the confusion matrix is a table of size 10 × 10. Figure 6 shows the confusion matrices of SVM, TCA-SVM, EEMD-SVM, and EEMD-TCA-SVM, respectively.
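The confusion matrices of Figure 6 can be reproduced for any of the classifiers with a short scikit-learn snippet such as the one below, where `yt` and `y_pred` are the target-domain labels (used only for evaluation) and the predictions from the earlier sketches.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

cm = confusion_matrix(yt, y_pred)                    # 10 x 10 table of counts
ConfusionMatrixDisplay(confusion_matrix=cm,
                       display_labels=list(range(1, 11))).plot(cmap="Blues")
plt.show()
```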

Compared with the other methods, EEMD-TCA-SVM can identify most of the categories accurately. SVM shows the worst confusion matrix, where some of the categories cannot be recognized at all. In particular, for the healthy category (label 1), all the healthy data are identified as faults, as shown in Figure 6(a), and the fault categories are also easily misclassified with each other. SVM has no transferability and should not be used for this fault diagnosis directly. When TCA is used to transfer the features, the recognition performance in Figure 6(b) is improved to a certain extent but still shows very low classification accuracy: although most of the healthy cases are identified correctly, the faults are seriously misclassified among each other. Therefore, it is infeasible to transfer the signals without any feature extraction. EEMD is a signal processing method that separates the signals into different IMF components, in which more discriminative information can be found. Based on this separation, the statistical features are calculated, which constitute the new fault diagnosis data. Figures 6(c) and 6(d) clearly show the improvement of the recognition performance brought by EEMD. But in the case B ⟶ D, EEMD-SVM misclassifies all the healthy data into the 7th fault category, whose data may be more similar to the healthy data in terms of statistical features. Likewise, EEMD-TCA-SVM improves the recognition rate for almost all the categories compared with EEMD-SVM, especially for the healthy data; the domain adaptation is effective for data with a distribution discrepancy. Moreover, compared with SVM and TCA-SVM, EEMD-SVM and EEMD-TCA-SVM show a much more obvious improvement. Therefore, we think that the statistical features extracted by EEMD may alleviate the impact of the distribution discrepancy of the original signals on fault diagnosis, and it is very necessary to introduce signal processing-based feature extraction into fault diagnosis systems.

4.2.3. ROC and AUC

Although the proportion of correctly classified samples in the whole testing set is illustrated by the classification accuracy and the confusion matrix, they neglect the relationship between the false positive rate (the probability of negative samples wrongly categorized as positive) and the true positive rate (the probability of positive samples correctly categorized as positive). Therefore, we further use the Receiver Operating Characteristic (ROC) curve [35] and the Area Under Curve (AUC) value [36] to evaluate the classification performance. ROC directly shows the relation between FPR (False Positive Rate) and TPR (True Positive Rate). As shown in Figure 7, FPR and TPR are the horizontal and vertical axes, respectively. AUC denotes the area under the ROC curve, which provides another way to evaluate the performance of a method: an ideal method has an AUC value of 1, while a random model has an AUC value of 0.5.

Figure 7 illustrates the ROC curves and AUC values of SVM, TCA-SVM, EEMD-SVM, and EEMD-TCA-SVM. By the comparison, we can see that TCA can improve the unsupervised fault diagnosis performance. There are 10 categories in the fault diagnosis problem, including the healthy condition; for the ROC analysis, the ten categories are divided into two groups, healthy and faulty. As shown in Figure 7, the curves with different colors correspond to EEMD-TCA-SVM, EEMD-SVM, TCA-SVM, and SVM, respectively. EEMD-TCA-SVM obtains the best ROC curve and the highest AUC value among the four methods, while SVM gets the worst ROC curve and an AUC value close to that of random guessing. EEMD-SVM performs better than SVM and TCA-SVM, which further demonstrates that the feature quality seriously impacts the classification performance. Relatively, the impact of TCA is not so obvious in the comparison between EEMD-TCA-SVM and EEMD-SVM, whose AUC values are almost the same. In addition, the distributions of the features extracted by EEMD may be more similar across domains than the distributions of the original vibration signals, which may be one of the reasons for the higher AUC value of EEMD-SVM.
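For reference, one simple way to obtain such a healthy-versus-fault ROC curve is sketched below; it assumes label 1 denotes the healthy class (as in Figure 6) and uses the negated one-vs-rest decision value of that class as a fault score, which is an illustrative choice rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = (yt != 1).astype(int)                   # 1 = fault, 0 = healthy
fault_score = -clf.decision_function(Zt)[:, 0]   # column 0 corresponds to class label 1
fpr, tpr, _ = roc_curve(y_true, fault_score)
print("AUC =", auc(fpr, tpr))
```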

Based on the above results, feature extraction plays a very important role in fault diagnosis: traditional machine learning approaches cannot automatically mine the hidden information from sensor signals. Transfer learning can facilitate unsupervised fault diagnosis and yields promising classification results, and the proposed transfer fault diagnosis system still has room for further improvement. The ROC curve of the data after preprocessing lies clearly above the ROC curve without preprocessing, and its AUC value is significantly higher, which shows that the preprocessing greatly improves the performance of the model. Likewise, the ROC curve of the data processed by TCA always lies above that of the model without TCA, and its AUC value is also larger, which shows that TCA can improve the performance of the unsupervised model.

5. Conclusion

In this paper, we construct a transferable intelligent fault diagnosis system which transfers statistical features across domains. In the proposed system, the original vibration signals are first decomposed by the EEMD algorithm. Then, 81 statistical features are calculated as the initial feature set and transferred by TCA to obtain sharable features between the different distributions. By minimizing the distance between the marginal distributions of the source and target domains, TCA does not need any extra knowledge to assist the transfer. Finally, SVM is taken as the classifier to identify the different categories of faults. The experiments on the CWRU bearing data set show that the proposed system achieves the best accuracy, confusion matrix, ROC curve, and AUC value among the four compared methods. From the specific results, EEMD can extract the hidden information from the signal, and TCA can build a common feature space of different domains for fault diagnosis.

Data Availability

The bearing data used to support the findings of this study have been deposited in the Bearing Data Center of Case Western Reserve University repository (https://csegroups.case.edu/bearingdatacenter/home).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (62006233, 51734009, U1710120, and 51504241).