Abstract
Traditional recognition methods for face local features have a low recognition rate, so a recognition method for face local features based on a fuzzy algorithm and intelligent data analysis was designed. Firstly, the wavelet denoising method was used to reduce the noise of face images, and adaptive template matching was performed on the resulting images. Then, the face information was encoded, and the face features were identified to locate the face feature points. On this basis, the principal components of the face were analyzed to obtain the global features of the face. Finally, through the candidate set for face local feature recognition, the extraction of face local features, and the fusion of face local features, the recognition of local face features was realized. The experimental results show that the average recognition rate of this method is 88.84% in a noisy environment and 97.3% in a noise-free environment. The method can accurately recognize face local features, meets the needs of face local feature recognition, and has certain practical application significance.
1. Introduction
With the development of pattern recognition, image processing, and machine learning, fast and effective automatic recognition and verification technology has attracted widespread attention due to its huge theoretical and practical application value. Face recognition is a biometric recognition technology that classifies people based on facial features. The technology first obtains photos containing faces through devices such as cameras; a face detection algorithm then collects and localizes the various parts of the face, after which the collected faces are reduced in dimension and features are extracted to complete the recognition. The face, as one of the innate characteristics of living beings, is unique to each individual. Compared with other recognition technologies, it has the following advantages: first, face images are easy to collect, since the identified individuals do not need to cooperate intentionally with image collection and users need to perform essentially no operations; second, no contact with the capture devices is required; finally, face recognition supports concurrency, so multiple individuals can be collected, detected, and classified simultaneously. It has been widely used in public security, banking, security inspection, and attendance.
At the same time, with the development of computer image processing technology, higher requirements have been raised for the accuracy and robustness of face recognition. For example, effective face recognition must sometimes be performed under extremely harsh image acquisition conditions, and the related recognition technologies have broad application prospects in fields such as police case-solving and traffic capture. Likewise, research on face recognition under blurred images and face pose changes has application value in crime forensics and identity verification, so recognition methods for face features have received great attention from researchers. Compared with global features, local features are invariant to rotation, translation, and illumination and offer higher accuracy and stability. However, directly using local features for image matching is computationally expensive and therefore unsuitable for building a fast and efficient human-computer interaction system. In recent years, many scholars have proposed improvements to the recognition of face local features, improving the recognition ability of the algorithms to a certain extent. However, these methods all use the local size relationship between neighboring points to describe texture information and ignore the overall relationship between the gray values of pixels and the center pixel in the same direction. When random noise points appear or lighting and edges change, their performance drops greatly.
Therefore, a recognition method for face local features based on fuzzy algorithms and intelligent data analysis was designed to solve the problem of the low recognition rate of traditional recognition methods for face local features. The fuzzy algorithm belongs to the family of intelligent algorithms. When our understanding of the system model is shallow, or objective reasons prevent in-depth research on the system's control model, conventional approaches can play only a limited role; in such cases, a fuzzy algorithm is needed. Common fuzzy algorithms include mean blur and Gaussian blur. Intelligent data analysis refers to an analysis approach that uses tools such as statistics, pattern recognition, machine learning, and data abstraction to discover knowledge from data. The purpose of intelligent data analysis is to directly or indirectly improve work efficiency; in practice, it acts as an intelligent assistant, giving workers the right information at the right time and helping them make the right decision in limited time. Therefore, drawing on the advantages of fuzzy algorithms and intelligent data analysis, the two methods are combined to accurately recognize the local features of the face. In this article, the noise of the face image is removed by the wavelet filtering method to avoid noise interference. The LBP operator is used to encode the face image information, and the LBP histogram is obtained by connecting block histograms to complete the feature localization. The locally sensitive information of the face is recognized by principal component analysis and the least squares method; thus the recognition of the face features is realized.
2. Materials and Methods
2.1. Noise Reduction of Face Images
In order to accurately recognize face images, it is necessary to perform pose correction and noise reduction on them. Using the wavelet denoising method, the geometric structure models of face images with different pose features are first given [1]. Feature extraction and sample collection are performed on face models in different poses, and an affine transformation between two vector spaces of face image features is constructed in rotated coordinates [2, 3]. A new face image is reconstructed from the pixel values corresponding to the x and y coordinates, and the face images are denoised. By training on the face library and adjusting the various poses of the face images to be tested, the pixel set of the face images in the image set is given by an expression whose terms denote, respectively, the noise points in the pixel set, the wavelet noise-reduction parameter, and the face feature information.
The wavelet transform is used to filter the noise in the pixel set to obtain the underlying data distribution, and the image pixel feature information output after wavelet noise reduction is obtained; its terms denote the scale information of the wavelet transform, the projection mapping parameters of the geometric structure distribution of the face images, the diversity feature of the face pose, and the feature values of the face images.
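As a concrete illustration of this step, below is a minimal sketch of wavelet noise reduction using the PyWavelets library; the db4 wavelet, two-level decomposition, and universal soft-threshold rule are illustrative assumptions, not the exact parameters of this article.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2):
    """Denoise a grayscale face image by soft-thresholding its wavelet
    detail coefficients (a sketch of the noise-reduction step above)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band,
    # then apply the universal threshold sigma * sqrt(2 * log(N)).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```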
Based on the noise reduction and pose correction of the face images, adaptive template matching is performed on the noise-reduced images, and an image segmentation method is used to divide the face images into blocks. Feature information parameters are extracted from the background seed points of the face images by principal component analysis. The iterative formula for the background seed point solution involves the contour data of the face-position template-matching center, the faulty seed-point data, the face image background information, and the background seed point calculation parameter.
Frame scanning and information fusion are performed on the continuous areas of the geometric features that characterize the face image structure [4–6], based on a linear affine subspace transformation. Under conditions of low contrast and uneven lighting, adaptive template matching of the face images is performed to build a polygon information model of the face contour distribution [7, 8]. The LGB decomposition of color features accounts for the diversity of human poses, clothing colors, and other factors, and the correlation coefficients of the facial contours involve a dimensionless quantity, the local information subspace reflecting the facial features, the image segmentation parameter, and the adaptive template matching parameter.
At this point, denoising of the face images is complete. Through adaptive template matching and image segmentation, the face images are segmented to provide an accurate data basis for the recognition of face local features [9–11].
2.2. Face Features Positioning
2.2.1. Face Information Encoding
After face image denoising, pose correction, and segmentation, accurate face feature data are output. In order to further realize feature localization, the face image information needs to be encoded [12–16]. The LBP operator is used to encode the face image information, mainly by binary-encoding the gray-value difference between the central pixel and its neighborhood pixels. Finally, the LBP histogram is obtained by concatenating the block histograms [17, 18]. Assume that the gray value of a given central pixel is $g_c$ and the gray value of a neighborhood pixel is $g_n$. The coding rule is as follows:

$$s(g_n - g_c) = \begin{cases} 1, & g_n \ge g_c \\ 0, & g_n < g_c \end{cases}$$
The neighborhood of the central pixel is generally taken as a 3 × 3 window, and the eight surrounding pixels are encoded with the difference $g_n - g_c$ (n = 0, 1, ..., 7). Figure 1 shows the calculation process of the binary coding of a pixel.
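The 3 × 3 coding rule above can be sketched directly in a few lines of Python; the clockwise neighbor ordering is an assumption, since any fixed ordering of the eight neighbors yields a valid code.

```python
import numpy as np

def lbp_3x3(image):
    """Basic LBP: compare the 8 neighbors of each interior pixel with the
    center pixel g_c and pack the results s(g_n - g_c) into an 8-bit code."""
    img = image.astype(int)
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Fixed ordering of the 8 neighbors (clockwise from the top-left).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for n, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        code += (neighbor >= center).astype(int) << n
    return code
```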

Because the effective radius of the neighborhood is often not unique, a circular neighborhood has replaced the square window as the first choice in order to obtain more reliable neighborhood points. The LBP value of the center point encoded at a certain radius is as follows:

$$\mathrm{LBP}_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^{p},$$

where $R$ is the radius of the selected neighborhood, $P$ is the number of surrounding neighborhood points, $p$ is the index of a neighborhood point, and $s(\cdot)$ is the face feature point encoding function defined above.
Figure 2 shows the LBP operators with different radii and numbers of neighborhood points when encoding face feature points.

As the number of neighborhood points increases, the number of LBP pattern values inevitably increases sharply: if the 8 neighborhood points are increased to 16, the number of binary patterns reaches $2^{16} = 65{,}536$. Although more LBP modes describe texture details more accurately, they increase the data storage and computational complexity, which reduces the efficiency of face recognition [19–21]. Therefore, the uniform pattern algorithm is adopted to solve the problem of the sharp increase in computation caused by too many neighborhood points. It simplifies the binary encoding and is defined by the number of 0/1 transitions in the encoding sequence:

$$U(\mathrm{LBP}_{P,R}) = \left| s(g_{P-1} - g_c) - s(g_0 - g_c) \right| + \sum_{p=1}^{P-1} \left| s(g_p - g_c) - s(g_{p-1} - g_c) \right|,$$

where $U$ is the measurement parameter of the uniform pattern, and a code counts as a uniform pattern when it satisfies the uniform condition $U \le 2$.
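In practice the circular-neighborhood and uniform-pattern variants are available off the shelf; the sketch below uses scikit-image's local_binary_pattern, where the random array merely stands in for a face image. The "nri_uniform" method keeps the 58 uniform codes for P = 8 and collapses all non-uniform codes into one extra bin, matching the reduction described above.

```python
import numpy as np
from skimage.feature import local_binary_pattern

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in face crop
P, R = 8, 1  # 8 neighborhood points on a circle of radius 1

# Non-rotation-invariant uniform LBP: codes with U <= 2 keep their own
# label (58 of them for P = 8); everything else shares a single label.
codes = local_binary_pattern(image, P, R, method="nri_uniform")
hist, _ = np.histogram(codes, bins=59, range=(0, 59))
```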
According to the above definitions, the encoding of face information is completed to provide a basis for the recognition of face local features.
2.2.2. Face Features Recognition
The LBP histograms of each block are concatenated to form a joint histogram of the entire face image [22–27]. The process is shown in Figure 3.
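A minimal sketch of this blocking-and-concatenation step follows; the 8 × 8 grid and 59-bin histograms (for uniform LBP with P = 8) are assumed values for illustration.

```python
import numpy as np

def joint_lbp_histogram(codes, grid=(8, 8), n_bins=59):
    """Split an LBP code map into grid blocks, build one histogram per
    block, and concatenate them into the joint face descriptor."""
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))  # per-block normalization
    return np.concatenate(feats)
```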

The traditional face recognition algorithm performs only simple cropping and global feature extraction before recognition, or divides the whole image into uniform blocks. After the feature points are extracted and the histograms are stitched together, the locality of the features is enhanced, but long processing times and redundant information are also introduced [28, 29]. Therefore, based on the face image noise reduction described above, the locally sensitive information of the face is identified in order to locate the face feature points. The most prominent features in face recognition are the eyes, eyebrows, nose, mouth, and chin [30–32]. The energy differences in these parts (especially eyebrow shape, eye shape, depth of the nasolabial folds, nostril changes, mouth shape, and chin shape) are especially pronounced across faces [33, 34] and can serve as important criteria for face discrimination. At the same time, in order to further reduce the complexity and redundancy of the directly preprocessed image, the sensitive parts of the face with important discriminative characteristics, such as the eyes, eyebrows, mouth, nose, and chin, are jointly used for face recognition [35–37]. This lays the foundation for feature point positioning. The AdaBoost method is used to locate each part of the face; the schematic diagram is shown in Figure 4.

The final calculation formula for the face features involves the pixel coordinates, the gray values of the adjacent pixels and the center pixel, and the judgment parameters of the face features.
According to the above face information coding and face feature point recognition, the location of the face feature points is completed, providing a basis for local feature extraction.
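As an illustration of AdaBoost-based part localization, the sketch below uses OpenCV's stock Haar cascade detectors (which are trained with AdaBoost); the input file name is hypothetical, and these stock cascades stand in for whatever detectors the article actually trained.

```python
import cv2

# OpenCV's Haar cascades are AdaBoost-trained detectors; they stand in
# here for the face-part localization described above.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
    roi = gray[y:y + h, x:x + w]              # crop the detected face
    eyes = eye_cascade.detectMultiScale(roi)  # locate eyes inside it
    print(f"face at ({x}, {y}), {len(eyes)} eye region(s) found")
```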
2.3. Face Global Features
The principal component analysis of the face finds the optimal orthogonal basis of the face features [38–43] so that the mean square error between the transformed variables and the original variables is smallest. The so-called "principal components" are obtained by transforming the original independent variables into another group of variables and then retaining some important components as the new independent variables. Finally, the least squares method is used to estimate the model parameters after the principal components are selected. In face global feature recognition, the space spanned by the eigenvectors is called the eigenface space [44]. For a sample set of $M$ face feature vectors $x_i$ with mean $\bar{x}$, the covariance matrix of the face global feature sample set is

$$C = \frac{1}{M} \sum_{i=1}^{M} (x_i - \bar{x})(x_i - \bar{x})^{\mathsf{T}}, \qquad C v_i = \lambda_i v_i,$$

where $\lambda_i$ is the corresponding eigenvalue and $v_i$ is the eigenvector associated with that eigenvalue, constituting a principal component.
Each face can be projected into the eigenface subspace to obtain a set of coordinate coefficients, which serve as the basis for obtaining the global features of the face. The position, scale, number, and direction of the feature vectors extracted from different faces differ; even for the same person, the detected feature vectors are inconsistent under different expressions or conditions. Therefore, many mismatches occur, such as the left side of a face matching the right side, or eyes matching a nose. During feature matching, some detected feature points that should have little effect on classification can end up dominating it, for example matches on hair ends, which cause recognition errors. Therefore, the PCA method (principal component analysis) is used to extract the global features of the face; it mainly relies on the K-L transform, an optimal orthogonal transform for image compression. The K-L transform changes the coordinate space, and the transformed space is called the subspace. Recognition with the PCA method is achieved by projecting the face images to be recognized onto the eigenface subspace obtained from the training images.
The steps of PCA to obtain the global features of the face are as follows:

Step 1: obtain the pixel matrix of all training set images.

Step 2: obtain the feature matrix from the pixel matrix.

Step 3: obtain the covariance matrix corresponding to the feature matrix.

Step 4: find the eigenvalues and eigenvectors of the covariance matrix.

Step 5: arrange the eigenvalues in descending order and retain the larger eigenvalues.

Step 6: orthogonalize and normalize the eigenvectors corresponding to these eigenvalues.

Step 7: combine the obtained eigenvectors as columns to form the transformation matrix $W$; the subspace (face global feature space) is then obtained. The error between the training output label and the actual label is backpropagated layer by layer using a DBN to fine-tune the global features of the entire face. The structure of the DBN is shown in Figure 5.

Step 8: compute the global feature data of a face image as $y = W^{\mathsf{T}}(x - \bar{x})$, which gives its coordinate position in the subspace. Each image in the training set corresponds to a point in the subspace, and the dimensions of this point constitute the PCA feature of the image.

Step 9: the feature data of the image to be identified are obtained by the same projection as in Step 8, yielding its coordinate position in the subspace, so that the image to be identified also corresponds to a point in the subspace and its PCA feature is obtained.
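The nine steps can be condensed into a short sketch; the SVD of the centered data matrix is used here as a numerically convenient equivalent of the covariance eigendecomposition in Steps 3–4 (the DBN fine-tuning of Step 7 is omitted).

```python
import numpy as np

def pca_subspace(X, k):
    """Steps 1-7: X holds one flattened training image per row. Returns
    the mean face and the transformation matrix W (top-k eigenvectors)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: the rows of Vt are the eigenvectors of
    # the covariance matrix, already sorted by decreasing eigenvalue.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T
    return mean, W

def pca_project(x, mean, W):
    """Steps 8-9: the subspace coordinates y = W^T (x - mean)."""
    return W.T @ (x - mean)
```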

2.4. Extraction and Fusion of Face Local Features
2.4.1. Candidate Set for Recognition of Face Local Features
There is a range for the number of training samples used in extracting face local features within which the recognition rate reaches its maximum. Below this range, as the training samples increase, the number of extracted recognition features increases and the recognition rate improves steadily. Above this range, the recognition rate stays at a stable maximum and sometimes even decreases. In other words, with excess training samples, the recognition rate of the algorithm is not significantly improved, but its computational load increases and the recognition time grows, seriously affecting efficiency. Therefore, before the recognition of face local features, the gray-level features of the face images are used to coarsely filter the face samples. The screening process is shown in Figure 6:

According to the above process, training samples that differ significantly from the test sample and interfere with recognition are eliminated, and the training samples are kept within the range that maximizes the recognition rate while minimizing the amount of calculation.
Assuming that the gray distribution information of the test sample is represented by a matrix, the similarity between the test sample and each training sample is calculated using a formula whose terms are the image similarity threshold, the similarity parameter between the image and the test sample, and the pixel coordinates of the image.
Face samples whose distance is less than the threshold are selected as the candidate set, and candidate sets of different sizes are used for the final recognition of face local features.
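A sketch of the screening step follows; the Euclidean distance between gray-level matrices is an assumed similarity measure, since the article's exact formula is not reproduced here.

```python
import numpy as np

def candidate_set(test, train_imgs, threshold):
    """Coarse pre-screening: keep the indices of training samples whose
    gray-level distance to the test sample falls below the threshold."""
    test = test.astype(float)
    return [i for i, t in enumerate(train_imgs)
            if np.linalg.norm(test - t.astype(float)) < threshold]
```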
2.4.2. Extraction of Face Local Features
Because two-dimensional face images are strongly affected by pose angle and illumination, it is difficult to extract features in a linear space. Therefore, the pose and illumination of a three-dimensional face model are controlled according to spherical harmonics theory. The two-dimensional virtual images obtained by mapping are divided into different subsets through five parameters describing the changes in pose and illumination angle. This increases the clustering of the training samples and overcomes some nonlinear problems in feature extraction. Both global features and local features are indispensable for face perception [45–51] and recognition, but research shows that global feature descriptions are often used as front-end input information for perception. If the person to be identified has a particularly obvious local feature, the global description retreats to a relatively minor position. Therefore, the adaptive feature selection function of the human visual system is simulated so that fast face recognition can rely on the most individual local features of each person.
The feature vectors of the face's left eyebrow, right eyebrow, left eye, right eye, nose, and mouth are calculated in six subregions. Assuming that each training sample set in the face images contains 3 training samples, the normalized images are expanded by rows into a matrix whose entries represent the j-th sample in the i-th training sample set. Singular value decomposition is then performed on the overall scatter matrix of each sample set to find its nonzero eigenvalues and the corresponding eigenvectors.
Considering the amount of calculation, only the matrix composed of the eigenvectors corresponding to the first few eigenvalues is chosen as the projection matrix for principal component analysis, and the feature vector of each face in each sample set is found. The quantities in the calculation are the total number of face features to be identified, the number of face features participating in training, and the distance from one feature vector to another.
The resulting feature vector not only retains the information of the key feature point itself but also contains the surrounding structural information, reflecting the locality and structural topology of the face features. The distances corresponding to the feature vectors of the face are sorted in descending order; a larger coefficient indicates a more distinctive local feature.
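The per-subregion extraction can be sketched as follows; projecting each part onto the leading directions of its sample scatter matrix via SVD is the technique named above, with k, the number of retained eigenvectors, left as a free parameter.

```python
import numpy as np

def subregion_features(samples, k):
    """samples: rows are flattened crops of one face part (e.g., the left
    eye) across the training set. Projects onto the top-k eigenvectors of
    the overall scatter matrix, obtained through SVD."""
    mean = samples.mean(axis=0)
    _, _, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    proj = Vt[:k].T                      # projection matrix for this part
    return (samples - mean) @ proj       # per-sample local feature vectors
```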
2.4.3. Fusion of Local Face Features
Face features contribute differently to feature extraction in different situations. Therefore, the face features are weighted under different conditions, and the extracted local features are separated into the corresponding 6 local features, which are expanded into vectors row by row, each vector corresponding to a subspace projection. The classification discriminant function is defined in terms of a discriminant function value and the distance between a group of local features of the unknown face image and the corresponding local features in the database.
Classification consists of finding the local features with the smallest distance between the two groups so as to obtain the lowest classification error rate. Therefore, the local feature classifier is constructed from a comparison parameter of the local feature region, the weight of each face local feature (a larger value indicates a greater contribution of the corresponding local feature), and the face feature fusion factor.
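A sketch of the weighted nearest-distance decision follows; the specific weights are illustrative, since the article derives them from the feature contributions rather than fixing them.

```python
import numpy as np

def classify(unknown_parts, gallery, weights):
    """unknown_parts: the 6 local feature vectors of the unknown face.
    gallery: list of 6-part feature sets, one per enrolled identity.
    Returns the identity whose weighted sum of part distances is smallest."""
    fused = [sum(w * np.linalg.norm(u - g)
                 for w, u, g in zip(weights, unknown_parts, person))
             for person in gallery]
    return int(np.argmin(fused))

# Example: eyes weighted more heavily than the other parts (illustrative).
# weights = [0.15, 0.15, 0.25, 0.25, 0.10, 0.10]
```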
On this basis, the decision-level fusion method is used to fuse face local features (Figure 7).

This method has better fault tolerance and can weaken the influence of incomplete information and erroneous data. Firstly, the stratified cross-validation method is used, and the ULAP, SLGS, and V-SLGS algorithms are used to construct the base classifiers. Then, combined with a BP neural network [52–56], the posterior probabilities that the samples to be classified belong to each category are computed, and a grade score is assigned according to their magnitude. Finally, the grade scores obtained by all the base classifiers are fused (Figure 8).

The specific process for fusing face local features is as follows:
Input: learning algorithms ULAP, SLGS, and V-SLGS, training sample set, samples to be classified
Output: the prediction category of the sample to be classified.
Define a grade vector whose length equals the number of categories:
First: the given training sample dataset is randomly divided into folds using the stratified cross-validation technique (each fold contains all categories, and the number of samples of each type is the same). Folds are extracted from it for training to generate a base classifier;
Second: for each training fold, different feature subspaces and a BP neural network are used to construct a heterogeneous base classifier. The above process is repeated to build the set of base classifiers.
Third: initialize the grade vector and loop over the classifiers. On each classifier, a score is obtained for the target belonging to each class. After all scores are obtained, they are fused for the face local features, and the final category of the classifiers after decision fusion is determined, as sketched below.
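The grade-score fusion in these three steps can be sketched as follows; mapping each base classifier's posteriors to integer ranks and summing them with per-classifier weights is one simple realization of the rule, and the equal default weighting is an assumption.

```python
import numpy as np

def decision_fusion(posteriors, clf_weights=None):
    """posteriors: (n_classifiers, n_classes) posterior estimates from the
    heterogeneous base classifiers. Each classifier turns its posteriors
    into grade scores by rank; the weighted grade sums pick the class."""
    n_clf, n_cls = posteriors.shape
    w = np.ones(n_clf) if clf_weights is None else np.asarray(clf_weights)
    grades = np.zeros(n_cls)
    for i in range(n_clf):
        ranks = np.argsort(np.argsort(posteriors[i]))  # 0 = lowest posterior
        grades += w[i] * ranks                         # higher rank, higher grade
    return int(np.argmax(grades))
```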
According to the above definition, the influence of the remaining posterior probability values on recognition is no longer ignored: all types of information provided by each base classifier participate in decision-making with a certain weight, so that the information that ultimately participates in the decision is more comprehensive, thereby yielding more accurate recognition results.
3. Results
3.1. Experimental Samples
In order to verify the effectiveness of the recognition method for face local features based on fuzzy algorithms and intelligent data analysis designed above, the FRGC and BU-3DFE 3D face libraries were used to generate a complete training sample set for experimental analysis. The first step is to verify the generated sample set. Two-dimensional virtual images are obtained through the five parameters governing the changes in pose and illumination angle. The experiment here focuses only on the horizontal offset of the face and does not consider pitch changes. The control rules are as follows:
First: the horizontal offset angle range of the 3D face pose is set to [−45°, 45°].
Second: the light irradiation angle range is set to [0°, 60°] and the light projection angle range is set to [30°, 90°].
Third: the range of the light intensity parameter is set to [1.0, 2.0], and the step increment value is 0.25.
According to the above generation rules, each 3D point cloud model can generate 2375 2D virtual face image samples. A total of 1600 pictures are selected to build the training set, and the designed platform is trained. The maximum number of iterations is set to 5000, the initial learning rate to 0.1, and the learning rate change factor to 0.2. After training is completed, 10 face images are randomly selected as test samples for face local recognition.
At the same time, artificial noise or occlusion is added randomly, which increases the difficulty of the experiment. In the occlusion experiment, 20% occlusion was randomly added to 3 of the 10 test samples. In the noise experiment, a 30-fold random function was used to generate noise. Some samples after adding noise or occlusion are shown in Figure 9:

The images above are some of the experimental images after noise or occlusion was added. Noise was added to five of the 10 images, while the remaining 5 were left unchanged, and the recognition performance of the two methods on them was checked.
3.2. Experimental Platform
A Windows machine with a Core i7 processor, 16 GB of memory, a 2.20 GHz clock rate, and a 1 TB hard disk was used as the hardware configuration of the experiment. The experiment uses an ARM Cortex-M3 core microprocessor to build the hardware platform. In addition, the hardware platform includes the Ethernet interface, the RS-456 interface, the sample value transmission interface, the GPRS wireless transmission module interface, and the field acquisition module. The specific experimental platform is shown in Figure 10:

This experiment mainly verifies the application effect of the recognition method for face local features based on fuzzy algorithms and intelligent data analysis. In order to ensure the rigor of the experiment, a face recognition method based on machine learning is used as the comparison method; it is compared with the designed face local feature recognition method, and the recognition rates of the two methods are compared. HDGSIU simulation software was used to simulate the experimental environment, the interference factors designed above were stored in the database, and the recognition of face local features under the influence of these interference factors was compared between the two methods. At the same time, the on-site acquisition unit collects experimental data, and the on-site test unit performs A/D conversion on the sensor signal after filtering and amplification. The A/D-converted data are then input into the database through high-speed fiber. Finally, the information in the database is sent over Ethernet, and the host computer processes and displays the received data. In order to ensure the accuracy of the experimental results, the sampled values are filtered according to the requirements of EHTFU-1/8 on sampled values, using a formula whose terms are the number of sampling data points, the sampled data values, the average of the sampling points, and the transformation function.
Through the above formula, the sampled values are processed, and the experimental results are analyzed using Edifshark, a commonly used IEC589 message capture and analysis tool.
3.3. Analysis of Experimental Results
After the above experimental samples and experimental platform are prepared, the experiment is performed. The comparison results between the traditional method and the designed recognition method for face local features based on fuzzy algorithms and intelligent data analysis are shown in Table 1.
From the comparison results, it can be seen that the recognition method for face local features based on fuzzy algorithms and intelligent data analysis is less affected by noise when identifying the local features of a human face. The experimental comparison table shows that the designed method guarantees a high recognition rate whether or not noise is added, whereas the traditional method is greatly affected by noise and has a low recognition rate. Moreover, even in the absence of noise, the recognition rate of the traditional method is lower than that of the designed method. This is because the method in this article uses wavelet denoising to remove noise from the face image, avoiding noise interference, and encodes the face information to recognize the face features, so the feature localization is more accurate and the analysis accuracy of the overall face features is higher, thus achieving a high recognition rate.
Therefore, the above experiments prove that the designed method can improve the recognition rate of face local features. This also proves the effectiveness of the designed method, which can meet the requirements of face local feature recognition.
4. Conclusions
The face recognition algorithm based on feature discriminant analysis in an intelligent environment involves many fields, such as pattern recognition, computer vision, modern signal processing, human-computer interaction, and cognitive science; it is a very challenging subject with rich research content. This research mainly addresses the low recognition rate of traditional recognition methods for face local features. The experiments prove that the designed method has a higher recognition rate than the traditional method and has certain practical application significance. It can provide reliable technical support for face recognition, visual presentation, and related tasks; it can be applied to face collection, detection, recognition, and other directions and can be continuously adjusted in application to gradually optimize the design and improve its adaptability. Building on this research, further work will address the following two aspects:

(1) When extracting multiple face features for fusion, existing universal face databases are limited in scale and lack systematic evaluation criteria. Which face features should be selected for fusion, and how to assess the classification capability, algorithmic complexity, and implementability of the selected features, require further research and experimental proof.

(2) There is still room for improvement in the adaptive weight selection method proposed in this article. The weights that yield the highest recognition rate for the various face features under different sample sizes were explored experimentally and still contain empirical components. A large number of experiments have shown that, in face recognition based on feature fusion, the contribution of different types of face features to the recognition rate exhibits a "one after another" phenomenon. How to form a reasonable adaptive weight selection method that lets different types of face features participate in the representation of face images in a "competitive" manner is a question to be studied in depth.
Data Availability
The data used to support the findings of this study can be obtained from the author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this article.
Acknowledgments
The research was supported by the Applied Research of Face Recognition Technology Based on Deep Learning in Monitoring System of Network Learning (2018 University-Level Research Project, no. SDP1804).