Abstract
To address the difficulty of automating blade surface inspection and the problem of defects appearing discontinuous in the full image, an aeroengine blade surface defect detection system based on an improved faster RCNN is designed. Firstly, a dataset of blade surface defects is constructed. Because the original faster RCNN struggles to detect tiny defects, RoI align is adopted to replace RoI pooling in the improved network, and a feature pyramid network (FPN) combined with ResNet-50 is introduced for feature extraction. To address the discontinuity of defects in the full image, the nonmaximum suppression (NMS) algorithm is improved to ensure the continuity of detected defects. A four-degree-of-freedom (4-DOF) motion platform and an industrial camera are used to collect images of blade surfaces. The detection results of the improved faster RCNN are compared with those of the unimproved method. The experimental results show that the defect detection system based on the improved faster RCNN realizes automatic, high-accuracy defect detection on the blade surface and solves the problems of tiny-defect detection and discontinuous defects in the full result image of the blade.
1. Introduction
The blade is an important part of the aeroengine and plays a crucial role in its normal operation. Surface defects such as cracks, folds, pockmarks, scratches, polishing marks, local chromatic aberrations, and coating shedding may occur during the manufacturing of aeroengine blades. These defects pose a potential threat to the normal operation of the aeroengine. Studying blade inspection in the manufacturing process is therefore of great practical significance: improved detection technology helps guide manufacturing process planning and improve manufacturing accuracy [1–4], which in turn ensures the safe operation of the aeroengine and prolongs blade service life.
At present, the method applied in industrial practice is to search for and discriminate defects on the blade surface by manual visual inspection under white light. Inspection personnel judge whether a blade meets factory requirements according to the size and type of its surface defects. This traditional method is time-consuming, labor-intensive, and inefficient, and the detection results depend on the experience of the inspection personnel.
In addition to manual visual inspection, traditional defect detection methods mainly include magnetic particle detection, penetrant detection, eddy current detection, ultrasonic detection, X-ray detection, and so on [5]. Rizk et al. [6] used hyperspectral imaging technology to detect defects in wind turbine blades. Mevissen and Meo [7] developed an ultrasonically stimulated thermographic test system to detect cracks in turbine blades effectively. Infrared thermography testing can be used to detect impact damage, fractures, and cracks [8]. Ciampa et al. [9] summarized the application of infrared thermography testing to damage in aerospace components.
These methods generally have limitations for blade surface inspection in the manufacturing process. For example, magnetic particle detection requires blades to be removed from the production line, which is inefficient, and X-ray detection is harmful to the human body.
With the rapid development of machine vision, computing, and artificial intelligence technology in recent years, many scholars have carried out research on surface defect detection methods using machine vision and deep learning. Yang et al. [10] proposed a real-time tiny part defect detection system for manufacturing using an end-to-end CNN algorithm, in which defect detection is realized by the SSD algorithm. Li et al. [11] studied the low detection accuracy caused by the high background noise of aeroengine blade images and proposed the YOLOv3-Lite method for blade surface crack detection, which was 50% faster than YOLOv3 with the same detection accuracy. Li et al. [12] proposed an improved YOLOv4 algorithm for surface defect detection of aeroengine components, which improved the detection accuracy for such defects. Li et al. [13] proposed a coarse-to-fine detection framework for high-resolution aeroengine blade surface images: the image is first roughly detected, and fine detection is then carried out in possible defect areas, which improves detection efficiency and realizes the detection of tiny defects in high-resolution blade images. Shang et al. [14] proposed a deep learning-based blade damage detection method using a borescope camera and designed a shallow texture information network. Shen et al. [15] applied fully convolutional networks to the rapid borescope inspection of aeroengines; the detection results of the model are consistent with the damage areas marked by technicians.
Most of the detection methods mentioned above target engine blades or components that have been in service for a long time. The shape, size, categories, and generating mechanisms of surface defects after long operation differ from those arising in the manufacturing process. So far, there are few studies on blade surface defect detection in the manufacturing process, and automated detection has not been achieved. It is therefore of great significance to study an automatic and efficient detection method for improving blade surface quality in practical blade manufacturing.
This paper constructs an automatic detection system for surface defects of aeroengine blades. Through an improved deep learning network, automatic, efficient, and accurate detection of blade defects is realized. The rest of the paper is organized as follows: Section 2 explains the construction of the blade detection system; Section 3 describes the improved faster RCNN structure; Section 4 presents the experiments and discusses the results; Section 5 summarizes the work.
2. Design of Blade Surface Defect Detection System
According to the existing issues and the overall requirements, the workflow of aeroengine blade detection is designed, as shown in Figure 1. To enable automatic detection of the blade surface in the manufacturing process, a 4-DOF motion platform is designed. It includes X-, Y-, and Z-axis motion tracks, motion control motors, a workpiece rotation table, a motion controller, an industrial computer, a 20-megapixel industrial grayscale camera, a lens, an annular light source, and other major components. Real-time image data are transmitted to the computer through the Gigabit Ethernet (GigE) protocol. The computer communicates with the motion controller through a serial port, and the motion controller communicates with the X-, Y-, and Z-axis track control motors and the workpiece rotation table control motor through a controller area network (CAN) bus.

The 4-DOF detection platform is used for automatic image acquisition of the blade surface, as shown in Figure 2. Its detailed working process is as follows: firstly, the internal parameters of the camera are obtained by camera calibration, so that the pixel spacing in the collected image corresponds to a known physical length on the real blade surface. Secondly, the blade size is set in the upper control software on the computer, which sends instructions to the motion controller. According to these commands, the motion controller drives the industrial camera along the tracks while the camera collects images of the blade surface regions. Eventually, the four 20-megapixel region images of the blade surface are combined into an 80-megapixel high-resolution full image through image mosaic. The database module consists of two parts. The first part stores the original images, which, after image mosaic, preprocessing, and defect marking, are saved as annotated data for training the deep learning network. The second part stores and analyzes the detection results output by the defect detection module.
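For illustration, the mosaic step can be sketched as follows, assuming the four region images tile a 2 × 2 grid without overlap (the actual splicing may register and blend overlapping regions); the function and variable names are ours, not the system's code.

```python
# Minimal mosaic sketch: combine four grayscale region images (H x W arrays)
# into one full image, assuming an exact 2x2 tiling with no overlap.
import numpy as np

def mosaic_full_image(lower_left, lower_right, upper_right, upper_left):
    """Stack four region images into a single 2H x 2W full image."""
    top = np.hstack([upper_left, upper_right])     # upper row of the blade
    bottom = np.hstack([lower_left, lower_right])  # lower row of the blade
    return np.vstack([top, bottom])

# Four 3672 x 5496 regions yield a 7344 x 10992 (80-megapixel) full image.
```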

The images of the dataset are collected from the surfaces of defective aeroengine blades. Since surface defects such as cracks, folds, polishing marks, local chromatic aberrations, and coating shedding rarely occur in the manufacturing process, the frequently occurring scratches, pockmarks, and bruises are selected as the defect categories in this paper, as shown in Figure 3. A total of 5066 images are annotated in PascalVOC format and saved as XML files. The defect data produced by the detection network consist of the defect label, length, category, and location coordinates, which are stored in TXT files of a specified format.
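As a usage illustration, one PascalVOC-style XML annotation can be read as sketched below; the tag names follow the standard VOC layout, and the helper itself is hypothetical.

```python
# Minimal sketch of reading one PascalVOC XML annotation from the dataset.
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return a list of (category, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    defects = []
    for obj in root.iter("object"):
        name = obj.find("name").text  # e.g. scratch / pockmark / bruise
        box = obj.find("bndbox")
        coords = [int(box.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")]
        defects.append((name, *coords))
    return defects
```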
3. Improved Faster RCNN Detection Network
3.1. Deep Neural Detection Network
Defect detection networks based on deep learning (DL) can be structurally divided into two-stage networks, represented by faster RCNN [16], and one-stage networks, represented by the single-shot multibox detector (SSD) [17] and you only look once (YOLO) [18] networks [19]. The main difference is that a one-stage network directly predicts the class and location of defects in a single pass, whereas a two-stage network first generates region proposals and then carries out detection on the corresponding regions of the feature map.
Compared with a one-stage network, a two-stage network achieves more accurate detection at the cost of some speed. Since high accuracy is required for blade surface defect detection, this study adopts the representative faster RCNN as the deep neural detection network and improves it on this basis.
3.2. Structure of Faster RCNN
Faster RCNN consists of four parts: feature extraction network, region proposal network (RPN), RoI pooling, and classifier. The basic structure of faster RCNN is shown in Figure 4.

The feature extraction network is composed of a group of convolutional layers, ReLU layers, and pooling layers. It generates the feature maps for the subsequent RPN and RoI pooling. In faster RCNN, convolutional neural networks (CNNs) such as VGG16 [20] or ResNet [21] are generally used for feature extraction.
The RPN is employed to generate RoIs: predefined anchors are classified as foreground or background by a Softmax classifier, and at the same time bounding box regression offsets are calculated to refine the candidate boxes. The input of RoI pooling consists of the feature map extracted by the CNN and the proposals, i.e., the refined candidate boxes generated by the RPN.
Since the proposals generated by the RPN differ in size, RoI pooling maps each proposal to the scale of the feature map. Each proposal is then max-pooled to a fixed size so that it can be fed into the subsequent fully connected (FC) layer.
The classifier uses the FC layer and Softmax to predict the probability that each proposal belongs to a specific category. Meanwhile, bounding box regression obtains the position offset of each proposal to refine the target bounding box.
Surface defects of an aeroengine blade are small relative to the whole blade in the manufacturing process. There are two noteworthy issues in detecting blade surface defects with the original faster RCNN.
(1) Weak ability to detect tiny defects. Because the automatically acquired images have very high resolution, the actual defect areas occupy only a small proportion of the image, and the defects themselves are tiny. In addition, the resolution of the feature maps decreases continuously during convolution and pooling, so detailed feature information from the input image is lost, which leads to poor detection of tiny defects.
(2) High-resolution images fed directly into the network cause memory exhaustion and high computational complexity. As the full image of the blade surface is as high as 80 megapixels, extremely high memory and computation would be required to input it into the network directly. The usual remedy is to resize the high-resolution image to a size suitable for training and testing; another common method is to crop it into small pieces of suitable size. If the high-resolution image is resized to a low resolution, the significant semantic information of tiny defects in the original image is lost during resizing. If it is instead cropped into smaller images that are fed into the network, as shown in Figures 5(a) and 5(b), the cropped images are spliced back into a full image after detection, and a large defect spanning several cropped images becomes discontinuous in the full result image, as shown by the dashed red box in Figure 5(d).

To solve the two issues mentioned above, this paper proposes an improved faster RCNN, as shown in Figure 6. Firstly, the full image of the blade surface is cropped into images with a fixed resolution below 1000 × 1000; at the same time, the same full image is resized as a whole to a fixed resolution below 1000 × 1000. Although information about tiny defects is lost when the high-resolution image is resized, the network retains full ability to detect large defects in the resized image. Secondly, the cropped and resized images are successively fed into the ResNet-50 network combined with the feature pyramid network (FPN) structure for feature extraction and fusion at different scales, and the proposals are obtained by the RPN. Thirdly, the feature maps of different sizes are fed into the RoI align layer to generate fixed-size feature maps. Fourthly, the fixed-size feature maps are sent to the FC layer and Softmax for classification and bounding box regression, and the classification and bounding box predictions are processed by an improved NMS algorithm. Eventually, the cropped images are spliced back into a full image in order, the processed bounding boxes are drawn on the full image, and the detection data are generated. The improved faster RCNN preserves the ability to detect tiny defects while solving the discontinuity of larger defects in the full result image.
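A minimal sketch of this dual-branch preprocessing is given below, assuming a 1000 × 1000 tile size and OpenCV resizing; the tiling order and helper names are illustrative, not the authors' implementation.

```python
# Dual-branch preprocessing sketch: (a) crop the full image into tiles no
# larger than 1000 x 1000, (b) resize the whole image to fit within 1000 x 1000.
import cv2

def preprocess_full_image(full_image, tile=1000):
    h, w = full_image.shape[:2]
    # Branch (a): crop into tiles in row-major order; the origin of each tile
    # is recorded so detections can be mapped back when the tiles are spliced.
    # Edge tiles may be smaller than `tile`.
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(((y, x), full_image[y:y + tile, x:x + tile]))
    # Branch (b): resize the whole image so its longer side is at most `tile`.
    scale = tile / max(h, w)
    resized = cv2.resize(full_image, (int(w * scale), int(h * scale)))
    return tiles, resized, scale
```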

3.3. Feature Pyramid Network
To address the weak ability of the basic faster RCNN structure to detect tiny defects, FPN is introduced on top of the ResNet-50 feature extraction network. The bottom-up pathway of ResNet-50 generates feature maps at four scales (maps 2 to 5), from which a top-down pathway is constructed, as shown in Figure 7. The top-level map, generated from map 5, is fed directly into the downstream tasks. For each lower level, a 1 × 1 convolution reduces the channel dimension of the bottom-up map, the map from the level above is upsampled by a factor of 2, and the two are added element-wise. Each fused map is fed into the RPN to obtain proposals, and the same process is repeated level by level down the pyramid. FPN combined with the feature extraction network thus realizes the extraction and fusion of multiscale features and avoids the loss of tiny-defect features caused by successive convolution and pooling. The ability to detect tiny defects is improved, as are the robustness and generalization of the detection network.
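For reference, torchvision ships a faster RCNN with a ResNet-50 + FPN backbone that mirrors this architecture; the snippet below uses it as a stand-in (the authors' exact configuration, anchor settings, and training schedule may differ).

```python
# Sketch: a faster RCNN detector with a ResNet-50 + FPN backbone from
# torchvision, configured for 3 defect classes (scratch, pockmark, bruise)
# plus background.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=False, num_classes=4
)
model.eval()

# The FPN backbone returns multiscale feature maps that the RPN and RoI
# heads consume; torchvision's RoI heads already use RoI align internally.
with torch.no_grad():
    predictions = model([torch.rand(3, 800, 800)])  # dicts: boxes, labels, scores
```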

3.4. RoI Align
RoI pooling is performed on the region proposals over the feature maps generated by FPN and RPN. As shown in Figure 8(a), the feature region is first mapped onto the pooling layer by quantization (from the dark area to the blue area), and quantization is applied again when the mapped region is pooled into fixed-size bins. These two quantizations cause misalignment [22] between the defect feature information and the original features, which reduces the defect detection accuracy of the network.
To solve the information loss and accuracy reduction caused by the two quantizations, this paper adopts RoI align instead of RoI pooling. As shown in Figure 8(b), RoI align directly maps proposals onto the pooling layer, using bilinear interpolation instead of quantization, which ensures accurate extraction of feature information.
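The difference is easy to demonstrate with torchvision.ops, which provides both operators; the feature map size, stride (spatial_scale), and proposal below are illustrative.

```python
# Contrast RoI pooling with RoI align on the same proposal.
import torch
from torchvision.ops import roi_align, roi_pool

feature_map = torch.rand(1, 256, 50, 50)  # N x C x H x W (stride-16 features)
# One proposal in image coordinates: (batch_index, x1, y1, x2, y2).
boxes = torch.tensor([[0, 37.3, 64.8, 201.5, 180.1]])

# RoI pooling quantizes the box to integer bins (twice), losing sub-pixel detail.
pooled = roi_pool(feature_map, boxes, output_size=(7, 7), spatial_scale=1 / 16)

# RoI align samples with bilinear interpolation instead of quantizing.
aligned = roi_align(feature_map, boxes, output_size=(7, 7),
                    spatial_scale=1 / 16, sampling_ratio=2)
print(pooled.shape, aligned.shape)  # both: torch.Size([1, 256, 7, 7])
```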
3.5. Improved Nonmaximum Suppression Algorithm
The NMS algorithm [23] operates on the candidate boxes produced by the classifier and their classification probabilities. Firstly, all candidate boxes are sorted in descending order of classification probability. Secondly, the box with the maximum probability is selected, all other candidate boxes are traversed, and any box whose intersection over union (IoU) with the selected box exceeds a set threshold is deleted. The box with the maximum probability is then selected from the remaining unprocessed boxes. Eventually, by repeating this procedure, the candidate boxes of repeated detections are removed.
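A minimal NumPy sketch of this standard procedure is shown below; the IoU threshold is illustrative.

```python
# Standard NMS: greedily keep the highest-scoring box and drop overlaps.
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of the kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]      # sort by classification probability
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]  # keep the current best box
        keep.append(int(best))
        # drop all remaining boxes that overlap the best box too much
        order = rest[iou_one_to_many(boxes[best], boxes[rest]) <= iou_thresh]
    return keep
```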
To address the discontinuity of large defects covering several cropped images, the NMS algorithm is improved in this paper. The specific workflow is shown in Figure 9 and sketched in code after this list.
(1) Each group of predicted data is first filtered by its classification probability: if the probability is greater than the set threshold, the data proceed to the next step; otherwise the data are removed. All remaining candidate boxes and their predicted categories are then sorted in descending order of box area.
(2) For candidate boxes generated from the resized image: the unprocessed box with the maximum area is selected within its category; the IoU between this box and all other unprocessed boxes, whether from the resized image or from the cropped images, is calculated; overlapping boxes whose IoU exceeds the set threshold are removed; the next unprocessed resized-image box with the maximum area is then selected, and these steps repeat until all boxes of the resized image are processed. Eventually, the surviving bounding boxes of the resized image are mapped to the original full image according to the resizing ratio.
(3) For candidate boxes generated from the cropped images: the unprocessed box with the maximum area is selected; the IoU between this box and all other candidate boxes of the cropped images is calculated; overlapping cropped-image boxes whose IoU exceeds the set threshold are removed; the next unprocessed cropped-image box with the maximum area is then selected, and these steps repeat until all boxes of the cropped images are processed. Eventually, the surviving bounding boxes of the cropped images are mapped to the original full image according to the cropping order.
(4) The detection bounding boxes, categories, and probabilities are drawn on the full image.
Because the candidate boxes of the overall resized image of the blade are also processed, the improved NMS algorithm ensures the continuous detection of large defects and plays a significant role in the correctness and comprehensiveness of blade surface defect detection.
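Under the assumptions that every candidate box has already been mapped into full-image coordinates and tagged with its source image, steps (1)–(3) can be condensed as sketched below; the helper names and default thresholds are ours, not the authors' code.

```python
# Condensed sketch of the improved NMS (steps 1-3 above).
def pair_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def improved_nms(boxes, scores, labels, sources, prob_th=0.5, iou_th=0.5):
    """Return indices of kept boxes; sources[i] is 'resized' or 'cropped'."""
    # Step (1): discard low-probability boxes, sort the rest by area (largest first).
    order = [i for i in range(len(boxes)) if scores[i] >= prob_th]
    order.sort(key=lambda i: (boxes[i][2] - boxes[i][0]) *
                             (boxes[i][3] - boxes[i][1]), reverse=True)
    alive, kept = set(order), []
    # Step (2): resized-image boxes suppress overlaps from either source;
    # step (3): cropped-image boxes then suppress only other cropped boxes.
    for src in ("resized", "cropped"):
        for i in order:
            if i not in alive or sources[i] != src:
                continue
            kept.append(i)
            for j in list(alive):
                if j == i or j in kept or labels[j] != labels[i]:
                    continue
                if (src == "resized" or sources[j] == "cropped") and \
                        pair_iou(boxes[i], boxes[j]) > iou_th:
                    alive.discard(j)
    return kept
```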

4. Experiment and Result Analysis
4.1. Model Evaluation Index
The improved faster RCNN network adopts the cross-entropy loss function [24] as the classification loss function:

$$L_{cls}(p_i, p_i^*) = -\left[ p_i^* \log p_i + (1 - p_i^*) \log (1 - p_i) \right], \quad (1)$$

where $i$ is the index of an anchor, $p_i$ denotes the probability that anchor $i$ is predicted to be a certain type of blade surface defect, and $p_i^*$ denotes the ground-truth label of anchor $i$: $p_i^* = 1$ when the IoU between anchor $i$ and a true defect is higher than the IoU threshold, otherwise $p_i^* = 0$.
Smooth L1 loss is used as the position regression loss function:

$$L_{reg}(t_i, t_i^*) = \operatorname{smooth}_{L1}(t_i - t_i^*), \quad \operatorname{smooth}_{L1}(x) = \begin{cases} 0.5x^2, & |x| < 1, \\ |x| - 0.5, & \text{otherwise}. \end{cases} \quad (2)$$
The overall loss function of the network is shown in the following formula:

$$L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*), \quad (3)$$

where $t_i$ represents the predicted bounding box vector, $t_i^*$ is the bounding box vector of the ground truth, $N_{cls}$ refers to the size of the minibatch, $N_{reg}$ corresponds to the quantity of anchor positions, and $\lambda$ is a balancing weight.
Mean average precision (mAP) is used as the model evaluation metric. mAP averages the area under the precision–recall curve over all categories; its value lies in the interval [0, 1], and the larger the value, the better the model. The calculation formulas are

$$P = \frac{TP}{TP + FP}, \quad (4)$$

$$R = \frac{TP}{TP + FN}. \quad (5)$$
In formulas (4) and (5), true positives (TP) are positive samples correctly detected as positive by the model, false positives (FP) are samples detected as positive that are actually negative, and false negatives (FN) are samples detected as negative that are actually positive.
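As a worked illustration of formulas (4) and (5) with made-up counts:

```python
# Precision and recall per formulas (4) and (5); the counts are illustrative.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)  # formula (4)
    recall = tp / (tp + fn)     # formula (5)
    return precision, recall

# e.g. 90 correctly detected scratches, 10 false alarms, 5 missed defects:
print(precision_recall(90, 10, 5))  # (0.9, 0.947...)
```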
4.2. Experimental Environment and Parameters
The main environmental parameters in this paper are shown in Table 1. The hardware part of the experimental platform adopts the designed 4-DOF detection platform, as shown in Figure 10. The track running speed of each motion axis of the detection platform is 10 mm/s. The camera is a 20-megapixel industrial camera. The CPU of the computer is an i7-10700, and the GPU is an RTX 3090. The size of the aeroengine blade used in the experiment is 82.6 × 36.4 × 1.6 mm. In the software part, the PyTorch 1.8.0 framework is adopted, and the GPU is used for accelerated computation. The batch size is 4, the initial learning rate is 0.001, and the weight decay is 0.0005. Each experiment is trained for 30 epochs.

As shown in Table 2, the dataset containing 5066 images of blade surface defects is constructed through self-collection. A total of 8914 scratches, 981 bruises, and 2980 pockmarks are obtained through the statistics of the three defect types in the dataset. The ratio of training set, validation set, and test set is 8 : 1 : 1.
4.3. Comparative Experimental Results of Training
The above experimental parameters are used to train the original faster RCNN, the faster RCNN with RoI align, and the improved faster RCNN proposed in this paper; the resulting loss curves are shown in Figure 11. The loss of the original faster RCNN converges to about 0.52 at 20000 iterations, the loss of the faster RCNN with RoI align converges to about 0.48 at 20000 iterations, and the loss of the improved faster RCNN converges to about 0.20 after 18000 iterations. The comparison shows that the improved model performs better in training.
Figure 12 shows the precision–recall curves of the different models. Comparing Figures 12(a)–12(c), the improved faster RCNN achieves significantly higher precision and recall than the original faster RCNN and the faster RCNN with RoI align. Table 3 shows that the proposed model has stronger detection ability for defects of different sizes owing to the addition of RoI align and FPN; specifically, its mAP is 16.5 and 10 points higher than those of the other two methods, respectively.
4.4. Comparative Experimental Results of Deployment
In the deployment part, the camera parameters are obtained by camera calibration; the physical distance corresponding to adjacent pixels in the captured image is 11.873 μm. The trained model is deployed on the computer of the equipment, and a blade of the specific model is fixed on the workpiece rotation table by a fixture. Blade surface images are then collected automatically by the developed interactive upper software. Specifically, the motion platform drives the camera through a connecting mechanism to capture four 20-megapixel images with a resolution of 3672 × 5496 in the lower left, lower right, upper right, and upper left regions of the blade. Through image mosaic, the four captured images are spliced into an 80-megapixel full image with a resolution of 7344 × 10992, which is used as the input of the defect detection module.
Defects in this image are detected by the original faster RCNN, the faster RCNN with RoI align, and the improved method proposed in this paper. The blade surface detection results of the three methods are shown in Figure 13. In area 1, neither the original faster RCNN nor the faster RCNN with RoI align can completely detect the dark strip scratch on the left, and the continuous scratch on the right is detected as two independent defects. The proposed method completely detects the dark strip scratch on the left and preserves the continuity of the scratch on the right. In area 2, the original method and the method with RoI align cannot completely detect the two tiny pockmarks, whereas the proposed method detects both, showing its better ability to detect tiny defects. In area 3, the scratches reported by the original faster RCNN and the faster RCNN with RoI align are false detections; there are actually no defects in this area, and the proposed method correctly reports none. The proposed method thus shows better robustness than the original faster RCNN and the faster RCNN with RoI align while ensuring the detection of tiny defects and the continuity of defects in images, and it correctly predicts the categories and locations of defects on the blade surface even under high noise.

Table 4 compares the elapsed time and the video random access memory (VRAM) usage of the original faster RCNN, the faster RCNN with RoI align, and the improved method proposed in this paper. Although the improved faster RCNN occupies more VRAM than the other two methods and its elapsed time increases by about 3 s, it ensures the integrity and accuracy of blade surface defect detection.
The pixel length of the diagonal of the detection bounding box generated by the improved faster RCNN is converted into the physical length of the detected defect. The results are compared with the actually measured defect lengths, as shown in Table 5. Because scratch defects are longer than pockmark defects, their absolute length error is also larger; the overall detection error is less than 15%. For tiny defects with a length of less than 3 mm, the defect may not be tightly enclosed by the labeling box during annotation, i.e., there is a certain distance between the labeled box and the real defect, which leads to a larger quantization error for these defects than for defects longer than 3 mm.
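As a worked sketch of this quantification (the function and the example box are ours), the diagonal pixel length is scaled by the calibrated pixel pitch of 11.873 μm:

```python
# Convert a detection box's diagonal from pixels to physical length.
import math

PIXEL_PITCH_MM = 11.873e-3  # 11.873 um per pixel, expressed in millimetres

def defect_length_mm(x1, y1, x2, y2):
    """Physical length of the bounding-box diagonal in millimetres."""
    return math.hypot(x2 - x1, y2 - y1) * PIXEL_PITCH_MM

# A scratch spanning a 700 x 240 pixel box measures about 8.79 mm:
print(round(defect_length_mm(100, 100, 800, 340), 2))
```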
Compared with the original faster RCNN, the proposed method takes 3.67 s longer. On the other hand, although the error between the detected length and the real length is less than 15%, it is still considerable. Future work will focus on reducing the time consumption by improving the structure of the neural network and on reducing the error between the detected and physical lengths, thereby improving both the speed and the accuracy of the method.
The detection results can be displayed instantly in the computer detection software developed with PyQt, as shown in Figure 14. The resulting image of blade surface detection can be zoomed with the right mouse button. Meanwhile, the software displays the coordinates, types, lengths, probabilities, and other information of the defects generated by our method, which is convenient for inspection personnel to check.

5. Conclusion
In this paper, an aeroengine blade surface defect detection system based on the improved faster RCNN is designed. Algorithm improvements and verification experiments prove that the system realizes rapid and accurate detection of blade surface defects, and it has a promising prospect for application at blade manufacturing sites. The results are summarized as follows:
(1) Typical defects in the manufacturing of aeroengine blades are analyzed, and two problems in blade surface defect detection are summarized: the difficulty of automated detection and the discontinuity of defects in the complete image.
(2) A hardware platform and a software program for automatic defect detection are constructed. The hardware platform realizes automatic image acquisition of the blade surface, and the software program enables inspection personnel to check the test results quickly.
(3) On the basis of the faster RCNN network structure, RoI align replaces RoI pooling and FPN is introduced, realizing accurate detection of tiny defects on the blade surface and improving the mAP of the network.
(4) The NMS algorithm is improved to keep tiny defects while ensuring the continuity of larger defects in the complete image of the blade surface, so the integrity of detected defects is ensured. False detections are observably reduced, and the accuracy of blade surface defect detection is significantly improved.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors would like to acknowledge the support and contributions of our colleagues at Xi’an Jet-Engine (Group) Ltd. This research was supported in part by the Project on the Integration of Industry, Education and Research of Jet Engine Corporation of China (grant no. HFZL2020CXY020).