Abstract

Automatic detection of fabric defects is important in textile quality control, particularly for fabrics with multifarious patterns and colors. This study proposes a fabric defect detection system for fabrics with complex patterns and colors. The proposed system comprises five convolutional layers designed to extract features from the original images effectively, followed by three fully connected layers that classify the fabric defects into four categories. Using this system, the detection accuracy is improved while the depth of the model is reduced. The detection rates on the testing set for dirty marks, clip marks, broken yarns, and defect-free samples were 88.01%, 90.15%, 98.01%, and 97.73%, respectively. The experimental results show that the proposed method is effective, feasible, and has significant potential for fabric defect detection.

1. Introduction

Image classification is the foundation of defect classification using computer vision and plays a significant role in manufacturing quality control. Defects on the fabric are one of the most important factors affecting the quality and price of textiles. Most fabric defects are caused by textile machine failures and problems in production. To the best of our knowledge, the detection of fabric defects remains primarily manual, which suffers from low efficiency, high labor cost, and low accuracy. In addition, fabric defects are becoming increasingly difficult to detect because of the wide variety of fabrics and their increasingly complex patterns and colors. With the advancement of science and technology, computer vision, which offers higher detection accuracy at lower cost, has gained widespread interest in fabric defect classification applications.

In recent years, several techniques have been proposed to improve the accuracy of fabric defect detection [1–4]. Kumar [5] presented a method for segmenting local textile defects based on a feed-forward neural network. This effective method offers a low-cost, single-PC-based solution for fabric defect classification. Pourkaramdel et al. [6] proposed a novel approach based on one-dimensional local binary patterns; the experimental results showed a high detection rate for different surfaces such as stone, paper, and textile. Seçkin and Seçkin [7] proposed a new feature extraction method for defect detection, called intertwined frame vector feature extraction, whose features are fed to machine learning classification algorithms; the experimental results showed that the method is faster and provides higher accuracy. Li and Zhang [8] proposed a novel automatic inspection scheme using smart visual sensors for warp knitting machines. The scheme was effective, and its classification rate reached 98%. Çelik et al. [9] proposed a method adaptable to different fabric types based on a machine vision system. The system could detect defective areas in a denim sample with an average true classification rate of 91.7% and a false classification rate of 6.5%. These methods consist of two stages: first, features describing the image are extracted from the original images, and then these features are used as input to a classifier. Such approaches typically rely on manual feature selection, and the classification results suffer significantly if the selected features are incomplete.

Many studies have demonstrated that deep learning performs excellently in feature learning and classification tasks. Convolutional neural networks (CNNs) exhibit excellent performance in image recognition and classification [10, 11]. The accuracy of a CNN can usually be improved along several directions, such as network architecture, nonlinear activation functions, supervision components, regularization mechanisms, and optimization techniques [12]. These improvements have led to further research on image classification using deep convolutional neural networks (DCNNs). DCNNs have shown advantages and potential in various classification tasks, such as traffic density measurement [13], medical diagnosis [14], visual recognition [15], and facial recognition [16, 17]. Recently, Jing et al. [18] proposed a modified AlexNet for the classification of yarn–dyed fabric defects. The experimental results showed promising performance, with an acceptable average classification rate and strong robustness for yarn–dyed fabrics. Yao et al. [19] built a deep learning defect detection network incorporating an attention mechanism, in which the number of samples of each defect type was enriched by a data augmentation strategy; the experimental results show that this algorithm can effectively detect 39 categories of fabric defects. Zhang et al. [20] proposed a lightweight network for defect detection in resource-constrained scenarios, which improves small-defect detection accuracy by applying a channel attention mechanism to extract more refined defect features under complex background disturbance. Alruwais et al. [21] developed a novel hybrid mutation moth flame optimization with a deep learning-based smart fabric defect detection technique for sustainable manufacturing; the experiments demonstrate an accuracy of 95.47%. However, it is difficult for these existing methods to adapt well to the wide range of fabrics with complicated patterns and colors, and existing deep learning techniques are usually too complex to be trained and run on an industrial computer with limited computing resources.

This study aims to accurately detect microdefects in fabrics with multifarious patterns and colors. A model for feature extraction is proposed to classify the defects; the extracted features are used directly for fabric defect classification without any further processing. The model uses batch normalization (BN) and inception modules to reduce the depth of the network without loss of precision and to increase the training speed. The proposed model effectively detects microdefects in fabrics with various patterns and colors and can be used for automatic detection on fabric production lines.

2. Detection Algorithm for Fabric Defects

Inspired by the traditional AlexNet [22] and the application of other typical models [23, 24], the proposed model for fabric defect detection combines BN and inception. BN has recently proven its effectiveness and importance in deep learning [25]. Each time stochastic gradient descent (SGD) is executed, the corresponding activations are normalized over a minibatch so that the mean of the computed result is zero. Using $x$ as the training sample, the BN formula is as follows:

$$\hat{x}^{(k)} = \frac{x^{(k)} - E\left[x^{(k)}\right]}{\sqrt{\mathrm{Var}\left[x^{(k)}\right]}},$$

where $k$ represents the $k$th dimension of the data, and $E[x^{(k)}]$ and $\mathrm{Var}[x^{(k)}]$ are the expectation and variance of the training sample $x$, respectively.

The absolute value of $\hat{x}^{(k)}$ is very small, and the overall data are located in the nonsaturation zone of the sigmoid. To solve this problem, an inverse transformation is performed to allow recovery of the original representation, using the following formula:

$$y^{(k)} = \gamma^{(k)} \hat{x}^{(k)} + \beta^{(k)},$$

where $\gamma^{(k)}$ and $\beta^{(k)}$ are trainable parameters.
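To make the two BN steps concrete, the following is a minimal NumPy sketch of minibatch normalization followed by the learnable scale-and-shift; the function name and toy shapes are illustrative only.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Minimal batch normalization over a minibatch.

    x     : array of shape (batch, features) -- minibatch activations
    gamma : learnable scale, one value per feature dimension k
    beta  : learnable shift, one value per feature dimension k
    """
    mean = x.mean(axis=0)                     # E[x^(k)] per dimension k
    var = x.var(axis=0)                       # Var[x^(k)] per dimension k
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalization step
    return gamma * x_hat + beta               # scale-and-shift (recovery) step

# toy usage with a minibatch of 32 samples and 256 features
x = np.random.randn(32, 256)
y = batch_norm(x, gamma=np.ones(256), beta=np.zeros(256))
```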

The accuracy of a DCNN can be improved by adding nodes or layers. However, this leads to model overfitting and increases the computational cost. The inception module adds nodes to a DCNN without increasing the depth of the network [26].

Figure 1 shows the architecture of the inception module. To eliminate the influence of the filter size on the recognition results, multiple features are extracted using filters of different sizes. The inception module used in this paper combines 1 × 1 convolutions, 3 × 3 convolutions, 5 × 5 convolutions, and 3 × 3 max pooling. Figure 1(a) shows a naive version of the inception module. However, as noted in [26], the number of filter parameters in this version is the sum over all branches, leading to a computational blowup within a few stages.

1 × 1 convolutions can reduce the dimensionality of the output matrix and combine information from different channels. That is, 1 × 1 convolutions can be used to compute reductions before the expensive 3 × 3 and 5 × 5 convolutions and after the max-pooling layer. Figure 1(b) shows the final inception module.
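As an illustration of this reduction idea, the following is a sketch of an inception block with 1 × 1 reductions written in PyTorch; the framework choice and the channel-count arguments (c1, c3r, c3, c5r, c5, cp) are assumptions for illustration, not values taken from this paper.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Inception module with 1x1 reductions, as in Figure 1(b)."""
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)        # 1x1 convolution
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3r, kernel_size=1),                 # 1x1 reduction
            nn.Conv2d(c3r, c3, kernel_size=3, padding=1))          # 3x3 convolution
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5r, kernel_size=1),                  # 1x1 reduction
            nn.Conv2d(c5r, c5, kernel_size=5, padding=2))          # 5x5 convolution
        self.branchp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),      # 3x3 max pooling
            nn.Conv2d(in_ch, cp, kernel_size=1))                   # 1x1 projection

    def forward(self, x):
        # the outputs of all branches are concatenated along the channel axis
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branchp(x)], dim=1)
```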

Based on our experience, the size of the input images is set to 224 × 224 with three color channels. The input is followed by the inception layer, which stacks 1 × 1 convolution kernels, 3 × 3 convolution kernels, 5 × 5 convolution kernels, and max pooling with a filter size of 3 × 3 in parallel, as shown in Figure 2. Then, average pooling with a filter size of 3 × 3 is performed, which reduces the variance of the estimates caused by the limited neighborhood size. Next, there are two fully connected layers of 256 neurons each, followed by dropout to avoid overfitting. During training, the last fully connected layer has the same number of neurons as output classes, and the defect type is obtained from its output using the SoftMax activation function. After the DCNN is trained, an SVM classifier replaces the SoftMax classifier and the last fully connected layer for the final classification; in this case, the output feature map of the last remaining fully connected layer has size 1 × 1 × 256.
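The pipeline described above can be approximated by the sketch below, which reuses the InceptionBlock from the previous listing; the channel counts, activation functions, and pooling stride are assumptions, and the SVM that replaces the last layer after training is not shown.

```python
import torch.nn as nn

class FabricDefectNet(nn.Module):
    """Approximate sketch of the pipeline in Figure 2; exact channel counts are assumptions."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            InceptionBlock(3, 16, 16, 32, 8, 16, 16),   # inception layer on the 224x224x3 input
            nn.BatchNorm2d(80),                          # BN over the concatenated branches
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=3, stride=2))       # 3x3 average pooling
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(inplace=True),   # first fully connected layer, 256 neurons
            nn.Linear(256, 256), nn.ReLU(inplace=True),  # second fully connected layer, 256 neurons
            nn.Dropout(0.5),                             # dropout to avoid overfitting
            nn.Linear(256, num_classes))                 # final layer; replaced by an SVM after training

    def forward(self, x):
        return self.classifier(self.features(x))
```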

The loss between the predicted and actual labels is calculated using sigmoid cross-entropy. The next step is to train the model and optimize the loss. Gradient descent is a typical method for solving optimization problems; however, a full gradient update requires all the data in the database each time, so the update speed is slow and the model cannot be updated online. Therefore, Bottou [27, 28] proposed stochastic gradient descent (SGD), which can converge to the global minimum of a convex function and to local minima of a nonconvex function. With SGD, the update speed is fast, and the model can be updated online. Despite the extensive use of gradient-based optimization techniques for DCNNs and image classification, these methods still have substantial limitations. Therefore, improved optimization techniques have been proposed, including Momentum [29], Adagrad [30], Adadelta [31], RMSprop [32], and adaptive moment estimation (Adam) [33]. The advantage of Adam is that the learning rate is restricted to a limited scope in each iteration by the bias correction method, which yields comparatively stable parameters. Let the loss function be $f(\theta)$ with gradient $g_t = \nabla_\theta f_t(\theta_{t-1})$, where $\theta$ is the parameter vector. Based on the concept of momentum in physics, the gradient is replaced by accumulated moment estimates, resulting in the following formulas:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t,$$
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2,$$

where $m_t$ and $v_t$ denote the mean and variance of the gradient, respectively. The bias-corrected estimates $\hat{m}_t = m_t / (1 - \beta_1^t)$ and $\hat{v}_t = v_t / (1 - \beta_2^t)$ modify $m_t$ and $v_t$, respectively. With $\beta_1 = 0.9$ and $\beta_2 = 0.9999$, the parameter update formula is as follows:

$$\theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\,\hat{m}_t.$$

The gradient estimates do not require additional memory and are dynamically adjusted by the gradient. In addition, the learning rate is dynamically constrained by $\hat{v}_t$, and $\epsilon$ is used to ensure that the denominator is nonzero.
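A NumPy sketch of a single Adam update implementing the formulas above is given below; the default hyperparameter values shown are the commonly used ones and are not taken from this paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam parameter update (t counts iterations starting from 1)."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate m_t
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (variance) estimate v_t
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected mean
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected variance
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)   # parameter update
    return theta, m, v

# toy usage for a 3-dimensional parameter vector
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
theta, m, v = adam_step(theta, grad=np.array([0.1, -0.2, 0.3]), m=m, v=v, t=1)
```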

3. Experiment

3.1. Image Dataset and Evaluation Criteria

Samples with fabric defects were obtained from an umbrella factory. The samples fall into four classes: dirty marks, clip marks, broken yarns, and defect-free. A fabric dataset, comprising a large and a small dataset, was created. The large dataset contained 4,000 fabric images covering the four classes: dirty marks, clip marks, broken yarns, and defect-free. These images were randomly divided into training and validation sets in a 5 : 1 ratio. The small dataset, containing 2,000 fabric images of the same four classes, was used as the testing set. Typical defect samples from the fabric datasets are shown in Figure 3.

The improved defect inspection equipment is shown in Figure 4. The fabric images were captured using a line-scan digital camera, with the line light source placed behind the fabric. Image storage and inspection are performed by a computer.

To evaluate the performance of the fabric defect detection system, the accuracy (ACC), false positive rate (FPR), and recall rate are adopted; they are defined in Equations (8), (9), and (10), respectively. A high FPR repeatedly stops the machine because of false alarms, significantly affecting production efficiency, whereas a low recall rate lets defective products reach the market and thus affects product quality.

$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{8}$$
$$\mathrm{FPR} = \frac{FP}{FP + TN}, \tag{9}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}, \tag{10}$$

where TP is the number of defective samples classified correctly, TN is the number of nondefective samples classified correctly, FN is the number of defective samples falsely classified as nondefective, and FP is the number of nondefective samples falsely classified as defective.
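For reference, a small helper computing the three metrics from the confusion-matrix counts defined above (the function name is illustrative):

```python
def evaluate(tp, tn, fp, fn):
    """Compute ACC, FPR, and recall from confusion-matrix counts, per Equations (8)-(10)."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Equation (8)
    fpr = fp / (fp + tn)                    # Equation (9)
    recall = tp / (tp + fn)                 # Equation (10)
    return acc, fpr, recall
```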

3.2. Training and Testing

The training set was used to train the proposed DCNN model. The training parameters were chosen as follows: (1) An optimization algorithm was used to optimize the parameters of the model, and four gradient descent variants were used as optimizers to compare the performance of the proposed method. (2) The learning rate controls how quickly the network weights are updated and must be set within an appropriate range for the model to perform well; learning rates of 0.01, 0.001, and 0.0001 were used to train the model. (3) A batch size of 32 was set based on the sample size of the fabric defect dataset. (4) An epoch is one cycle comprising a forward and a backward pass over all training samples; the number of iterations follows from $\text{iterations} = \frac{\text{number of training samples}}{\text{batch size}} \times \text{epochs}$. We expected to achieve the required results by training the model over multiple iterations.
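A hypothetical PyTorch training setup reflecting these choices is sketched below; `train_dataset`, `num_epochs`, and `FabricDefectNet` (from the earlier architecture sketch) are placeholders, and the standard cross-entropy loss stands in for the sigmoid cross-entropy mentioned in Section 2.

```python
import torch
from torch.utils.data import DataLoader

model = FabricDefectNet(num_classes=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)        # learning rates 0.01/0.001/0.0001 were compared
criterion = torch.nn.CrossEntropyLoss()                           # stand-in loss for this sketch
loader = DataLoader(train_dataset, batch_size=32, shuffle=True)   # batch size of 32

for epoch in range(num_epochs):            # one epoch = one pass over all training samples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()                    # backward pass
        optimizer.step()                   # parameter update
```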

4. Results and Discussion

The performance of the proposed DCNN model was evaluated using the training, validation, and testing sets. The input samples of fabric defects were resized to 224 × 224, and the evaluation was conducted with learning rates of 0.01, 0.001, and 0.0001; the number of iterations was set to 2,000. Table 1 shows the recognition performance using different learning rates. The learning rate of 0.001 yielded the highest performance.

To demonstrate the effectiveness of the proposed DCNN model, variants with BN and inception removed were also evaluated. The learning rate of these models was 0.001. The performances are presented in Table 2. As shown in Figure 5, removing either BN or inception did not yield better accuracy.

Table 3 lists the detection and false detection rates of the proposed model with different optimizers. The detection accuracy of the proposed model using different optimizers is shown in Figure 6. When SGD and Momentum were used as optimizers, the proposed DCNN model exhibited low classification accuracy. When Adam and RMSProp were used as optimizers, the detection accuracy and recall rate of the proposed model were high and similar; however, RMSProp exhibited a higher FPR.

In addition, AlexNet is a traditional DCNN with good performance in classification tasks, whereas YOLOv8 is a state-of-the-art image recognition method. Therefore, the performance of the proposed model was compared with that of AlexNet and YOLOv8. The hyperparameters of the models are listed in Table 4, and the results are listed in Table 5. The proposed model exhibited lower accuracy and recall than YOLOv8; however, in practical applications, real-time detection is a key index that directly affects production efficiency. Because the network structure proposed in this paper is relatively simple, the running time of the proposed method can meet the real-time requirements of the application scenario. The experiments in this work were run on a GeForce GTX 1080 Ti, and the runtime is 135 ms, which meets the real-time demand.

In the above experiments, the proposed DCNN model exhibited excellent performance using a batch size of 32 and a learning rate of 0.001. To generalize our experiments and results, we repeated them 10 times using the validation and testing sets and averaged the results to obtain the final classification accuracy. Table 6 presents the results.

We calculated the number of false detections for different types of fabric defects in the testing set, as shown in Figure 7, and the false detection rates for different types of fabric defects are listed in Table 7.

In the experiments, good detection rates were achieved on the datasets, demonstrating that the proposed DCNN model performs well in terms of detection rate. However, the detection rate was still relatively low for dirty marks and clip marks, possibly because the characteristics of these two defect types are similar.

5. Conclusion

Deep learning is an effective image-classification method. This study proposed a fabric defect detection model based on a DCNN to ensure the quality of fabrics with multifarious patterns and colors. The application to fabrics with multifarious patterns and colors showed that the proposed DCNN model achieves good accuracy for fabric defect detection. The experimental results showed that the proposed model can achieve a detection accuracy of 92.7%.

Although the proposed DCNN model exhibited good performance, further improvements remain for future studies: (1) The number of samples is insufficient; a database of other defects, such as hanging threads and buttonhole selvage, is required. (2) The training speed and accuracy could be improved by optimizing the network. (3) The network could be further optimized using techniques such as initialization schemes and skip connections.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Young and Middle-Aged Teacher Education Research Program of Fujian Province (grant no. JAT201037) and Doctoral Research Fund Program of Chengyi College, Jimei University (grant no. CK21018).