Abstract

To fully explore the multi-scale features of 3D clothing design and to extract both the local and the overall characteristics of the garment, a study and application of 3D clothing design based on deep learning is proposed. A deep learning neural network model is applied to recognize 3D human shape features: characteristic lines are obtained by threshold judgment, feature points are derived from the identified feature lines, body circumference curves are fitted, and the measurement of human body dimensions is realized. Simulation results show that, for the convolutional neural network trained in the feature recognition study, the final feature recognition accuracy reaches 92.56%. The network trained on the front projection recognizes the neck, armpit, and waist well, while the network trained on the side projection recognizes the back neck, shoulder, chest, waist, and buttocks well. Overall, the characteristic lines of the key parts of the human body are effectively identified. Using convolutional neural networks in deep learning, the author carries out feature recognition on 3D-scanned human bodies, and a large number of experiments verify the effectiveness and feasibility of the algorithm.

1. Introduction

As times change, clothing, as a carrier of character and state, takes on different styles. The current standard of living largely satisfies people's physiological needs for clothing, so people have begun to pursue the psychological needs it brings, in order to improve their quality of life and to highlight their individuality and distinctive aesthetic sense [1]. Futuristic-style clothing is one of many clothing styles; its defining characteristic is that it breaks conventional design thinking and readily incorporates new technology into clothing design. The concept of deep learning was inspired by the hierarchical signal processing of the human retinal system in medicine, which also laid a theoretical basis for its development. Deep learning has developed rapidly in recent years and has become a research hot spot for major technology companies [2]. Once the network is initialized, deep learning can train autonomously and adjust the network parameters by itself; compared with manual feature extraction, this approach not only saves time and manpower but also achieves higher recognition accuracy, and online computing efficiency is greatly improved, making the whole process simpler [3].

3D printing has become one of the most popular technological methods of clothing production; it can print precise, complex, and exaggerated shapes suited to futuristic-style clothing, which is the starting point of this research. With the rise of digital technology, many designers in recent years have brought 3D printing, an emerging rapid-manufacturing technology, into the field of clothing. As a product of the digital age, 3D printing satisfies consumers' unique and personalized requirements for items, and in the current era of fast retail it greatly reduces the time spent choosing and making clothes: people can directly design their own clothing styles in digital 3D modeling software, analyze the designed model, export it in STL (STereoLithography) format, select the required materials, and connect a printer to produce the finished garment. The overall production process is convenient and fast, gives free rein to the imagination, and is a powerful supplement to abstract thinking [4, 5]. The emergence of 3D printing technology has not only brought a new revolution to manufacturing but also injected new blood into the field of art, delivering a distinctive visual experience and improving design aesthetics. At present, innovative design based purely on the internal factors of clothing can hardly meet people's growing needs, so designers continue to innovate, drawing inspiration and creative techniques from other industries and applying them to clothing design through cross-domain, multi-angle, and in-depth exploration, in order to follow the development trend of the times [6]. Figure 1 is a schematic diagram of 3D printing production. With the development of big data and GPUs, the pace of deep learning research has also accelerated; deep learning has become the key to artificial intelligence and is bound to dominate image recognition. Although it has been successfully applied to speech processing, face recognition, image classification, and behavior recognition, deep learning is essentially still in a development stage: it has not yet established a systematic theory for the initialization, parameter setting, model selection, and other problems of training models.

Based on this background, the author proposes a study and application of 3D clothing design based on deep learning. Aiming at the technical points and difficulties of a 3D digital clothing design system, and drawing on the latest research progress in artificial intelligence and deep learning as well as preliminary results in the field of 3D digital clothing, a deep learning neural network model that can "visually perceive" the 3D-scanned human body is constructed.

2. Literature Review

Looking at the development of the clothing customization industry, the application of 3D printing technology in garment manufacturing is still at an early stage. According to expert predictions, 3D printing will trigger a new round of technological change in the creative design of clothing, the choice of fabric types, garment processing, and other aspects. Burak et al. believe that combining 3D printing technology with clothing fully embodies the personalized character of clothing, and that its unique production mode and process guarantee the uniqueness of the design [7]. Ji et al. proposed the Deep Belief Network, which made deep learning sought after by academia once again and set off successive waves of research on deep learning [8]. Agrusa et al. described 3D printing as a new three-dimensional fabrication technology that takes 3D graphics files produced by 3D software, converts them into STL files, imports them into a 3D printer, and, through layer-by-layer printing, bonding, or sanding and splicing, finally colors and forms the physical product; it has been called the precursor of the "third industrial revolution" [9]. Lun et al. designed a convolutional neural network architecture with up to 152 layers; by deepening the network the architecture reduces complexity and is easier to train, lowering the error rate to 3.6%, whereas the human error rate is 5%-10%, which shows that deep networks already have discriminating ability comparable to humans [10]. Baroud et al. designed a 22-layer convolutional neural network in the competition, reducing the top-5 error rate to 6.7% [11]. Han et al. proposed that 3D printing, as a product of the digital age, satisfies consumers' unique and personalized requirements and, in the current era of fast retail, greatly reduces the time spent choosing and making clothes, since people can directly design their own clothing styles in digital 3D modeling software [12]. Lan et al. used 3D printing technology to print futuristic clothing and to create dynamic textiles, cleverly combining technology with our original emotions so that clothing can adapt to future lifestyles [13]. Gardini et al. described 3D printing as an emerging technology developing rapidly in manufacturing and a kind of rapid prototyping technology whose academic name is "rapid prototyping" [14]. Amarouayache et al. believe that the emergence of 3D printing brings new impetus to the creative part of clothing design; as the core manufacturing technology of creative design, it has produced a batch of striking clothing works [15].

3. Research Methods

3.1. Research on Clothing Human Body Feature Recognition Algorithm
3.1.1. Data Deredundancy

The original training data are mannequin data in STL format. Data in this format consist of a closed surface formed by multiple end-to-end triangular facets, each defined by the 3D coordinates of its three vertices and the normal vector of the facet. STL files come in two storage formats, ASCII plain text and binary; the binary format is compact to store, while the ASCII format is more convenient to process.

Since the STL model must be closed, that is, all triangles must form a geometry that is sealed inside and outside, adjacent triangles share vertices, which causes data redundancy; this redundant information needs to be removed before data processing to reduce its scale. The redundancy to be removed includes duplicate points as well as the normal vectors of the triangular facets. After removing the redundant information, the raw STL data become 3D point cloud data.
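A minimal sketch of this de-redundancy step is given below, assuming the STL triangles have already been parsed into a numpy array of vertex triples; shared vertices are collapsed with np.unique and the facet normals are simply discarded, since they can be recomputed and only add redundancy.

```python
import numpy as np

def stl_to_point_cloud(triangle_vertices: np.ndarray) -> np.ndarray:
    """Collapse an STL triangle soup into a de-duplicated 3D point cloud.

    triangle_vertices: array of shape (n_triangles, 3, 3) holding the three
    xyz vertices of every triangle (facet normals are dropped entirely).
    """
    # Flatten to one row per vertex occurrence: (3 * n_triangles, 3).
    points = triangle_vertices.reshape(-1, 3)
    # Adjacent triangles share vertices, so identical rows appear many times;
    # keeping each coordinate triple once removes that redundancy.
    return np.unique(points, axis=0)

# Example: two triangles sharing an edge -> 6 stored vertices, 4 unique points.
tris = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                 [[1, 0, 0], [0, 1, 0], [1, 1, 0]]], dtype=float)
print(stl_to_point_cloud(tris).shape)   # (4, 3)
```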

3.1.2. Correction of Human Body Posture

The network takes 2D gray-scale images as its training data, so the coordinate information in 3D point cloud format needs to be projected from 3D to 2D. Because the human standing posture contains some error when the body data are collected, the posture needs to be corrected before projection.
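The text does not spell out the correction procedure; one common option, sketched below under that assumption, is to align the scan's principal axes with the coordinate axes by PCA before projecting, so that the height direction of the standing body coincides with the z-axis.

```python
import numpy as np

def correct_posture(points: np.ndarray) -> np.ndarray:
    """Roughly align a standing body scan with the coordinate axes via PCA.

    points: (n, 3) point cloud. The longest principal axis of a standing
    person is assumed to be the height direction and is rotated onto z.
    """
    centered = points - points.mean(axis=0)
    # Right singular vectors give the body's principal axes, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotated = centered @ vt.T          # express points in the principal-axis frame
    # Axis 0 carries the largest variance (the height); move it to the z slot.
    return rotated[:, [1, 2, 0]]
```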

3.1.3. Acquisition of Partial Images of the Human Body

According to the proportional relationships of the parts of the human body, the 3D point cloud human body model is segmented to obtain point cloud data for each body part. The body ratio here refers to the length ratios of the human body. Table 1 shows the characteristic values adopted for recognizing human feature points from 3D point cloud data.

The author uses the apex of the head as the starting point, so the ratios are measured in the opposite direction from those in the table above. On the basis of these proportional relationships, a certain floating range is added, the gray-scale image of the human body is segmented, and partial images of the human body are obtained. The floating range selected here is 0.04; the segmentation proportions and the corresponding feature values of each part are shown in Table 2.
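The following sketch illustrates this proportional segmentation on the point cloud. The proportion values in PART_RATIOS are hypothetical placeholders standing in for the values of Table 2; only the 0.04 floating range is taken from the text.

```python
import numpy as np

# Hypothetical proportion bands (fractions of total height measured downward
# from the head apex); the actual Table 2 values would be substituted here.
PART_RATIOS = {"neck": (0.10, 0.16), "chest": (0.18, 0.30), "waist": (0.30, 0.42)}
FLOAT_MARGIN = 0.04   # the floating range added around each proportion band

def segment_part(points: np.ndarray, part: str) -> np.ndarray:
    """Cut out one body part from an (n, 3) point cloud using height proportions."""
    z = points[:, 2]
    top, height = z.max(), z.max() - z.min()
    lo, hi = PART_RATIOS[part]
    # Ratios are measured downward from the head apex, widened by the margin.
    upper = top - (lo - FLOAT_MARGIN) * height
    lower = top - (hi + FLOAT_MARGIN) * height
    return points[(z >= lower) & (z <= upper)]
```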

3.1.4. 3D to 2D Mapping

Project the point cloud data of the 3D human body onto the 2D xoz plane and yoz plane, respectively, to obtain gray-scale images of the front and side projections of the human body. Since the 3D point cloud data consist of floating-point coordinates, all data points are rounded and converted into specific pixels; the gray value of a pixel is the number of points projected onto it divided by the total number of projected points [16]. The gray images of the front and side projections of each part are then normalized in size. These local feature images of the human body are the training data of the neural network. The steps to obtain the training data are therefore as follows:

Step 1: Remove redundancy from the STL data to obtain 3D point cloud data.
Step 2: Correct the posture of the human body in 3D point cloud format.
Step 3: Segment the human body in 3D point cloud format to obtain partial point cloud data for each body part.
Step 4: Project the point cloud data of each part of the 3D human body onto the 2D planes to obtain gray-scale images of local human features in front and side projection.
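A minimal sketch of the projection step (Step 4) is given below, assuming the part's point cloud is an (n, 3) numpy array; the 64x64 output resolution and the exact scaling are assumptions made for illustration.

```python
import numpy as np

def project_to_gray(points: np.ndarray, plane: str = "xoz",
                    size: int = 64) -> np.ndarray:
    """Project a 3D point cloud onto the xoz or yoz plane as a gray image.

    Each point is rounded to a pixel; the gray value of a pixel is the number
    of points landing on it divided by the total number of projected points.
    """
    cols = (0, 2) if plane == "xoz" else (1, 2)      # front or side projection
    uv = points[:, cols]
    # Scale to the target resolution and round down to integer pixel indices.
    uv = uv - uv.min(axis=0)
    uv = np.floor(uv / uv.max() * (size - 1)).astype(int)
    image = np.zeros((size, size))
    np.add.at(image, (uv[:, 1], uv[:, 0]), 1.0)      # accumulate point counts
    return image / len(points)                       # normalize by point count
```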

3.1.5. Human Body Characteristic Point of Clothing

The human skeleton is the most direct determinant of the shape of clothing, and the reference points and baselines of the human body are closely related to clothing. The human body has 206 bones, divided into three parts: the skull, the trunk bones, and the limb bones. The main joints of the human body are the shoulder, elbow, wrist, hip, knee, ankle, and mandibular joints. The main characteristic points in clothing design are the neck, shoulder, chest, armpit, side waist, high hip, etc. They are important reference points for clothing design and determine the size and shape of the garment. In addition, the position of the waistline determines the proportion between the upper and lower parts of the garment, which is particularly important in the garment-making process.

3.2. Convolutional Neural Network

The convolutional neural network is an important deep learning algorithm and a kind of artificial neural network. Convolutional neural networks simulate the behavioral characteristics of the visual nervous system of the human brain and then carry out distributed, parallel information processing. This statistics-based deep learning method can autonomously discover features from a given training set; compared with hand-designed features it is superior in many respects, is more robust, and adapts better to relatively complex environments. Deep learning is aimed at data such as images whose features are not obvious and whose information content is large; using its multi-layer network structure, it discovers the features hidden in images from large-scale training data sets. Applying convolutional neural networks to image feature recognition is therefore expected to give better results [17, 18]. Convolutional neural networks have shown excellent results in many applications, such as handwriting recognition, face recognition, and image classification.

There are three important ideas behind convolutional neural networks, namely sparse connectivity, weight sharing, and sampling in space or time. The local receptive property of sparse connections means that neurons in the current layer connect only to a local region of the previous layer, so that local features in the data, such as an arc or a corner in a picture, can be explored, which is very important for feature recognition. Weight sharing makes each convolutional filter in the same layer share the same parameters, including the weight matrix (the convolution kernel) and the bias term; this strategy reduces the number of parameters that need to be trained and makes the trained model more general. The purpose of sampling (pooling) is to extract features while discarding their exact location within the image; what matters for a feature is not its absolute position but its relative position. This strategy of blurring exact locations copes well with the influence of deformation and distortion on the image to be recognized and shows strong robustness, which is precisely the advantage of convolutional neural networks over other feature recognition methods.
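As an illustration of these three ideas (not the authors' actual network, whose architecture is not given in the text), a small convolutional network of the kind that could be trained on 64x64 projection images might look like the following PyTorch sketch: Conv2d realizes sparse connectivity and weight sharing, and MaxPool2d realizes spatial sampling.

```python
import torch
import torch.nn as nn

class LocalFeatureNet(nn.Module):
    """Illustrative CNN; the layer sizes here are assumptions.

    Conv2d gives sparse connectivity (each output neuron sees only a 5x5 patch)
    and weight sharing (one 5x5 kernel is reused across the whole image);
    MaxPool2d samples spatially, preserving relative rather than absolute
    feature positions.
    """
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 13 * 13, n_classes)

    def forward(self, x):                 # x: (batch, 1, 64, 64) gray images
        x = self.features(x)
        return self.classifier(x.flatten(1))
```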

3.2.1. Convolutional Neural Network Structure

The transmission process of a single neuron is shown in Figure 2.

In the figure, $x_1, x_2, \ldots, x_n$ are the input signals and the constant input $-1$ carries the bias (threshold) $b$; after the action of the neuron, the output signal is computed as shown in (1):

$$y = f\left(\sum_{i=1}^{n} w_i x_i - b\right), \quad (1)$$

where $f(\cdot)$ is the activation function. When multiple such units are combined into a hierarchical structure, a neural network model is formed.
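As a concrete illustration of equation (1), the following sketch computes the output of a single neuron; the sigmoid activation and the numeric weights and threshold are assumptions used only for illustration.

```python
import numpy as np

def neuron_output(x, w, b):
    """Single-neuron forward pass as in equation (1): y = f(sum(w_i * x_i) - b)."""
    z = np.dot(w, x) - b                 # weighted input minus the threshold b
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid chosen here as the activation f

print(neuron_output(np.array([0.5, -1.2, 3.0]),
                    np.array([0.4, 0.1, -0.2]), 0.3))
```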

3.2.2. Back Propagation Adjustment Weight

Back propagation is the most complicated part of a convolutional neural network. At a macro level, the basic idea of how a convolutional neural network and a BP neural network handle the residuals is the same [19]. However, unlike a BP network, a convolutional neural network does not have a single, fully connected structure between layers, so the processing differs from layer to layer. In addition, because the convolutional neural network adds weight sharing, the processing and calculation of the residuals becomes more involved [20, 21].

When the weights are adjusted, the squared-error cost function is used to compute the residual. For a multi-class problem over classes $t$ with $N$ input samples, the cost function is expressed as follows:

$$E^N = \frac{1}{2}\sum_{n=1}^{N}\sum_{t}\left(t_t^n - y_t^n\right)^2.$$

where $t_t^n$ denotes the $t$-th dimension of the label corresponding to the $n$-th sample and $y_t^n$ denotes the $t$-th output of the network for the $n$-th training sample. Back propagation adjusts the weights essentially by training one sample at a time: the rate of change of the error with respect to each layer's weights and biases is multiplied by a certain learning rate, and the weights and biases are adjusted step by step. The back-propagated error can be understood as the rate of change of the error with respect to the bias $b$. For a single-input $L$-layer network, the output of layer $l$ is given by equation (2),

$$x^l = f(u^l), \qquad u^l = W^l x^{l-1} + b^l, \quad (2)$$

and, letting $\delta = \partial E/\partial u$, the rate of change of the error $E$ with respect to the bias $b$ is as follows:

$$\frac{\partial E}{\partial b} = \frac{\partial E}{\partial u}\,\frac{\partial u}{\partial b} = \delta, \quad (3)$$

since $\partial u/\partial b = 1$.

The rate of change (the residual) for layer $l$ is then given by (4):

$$\delta^l = \left(W^{l+1}\right)^{T}\delta^{l+1}\circ f'\!\left(u^{l}\right), \quad (4)$$

where $\circ$ denotes element-wise multiplication, and the weights and biases of each neuron are updated according to the value of $\delta$. For layer $l$, the derivative of the error with respect to each weight of the layer is the outer product of the input of the layer and the residual of the layer [22], $\partial E/\partial W^{l} = x^{l-1}\left(\delta^{l}\right)^{T}$. The weights of the layer are then updated from this partial derivative, multiplied by a negative learning rate, as shown in (5):

$$\Delta W^{l} = -\eta\,\frac{\partial E}{\partial W^{l}}. \quad (5)$$

Here $\eta$ is the learning rate, whose value directly determines the magnitude and precision of each weight adjustment, and $j$ denotes the $j$-th output.
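To make equations (3)-(5) concrete, the following is a minimal numpy sketch of one back-propagation update for a single fully connected layer with a sigmoid activation; the sigmoid choice, the layer shapes, and the learning rate value are assumptions, and the convolutional case additionally sums the kernel gradient over all spatial positions because of weight sharing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_layer_update(x_prev, u, w, b, w_next, delta_next, eta=0.1):
    """One update of a fully connected layer, following equations (3)-(5).

    x_prev     : input to this layer (output of layer l-1)
    u          : pre-activation of this layer, u = w @ x_prev + b
    w, b       : current weights and biases of this layer
    w_next     : weights of layer l+1
    delta_next : residual (delta) of layer l+1
    """
    f_prime = sigmoid(u) * (1.0 - sigmoid(u))       # f'(u) for a sigmoid
    delta = (w_next.T @ delta_next) * f_prime       # eq. (4): residual of this layer
    grad_w = np.outer(delta, x_prev)                # dE/dW = delta * x_prev^T
    w_new = w - eta * grad_w                        # eq. (5): step against the gradient
    b_new = b - eta * delta                         # eq. (3): dE/db = delta
    return w_new, b_new, delta
```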

3.3. Research on 3D Clothing Prototype Modeling
3.3.1. Expansion Algorithm of Curved Surface

Most surface-unfolding algorithms work by constructing a developable surface and then unfolding it. A developable surface is a special ruled surface whose Gaussian curvature is identically zero; a ruled surface is generated by a family of continuously varying straight lines. The family of straight lines that generates the ruled surface is called its generatrices; when there is only one tangent plane along each generatrix of the ruled surface, the ruled surface is developable [23].

Gaussian curvature reflects the overall degree of bending of a surface, and its sign reflects the structural characteristics of the surface formed by a point and its neighboring points: when the Gaussian curvature K is greater than zero, the point is an elliptic point; when K is less than zero, it is a hyperbolic point; when K equals zero, it is a planar or parabolic point. In differential geometry, a smooth parametric surface is developable if and only if the Gaussian curvature at every point of the surface is zero; otherwise, the surface is non-developable.

For any point $P$ on a smooth continuous surface, let the two principal curvatures of the surface at $P$ be $k_1$ and $k_2$ and let $n$ be the normal vector of the surface at $P$; the Gaussian curvature is then computed as follows:

$$K = k_1 k_2. \quad (6)$$

A necessary and sufficient condition for a surface to be developable is that the surface is the envelope of a single-parameter family of planes. Generalized to the discrete setting, the Gaussian curvature at any vertex $p$ of a triangular mesh surface is given by (7):

$$K(p) = \frac{2\pi - \sum_{i}\theta_i}{\tfrac{1}{3}\sum_{i}A_i}, \quad (7)$$

where $\theta_i$ and $A_i$ are, respectively, the angle at $p$ and the area of each triangle in the one-ring neighborhood of the vertex.
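A minimal sketch of the discrete Gaussian curvature of equation (7) is given below, assuming the one-ring neighborhood of the vertex is supplied as a list of the two opposite vertices of each incident triangle.

```python
import numpy as np

def discrete_gaussian_curvature(p, neighbors):
    """Angle-deficit Gaussian curvature at vertex p of a triangle mesh, eq. (7).

    p         : (3,) vertex position
    neighbors : list of (a, b) vertex pairs, one pair per triangle incident to p
    K(p) = (2*pi - sum of angles at p) / (one third of the total incident area)
    """
    angle_sum, area_sum = 0.0, 0.0
    for a, b in neighbors:
        u, v = a - p, b - p
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angle_sum += np.arccos(np.clip(cos_t, -1.0, 1.0))    # angle at p
        area_sum += 0.5 * np.linalg.norm(np.cross(u, v))     # triangle area
    return (2 * np.pi - angle_sum) / (area_sum / 3.0)
```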

4. Results and Discussion

To verify the accuracy and effectiveness of the algorithm, the simulation experiment is based on 3D-scanned human body data provided by the China National Institute of Standardization, from which representative 3D-scanned human body models were selected. To verify the universality of the algorithm, the selected models cover different ages, different genders, and different regions. The algorithm proposed by the author is then used to perform feature recognition on the selected 3D-scanned human models, and the specific location of each feature is marked. Finally, the intersection of the 3D-scanned human body model with the tangent plane at the relative position of each feature is obtained, and the resulting curve is fitted with polynomial-based least squares. The marked data in the table are provided by the China National Institute of Standardization [24, 25]. The experiments in this chapter are implemented in MATLAB. Because the experiment trains on the front and side projections of the human body separately, the recognition effect and recognition probability of each human feature differ between the two trained models. When the recognition probability of a certain part of the body is below 50%, the network is considered not to have the ability to recognize that feature. The recognition probability is calculated as follows:

$$p = \frac{1}{N_k}\sum_{i=1}^{N_k} p_{ki},$$

where $p$ is the recognition probability reported in the table, $k$ indexes the parts that the network is able to recognize, $N_k$ is the number of images of part $k$, and $p_{ki}$ is the probability that the $i$-th image of part $k$ is recognized correctly. The feature recognition effect and recognition probability for each part of the human body are shown in Table 3 below.
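Under the interpretation of the recognition probability given above (an assumption, since the original formula is not reproduced exactly), the computation could be sketched as follows; the part names and correctness flags are hypothetical.

```python
import numpy as np

def recognition_probabilities(correct_flags: dict) -> dict:
    """Per-part recognition probability: the share of images of part k whose
    feature is recognized correctly. Parts below 0.5 are treated as not
    recognizable by the network and dropped from the result."""
    rates = {part: float(np.mean(flags)) for part, flags in correct_flags.items()}
    return {part: p for part, p in rates.items() if p >= 0.5}

# Hypothetical flags: 1 = feature recognized correctly on that test image.
print(recognition_probabilities({"waist": [1, 1, 0, 1], "knee": [0, 0, 1, 0]}))
# -> {'waist': 0.75}   (knee falls below 0.5 and is therefore not reported)
```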

The simulation results are shown in Figures 3–5.

Figure 3 shows a 49-year-old male of fat body type, with a height of 1638 mm and a weight of 58 kg, representative of the elderly group. The simulation results show that the characteristic lines of the key parts of the human body are effectively identified and that the simulation results deviate only slightly from the reference values, so the algorithm is verified.

Figure 4 shows a 45-year-old obese man with a height of 1617 mm and a weight of 71.6 kg, also representative of the elderly group. The simulation results show that the characteristic lines of the key parts of the human body are effectively identified. The relative errors of the neck circumference and chest circumference against the reference values are comparatively large, while the measured values of the other features deviate only slightly from the reference values, so the algorithm is verified to a certain extent.

Figure 5 shows a 50-year-old obese woman with a height of 1610 mm and a weight of 63.9 kg, representative of the elderly group. The simulation results show that the characteristic lines of the key parts of the human body are effectively identified and that the deviation between the simulation results and the reference values is small, so the algorithm is verified. Analysis of the experimental results shows that, for the convolutional neural network trained in the feature recognition study, the final feature recognition accuracy reaches 92.56%. The network trained on the frontal projection recognizes the neck, underarms, and waist well, and the network trained on the side projection recognizes the back of the neck, shoulders, chest, waist, and buttocks well; taken together, the characteristic lines of the key parts of the human body are effectively identified.

5. Conclusion

Convolutional neural network algorithms have been widely used in image processing and pattern recognition. The author applies convolutional neural networks in deep learning to recognize the features of 3D-scanned human bodies, and a large number of experiments verify the effectiveness and feasibility of the algorithm. Deep learning has been widely applied in biomedical data analysis, image analysis, face recognition, artificial intelligence, and other fields, and its error rates continue to fall. Deep learning will be the general trend in graphics and image processing in the future, and applying it to human body feature recognition is very important for 3D clothing design. However, depth information plays a non-negligible role in image processing, so follow-up studies of human feature recognition with deep learning convolutional neural networks will train on data in 3D space. In the study of 3D clothing prototype modeling, because the sampling points of the boundary curves lie on the surface of the mannequin, a bulge in a garment panel may penetrate the human body, so the segmentation of the human body should be optimized in subsequent work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.