Abstract
To improve the three-dimensional (3D) reconstruction quality of intelligent manufacturing images and reduce reconstruction time, a new time-series-based, CAD-aided 3D reconstruction method for intelligent manufacturing images is proposed. A Kinect sensor collects depth image data and converts them into 3D point cloud coordinates. The collected point cloud data are divided into regions, and a different point cloud denoising algorithm filters and denoises each region. With the help of CAD, the FLANN matching algorithm extracts feature points of the time-series images and completes image matching. Sparse and dense point cloud reconstructions are then carried out to complete the 3D reconstruction of intelligent manufacturing images. The experimental results show that the image PSNR of this method is always above 52 dB and its maximum reconstruction time is 4.9 s; the method thus reconstructs intelligent manufacturing images well and has high practical application value.
1. Introduction
In recent years, significant changes have taken place in information technology and industry, such as big data, cloud computing, the mobile Internet, 3D printing, and industrial robots. These changes have brought a new round of revolution to global manufacturing, and intelligent manufacturing, as the product of the deep fusion of informatization and industrialization, has made great progress. It has received wide attention from governments worldwide [1], as reflected in the "Advanced Manufacturing National Strategy" of the United States, the "New Industrial France" plan of France, the "New Industry Creation Strategy" of Japan, and "Industry 4.0" of Germany [2]. An important reason developed countries attach importance to intelligent manufacturing is that the 2008 financial crisis fully exposed the fragility of the virtual economy; many countries therefore re-examined the important role of manufacturing in social development and took intelligent manufacturing as a breakthrough for improving national competitiveness [3]. Intelligent manufacturing helps to upgrade China's traditional manufacturing industry, achieve high-end technological innovation, relieve energy pressure, promote new production modes, and accelerate the transformation from a manufacturing country into a manufacturing power. However, with the continuous development of manufacturing, information, and network technologies, the concept and connotation of intelligent manufacturing keep changing, enriching, and improving [4]. At this stage, the field has broadly agreed on a definition: intelligent manufacturing integrates a variety of modern technologies to realize monitoring and decision-making in the intelligent production process and to improve the intelligence and efficiency of production [5]. Therefore, to cope with a competitive market environment, ensure the clarity of intelligent manufacturing images, and enhance enterprise competitiveness, image 3D reconstruction has become a focus of both industry and academia, and it is very important to study 3D reconstruction methods for intelligent manufacturing images.
Few existing 3D reconstruction methods are designed specifically for intelligent manufacturing images, so methods from other image domains can be adapted. Zhuang and Wan [6] proposed an improved single-image 3D reconstruction method based on the P2M framework, motivated by the facts that traditional 3D reconstruction methods cannot rebuild the invisible parts of an object, take a long time, and struggle with textureless objects. Their main work replaces the feature-extraction backbone, improving on the VGG-16 network with an improved DenseNet to obtain better two-dimensional feature points. To improve reconstruction quality, Liu et al. [7] proposed a single-image 3D reconstruction method based on vanishing point optimization. The image is first processed to extract long lines, whose characteristics are analyzed to group lines by direction. Then, according to the distribution of the lines in each direction, a linear model of the line parameters is fitted by an improved regression algorithm, erroneous lines are eliminated, and the vanishing point is solved by the least-squares method. Once an accurate vanishing point is obtained, the internal and external camera parameters are derived from its properties, a minimal set of two-dimensional image points is obtained interactively, and the 3D reconstruction of the object is realized by computing 3D coordinates from the camera parameters and the geometric features of the object. Zhao et al. [8] proposed a 3D reconstruction method based on sea-ice scene image classification. To address the large computational load and easy error accumulation of 3D reconstruction, the method trains a classification network to screen suitable image sequences, segments the reconstructed region of the selected sea-ice scene images with an 8-neighborhood filling algorithm, and finally carries out the 3D reconstruction of the segmented images. Yu et al. [9], taking flange parts as an example, proposed a reconstruction method based on working drawing images. Scanned engineering drawings are denoised, segmented, and refined; the Harris corner detection algorithm and the Hough circle detection algorithm then extract and summarize their features. A back-propagation (BP) neural network identifies and extracts the parameters necessary for part reconstruction, finally realizing the 3D reconstruction of the flange model. Liu et al. [10] proposed a machine learning-based 3D reconstruction method for blurred image edge contours: edge contours are detected, stereo matching finds the corresponding points along the detected edges to obtain 3D coordinates, a 3D point cloud of the object surface is built from those coordinates, and the 3D reconstruction of the blurred edge contour is completed based on machine learning.
However, the aforementioned methods denoise images poorly, reconstruct with limited quality, and take a long time. This article therefore takes these problems as its research target: a Kinect sensor is used as the depth image acquisition tool, and a neighborhood-average filtering algorithm combined with a bilateral filtering algorithm forms the theoretical basis for handling noise in the point cloud data. Feature points of time-series images are extracted based on the FLANN matching algorithm, and sparse and dense point clouds are reconstructed with the CMVS and PMVS algorithms, respectively, to complete the 3D reconstruction of intelligent manufacturing images.
2. CAD-Aided Intelligent Manufacturing Image 3D Reconstruction Method Design
2.1. Point Cloud Data Collection
As a 3D depth sensor, the Kinect offers real-time motion capture, voice recognition, microphone input, and intelligent interaction in addition to the color and depth images used in this paper. Its external structure is simple and light, while its internal hardware is complex and powerful [11]. The Kinect emits infrared light and uses its color and depth cameras to collect raw data, which are combined to generate 3D data [12, 13]. The imaging principle of the Kinect camera is similar to pinhole imaging, so this paper uses the pinhole model to introduce the camera imaging principle. Assume that $P$ represents any object point in space, the pinhole plane is the vertical plane containing the camera view [14], and the plane on which $P$ is imaged after passing through the optical center is called the image plane. The horizontal ray passing through the optical center and perpendicular to both planes is called the optical axis, and the intersection of the optical axis with the pinhole plane is called the principal point. The distance between the image plane and the pinhole plane is the focal length $f$ of the camera, the horizontal distance between $P$ and the pinhole plane is $Z$, the real height of the object is $X$, the imaging height is $x$, and the imaging point is $p$ [15]. According to the similar-triangle principle:

$$x = f\,\frac{X}{Z}.$$
Considering the image coordinate system, the $u$ axis indexes the rows of the image and the $v$ axis indexes the columns [16]. Then the conversion relation between image coordinates $(x, y)$ and pixel coordinates $(u, v)$ is as follows:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{d_x} & -\dfrac{1}{d_x \tan\theta} & u_0 \\ 0 & \dfrac{1}{d_y \sin\theta} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix},$$

where $d_x$ and $d_y$ are the physical pixel sizes along the two axes and $(u_0, v_0)$ is the principal point.
The relationship between the camera coordinate system and the image coordinate system can be expressed by the following formula:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}.$$
In the aforementioned formulas, $\theta$ represents the included angle between the two image coordinate axes, and $f$ represents the focal length of the camera. Combining the aforementioned two equations, the relationship between the camera coordinate system and the image pixel coordinate system can be expressed by the following formula:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d_x} & -\dfrac{f}{d_x \tan\theta} & u_0 & 0 \\ 0 & \dfrac{f}{d_y \sin\theta} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}.$$
In the aforementioned formula, the entries of $M_1$ represent the camera's internal parameters, and $M_1$ is the intrinsic parameter matrix [17]. For the same pixel, the relationship between the world coordinate system and the camera coordinate system can be expressed by the following formula:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}.$$
In the aforementioned formula, $R$ is the 3 × 3 rotation matrix, $t$ is the translation vector, and $M_2$ is the external parameter matrix of the camera.
Combining the aforementioned two equations, the relationship between the image pixel coordinate system and the world coordinate system can be obtained, described by the following formula:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}.$$
In the aforementioned formula, $(u, v)$ represents the coordinates of a pixel point, $(X_w, Y_w, Z_w)$ is its corresponding point in the world coordinate system, and $Z_c$ represents the scale coefficient.
The camera coordinate system consists of the optical center $O_c$ and the coordinate axes $X_c$, $Y_c$, and $Z_c$, where the optical axis is the $Z_c$ axis. The $x$ and $y$ axes of the image coordinate system span the image plane, which is perpendicular to the $Z_c$ axis; the origin of the image coordinate system is the intersection of the optical axis with the image plane. The projection of a point $P$ in space onto the image plane is $p$. First, points in the world coordinate system are transformed into the camera coordinate system; next, the camera coordinates are transformed into the physical image coordinate system according to the similar-triangle principle; finally, the coordinates $p(u, v)$ in the image pixel coordinate system are obtained.
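To make this projection chain concrete, the following minimal numpy sketch maps a world point through the extrinsic and intrinsic transforms to pixel coordinates. The intrinsic values (focal length 525, principal point (320, 240)) are illustrative assumptions, not calibration results from this paper.

```python
import numpy as np

# Minimal sketch of the projection chain described above:
# world -> camera (extrinsics R, t) -> pixel (intrinsics K).
K = np.array([[525.0,   0.0, 320.0],   # [f/dx, skew, u0]
              [  0.0, 525.0, 240.0],   # [0, f/dy, v0]
              [  0.0,   0.0,   1.0]])
R = np.eye(3)          # rotation: world and camera axes aligned
t = np.zeros(3)        # translation: coincident origins

def world_to_pixel(P_w):
    """Project a 3D world point onto the image pixel plane."""
    P_c = R @ P_w + t              # world -> camera coordinates
    uvw = K @ P_c                  # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]        # divide by Z_c, the scale coefficient

print(world_to_pixel(np.array([0.1, -0.2, 1.5])))  # -> [355. 170.]
```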
The raw data collected by the Kinect sensor cannot be used directly and must first be converted into data that can be understood and applied. The raw output of the sensor device is usually mapped to grayscale space in the form of a depth image, whose coordinates are measured in pixels from the depth image origin; each pixel stores the depth distance from the Kinect. The function DepthGenerator.ConvertProjectiveToRealWorld in the open-source library OpenNI performs the coordinate conversion. The following relationship exists between the world coordinate system and the depth image coordinate system:

$$X = \frac{(u - u_0)\,Z}{f_x}, \qquad Y = \frac{(v - v_0)\,Z}{f_y}.$$
In the aforementioned formula, $X$, $Y$, and $Z$ respectively represent the axes of the world coordinate system with the Kinect device as its origin, and $u$ and $v$ respectively represent the horizontal and vertical axes of the depth image coordinate system. $f_x$ and $f_y$ are the focal lengths of the depth camera, and the $u$ and $v$ coordinates of the camera center are $u_0 = 320$ and $v_0 = 240$, respectively (image resolution 640 × 480). The transformed coordinates are real-world coordinates: the $X$, $Y$, and $Z$ values form the 3D point cloud, and the depth image data are thus converted into 3D point cloud coordinates for subsequent processing.
The specific conversion process is as follows:
Since the origin of the world coordinate system coincides with the camera origin, that is, there is no rotation or translation, the 3 × 3 rotation matrix and the 3 × 1 translation vector are

$$R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad t = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$
At this point the coordinate origins of the camera coordinate system and the world coordinate system coincide, so the same object has the same depth in both systems, namely $Z_c = Z_w = Z$. Then

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}.$$
From the aforementioned transformation matrix formula, the coordinate transformation formula converting a depth image point into a 3D point cloud point can be calculated:

$$X = \frac{(u - u_0)\,Z}{f_x}, \qquad Y = \frac{(v - v_0)\,Z}{f_y}, \qquad Z = \mathrm{depth}(u, v).$$
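A compact sketch of this depth-to-point-cloud conversion is given below. The principal point (320, 240) and the 640 × 480 resolution follow the text; the focal lengths are assumed typical Kinect depth-camera values and would be replaced by the device's calibration in practice.

```python
import numpy as np

# Sketch of the depth-image -> point-cloud conversion above, using the
# stated 640x480 resolution and camera center (320, 240). The focal
# lengths are assumed values, not this paper's calibration.
FX, FY = 594.2, 591.0          # assumed depth-camera focal lengths (pixels)
CX, CY = 320.0, 240.0          # principal point for a 640x480 depth image

def depth_to_point_cloud(depth):
    """depth: (480, 640) array of Z values in meters; returns (N, 3) points."""
    v, u = np.indices(depth.shape)        # pixel rows (v) and columns (u)
    z = depth
    x = (u - CX) * z / FX                 # X = (u - u0) * Z / fx
    y = (v - CY) * z / FY                 # Y = (v - v0) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop pixels with no depth reading
```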
2.2. Point Cloud Data Filtering and Denoising
To obtain better denoising of the point cloud model, the features of the model should be preserved while noise is removed. This paper first divides the scanned point cloud data into regions and then applies targeted point cloud denoising algorithms to the divided regions [18]. The regions are divided as follows: the local surface fitting method computes the differential geometric information of the point cloud, and local feature weights are set according to the computed mean curvature of the model point cloud. A local feature weight threshold is then given, and the neighborhood feature weight of each sampling point is compared against the set threshold. The point cloud data of the model are thus divided into a flat region with little feature information and a region with rich feature information [19].
Region division is carried out by calculating the local feature weight of each point. Given that the mean curvature at sampling point $p_i$ is $H_i$, the local feature weight of the sampling point over its k-nearest-neighbor neighborhood $N(p_i)$ is defined as

$$\omega_i = \left| H_i - \bar{H}_i \right|,$$

where $\bar{H}_i$ is the average curvature in the neighborhood of the sampling point [20]:

$$\bar{H}_i = \frac{1}{k} \sum_{p_j \in N(p_i)} H_j.$$
If the local feature weight $\omega_i$ at sampling point $p_i$ is less than the set feature weight threshold, then $p_i$ is judged to lie in the flat region with little feature information [21]. If $\omega_i$ is greater than the threshold, then $p_i$ is judged to lie in the feature-rich region.
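The following sketch illustrates this region division under stated assumptions: PCA "surface variation" stands in for the mean curvature obtained by local surface fitting, and the k and threshold values are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of the region division: a per-point feature weight is the
# deviation of local curvature from its neighborhood average, and a
# threshold splits flat from feature-rich points. PCA "surface variation"
# stands in here for the mean curvature from local surface fitting.
def split_regions(points, k=16, threshold=0.02):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)       # k neighbors plus the point itself
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        lam = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))
        curvature[i] = lam[0] / lam.sum()      # surface variation ~ curvature
    weight = np.abs(curvature - curvature[idx].mean(axis=1))
    flat = weight < threshold                  # small weight -> flat region
    return points[flat], points[~flat]         # (flat, feature-rich)
```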
For point cloud data whose feature types have been divided, targeted filtering and denoising algorithms are adopted [22–24]: the neighborhood-distance average filtering algorithm and the adaptive bilateral filtering algorithm denoise the two different feature areas.
2.2.1. Point Cloud Denoising in Flat Area
Because the curvature of the flat region of the model changes gently and the region carries little feature information, a denoising method of low complexity suffices: a statistical filtering algorithm based on the average distance from each sampling point to the points in its k-nearest-neighbor neighborhood removes the discrete points and noise points in the region, after which the new point cloud data are obtained [25]. A point is removed when

$$\left| \bar{d}_i - \mu \right| > \alpha\,\sigma, \tag{9}$$

where $\bar{d}_i$ is the average distance between sampling point $p_i$ and the points in its k-nearest-neighbor neighborhood, $\mu$ is the average of these distances over the whole flat region, and $\sigma$ is their standard deviation. Each parameter is calculated as follows:

$$\bar{d}_i = \frac{1}{k} \sum_{p_j \in N(p_i)} \left\| p_i - p_j \right\|, \tag{10}$$

$$\mu = \frac{1}{n} \sum_{i=1}^{n} \bar{d}_i, \tag{11}$$

$$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \bar{d}_i - \mu \right)^2}. \tag{12}$$
The algorithm steps are as follows:
(1) For each sampled data point in the flat region with small curvature changes, search all points in its k-nearest-neighbor neighborhood.
(2) Use formulas (10) to (12) to calculate the average distance $\bar{d}_i$ from each sampling point to its k nearest neighbors, together with the mean $\mu$ and standard deviation $\sigma$ of these distances over all sampling points in the flat-region point cloud.
(3) Use the mean-variance criterion to determine the noise points and discrete points to remove [26]; that is, apply formula (9): if the average distance of a sampling point deviates beyond the given threshold, the point is removed (see the sketch after this list).
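A minimal numpy/scipy sketch of this mean-variance criterion, with k and the deviation multiplier alpha as placeholder parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of the flat-region statistical filter: a point is removed when
# its mean k-neighbor distance deviates from the regional mean by more
# than alpha standard deviations (the criterion of formula (9)).
def statistical_denoise(points, k=8, alpha=1.0):
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)      # column 0 is the point itself
    d_mean = dist[:, 1:].mean(axis=1)          # formula (10): mean neighbor distance
    mu, sigma = d_mean.mean(), d_mean.std()    # formulas (11) and (12)
    keep = np.abs(d_mean - mu) <= alpha * sigma
    return points[keep]
```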
2.2.2. Point Cloud Smoothing in Feature-Rich Regions
Compared with other denoising algorithms, the bilateral filtering algorithm has advantages in removing noise from 3D scattered point clouds. It was initially used in image processing to preserve image contours. Owing to its ability to retain high-frequency features, this paper chooses the point cloud bilateral filtering method to denoise the feature-rich region of the noisy point cloud [27]. The specific method is

$$p_i' = p_i + \alpha\, n_i,$$

where $p_i'$ is the filtered point datum, $p_i$ is the initial point cloud datum, $n_i$ is its normal vector, and $\alpha$ is the bilateral filtering factor, calculated as

$$\alpha = \frac{\displaystyle\sum_{p_j \in N(p_i)} W_c\!\left( \left\| p_j - p_i \right\| \right) W_s\!\left( \left\langle n_i,\, p_j - p_i \right\rangle \right) \left\langle n_i,\, p_j - p_i \right\rangle}{\displaystyle\sum_{p_j \in N(p_i)} W_c\!\left( \left\| p_j - p_i \right\| \right) W_s\!\left( \left\langle n_i,\, p_j - p_i \right\rangle \right)},$$

where $p_j$ ranges over the sampling points in the neighborhood nearest to sampling point $p_i$ [28], and $W_c$ and $W_s$ are the Gaussian filtering functions of the bilateral filter in the spatial domain and the frequency domain, respectively; their specific forms are

$$W_c(x) = e^{-x^2 / 2\sigma_c^2}, \qquad W_s(y) = e^{-y^2 / 2\sigma_s^2}.$$
In the aforementioned formulas, $\sigma_c$ represents the factor governing the effect of the distance between sampling point $p_i$ and the points in its neighborhood on $p_i$. The larger $\sigma_c$ is, the more neighborhood points are selected and the better the denoising result [29, 30], but the preservation of point cloud features weakens. $\sigma_s$ is the factor acting on the projection of the distance vector from sampling point $p_i$ to each neighborhood point onto the normal direction of the point; it mainly regulates the degree of feature retention of the point cloud data. The larger $\sigma_s$ is, the better the features of the point cloud model are retained. In general, $\sigma_c$ is taken as the neighborhood radius of the point, and $\sigma_s$ as the standard deviation of the neighborhood points.
The specific algorithm steps are as follows (a sketch follows the list):
(1) Search the k nearest neighbors of each data point in the region of the point cloud model where the curvature changes drastically, that is, in the feature-rich region.
(2) For all of a point's nearest neighbors, calculate the spatial distances and the normal projections that enter $W_c$ and $W_s$.
(3) Calculate $W_c$ and $W_s$ from their Gaussian forms with the selected $\sigma_c$ and $\sigma_s$.
(4) Substitute the values of $W_c$ and $W_s$ from step (3) into the bilateral filtering factor formula to obtain $\alpha$.
(5) Use the normal displacement formula to move the points of the feature-rich region along their normals, obtaining the filtered new point cloud data.
(6) After all points have been processed in turn and the new point cloud data obtained through the transformation, end.
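A minimal sketch of this bilateral procedure follows. It assumes the point normals have already been estimated (e.g., by local plane fitting), and the sigma_c and sigma_s values shown are placeholders chosen per the rules of thumb above.

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch of the bilateral smoothing of the feature-rich region: each point
# moves along its normal by the bilateral factor alpha. Normals are assumed
# precomputed; sigma_c and sigma_s are placeholder values.
def bilateral_filter(points, normals, k=16, sigma_c=0.05, sigma_s=0.02):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    smoothed = points.copy()
    for i, nbrs in enumerate(idx[:, 1:]):          # skip the point itself
        diff = points[nbrs] - points[i]
        d = np.linalg.norm(diff, axis=1)           # spatial distance term
        h = diff @ normals[i]                      # projection onto the normal
        w = np.exp(-d**2 / (2 * sigma_c**2)) * np.exp(-h**2 / (2 * sigma_s**2))
        alpha = (w * h).sum() / w.sum()            # bilateral filtering factor
        smoothed[i] = points[i] + alpha * normals[i]
    return smoothed
```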
2.2.3. 3D Image Reconstruction Based on Time Series
In the real world, a great deal of collected data is time-related; such data are called time series. Strictly speaking, a time series is a sequence of values formed by observing the same phenomenon successively at different times. It is a finite sequence of real values recorded along the time axis, reflecting the characteristics of attribute values in temporal order. If the data sequence is continuous, it is called a continuous time series [31]; if it is discrete, it is called a discrete time series. In daily life, large amounts of time-series data are generated in different fields. By collecting, recording, and organizing these data and using advanced data-mining tools, much valuable knowledge can be found in time series to guide work and life. Therefore, this paper carries out the 3D reconstruction of intelligent manufacturing images on time series with the help of CAD.
By analyzing the structure of the CAD DXF file format, the corresponding object information can be extracted. In intelligent manufacturing image object data, the starting-point coordinates of a line are the values following group codes 10, 20, and 30, and the end-point coordinates are the values following group codes 11, 21, and 31. During entity extraction, the first judgment is whether the group code is 10; if so, the subsequent group value is an entity coordinate. The extracted object is projected onto the image according to the transformation parameters, and the projected value is taken as the initial value for least-squares template matching. Because the objects in some images may occlude each other, mutually occluding lines must be eliminated. On this basis, the FLANN matching algorithm extracts the feature points of the intelligent manufacturing time-series images and completes the image matching. The AKAZE algorithm introduces Fast Explicit Diffusion (FED), a mathematical framework for solving partial differential equations quickly, which improves both the speed and the quality of the solution.
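As a hedged illustration of the group-code extraction just described, the sketch below reads an ASCII DXF file as alternating (group code, value) pairs and collects LINE endpoints from codes 10/20/30 (start) and 11/21/31 (end). Real DXF files are more varied than this sketch assumes; a dedicated library such as ezdxf would be safer in practice.

```python
# Hedged sketch: an ASCII DXF file alternates group-code lines with value
# lines, so LINE endpoints can be collected from the coordinate codes.
COORD_CODES = ("10", "20", "30", "11", "21", "31")

def extract_lines(dxf_path):
    with open(dxf_path) as f:
        tokens = [line.strip() for line in f]
    pairs = zip(tokens[0::2], tokens[1::2])        # (group code, value) pairs
    lines, current = [], None
    for code, value in pairs:
        if code == "0":                            # a new entity begins
            if current is not None and len(current) == 6:
                lines.append(current)
            current = {} if value == "LINE" else None
        elif current is not None and code in COORD_CODES:
            current[code] = float(value)
    if current is not None and len(current) == 6:
        lines.append(current)
    return [((e["10"], e["20"], e["30"]), (e["11"], e["21"], e["31"]))
            for e in lines]                        # (start, end) per line entity
```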
The nonlinear diffusion filter describes the evolution of image brightness and controls the diffusion process using the scale parameter as the divergence factor of the thermal diffusion function; partial differential equations are usually used to solve it [32]. Owing to the nonlinear nature of the differential equations, the scale space is constructed through the diffusion of image brightness. The classical nonlinear diffusion equation is as follows:

$$\frac{\partial L}{\partial t} = \operatorname{div}\left( c(x, y, t)\, \nabla L \right).$$
In the aforementioned formula, $L$ is the image brightness matrix, and $\operatorname{div}$ and $\nabla$ represent the divergence and gradient operators, respectively. Owing to the conduction function $c$ introduced into the diffusion equation, the diffusion can adapt to the local structural characteristics of the image: $c$ depends on the local differential structure of the image and can be a scalar function or a constant. The time parameter $t$ corresponds to the scale factor, and the diffusion process is controlled by the image gradient [33].
Assuming that the numbers of octaves and sub-levels are represented by $O$ and $S$, respectively, the scale space is constructed by evolving the diffusion function for the times $t_i$:

$$\sigma_i(o, s) = \sigma_0\, 2^{\,o + s/S}, \quad o \in [0, O-1],\; s \in [0, S-1], \qquad t_i = \frac{1}{2}\sigma_i^2,$$

where $\sigma_0$ is the base scale and $t_i$ is the evolution time corresponding to scale $\sigma_i$.
AKAZE feature detection is likewise achieved by looking for points where the scale-normalized determinant of the Hessian attains a local maximum across different scales [34, 35]. Within the same scale, a point whose response exceeds those of its eight neighborhood points and of the 18 points in the levels immediately above and below is identified as an extreme point, namely a feature point. The Hessian formula is as follows:

$$L_{\mathrm{Hessian}} = \sigma_i^2 \left( L_{xx} L_{yy} - L_{xy}^2 \right),$$

where $\sigma_i$ represents the scale parameter of the corresponding octave level, $L_{xx}$ and $L_{yy}$ are the second-order horizontal and vertical derivatives, and $L_{xy}$ represents the second-order cross derivative.
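A short OpenCV sketch of the AKAZE detection step; the image paths are hypothetical. OpenCV's AKAZE builds the nonlinear scale space with FED internally and thresholds the normalized Hessian response when selecting keypoints.

```python
import cv2

# Sketch of AKAZE detection on two time-series frames (paths hypothetical).
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # time-series frame 1
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)  # time-series frame 2

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)   # keypoints + binary descriptors
kp2, des2 = akaze.detectAndCompute(img2, None)
```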
Based on the successful extraction of image feature points, the FLANN matching algorithm is used to complete the coarse matching process [36]. The specific process is as follows (a code sketch follows the list):
(1) For each feature point $P_1$ in image $R_1$, find its nearest feature point $P_2$ in image $R_2$ to obtain the candidate pair $(P_1, P_2)$. Traversing all feature points yields the set $D$ of minimum-distance matches; calculate the minimum distance $d_{\min}$ over $D$ and set a threshold coefficient $\lambda$. If a pair's distance satisfies $d \le \lambda\, d_{\min}$, the pair is kept as a candidate matching point.
(2) Perform step (1) for all feature points in image $R_1$ to obtain the FLANN matching pairs from $R_1$ to $R_2$. If a pair passes the threshold test, the matching succeeds; otherwise the point is removed, and the cycle continues until all image matching is completed.
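Continuing the AKAZE sketch above, the following shows one plausible FLANN coarse-matching configuration. Because AKAZE descriptors are binary, FLANN is set up with an LSH index; the 0.7 ratio test stands in for the distance-threshold candidate test of step (1) and is an assumed value.

```python
import cv2

# Continuation of the AKAZE sketch: FLANN matching with an LSH index,
# since AKAZE produces binary descriptors. The 0.7 ratio is an assumption.
FLANN_INDEX_LSH = 6
flann = cv2.FlannBasedMatcher(
    dict(algorithm=FLANN_INDEX_LSH, table_number=6,
         key_size=12, multi_probe_level=1),
    dict(checks=50))

matches = flann.knnMatch(des1, des2, k=2)          # two nearest neighbors each
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
```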
According to the geometric constraints between images, the point cloud model is reconstructed by triangulation. From the camera motion, the 3D spatial positions of feature points can be estimated; however, the depth of a pixel cannot be obtained from a single image alone, so triangulation is used for the estimation. The principle of triangulation is to determine the distance to a point by observing the angles to the same point from two places.
Consider two images $R_1$ and $R_2$ whose camera optical centers are $O_1$ and $O_2$. Taking the image on the left as the reference, the transformation of the image on the right is given by the rotation $R$ and the translation $t$. Suppose feature point $x_1$ in image $R_1$ and feature point $x_2$ in image $R_2$ form a correspondence. Ignoring other factors, under ideal conditions the rays $O_1 x_1$ and $O_2 x_2$ intersect at one point in space, which is the position of the map point of the two feature points in the 3D scene. Because the effects of noise cannot be totally eliminated, the two straight lines generally cannot meet exactly in practice, so the least-squares method is used to find the point closest to the intersection of the two lines. According to the definitions of epipolar geometry, if $x_1$ and $x_2$ are the normalized coordinates of the two feature points, the relationship between them is as follows:

$$s_1 x_1 = s_2 R x_2 + t.$$
In the aforementioned formula, $s_1$ and $s_2$ are the depths of the two feature points. Multiplying both sides of the formula on the left by $x_1^{\wedge}$, the antisymmetric matrix of $x_1$, gives the following formula:

$$s_1 x_1^{\wedge} x_1 = 0 = s_2\, x_1^{\wedge} R x_2 + x_1^{\wedge} t.$$
According to this formula, $s_2$ and then $s_1$ can be solved in turn. In practice, the influence of noise cannot be completely eliminated, so the solved $s_1$ and $s_2$ may not make the result of the aforementioned equation exactly zero. Therefore, it is more common in practice to replace the exact zero solution with the least-squares solution.
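The least-squares solution can be sketched directly: stack the relation $s_1 x_1 = s_2 R x_2 + t$ into a linear system in the depths $(s_1, s_2)$ and solve it with numpy. This is a minimal illustration of the derivation above, not the paper's exact solver.

```python
import numpy as np

# Sketch of least-squares triangulation: s1*x1 - s2*(R @ x2) = t is a
# 3x2 linear system in the depths (s1, s2).
def triangulate(x1, x2, R, t):
    """x1, x2: normalized homogeneous coordinates (3,); returns a 3D point."""
    A = np.stack([x1, -(R @ x2)], axis=1)       # columns multiply s1 and s2
    (s1, s2), *_ = np.linalg.lstsq(A, t, rcond=None)
    # midpoint between the two (nearly intersecting) ray points
    return (s1 * x1 + (s2 * (R @ x2) + t)) / 2.0
```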
The point cloud obtained by the SFM algorithm can be optimized by the clustering multi-view stereo (CMVS) algorithm, which clusters and classifies the images. CMVS reduces the time of dense matching, and on this basis dense point clouds are obtained through patch-based multi-view stereo (PMVS) processing. There are three necessary conditions for the clustering: each cluster should be as small as possible while remaining reconstructible; density should be maintained while mismatches and redundancies are removed; and the reconstruction result should be relatively complete. The overall process of the algorithm is shown in Figure 1.
[Figure 1: Overall process of the CMVS clustering algorithm.]
The purpose of the SFM screening is to filter out some points and take the average of the neighborhood locations of the feature points. Images with high resolution are selected, and images that do not meet the requirements are removed. The standard image segmentation method is adopted to segment the screened images, and the SFM feature points not yet covered are used to construct image clusters. An efficiency criterion is set, and adding the image of maximum efficiency is taken as the cluster-growing action. The process of cluster classification and image addition is repeated until clusters that meet the requirements are obtained.
PMVS performs dense matching based on a patch (facet) model and can reconstruct each CMVS cluster independently. Patches are generated from the sequential images and feature points; these are called seed points. The algorithm flow is shown in Figure 2.
[Figure 2: Flow of the PMVS algorithm.]
The algorithm consists of three parts: the first is the generation of seed patches, the second is patch expansion, and the third is patch filtering. The last two parts are iterated, empirically three times: after three iterations of patch expansion and patch filtering, the number of outlier points produced during expansion is reduced, and the result after the third iteration is better. The normal section of a part of the target object's surface represents a plane, and a seed patch refers to a plane of the sparsely reconstructed spatial point cloud. First, corner detection is carried out on the image sequence; the obtained feature corners are matched, 3D information is recovered according to triangulation, and the patches are finally generated. This process is called the sparse reconstruction of the 3D point cloud. For a given patch, sparse reconstruction cannot ensure that every image block is covered, so the sparse results are expanded to ensure that each image block has at least one patch. Erroneous patches arise in this process and need to be filtered; the filtering is based on gray-level consistency and geometric consistency.
To sum up, this paper mainly uses a Kinect sensor to collect depth image data and convert them into 3D point cloud coordinates. The collected point cloud data are divided into regions and then filtered and denoised. With the help of CAD, the FLANN matching algorithm extracts the feature points of time-series images, and the images are then matched. Combined with the image matching results, sparse and dense point cloud reconstructions are carried out to complete the 3D reconstruction of intelligent manufacturing images.
3. Experimental Design and Result Analysis
3.1. Experimental Design
To verify the 3D reconstruction effect of the time-series-based, CAD-aided intelligent manufacturing image reconstruction method designed in this paper, an experiment is carried out. To ensure the reliability of the experimental results, the experiment is conducted in the same environment throughout; the specific environment parameters are shown in Table 1.
Multiple intelligent manufacturing images were collected and integrated to construct the experimental sample set. The sample set was input into the computer, and the data in it were processed using the reconstruction method based on the improved P2M framework, the reconstruction method based on vanishing point optimization, and the proposed reconstruction method based on time series. In this way, the 3D reconstruction results of intelligent manufacturing images under the different methods were obtained.
3.2. Analysis of Experimental Results
The PSNR of the intelligent manufacturing image is one of the important indicators for verifying the 3D reconstruction effect: the higher the PSNR, the better the 3D reconstruction of the image. The specific comparison results are shown in Figure 3.
[Figure 3: PSNR comparison of images reconstructed by the different methods.]
The data in Figure 3 show that as the amount of experimental data increases, the PSNR of the images reconstructed by the different methods fluctuates. The PSNR of images reconstructed by the improved P2M framework varies between 37.5 dB and 44.1 dB, and that of the vanishing point optimization method varies between 33.9 dB and 40 dB, whereas the PSNR of the time-series reconstruction method is always above 52 dB. The intelligent manufacturing images reconstructed by the proposed method are therefore of higher quality, and its 3D reconstruction effect is better.
The 3D reconstruction time of the intelligent manufacturing image is one of the important indicators for verifying reconstruction efficiency: the lower the reconstruction time, the higher the efficiency. The specific comparison results are shown in Figure 4.
[Figure 4: 3D reconstruction time comparison of the different methods.]
By analyzing the data in Figure 4, it can be seen that as the amount of experimental data increases, the 3D reconstruction time of every method increases. When the amount of experimental data reaches 800 MB, the reconstruction methods based on the improved P2M framework, on vanishing point optimization, and on time series all reach their maximum reconstruction times: 9.3 s, 7.7 s, and 4.9 s, respectively. Compared with the other two methods, the reconstruction time of the time-series method always remains at a low level, indicating that it reconstructs intelligent manufacturing images in 3D with very high efficiency.
To further verify the 3D reconstruction effects of the different methods, two images are taken as examples and reconstructed by each method; the results are shown in Figures 5 and 6.
[Figure 5: 3D reconstruction results of the first example image, panels (a)–(c).]
[Figure 6: 3D reconstruction results of the second example image, panels (a)–(c).]
Analyzing the 3D reconstruction results of the CAD-aided method in Figures 5 and 6 shows that the proposed method reconstructs the intelligent manufacturing images more completely and clearly than the comparison methods. The reason is that the method filters and denoises each divided region with a targeted point cloud denoising algorithm; with the help of CAD, the FLANN matching algorithm extracts the feature points of the time-series images and completes the image matching; and sparse and dense point cloud reconstructions are then carried out to complete the 3D reconstruction of the intelligent manufacturing images.
4. Conclusion
As economic development enters the new normal, the manufacturing industry faces increasing external constraints on resources and the environment, and manufacturing costs keep rising. The extensive development path that relies mainly on factor inputs is difficult to sustain, and it is urgent to change the current situation, structure, and methods of development, form new sources of economic growth, and shape new international competitive advantages centered on manufacturing. In the manufacturing production process, image clarity is the basis for ensuring that production proceeds smoothly and that the production progress and any problems in the process can be accurately monitored, thereby improving the quality of industrial production. Research on 3D reconstruction methods for intelligent manufacturing images is therefore of great significance. This paper presented a time-series-based, CAD-aided 3D reconstruction method for intelligent manufacturing images. The experimental results show that the image PSNR of the method is always above 52 dB, its maximum 3D reconstruction time is 4.9 s, and its 3D reconstruction effect on intelligent manufacturing images is better.
Data Availability
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
Acknowledgments
This work was supported by the Bureau Level Heilongjiang Provincial Higher Education Institution Basic Scientific Research Business Expenses Project "Dimension reduction of high dimensional data based on low rank matrix decomposition preserving structure" (2021-KYYWF-0579).