Abstract

Digital media art exists at the intersection of digital technology and art, and this new form carries its own distinctive value. Digital media art education crosses the disciplines of art, computer science, and engineering, building a perspective that can encompass both literary and scientific academic discourse, transcending the opposition between scientific and humanistic cultures, and opening a broader knowledge structure and a new view of values and resources. In view of the damage suffered by many precious paintings through improper preservation and related problems, this paper applies image quality evaluation based on local Gaussian weighted fusion, together with image stitching, to the restoration of precious cultural relics and images so that their cultural value can be realized.

1. Introduction

The digital virtual world of art calls for the integration of the two cultures. While software frameworks provide the technical support for the digital virtual art world, artists with traditional literacy must play an integral role in it. The digital virtual art world requires the artist's keen insight to draw out its context, and virtual design needs to be guided by artistic sensibility [1]. When art designers do not collaborate with engineers, the construction of virtual worlds loses its foundation. Artists and engineers face the same challenges, and they must work together to build digital spaces.

The purpose of this paper is to realize the promotion of the value of digital media culture and art. However, little research exists in this area, so the literature mainly constructs local weighted fitting; studying this treatment provides a useful reference for our own work.

In the digital age of art practice, the two cultures enter a high level of organic fusion, in which rigorous technology is subordinated to artistic creativity. Emerging artists do not merely introduce digital technology into their creations but expand the artistic creativity of digital technology itself. They have created a new artistic language, dedicated to tapping virtual reality and graphics technology to create works in near-perfect form; they are both users of cutting-edge digital technology and developers of hardware and software. Just as in the 1960s, when the French group "Exploration of Visual Arts" practiced digital media art, there was no such thing as a lone artist or technologist, but rather a team effort that transcended the dichotomy between the two cultures [2]. In this paper, we propose a weighted local full-feathering model for image stitching, which can combine multiple images with similar backgrounds into a single panoramic image and can fill in missing parts of an image [3]. First, the local features of the image are feathered; then the grayscale similarity of neighboring pixels is optimized using one-dimensional linear regression; finally, the local features are weighted to highlight the block similarity between pixels in the image neighborhood and to reduce feathering traces. Comparative simulations of the models demonstrate the effectiveness of this design.

Bayesian methods using Gaussian models as block priors have achieved excellent image denoising performance in recent years, but their performance is less stable on inverse problems beyond denoising. A hierarchical Bayesian Gaussian mixture model is proposed to model image blocks: prior knowledge is introduced for the model parameters, and the probability distributions of the mean and covariance matrix are modeled with a Normal-Wishart distribution to make the block estimation process more stable [4]. Based on the coherence of neighboring blocks, similar blocks in local windows are clustered using the L2 norm metric and modeled with multivariate Gaussian probability distributions of specific means and covariances, while the similarity computation is accelerated by a cumulative sum-of-squares table and the fast Fourier transform. Aggregation weights based on the Mahalanobis-distance similarity of the Gaussian distributions are combined with spatial-domain Gaussian similarity on the images to better fit the statistical properties of natural images. The effectiveness of the proposed model in image restoration is verified experimentally [5].

2. State of the Art

With the popularity of image applications, image quality research has attracted considerable attention. The classical peak signal-to-noise ratio (PSNR) evaluation algorithm has been widely used. In recent years, quality methods based on image engineering theory have been proposed: Wang et al. proposed the structural similarity (SSIM) evaluation method, which uses the grayscale, contrast, and structural statistics of images to improve the correlation coefficient (CC) evaluation index of the PSNR algorithm, and Sheikh proposed the information fidelity criterion (IFC) evaluation algorithm based on Gaussian modeling of natural images [6]. IFC evaluates image quality by the amount of information shared between the distorted image and the reference image and is a parameterless evaluation algorithm [7]. Several improvements to these algorithms have since been proposed: Zhang Lin's, based on visually salient image features; Ding Yong's, based on local statistical characteristics, to improve SSIM; Sheikh's, based on visual attributes of images, to improve IFC; and Hu Anzhou's, based on singular-value-weighted evaluation feature vectors, to improve M_SVD [8].

However, these improved algorithms have the following problems: (1) the complexity of the algorithm increases, which reduces real-time performance; (2) some evaluation indexes, such as measurement error and prediction correlation, deteriorate; (3) the algorithms rely heavily on parameter settings, and the sensitivity of each parameter reduces their stability and effectiveness [9].

Larson improves the index levels of the distortion evaluation algorithm through frequency-domain threshold-filtering preprocessing, while the SSIM algorithm improves the mean-weighted image similarity through local Gaussian weighted fusion preprocessing [10]. Compared with frequency-domain threshold filtering, local Gaussian weighting has the advantages of algorithmic simplicity and high real-time performance. Inspired by this, this paper studies the quality assessment of different image types built upon local Gaussian weighted fusion based on the SSIM algorithm [11]. The influence of parameters such as the filter window size and filter standard deviation on the performance of the SSIM algorithm, together with the overall algorithm framework, is shown in Figure 1.

3. Methodology

3.1. The Relevant Concepts of the Emerging Concept Filter

Image filtering, that is, suppressing the noise of the target image while preserving its detailed features as much as possible, is an indispensable operation in image preprocessing. Its effect directly determines the effectiveness and reliability of subsequent image processing and analysis.

Due to imperfections in imaging systems, transmission media, and recording equipment, digital images are often polluted by various kinds of noise during their formation, transmission, and recording. In addition, at some stages of image processing, noise is introduced into the result image when the input image is not as expected. Such noise often appears as isolated pixels or pixel blocks that produce strong visual artifacts. Generally, the noise signal is unrelated to the object under study; it appears as useless information and disturbs the observable information of the image. For digital image signals, noise manifests as extreme values, large or small, which act on the true gray values of image pixels through addition or subtraction, causing bright and dark point interference. This greatly reduces image quality and hampers subsequent work such as image restoration, segmentation, feature extraction, and image recognition. To construct an effective noise-suppression filter, two basic requirements must be considered: it must effectively remove the noise in the target and background, and at the same time it must preserve the shape, size, and specific geometric and topological features of the image targets.
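The two requirements above can be illustrated with a short sketch, assuming NumPy and SciPy are available; the synthetic image and noise parameters here are invented for illustration. Gaussian smoothing targets the additive noise, while a median filter removes the isolated extreme values while largely preserving the square target's edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(0)

# A synthetic "clean" image: a bright square target on a dark background.
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0

# Additive Gaussian noise plus a few isolated impulse ("salt") pixels.
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
idx = rng.integers(0, 64, size=(50, 2))
noisy[idx[:, 0], idx[:, 1]] = 1.0  # isolated extreme values

# Two classical suppressors: Gaussian smoothing for additive noise,
# median filtering for impulse noise.
smoothed = gaussian_filter(noisy, sigma=1.5)
despiked = median_filter(noisy, size=3)

mse = lambda a, b: float(np.mean((a - b) ** 2))
```

Both filters trade some edge sharpness for noise suppression; the median filter is the more faithful of the two on the impulse pixels.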

3.2. The Principle of Local Weighted Fusion of SSIM Algorithm

In current research, fitting is a fairly common technique. A large body of literature shows that feature extraction based on local weighting performs better than traditional feature extraction. Compared with this approach, neural networks have been able to achieve even better results, so this paper also employs a deep neural network.

The learning and training of deep neural networks can now be carried out on many platforms. The initial weights of a neural network are randomly generated, and subsequent optimization is completed automatically by the computer.

The principle of local weighted fusion in the SSIM algorithm is shown in Figure 2. SSIM calculates the local grayscale weighted similarity characteristic l (x, y), the local contrast weighted similarity characteristic c (x, y), and the structural weighted similarity characteristic s (x, y) of the image, then assigns corresponding evaluation weights to these three statistical characteristics and performs mean fusion to obtain the evaluation result for the distorted image [12].

The definitions of l (x, y), c (x, y), and s (x, y) are as follows:

l (x, y) = (2 μ_x μ_y + C1) / (μ_x^2 + μ_y^2 + C1),
c (x, y) = (2 σ_x σ_y + C2) / (σ_x^2 + σ_y^2 + C2),
s (x, y) = (σ_xy + C3) / (σ_x σ_y + C3),

where the subscripts x and y denote the reference image and the distorted image, and C1, C2, and C3 are small constants that stabilize the division. The local mean μ, standard deviation σ, and cross-correlation σ_xy are Gaussian-weighted statistics:

μ_x = Σ_i w_i x_i,
σ_x = ( Σ_i w_i (x_i − μ_x)^2 )^{1/2},
σ_xy = Σ_i w_i (x_i − μ_x)(y_i − μ_y),

where w_i represents the weighting coefficients of the Gaussian filter and N represents the scale of the square N × N Gaussian filter window over which the sums run. These definitions show that the SSIM algorithm uses local Gaussian weighting coefficients to measure the grayscale and structural similarity of the image. The overall similarity SSIM (x, y) is defined as

SSIM (x, y) = [l (x, y)]^α · [c (x, y)]^β · [s (x, y)]^γ,

where α, β, and γ are the weight exponents corresponding to l (x, y), c (x, y), and s (x, y).
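As a rough illustration of these formulas, the following sketch computes a Gaussian-weighted SSIM map with NumPy and SciPy. The constants K1 = 0.01, K2 = 0.03, C3 = C2/2, and the exponents α = β = γ = 1 follow commonly published SSIM defaults and are assumptions here, not necessarily the exact settings of this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssim_map(x, y, sigma=1.5, L=255.0, K1=0.01, K2=0.03):
    """Local Gaussian-weighted SSIM map for two grayscale images (sketch)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    f = lambda img: gaussian_filter(img.astype(np.float64), sigma)
    mu_x, mu_y = f(x), f(y)
    var_x = f(x * x) - mu_x ** 2          # local Gaussian-weighted variances
    var_y = f(y * y) - mu_y ** 2
    cov_xy = f(x * y) - mu_x * mu_y       # local cross-correlation
    sig_x = np.sqrt(np.maximum(var_x, 0))
    sig_y = np.sqrt(np.maximum(var_y, 0))
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)   # grayscale
    c = (2 * sig_x * sig_y + C2) / (var_x + var_y + C2)         # contrast
    s = (cov_xy + C3) / (sig_x * sig_y + C3)                    # structure
    return l * c * s  # alpha = beta = gamma = 1

def ssim(x, y):
    """Mean-fused SSIM score."""
    return float(np.mean(ssim_map(x, y)))
```

An identical pair of images scores exactly 1, and any distortion pulls the score below 1.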

The SSIM algorithm only provides experimental results for Gaussian filter weighting, while the filters commonly used for images include Average, Disk, Gaussian, Laplacian, Laplacian of Gaussian (LoG), Motion Blur, edge enhancement (Prewitt), and edge extraction. Each filter has control parameters, and different types of filters usually perform differently in different image processing fields. Therefore, based on the LIVE image quality evaluation database, this paper experimentally studies the effects of different types of filter weighting operators and their parameters on the performance of the SSIM algorithm, following the specification of the Video Quality Experts Group (VQEG) [13].

3.3. Effect of Different Types of Filters on the Performance of SSIM Algorithm

The experiments tested the Root Mean Square Error (RMSE) of the prediction, the CC, and the Spearman Rank Order Correlation Coefficient (SROCC) against the original data [14]. The bold numbers in the table indicate a good level for an index, and the overall evaluation summarizes all distortion types. Compared with other filters, the Average, Disk, and Gaussian filters evaluate distorted images with higher stability. The SROCC index shows that the Average and Gaussian filters perform better; the RMSE and CC indexes show that the Gaussian filter performs better than the Average filter [15].
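The three indexes can be computed as in the following sketch, assuming SciPy; the function name is ours.

```python
import numpy as np
from scipy import stats

def evaluation_indexes(predicted, mos):
    """RMSE, Pearson CC, and Spearman SROCC between objective quality
    scores and subjective MOS values (the three indexes tested above)."""
    predicted = np.asarray(predicted, dtype=float)
    mos = np.asarray(mos, dtype=float)
    rmse = float(np.sqrt(np.mean((predicted - mos) ** 2)))
    cc = float(stats.pearsonr(predicted, mos)[0])     # linear correlation
    srocc = float(stats.spearmanr(predicted, mos)[0]) # rank correlation
    return rmse, cc, srocc
```

A perfect predictor gives RMSE 0 and CC = SROCC = 1; SROCC stays at 1 under any monotone rescaling of the predictions, which is why it is reported alongside CC.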

3.4. Effect of Gaussian Filter Window Scale on the Performance of SSIM Algorithm

Compared with other filters, Gaussian filter weighting has better overall performance. The effect of a Gaussian filter window scale of N pixels × N pixels on SSIM performance is further investigated experimentally, and the results are shown in Figure 2. The SSIM algorithm shows the best RMSE, CC, and SROCC performance when the window scale is between 80 pixels × 80 pixels and 120 pixels × 120 pixels [16]. The performance at the extreme points of the Gaussian filter window scale provides important guidance for choosing between algorithms in terms of complexity control and evaluation speed, as shown in Figure 3.

3.5. Effect of Gaussian Filter Data Standard Deviation on the Performance of SSIM Algorithm

In the experiments in Section 3.3, the standard deviation of the Gaussian filter was σ = 1.5. In fact, the shape and visual effect of the Gaussian filter differ for different values of σ [17]. Figure 4 shows that the evaluation indexes RMSE, CC, and SROCC reach their best levels when the filter window N ≥ 24, which corresponds on average to the same standard deviation value (σ ≈ 2). When the window scale becomes smaller, all index levels decrease to some extent, and the rate of decrease accelerates when σ < 2, as shown in Figure 4.

The classical PSNR evaluation algorithm based on image pixel statistics has been widely used because of its simplicity and speed, but compared with the subjective perception of the human eye, it suffers from low correlation and large prediction error [18]. Therefore, based on the experimental findings, this paper improves the PSNR and M_SVD algorithms using local Gaussian weighted fusion. The improved algorithm first partitions the image into blocks of 11 pixels × 11 pixels, then evaluates each subblock with a local Gaussian weighted SNR and M_SVD, and finally performs mean fusion of the subblock results to obtain the final image quality evaluation [19].

The local peak SNR evaluation algorithm PSNR (L) and the local singular value decomposition evaluation algorithm M_SVD (L) for an image subblock L are built from the singular value decomposition X = U S V^T, where U and V are the matrices of left and right singular vectors of the image block and S (i), Ŝ (i) are the singular values of the reference subblock and the distorted subblock, respectively.

Based on the image subblock L, the MSE (L) of the PSNR evaluation algorithm and the D (L) of the M_SVD evaluation algorithm are improved by fusing local Gaussian weighting, giving

MSE (L) = Σ_i w_i (x_i − y_i)^2,
D (L) = ( Σ_i w_i (S (i) − Ŝ (i))^2 )^{1/2},

where w_i are the local Gaussian weighting coefficients.

The improved evaluation results of PSNR (L) and M_SVD (L) for the image subblock L are then

PSNR (L) = 10 log_10 (255^2 / MSE (L)),
M_SVD (L) = D (L).

The mean fusion of PSNR (L) and M_SVD (L) over all subblocks yields the improved signal-to-noise ratio evaluation algorithm and the improved singular value decomposition evaluation algorithm for image quality:

PSNR = (1/T) Σ_{L=1}^{T} PSNR (L),
M_SVD = (1/T) Σ_{L=1}^{T} M_SVD (L),

where T is the number of subblocks of the image. Table 1 compares the evaluation performance of the original PSNR and M_SVD algorithms, their improved versions, and the SSIM algorithm. The three index levels of the improved PSNR algorithm are greatly improved: the SROCC index level improves by 3.78%, essentially matching the SSIM algorithm, while the RMSE and CC index levels improve by 2.02% and 2.40%, clearly exceeding the SSIM algorithm. The RMSE, CC, and SROCC of the improved M_SVD algorithm improve by 4.99%, 0.67%, and 1.78%, respectively.
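A minimal sketch of this block-wise scheme is given below, assuming NumPy and SciPy. Realizing the local Gaussian weighting by smoothing the squared-error field before tiling is our simplification for illustration, not necessarily the exact formulation above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blockwise_gaussian_psnr(ref, dist, block=11, sigma=1.5, L=255.0):
    """Sketch of the improved PSNR: split the image into block x block
    tiles, Gaussian-weight the squared error, compute a per-tile PSNR,
    and mean-fuse the tile scores."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    # Local Gaussian weighting of the squared-error field.
    wse = gaussian_filter((ref - dist) ** 2, sigma)
    H, W = ref.shape
    scores = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            mse_l = float(np.mean(wse[i:i + block, j:j + block]))
            scores.append(10 * np.log10(L ** 2 / max(mse_l, 1e-12)))
    return float(np.mean(scores))  # mean fusion over the T subblocks
```

Identical images score very high (the 1e-12 floor avoids division by zero), and heavier distortion lowers the fused score, as expected of a PSNR-style measure.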

Table 2 shows the evaluation results of the algorithms for images at different distortion levels. The mean opinion score (MOS) of the distorted images follows the subjective test specification of the VQEG. From Table 2 it can be seen that the improved PSNR algorithm has good stability and meets or exceeds the SSIM algorithm, while the improved M_SVD algorithm shows a more obvious performance improvement for images with average and high distortion levels [20].

3.6. Real-Time Testing of SSIM Algorithm Based on Gaussian Window Scale

The SSIM algorithm evaluation time based on different Gaussian filter window scales is tested experimentally, and the results are shown in Table 3.

For the same filter, it can be seen that as the Gaussian filter window scale increases, the evaluation speed of the SSIM algorithm decreases slightly but remains high compared with other algorithms.

4. Real-Time Testing of the Improved Algorithm Based on Gaussian Weighting

The experiments tested the running speed of the improved algorithms after Gaussian weighting, and the results are shown in Table 3. The evaluation speed of the improved PSNR algorithm is comparable to that of the SSIM algorithm and still maintains relatively high time performance; the running time of the improved M_SVD algorithm increases by only 6.9%, which still gives it a large advantage over other evaluation algorithms, as shown in Table 4.

4.1. Local Gaussian Mixture Model with the Hierarchical Bayesian Model

In order to apply spatial constraints to images and exploit the coherence of neighborhood blocks, it is assumed that similar blocks in a neighborhood can be derived from a single multivariate Gaussian probability distribution with a specific mean and covariance: each block is associated with a Gaussian model whose parameters are computed from the similar blocks selected in its local nearest-neighbor search window. The nonlocal Bayesian algorithm uses this idea and applies it to the image denoising problem. In this paper, we focus on how the local block model can be applied to more general inverse problems, not only denoising. The main difficulty of this application is the estimation of the model, especially when a high percentage of pixels is missing in the degraded image, which makes the problem seriously ill posed.

In this paper, we propose a hierarchical Bayesian local Gaussian mixture model (HBLGMM); the general idea of the proposed model is given in Figure 5.

As shown in Figure 5, similar blocks are found in the local window using the L2 norm in order to make full use of the coherence of neighboring blocks. It is assumed that locally similar image blocks satisfy a Gaussian prior distribution with mean μ and covariance matrix Σ. A Bayesian hyperprior is introduced for these parameters: a Normal-Wishart prior is placed on the block mean μ and covariance Σ, and a joint maximum a posteriori formulation is used to estimate the image blocks and the model parameters, with alternating minimization updates making the local estimation of the Gaussian statistics more stable. In the aggregation from blocks back to the image, the similarity of blocks to the different local Gaussian distributions over overlapping regions is used to calculate Gaussian weights based on the Mahalanobis distance, thereby extending the distribution to a Gaussian mixture model. This allows the accuracy of local model estimation to be used for generic recovery problems, especially for missing pixel values (the filling problem).
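The Mahalanobis-based aggregation weight can be sketched as follows, assuming NumPy; the bandwidth h and the regularization eps are illustrative parameters of ours, and the exponential weighting form is one common choice rather than the paper's exact formula.

```python
import numpy as np

def mahalanobis_weight(patch, mu, cov, h=1.0, eps=1e-6):
    """Aggregation weight of a patch under a local Gaussian model
    (mu, cov): exp(-d_M^2 / (2 h^2)), where d_M is the Mahalanobis
    distance of the patch to the distribution."""
    v = patch.ravel() - mu.ravel()
    cov_reg = cov + eps * np.eye(cov.shape[0])  # regularize for inversion
    d2 = float(v @ np.linalg.solve(cov_reg, v)) # squared Mahalanobis distance
    return float(np.exp(-d2 / (2.0 * h ** 2)))
```

A patch at the model mean gets weight 1, and weights decay as patches move away from the local Gaussian, so overlapping-region estimates from better-matching local models dominate the mixture.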

The observed image is decomposed into I mutually overlapping blocks. Each block can be considered a realization of the random variable Y and can be expressed as

Y = A X + N,

where A is the degradation operator, X is the original block to be estimated, and N is the additive noise term, usually modeled as a Gaussian distribution N(0, Σ). The conditional distribution of Y for a given X is then the Gaussian N(A X, Σ).

The noise covariance matrix Σ is assumed to be diagonal and can represent constant variance, spatially varying variance, or variance depending on the pixel values (approximately Poisson noise).
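The per-block degradation model and its diagonal-covariance likelihood can be sketched as follows, assuming NumPy; the block size, mask rate, and noise level are illustrative, and the mask plays the role of the (diagonal) degradation operator A.

```python
import numpy as np

rng = np.random.default_rng(0)

# Degradation model y = A x + n on one vectorized 8 x 8 block:
# A is a 0/1 masking operator (diagonal), n ~ N(0, sigma^2 I).
n = 8 * 8
x = rng.uniform(0, 255, n)                         # original block X
mask = (rng.uniform(size=n) > 0.5).astype(float)   # ~50% observed pixels
sigma = 10.0
y = mask * x + rng.normal(0.0, sigma, n)           # observed block Y

def log_likelihood(y, x, mask, sigma):
    """Gaussian log-likelihood of y given x under the diagonal
    constant-variance noise model N(A x, sigma^2 I)."""
    r = y - mask * x
    return float(-0.5 * np.sum(r ** 2) / sigma ** 2
                 - 0.5 * len(y) * np.log(2 * np.pi * sigma ** 2))
```

The likelihood is highest near the true block and drops for blocks that disagree with the observed pixels, which is what the MAP estimation in the model exploits.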

4.2. Image Recovery Algorithm

The complete image recovery algorithm for the local Gaussian mixture model with the hierarchical Bayesian prior is shown in Algorithm 2, with the number of iterations set to ITERS. The algorithm requires two stages: minimizing f by Algorithm 1 and estimating the hyperparameters. Estimating these parameters relies on the image obtained in the first stage by aggregating all blocks.

4.3. Computational Complexity

In this chapter, results for two image restoration problems, image denoising and image filling, are presented. The model proposed in this paper is verified by comparison with several strong methods. (For the filling problem, interpolation is performed in two cases: one for randomly observed pixels, and one for the zooming problem, which can be viewed as interpolation for uniformly observed pixels.) The code for the comparison methods is taken from the authors' public releases. The experimental images are partly classical synthetic images, such as Lena and Barbara, and partly test images selected from the BSDS dataset; some of them are shown in Figure 5, where the first is a classical synthetic image and the second a BSDS image. PSNR is used as the objective measure of the algorithms and is defined as

PSNR = 10 log_10 ( 255^2 / ( (1/m) ‖x − x̂‖^2 ) ),

where x is the original image, x̂ is the estimated image, and m is the number of pixels in the image. The higher the quality of the estimated image, the higher the PSNR value. The final PSNR value is obtained by averaging over 10 runs of each experiment.

4.4. Parameter Selection

The main parameters involved in the algorithm are γ, ν, the prior mean, and the prior covariance of the Normal-Wishart distribution, together with the parameter ξ used in the calculation of the Gaussian weights.

4.4.1. Determination of γ and ν

As seen in equation (16), the calculation of the mean μ involves the mean estimate of the similar blocks and the prior mean. The parameter γ reflects the confidence placed in the prior, so its value should be a compromise between the confidence of the prior and the information provided by the similar blocks. As the number of similar blocks increases and the number of known pixels in each block increases, the information provided by the similar blocks improves; γ is therefore computed from these quantities against a threshold TH. A similar rule, offset by n, can be adopted for ν, because the Normal-Wishart prior requires ν > n − 1. This simple rule has been shown experimentally to be effective. It is aimed mainly at the filling problem, and its generalization to other, more general applications needs further investigation, as shown in Figure 5.

4.4.2. Determination of the Prior Mean and Covariance

The prior mean and covariance can be computed from the set Co of similar blocks by the classical maximum likelihood estimation method:

μ̂ = (1/|Co|) Σ_{y_i ∈ Co} y_i,
Σ̂ = (1/|Co|) Σ_{y_i ∈ Co} (y_i − μ̂)(y_i − μ̂)^T.
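This maximum likelihood step can be sketched directly, assuming NumPy; `ml_gaussian` is our illustrative name for the estimator.

```python
import numpy as np

def ml_gaussian(blocks):
    """Classical maximum-likelihood estimates of the mean and covariance
    from a stack of similar blocks (one vectorized block per row)."""
    mu = blocks.mean(axis=0)
    centered = blocks - mu
    cov = centered.T @ centered / blocks.shape[0]  # ML (1/M) normalization
    return mu, cov
```

With enough similar blocks the estimates converge to the true local statistics, which is why the hierarchical prior matters most when few similar blocks or few known pixels are available.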

4.4.3. Setting of ξ

In order to obtain good recovery performance, a small adjustment of ξ in equation (17) is necessary. For example, in the denoising experiments, different values of ξ are set for different noise levels, such as ξ = 0.015 and ξ = 0.01. In the filling experiments, ξ = 0.01 is chosen by the same small adjustment, and the PSNR value achieves a certain improvement.

4.5. Image Denoising

In the experiments, the original images are contaminated with additive Gaussian noise with standard deviations of 30, 40, 50, 60, and 70; in the denoising experiments, H is the identity matrix because there are no unknown pixels to interpolate. In Table 5, the method of this paper (labeled HBLGMM) is compared with denoising algorithms such as BM3D, LSSC (learned simultaneous sparse coding), NLB, and EPLL. The values in the table are the average PSNR values (in dB) of all tested images at each noise standard deviation, and the best results are shown in bold.

At all noise levels, the method in this paper is on average superior to the global GMM method without local constraints (EPLL), as well as to the GMM-based method NLB. As in other global GMM methods, the mean vector of each cluster is subtracted from all observation blocks in the group and added back after the MAP estimation is completed; the main purpose of this is to accelerate the execution of the algorithm. The image denoising performance of earlier GMM-based methods such as PLE and EPLL is generally lower than that of sparsity-based methods, yet the denoising results of this paper are on average better than the sparsity-based methods for noise standard deviations from 30 to 60. The superiority of the LSSC method in PSNR under high noise is related to the advantage of its internal dictionary training, but it is worth noting that the online dictionary-learning time of the LSSC method is too long and its practical usability is poor. An example comparison is shown in Table 5, where noise with a standard deviation of 30 is added to the Barbara image; it is clear from the enlarged area that the method in this paper restores the texture of the trousers more naturally.

Table 5 also compares the denoising results of the test images in the BSDS set under high noise (noise standard deviation of 70). From the comparison images, the BM3D and NLB methods have relatively poor visual quality on this dataset, and the face restoration is not natural enough. The BM3D method produces ripples in the hat area, the EPLL method is not smooth enough there, and the LSSC method, despite its better PSNR value, introduces many artifacts in the face and hat areas, giving a poor overall appearance. The method in this paper is preferable in both PSNR and visual perception.

4.6. Image Filling

Two application scenarios are designed in the image filling experiment: one uses a random masking operator (for random filling); the other uses a downsampling masking operator on a uniform grid (for image enlargement). In the random filling experiment, random masking operators with pixel missing rates of 20%, 30%, 40%, 50%, 60%, 70%, and 80% are applied to the original image. The performance of the proposed method is compared with MCA (morphological component analysis), FoE (Field of Experts), KR (kernel regression), BP (beta process), and PLE; all methods use image blocks of size 8 × 8. The parameter ξ used to calculate the aggregation weights is set to 0.01. The results of comparing this method with the other image filling methods are presented in the table, where the values are the average PSNR values of all tested images at the different data loss rates. The method in this paper outperforms the comparison methods at all loss rates.
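The two masking operators can be sketched as follows, assuming NumPy; the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(shape, loss_rate):
    """Random masking operator: each pixel is dropped independently
    with probability loss_rate (the random-filling scenario)."""
    return (rng.uniform(size=shape) >= loss_rate).astype(float)

def grid_mask(shape, step=2):
    """Downsampling masking operator on a uniform grid: keep every
    step-th pixel in each direction (the image-enlargement scenario)."""
    m = np.zeros(shape)
    m[::step, ::step] = 1.0
    return m

img = rng.uniform(0, 255, (64, 64))
degraded_random = random_mask(img.shape, 0.8) * img  # 80% pixels lost
degraded_grid = grid_mask(img.shape, 2) * img        # 2x zoom problem
```

Although both masks may retain a comparable fraction of pixels, the regular grid leaves no randomly placed anchors inside texture regions, which is why uniform sampling is the harder recovery problem discussed below.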

Figures 6 and 7 give a comparison example of local Barbara image restoration with 80% random pixel loss, focusing on the results of filling smooth and textured areas of the image. As can be seen from the figures, the method in this paper is more effective than the others in both textured and smooth regions. A special case of image filling is the image resolution scaling problem, which can be seen as interpolation of a uniformly sampled image; however, image magnification is more challenging than filling randomly observed pixels. Because of the regular sampling, many previously proposed algorithms fail to recover the underlying texture well, and random sampling generally yields better results than uniform sampling. The algorithm in this paper is more robust to the recovery errors caused by uniform sampling.

Table 6 shows a comparison of the visual quality of random filling at an 80% loss rate. In the experiment, the real high-resolution (HR) image is downsampled by a factor of 2 to obtain the low-resolution (LR) image; after upsampling the LR image, a uniform sampling mask is applied to obtain the degraded image for filling recovery. The method in this paper is compared with EPLL, Patch Nonlocal, NEDI (new edge-directed interpolation), NARM (nonlocal autoregressive modeling), Bicubic, and other methods.

From Table 6, we can see that the method in this paper produces clearer reconstruction results than the other methods and recovers the real texture better.

4.7. Analysis

With global a priori modeling using a Gaussian mixture model, a fixed set of clusters cannot accurately represent blocks that appear rarely in an image, such as edges or textures. Methods like PLE are actually semilocal (operating on 128 × 128 regions) and do not solve this problem. EPLL, on the other hand, is a Gaussian mixture model with 200 components learned from 2 million natural image blocks; it has more mixture components, but its strategy is less efficient than the PLE method in the denoising task. In all the above experiments, HBLGMM is superior to the EPLL and PLE methods in the denoising, filling, and zooming problems; in terms of visual quality, the details of the image are better reconstructed and the image is clearer. The method in this paper performs estimation from similar blocks in a local window and reinforces the self-similarity assumption by considering the local neighborhood. This strategy restricts the model to a neighborhood and therefore makes the estimation more robust. Block similarity is measured by the similarity of the Gaussian distributions, and the aggregation weights take into account the similarity of the image blocks to the estimated Gaussian distribution, combined with similarity in the image spatial domain, so that more similar blocks are clustered together by the L2 norm metric. The use of a different weight for each block in the clustering plays an important role in improving the image recovery quality.

5. Conclusion

Aiming at the problem of image quality evaluation and fusion, this paper studies the performance of local weighted fusion evaluation with various filters based on the SSIM algorithm. The results show that the Gaussian filter achieves the best evaluation performance, which can be further improved by properly adjusting the filter window scale and standard deviation. Based on the optimized Gaussian filter weighting, the PSNR evaluation algorithm and the singular value decomposition (M_SVD) evaluation algorithm are improved, respectively. The experimental results show that the SROCC, CC, and RMSE indexes of the improved PSNR algorithm increase by 3.78%, 2.40%, and 2.02%, respectively, reaching or exceeding the SSIM algorithm, and the corresponding indexes of the improved M_SVD algorithm improve by 1.78%, 0.67%, and 4.99%, respectively. The improved algorithms thus maintain good evaluation stability and real-time performance. In future work, the optimization of fused local image quality evaluation results will be studied in combination with the perceptual characteristics of the human visual system.

In this paper, we also introduce a hierarchical Bayesian model into the Gaussian local prior modeling of image blocks and use it to solve image restoration problems, making the solution of inverse problems with missing pixels more stable, similar to block estimation in a local window, and extending the local Gaussian model to a Gaussian mixture model framework through an aggregation weighting method based on the Mahalanobis-distance similarity of the Gaussian distributions. With this approach, image restoration can be performed in a unified framework, and the proposed model achieves good results on image denoising, image filling, and related problems. Trying other multivariate distributions for image blocks, such as the multivariate Laplacian, and using different geometric distance measures will be the subject of future research.

Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the School of Shijiazhuang Institute of Technology Hebei Institute of Communication and Yihualu Integrated Technology Co.