Abstract
Current progressive fusion methods for digital images have poor denoising performance, which degrades image quality after progressive fusion. Therefore, a new progressive fusion method for digital images based on the discrete cosine transform was proposed, and its effectiveness was verified through experiments. The experimental results show that the PSNR of images fused by the proposed method is always higher than 42.13 dB, exceeding the comparison methods, and the visual comparison of fusion results also shows higher image quality. In terms of fusion time, the research method is faster than the comparison methods when the data volume is between 10 and 100, and in the comparison of structural similarity, the structural similarity of images fused by the research method is always higher than 0.83. Overall, the proposed fusion method yields higher image quality and is effective for progressive digital image fusion, which is of great significance for practical digital image fusion.
1. Introduction
Digital images are the visual foundation of human perception of the digital world, possessing high flexibility, strong universality, and good repeatability, and they are one of the main means by which people obtain and disseminate information through the Internet. Digital image fusion interprets multiple images of the same target scene simultaneously and integrates their information into a single image containing the feature information of all input images, thereby obtaining a more accurate description of the scene. It is a branch of data fusion, and its main purpose is to improve the clarity of the fused image through effective processing of the redundant and complementary information between multiple images. Progressive image fusion can effectively reduce the incompleteness and uncertainty of a single information source, and thus the loss of image information, through the complementarity of different image information [1–3]. It is widely used in image feature extraction, classification, and information monitoring, and has therefore attracted wide attention from relevant scholars. In general, according to the types of source images involved, progressive image fusion can be roughly divided into the fusion of infrared and visible images, the fusion of multispectral and panchromatic images, the fusion of multifocus images, and the fusion of medical images [4–7].
Aiming at the key problem of medical image fusion [8] in the mitigation of COVID-19 in Hubei Province, a new progressive fusion method was proposed based on a detailed analysis of air quality in Wuhan, thus assisting the mitigation and control of the epidemic. A path analysis model based on regression analysis was proposed to address the problem of progressive image fusion for air pollution in Anhui Province [9], providing data support for controlling air quality. A fusion method based on wavelet transform and gradient field was proposed to solve the problems of serious loss of image information and low image clarity after progressive digital image fusion [10]. The detailed information of the image is enhanced by the top-hat transform, and the image is decomposed and reconstructed by wavelets. The image information is then analyzed according to the reconstruction results, the fusion factor is determined, and the gradient field of the digital image is obtained. The gradient-field image is combined with the wavelet transform to complete the progressive fusion of the digital image, but actual tests show that the quality of images fused by this method is low. An infrared and visible image fusion method based on guided filtering and convolutional sparse representation has also been proposed [11, 12]. Using a guided filter and a Gaussian low-pass filter, the source image is decomposed into a low-frequency approximation part, a strong edge part, and a high-frequency detail part, and the high-frequency detail part is filtered by nonsubsampled directional filtering to obtain the high-frequency directional detail part. A fusion rule based on local energy is applied to the low-frequency approximation part, a fusion rule based on convolutional sparse representation is applied to the strong edge part, and an improved pulse-coupled neural network fusion rule is applied to the high-frequency directional detail part.
The corresponding fused parts are obtained, and the final fused image is obtained by inverse transformation. However, the method is too complex, and image fusion takes a long time. An infrared and visible image fusion method based on rolling guided filtering has been proposed [13, 14]. This method makes full use of the edge-preserving and local brightness-preserving characteristics of the rolling guided filter. After decomposing the input image into a base layer and a detail layer through mean filtering, the saliency map of the input image is obtained by combining the rolling guided filter and the Gaussian filter, and the saliency map is used to guide the fusion weights. Finally, the fused image is obtained by reconstructing the fused base and detail sublayers. However, the quality of the fused image is poor, and the practical application effect is not ideal.
The above analysis shows that the denoising effect of current digital image progressive fusion methods is poor, which leads to image blur after progressive fusion. Therefore, a digital image progressive fusion method based on the discrete cosine transform is proposed. In the design of the method, a same-optical-axis design is used to improve image registration performance, and the histogram equalization method is selected to enhance the collected images. This promotes a balanced distribution of gray values in the image, improves the quality of low-contrast images, and is more conducive to subsequent image or video processing. The proposed method is therefore innovative in solving the problem of poor denoising performance in digital image progressive fusion and can also provide a data reference for progressive image fusion in related fields. The structure of the paper is shown in Figure 1.

2. Design of Digital Image Progressive Fusion Method Based on Discrete Cosine Transform
The design of the digital image progressive fusion method based on the discrete cosine transform is analyzed in eight parts, as shown in Figure 2.

2.1. Image Feature Extraction
In the design of the digital image progressive fusion method, the SIFT (scale-invariant feature transform) algorithm is first used to extract image features. The SIFT algorithm detects features in the image scale space, determines the position and scale of the key points, and then uses the principal gradient direction of each key point's neighborhood as the direction of the key point, thereby constraining the SIFT operator in both scale and direction. The SIFT algorithm can solve the feature-point matching problem under rotation, scaling, translation, projective (viewpoint) transformation, illumination changes, object occlusion, cluttered scenes, and noise. The image feature extraction process is shown in Figure 3.

In the image feature extraction process, the key points in the scale space are first detected: all scales and pixel positions of the image are searched, and the Gaussian difference equation is used to detect potential feature points, which are invariant to scaling and rotation. Next, the key points are localized, and the position and scale of each candidate key point are determined. To ensure invariance to the various transformations, a feature description operator is generated for each key point from the gradient information of the region around the key point at its scale. The descriptor eigenvectors of the key points are thus obtained, and image registration can be carried out in the next step.
2.2. Image Registration
After obtaining the descriptor eigenvectors of the key points with the SIFT algorithm, the key points of the two images must be matched. The descriptor eigenvectors of the feature points on the registration image and the reference image are compared, and the Euclidean distance between the two feature vectors is calculated. If the Euclidean distance between two feature points is the shortest and is less than 0.7 times the second-shortest distance, the two feature points are considered a corresponding matching point pair, and the matching point set of the two images can then be obtained [15–18].
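The ratio-test matching described above can be sketched as follows (a minimal numpy sketch assuming brute-force nearest-neighbour search; the 0.7 factor is the ratio from the text, and the function name is illustrative):

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.7):
    """Match descriptors from image A to image B with the ratio test.

    A pair is accepted when the nearest neighbour in B is closer than
    `ratio` times the second-nearest neighbour (0.7 in the text).
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in B
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```

For real SIFT descriptors the loop would typically be replaced by a k-d tree, but the acceptance rule is the same.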
In the design of the image progressive fusion method, a same-optical-axis design is used to improve image registration performance. In theory, only one group of corresponding pixels in the digital images needs to be found to complete the registration, so the registration is fast and accurate [19]. However, in actual image registration, there may be influencing factors such as angle errors, so the registration accuracy should be increased as much as possible. Therefore, during registration, the collected images are placed in a spatial coordinate system. The plane with the same graphic and image characteristics is adjusted to the shooting angle consistent with the image imaging position and used as the registration image to register with the collected image, thereby realizing the dynamic conversion from image-to-graphics registration to image-to-image registration. The transformation between the two images is assumed to be a perspective transformation, that is, the projective transformation of a central projection [20].
Equation (1) contains eight parameters of the perspective transformation, which determine how pixel positions in one image map to the other; solving for these eight parameters yields eight equations in the pixel positions. To solve equation (1), four groups of matching pixels must be found in the two collected images. After the matching point set of the two images is obtained, mismatches are inevitable, and if they are not removed, they will affect the effect of progressive image fusion. Therefore, a distance threshold is computed from the feature points of the original image: if the distance between matched feature points is less than the threshold, the matching feature points are called interior points, and the set of all interior points is called a consistent set. Random groups of matching points are repeatedly substituted into equation (1), and the parameter estimate that yields the largest consistent set is retained [21, 22].
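Equation (1) itself is not reproduced in this copy of the text. Assuming the standard eight-parameter perspective transform x' = (h1·x + h2·y + h3)/(h7·x + h8·y + 1), y' = (h4·x + h5·y + h6)/(h7·x + h8·y + 1), the parameters can be recovered from four matching point pairs by solving a linear system, as this sketch illustrates (the direct-linear formulation and all names are illustrative, not necessarily the paper's exact derivation):

```python
import numpy as np

def solve_perspective(src, dst):
    """Solve the 8 parameters h1..h8 of the perspective transform
        x' = (h1*x + h2*y + h3) / (h7*x + h8*y + 1)
        y' = (h4*x + h5*y + h6) / (h7*x + h8*y + 1)
    from four matching point pairs src[i] -> dst[i]."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # each pair contributes two linear equations in h1..h8
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)[0]

def apply_perspective(h, pt):
    """Map one point through the estimated transform."""
    x, y = pt
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)
```

In practice this estimation step would sit inside the random-sampling loop described above, re-solved for each candidate point group.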
2.3. Image Denoising Based on Discrete Cosine Transform
DCT (discrete cosine transform) is a transform related to the Fourier transform that is mainly used to compress data or images. It converts spatial signals to the frequency domain and has good decorrelation performance. The DCT itself is lossless, but in image coding it creates good conditions for the subsequent quantization and Huffman coding. It is similar to the DFT (discrete Fourier transform) but uses only real numbers: a discrete cosine transform is equivalent to a discrete Fourier transform of about twice its length performed on a real even function (because the Fourier transform of a real even function is still a real even function). In some variants, the input or output positions are shifted by half a unit (there are 8 standard types of DCT, of which 4 are common). After image matching is completed, the discrete cosine transform is used to denoise the image; its purpose is to suppress or eliminate image noise to improve image quality. The process is as follows: let the image obtained by the DCT have the same width and height as the original image. The result of the DCT is as follows:
In equation (2), the discrete cosine transform coefficients appear, and the normalization factor is defined as follows:
Combining equations (2) and (3), it can be seen that the DCT has low data correlation and high energy concentration. After the DCT, a set of DCT coefficients is obtained: some coefficients represent most of the image information, such as the smooth parts and background areas, while the others represent the details and noise of the image [16]. For an image corrupted by noise, small blocks of the image are generally taken and converted into the DCT domain, threshold shrinkage is applied to the DCT coefficients, and the denoised image blocks are finally obtained by the inverse DCT. However, the denoised image will be weakened and blurred by this processing, so the denoised image must be enhanced [23, 24].
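The block-wise DCT denoising step can be sketched as follows (a minimal hard-threshold shrinkage on a single block, using an explicitly constructed orthonormal DCT-II matrix with the c(u) normalization of equation (3); the choice of threshold value is an assumption, since the text does not specify a shrinkage rule):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (c(u) normalization of eq. (3))."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct_denoise(block, threshold):
    """Transform a block to the DCT domain, hard-threshold the small
    coefficients (assumed to carry mostly noise), and invert."""
    D = dct_matrix(block.shape[0])
    E = dct_matrix(block.shape[1])
    coeff = D @ block @ E.T                    # forward 2-D DCT
    coeff[np.abs(coeff) < threshold] = 0.0     # shrink small coefficients
    return D.T @ coeff @ E                     # inverse 2-D DCT
```

With `threshold = 0` the round trip is exact, which reflects the losslessness of the transform itself noted above; denoising comes entirely from the shrinkage step.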
2.4. Image Enhancement
The main purpose of image enhancement is to increase the contrast of the image by adjusting the gray values of its pixels, making the information contained in the image richer without damaging the original content and structure of the image [25]. Therefore, in this study, the histogram equalization method is selected to enhance the collected images. Histogram equalization is a method in image processing that adjusts contrast using the image histogram so that brightness is better distributed across the histogram. It promotes a balanced distribution of gray values in the image and improves the quality of low-contrast images, which is more conducive to subsequent image or video processing. Histogram equalization achieves a more even distribution of gray values by spreading out the heavily populated gray levels and merging the sparsely populated ones, so that the number of pixels at each gray level is as equal as possible [26]. This enlarges the dynamic range of the image, raises its contrast, and improves indicators such as the image information entropy.
After the image gray value is dispersed, the information entropy of the image increases obviously, which lays a good foundation for the subsequent image fusion, so that the fused image contains more useful information.
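The histogram equalization described above can be sketched as a standard CDF-based remapping (the clipping of unused low levels is an implementation detail, not from the text):

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization of a gray-level image (values in 0..levels-1).

    The cumulative histogram is used as the gray-level mapping, which
    spreads heavily populated levels apart and merges sparse ones."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0][0]
    # classic mapping: scale the CDF into the full gray-level range
    mapping = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    mapping = np.clip(mapping, 0, levels - 1).astype(img.dtype)
    return mapping[img]
```

Applied to a low-contrast image whose values occupy only a few adjacent levels, the output spans the full 0–255 range, which is exactly the gray-value dispersion described above.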
2.5. Image Segmentation
The pixels with the same gray value attribute in the image are clustered by fuzzy clustering, and then, each type of pixel is calibrated to achieve image segmentation [27]. The fuzzy c-means clustering algorithm is essentially a kind of fuzzy objective function method. If each pixel of an image is regarded as a sample point of a data set, and the characteristics of each pixel (such as gray value feature) are taken as the characteristics of sample points, then the following results can be obtained:
When equation (4) is satisfied, with each membership value lying in [0, 1] and the memberships of each sample summing to 1, c is the number of clusters, n is the number of samples in the cluster space, i is the image category, x_j is the image sample, u_ij is the membership degree of sample x_j in the i-th sample set, and J is the standard scale of image segmentation. J can be defined as follows:
In equation (5), x_j is the location of sample j, v_i is the clustering center, d_ij is the Euclidean distance between x_j and the cluster center v_i, and m is the weighting index of image segmentation. When s is the dimension of the cluster space, the matrix of cluster centers is a c × s matrix, and the membership matrix U is a c × n matrix.
Fuzzy c-means clustering methods are used to segment images. Of the many fuzzy clustering algorithms, the fuzzy c-means clustering algorithm (FCMA) or (FCM) is the most widely and successfully used. It obtains the membership degree of each sample point to all class centers by optimizing the objective function, so as to determine the class of the sample points to achieve the goal of automatically classifying the sample data. The process is as follows:
The first step is to set the number of clusters c and the weighting index m; the second step is to give an initial value of the fuzzy clustering matrix U satisfying the membership constraints; the third step is to update the cluster center of each category.
According to the calculation result of equation (6), the updated fuzzy clustering matrix can be obtained.
In the fourth step, according to the fuzzy clustering matrix obtained by equation (6), the membership of each image sample in every category is calculated.
At this time, the membership matrix must also be updated; that is,
In the fifth step, after the fuzzy c-means clustering algorithm converges, a segmentation threshold can be set, and the image segmentation result is obtained by comparing the memberships against it. The calculation equation is as follows:
In this case, only the information of the digital image is determined, and the progressive fusion factor of the image is set to complete the progressive fusion of the digital image.
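The five FCM steps above can be sketched as a compact alternating loop (a generic implementation; the weighting index m = 2 and the fixed iteration count as the stopping rule are assumptions, since the paper's convergence criterion is not reproduced):

```python
import numpy as np

def fcm(samples, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: alternately update the cluster centers and the
    fuzzy membership matrix U (steps 1-5 in the text)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(samples, float)             # (n, d) sample matrix
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                         # memberships sum to 1 per sample
    for _ in range(iters):
        w = u ** m
        # weighted centroid update (the "third step")
        centres = (w @ x) / w.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centres[:, None, :], axis=2) + 1e-12
        # membership update from inverse distances (the "fourth step")
        u = d ** (-2.0 / (m - 1))
        u /= u.sum(axis=0)
    return centres, u
```

For image segmentation, `samples` would be the per-pixel gray values, and the final hard labels come from thresholding or taking the arg-max of each column of `u`.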
2.6. Analysis of Digital Image Information
Because digital images usually exist on the network, the collected images are mostly two-dimensional digital combinations, so the two-dimensional discrete wavelet transform is used to analyze the digital image information. Let the scale function and the wavelet function of the digital image be given, with their corresponding filter parameters, and let the digital image processed by registration, noise reduction, and enhancement be the input; then,
In equation (10), the first term represents the low-frequency wavelet coefficients at digital image scale J, the next three terms represent the horizontal, vertical, and cross components of the digital image at scale J, and the last term is a constant. At this time, the reconstructed image information is as follows:
In equation (11), the three coefficients are the filter reconstruction coefficients. According to the transformation equation, the image characteristics of the digital image segmented by the fuzzy c-means clustering algorithm become clear; that is, in the parts of the original digital image where the data change greatly, the wavelet coefficients become larger, while in the parts with small data changes, the wavelet coefficients become smaller. For two digital images of the same target with different focus, the data in the low-frequency parts of the images are similar or even the same, while the differences between the high-frequency data of the subimages are large, so the image fusion factor can be set according to the enlarged feature points.
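As a concrete illustration of the two-dimensional wavelet analysis, one decomposition level producing the low-frequency approximation and the horizontal, vertical, and cross (diagonal) components might look like this (the Haar filter is chosen for brevity; the paper does not name its wavelet, so this is a stand-in):

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2-D Haar wavelet transform: returns the
    low-frequency approximation and the horizontal, vertical, and
    cross (diagonal) detail components. Image sides assumed even."""
    a = np.asarray(img, float)
    lo_r = (a[0::2] + a[1::2]) / 2.0       # row averages
    hi_r = (a[0::2] - a[1::2]) / 2.0       # row differences
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0   # approximation
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0   # horizontal detail
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0   # vertical detail
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0   # cross detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Invert haar_decompose exactly."""
    lo_r = np.empty((ll.shape[0], ll.shape[1] * 2))
    hi_r = np.empty_like(lo_r)
    lo_r[:, 0::2], lo_r[:, 1::2] = ll + lh, ll - lh
    hi_r[:, 0::2], hi_r[:, 1::2] = hl + hh, hl - hh
    out = np.empty((lo_r.shape[0] * 2, lo_r.shape[1]))
    out[0::2], out[1::2] = lo_r + hi_r, lo_r - hi_r
    return out
```

The exact invertibility of the pair mirrors equation (11): the reconstruction uses the same filter coefficients as the decomposition.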
2.7. Setting Digital Image Fusion Factor
The digital image fusion factor is set on the basis of the wavelet transform and the standard-deviation and variance characteristics, so the fusion factor must be set according to the variance of the wavelet coefficient neighborhood. The fusion factor has the advantages of high flexibility, small traffic, good real-time performance, strong fault tolerance, and strong anti-interference ability. The neighborhood value of the digital image is computed iteratively, and the variance of the neighborhood values of a wavelet coefficient in the i-th type of digital image is then given by
In equation (12), the term represents the gray value of a pixel in the image:
In equation (13), the two indices represent different image pixels. Let the variance of the neighborhood values of a pixel be computed separately for the low-frequency component, horizontal component, vertical component, and cross component of each class of digital image; the wavelet coefficients of the low-frequency component of the original digital image can then be calculated by equation (12), so the value of each pixel can be obtained, where the two subscripts represent the parameters of the digital image. The calculation equations of the fusion factors for the low-frequency components of the digital images are as follows:
In the horizontal, vertical, and cross components, the fusion factors are as follows:
At this time, if the result of equation (14) is substituted into equation (15), then
At this time, there are
In equation (17), the reconstruction term is calculated by equation (11). The digital image fusion factor is set by the above equations, and the digital image can be fused according to it.
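The neighborhood-variance computation that underlies the fusion factors of this section can be sketched as a straightforward sliding-window variance (the 3 × 3 window radius and edge padding are assumptions, since the paper does not reproduce its neighborhood size):

```python
import numpy as np

def local_variance(img, radius=1):
    """Variance of each pixel's (2*radius+1)^2 neighborhood, computed
    with edge padding so the output has the same shape as the input."""
    a = np.asarray(img, float)
    p = np.pad(a, radius, mode='edge')
    n = 2 * radius + 1
    # gather all shifted views of the neighborhood and reduce over them
    stack = np.stack([p[i:i + a.shape[0], j:j + a.shape[1]]
                      for i in range(n) for j in range(n)])
    return stack.var(axis=0)
```

The same routine can be run on each of the four wavelet component maps (low-frequency, horizontal, vertical, cross) to obtain the per-component variances used in equations (14)–(17).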
2.8. Digital Image Progressive Fusion
According to the above digital image fusion factor, the digital image is divided into four parts: target low-frequency coefficients, target high-frequency coefficients, background low-frequency coefficients, and background high-frequency coefficients, and the fusion factors of the four parts are matched. The flowchart of the image progressive fusion algorithm is shown in Figure 4. As the figure shows, after the fusion factors of the four parts are matched, local energy matching is carried out to calculate the fused low-frequency coefficients and determine the optimal target threshold, thereby achieving digital image fusion.

Therefore, let the two images be given, each with its scale and direction. Denote accordingly the transformation coefficient matrices of the two images under different scales and directions, their low-frequency coefficient matrices, and their high-frequency coefficient matrices.
Therefore, in the low-frequency coefficient region of the target, the local energy at any position is given by
In equation (18), the weighting matrix of the digital image fusion factor appears; it is usually a uniform matrix or a Gaussian matrix. The local energy matching degree of the fusion factors of the two images can then be calculated.
In equation (19), the matching degree takes values in [0, 1]. When it equals 1, the local energies of the fusion factors of the two images are completely matched; when it equals 0, they are completely mismatched. A digital image threshold T is defined, and when the matching degree is not less than T, the low-frequency information of the two images is relatively similar, so the fused low-frequency coefficient is calculated by the weighted average method.
When the matching degree is less than T, the low-frequency information of the two images is weakly correlated, so the low-frequency coefficient of the side with the higher local energy is used directly as the fused low-frequency coefficient; then,
Once the optimal threshold T is determined, the two images can be well fused in the low-frequency coefficient region of the target.
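The low-frequency fusion rule — weighted averaging where the local energies match, selection of the higher-energy side where they do not — can be sketched as follows. Since the paper's equations (18)–(21) are not reproduced here, the matching-degree formula 2·E_AB/(E_A + E_B) and the 0.6 threshold are common choices from the energy-matching literature, assumed for illustration:

```python
import numpy as np

def fuse_lowfreq(ca, cb, radius=1, threshold=0.6):
    """Fuse two low-frequency coefficient maps by local-energy matching."""
    ca, cb = np.asarray(ca, float), np.asarray(cb, float)

    def local_sum(c):
        # windowed sum over a (2*radius+1)^2 neighborhood, edge-padded
        p = np.pad(c, radius, mode='edge')
        n = 2 * radius + 1
        views = [p[i:i + c.shape[0], j:j + c.shape[1]]
                 for i in range(n) for j in range(n)]
        return np.sum(views, axis=0)

    ea, eb = local_sum(ca ** 2), local_sum(cb ** 2)     # local energies
    cross = local_sum(ca * cb)
    match = 2.0 * cross / (ea + eb + 1e-12)             # matching degree
    wa = ea / (ea + eb + 1e-12)                         # energy-based weight
    averaged = wa * ca + (1.0 - wa) * cb                # matched: average
    selected = np.where(ea >= eb, ca, cb)               # mismatched: select
    return np.where(match >= threshold, averaged, selected)
```

When the two maps are identical, the matching degree is 1 everywhere and the rule reduces to the identity, as expected.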
In the calculation of the target high-frequency coefficients, the side with more detail in the two images to be fused must be found and used as the high-frequency part of the fused image. The local variances of the two images are therefore calculated as follows:
In equation (22), the mean value of the image matrix is taken over the window centered at the given position. In this case, the weighting matrix is again usually an average matrix or a Gaussian matrix. The digital images are then fused in the high-frequency coefficient region of the target.
According to the calculation result of equation (23), the digital image can be fused in the high-frequency coefficient region of the target.
In the low-frequency coefficient region of the digital image, the information content of the background part of the image segmented from the image is less, and the requirement for the information richness in this region after fusion is relatively low. At this time, the contour structures of the two digital images are similar, and there are few complex edge and region structures. Therefore, if the average value calculated by equation (22) is equal to the image decomposition coefficient after the fusion of the two images, the two images can be fused in the low-frequency coefficient region of the background.
In the background high-frequency coefficient region, there is little high-frequency information in the background of the digital image. Therefore, the absolute value obtained by equation (20) is used as the fusion rule in this region: the fused coefficient is taken as the decomposition coefficient with the greater absolute value of the two digital images, so that the two images are successfully fused and most of the digital image information in the background high-frequency region is retained [28, 29].
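The background high-frequency rule just described — keep, at every position, the detail coefficient with the larger absolute value — is a one-liner:

```python
import numpy as np

def fuse_highfreq(da, db):
    """Absolute-maximum rule: at each position keep the detail
    coefficient with the larger magnitude, so the more salient
    detail of the two source images survives fusion."""
    da, db = np.asarray(da, float), np.asarray(db, float)
    return np.where(np.abs(da) >= np.abs(db), da, db)
```

The same selection can serve the target high-frequency region when the local-variance comparison of equation (22) is reduced to a per-coefficient decision.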
The pseudocode of the algorithm's buffer allocation is as follows (the original listing used garbled identifiers; it follows Linux kernel driver conventions, so the memset/printk/kmalloc-style calls below are a cleaned-up reading):

    memset(gon2013->feedback_buf, 0, TRANS_BUF_LEN);
    printk("<1>Now complete malloc feedback_buf\n");
    printk("<1>Now begin malloc collect_buf\n");
    gon2013->collect_buf = kmalloc(TRANS_BUF_LEN, GFP_KERNEL);
    if (!gon2013->collect_buf) {
        ret = -ENOMEM;
        goto error_buf3;
    }
    ......
    printk("<1>Now begin malloc data\n");
    gon2013->frame->data = kmalloc(EB_UOWXW_OEC_DATA_XGE, GFP_KERNEL);
    if (!gon2013->frame->data) {
        ret = -ENOMEM;
        goto error_buf5;
    }
    memset(gon2013->frame->data, 0, EB_UOWXW_OEC_DATA_XGE);
    printk("<1>Now complete malloc data\n");
    init_MUTEX(&gon2013->sem);

(A variadic helper that formats and sends feedback strings over the UART — va_start/vsprintf/Uart_FeedbackString/va_end — appears interleaved in the original listing and belongs to a separate function.)
In summary, the method proposed in this study is more conducive to the subsequent processing of images, which can help the processing of medical images and thus provide a theoretical basis for the development of related fields during the COVID-19 epidemic and help promote social development.
3. Experiment Design and Result Analysis
In order to verify the effectiveness of the digital image progressive fusion method based on the discrete cosine transform, an experimental analysis is carried out. The experimental design is as follows: (1) Experimental environment: the hardware used in the experiment is a computer with a 64 GB solid-state drive, 16 GB of cache, and an 8-core high-speed processor; the operating system is Windows 10, the simulation software is Matlab 7.0, and the database is MySQL 5. The client PC is a ThinkPad X260 with an Intel Core i5 6200U CPU, 4 GB of memory, and a 500 GB hard disk, running Windows 10 with Matlab 2017a as the application system. (2) Experimental data: the experimental data source is Multitel (http://www.multitel.be/cantata/); 100 groups of digital images from the website were selected as experimental sample data. (3) The fusion method based on wavelet transform and gradient field proposed in [10], the improved algorithm based on the Gram Schmidt (GS) transform and the intensity hue saturation (HS) transform proposed in [11], and the infrared and visible image fusion algorithm based on the rolling guidance filter proposed in [13] were selected as experimental comparison methods. First, the peak signal-to-noise ratio (PSNR) of the images fused by the different methods is compared; the higher the PSNR, the better the image quality. Then, the visual effect of digital image progressive fusion is compared. Finally, the fusion time of the different methods is verified; the shorter the fusion time, the higher the efficiency. Some experimental sample data are shown in Figure 5.

3.1. Comparison of PSNR
Firstly, the PSNR of images processed by different methods is compared. The results are shown in Table 1.
It can be seen from Table 1 that the PSNR of the images fused by the method in literature [10] fluctuates between 7.61 dB and 35.58 dB, the PSNR for the method in literature [11] fluctuates between 24.36 dB and 34.84 dB, and the PSNR for the method in literature [13] fluctuates between 10.26 dB and 17.42 dB, while the PSNR of the images fused by the research method is always higher than 42.13 dB. The higher signal-to-noise ratio indicates that the quality of the fused images is better [30–32].
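For reference, PSNR figures such as those in Table 1 can be computed with the standard definition (an 8-bit peak of 255 is assumed here):

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    fused image; higher values indicate better fusion quality."""
    mse = np.mean((np.asarray(reference, float)
                   - np.asarray(fused, float)) ** 2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```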
3.2. Comparison of Image Fusion Effects
In order to present the application performance of different methods in digital image gradual fusion more intuitively, the fusion results of the first group of images are shown in Figure 6.

The results of the progressive fusion of the second group of images are shown in Figure 7.

The results of the third group of images are shown in Figure 8.

Comparing the three groups of images, the quality of the fused images obtained by the proposed method is higher, which verifies the effectiveness of the method.
3.3. Comparison of Fusion Time
On the basis of the above experiments, in order to further compare the performance of different methods, the digital image progressive fusion time comparison is carried out, and the results are shown in Figure 9.

It can be seen from Figure 9 that, compared with the experimental comparison methods, the image fusion time of the research method is significantly shorter, which shows that the digital image progressive fusion method based on the discrete cosine transform has higher fusion efficiency and verifies the superiority of the method.
3.4. Comparison of Structural Similarity
The structural similarity of images processed by different methods is compared. The results are shown in Table 2.
It can be seen from Table 2 that the structural similarity of the image fused by the method in literature [10] fluctuates between 0.51 and 0.67, the structural similarity of the image fused by the method in literature [11] fluctuates between 0.64 and 0.84, and the structural similarity of the image fused by the method in literature [13] fluctuates between 0.55 and 0.78, while the structural similarity of the image fused by the research method is always higher than 0.83, which shows that the quality of the fused image is good.
4. Conclusion
Current progressive fusion methods for digital images have poor denoising performance, which degrades image quality after progressive fusion. Therefore, in order to improve the progressive fusion effect of digital images and shorten the fusion time, the SIFT algorithm is used to extract image features and perform feature-point registration on the images to be fused. After image matching, the discrete cosine transform is used to denoise the registered images. The histogram equalization method is selected for image enhancement, and pixels with the same gray-level attribute are fuzzy-clustered to complete the digital image segmentation. The progressive fusion factor for the digital images is then set, and the progressive fusion of the digital images is achieved on this basis. The method fully utilizes the low data correlation and high energy concentration of the discrete cosine transform, enhances the denoising effect on digital images, and makes the information of the two images clearly presented in the subsequent progressive fusion process, thus achieving fast and accurate fusion of the two digital images. The effectiveness of the method was verified through experiments. The results showed that, in the comparison of PSNR, the PSNR values in reference [10] fluctuated between 7.61 dB and 35.58 dB, those in reference [11] fluctuated between 24.36 dB and 34.84 dB, and those in reference [13] fluctuated between 10.26 dB and 17.42 dB, while the PSNR values of the proposed method were always higher than 42.13 dB. In addition, the proposed method showed better performance in the comparison of actual fused image quality, and its fusion time was lower than that of the comparison methods under different data volumes.
Meanwhile, the structural similarity of the fused images in reference [10] fluctuates between 0.51 and 0.67, that in reference [11] fluctuates between 0.64 and 0.84, and that in reference [13] fluctuates between 0.55 and 0.78, while the structural similarity of the images fused by the research method is always higher than 0.83. Overall, the proposed method performs well in digital image progressive fusion, effectively promoting the balanced distribution of gray values in images and improving the quality of low-contrast images, which is more conducive to the subsequent processing of images or videos. However, differences in digital image focus, image edge clarity, and image restoration area were not considered in the research. Therefore, future research should study progressive fusion methods for digital images with multiple focal points, as well as image edge protection and image restoration algorithms, to further improve the effectiveness of digital image progressive fusion. At the same time, the proposed method is innovative in solving the problem of poor denoising performance in digital image progressive fusion and can also provide a data reference for progressive image fusion in related fields.
Abbreviations
DCT: Discrete cosine transform
DFT: Discrete Fourier transform
FCMA: Fuzzy c-means clustering algorithm
FCM: Fuzzy c-means
GS: Gram Schmidt
HS: Hue saturation
JPEG: Joint Photographic Experts Group
MJPEG: Motion Joint Photographic Experts Group
MPEG: Motion Picture Experts Group
MySQL: Open-source database software
PSNR: Peak signal-to-noise ratio
SIFT: Scale-invariant feature transform
Data Availability
The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.