Abstract
Image dehazing is one of the problems that urgently needs to be solved in the field of computer vision. In recent years, more and more algorithms have been applied to image dehazing and have achieved good results. However, dehazed images still suffer from color distortion, disordered contrast and saturation, and other challenges. To solve these problems, this paper proposes an effective image dehazing method based on improved color channel transfer and multiexposure image fusion. First, the image is preprocessed using a color channel transfer method based on k-means. Second, gamma correction is introduced on the basis of guided filtering to obtain a series of multiexposure images, and the obtained multiexposure images are fused into a dehazed image through a Laplacian pyramid fusion scheme based on local similarity with adaptive weights. Finally, contrast and saturation corrections are performed on the dehazed image. Experiments are carried out on synthetic hazy images and natural hazy images, verifying that the proposed method is superior to existing dehazing algorithms in both subjective and objective terms.
1. Introduction
With the popularization and rapid development of computer technology, computer vision is widely used in fields such as object detection [1–4], image segmentation [5, 6], and face recognition. Affected by hazy weather, the images acquired by camera equipment usually show color shift, low visibility, and decreased contrast and saturation, which seriously hinder subsequent computer vision tasks. Therefore, image dehazing is an important research direction in computer vision. In recent years, many researchers have studied image dehazing algorithms from multiple directions; these algorithms mainly fall into three categories, based on image enhancement, physical models, and deep learning.
The dehazing algorithm based on image enhancement improves the quality of the image by enhancing contrast and strengthening the edge and detail information of the image, but excessive enhancement can cause image information to be lost. Such methods fall into two categories: global enhancement and local enhancement. Globally enhanced methods include algorithms based on histogram equalization, homomorphic filtering, and Retinex theory. Among local enhancement methods, the wavelet transform algorithm decomposes the image and processes it through local features, enhancing the image at multiple scales and then amplifying the useful information [7].
The dehazing algorithm based on the physical model often relies on the atmospheric scattering model [8], which mainly focuses on the solution of the parameters in the model, and through the mapping relationship, the inverse operation is performed according to the formation process of the foggy image to restore the clear image. The atmospheric scattering model is the cornerstone of the subsequent physical model-based dehazing algorithm, and many researchers have carried out extensive and in-depth research on the basis of the atmospheric scattering model to continuously improve the level of image dehazing.
In recent years, the dehazing algorithm based on deep learning has shown better performance. At present, there are two types of deep learning-based dehazing algorithms that are widely studied: one is to use deep learning methods to estimate some parameters of atmospheric physical models to restore images [9] and the other is to use the neural network to directly restore the input foggy image to obtain the dehazing image, which is often referred to as end-to-end dehazing in deep learning [10, 11].
Different from the existing dehazing methods based on atmospheric scattering models, the proposed method adopts Laplacian pyramid decomposition to retain the structural information of the image. To obtain a fog-free image, the region with the best visual quality is collected from each image for fusion, and the color channel transfer algorithm is used to effectively retain the color information in the image.
The main contributions of this paper are as follows:
(a) To prevent the dull colors and distortion that may occur after dehazing, we propose a color transfer module to compensate for the color loss of the dehazed image. The color transfer module converts the image data from RGB space to lαβ space and then uses color channel transfer between images to restore the color information of the dehazed image.
(b) An image dehazing algorithm based on a Laplacian pyramid fusion scheme with local similarity and adaptive weights is proposed. It first artificially underexposes hazy images through a series of gamma correction operations; with a multiscale Laplacian fusion scheme, the multiple exposure images are then combined into a fog-free result, extracting the best-quality areas from each image and merging them into a single fog-free output.
(c) To demonstrate the dehazing performance of the proposed method, extensive experiments were carried out on datasets of indoor/outdoor synthetic hazy images and natural hazy images, achieving better results in both subjective and objective terms.
2. Related Work
Foggy images suffer from blurred details, low contrast, and loss of important image information, and preprocessing foggy images can often improve dehazing performance. The literature [12] proposes color channel transfer, which builds a reference image from the source image to transfer information from an important color channel to an attenuated color channel, compensating for the loss of information. However, this method needs to be combined with other dehazing methods to improve their dehazing performance in special color scenes.
The establishment of the atmospheric scattering model [13] explains how images form in foggy weather and lays the foundation for subsequent dehazing work [14, 15]. He et al. [8] proposed the dark channel prior (DCP), based on the atmospheric scattering model and prior knowledge. In general, DCP dehazes natural scene images well, but the prior fails in bright areas such as the sky, water, and the surfaces of white objects, leading to inaccurately estimated transmission, over-enhancement of the recovered image, and a darker result. Later, He et al. [16] proposed the guided filtering algorithm, which relies on simple box filtering and is not affected by the degree or radius of blurring, so its real-time performance is greatly improved; it is an edge-preserving filtering algorithm. Guided filtering also achieves good results in image deraining and denoising. Fattal [17] proposed a dehazing method from the perspective of image color lines, assuming that the transmission in a local area is consistent and that color lines in haze-free areas pass through the origin while haze shifts them along the ambient light direction; this characteristic is used to estimate local transmission and global ambient light.
Compared with traditional methods, deep learning methods mainly learn the transmission by training on labeled datasets or directly learn the mapping from foggy images to the corresponding fog-free images. For example, Proximal Dehaze-Net [18] first designed an iterative optimization algorithm over two priors using proximal operators and then unrolled the iterative algorithm into a dehazing network, using convolutional neural networks to learn the proximal operators. DehazeNet [9] uses a deep architecture based on convolutional neural networks to estimate the transmission in the atmospheric scattering model. Ren et al. [19] proposed a multiscale deep convolutional neural network for recovering foggy images; this process often costs a lot of computation time, and if the depth estimation of the scene is inaccurate, the dehazed image is prone to artifacts in edge areas or color distortion, affecting the visual effect. Zhang and Tao [20] proposed FAMED-Net, a multiscale convolutional dehazing network with a global edge, which can quickly and accurately compute haze-free images end-to-end at multiple scales. FFA-Net [21] is an end-to-end feature fusion attention network in which attention mechanisms focus on the more useful information. Hong et al. [22] proposed a knowledge distillation network (KDDN) that uses the teacher network for an image reconstruction task and lets the student network imitate this process. LKD-Net [23] enhances dehazing by enlarging the convolution kernel to obtain a larger receptive field. Deep learning-based dehazing has shown excellent performance and achieved great success. However, training deep learning models to perform well is cumbersome: a large labeled dataset is required, the training process is time-consuming, and debugging deep models is relatively difficult, which increases the workload.
3. Proposed Method
In this paper, an image dehazing algorithm based on color channel transfer and multiexposure fusion is proposed, as shown in Figure 1, which effectively restores the saturation and contrast information of the image while retaining its color characteristics. The algorithm first uses k-means to cluster the pixel intensities of the image in the lαβ color space and perform color transfer. Second, guided filtering is applied to the multiexposure images obtained by gamma correction, and the dehazed image is obtained by Laplacian pyramid fusion. Finally, contrast and saturation are corrected by an improved adaptive histogram equalization and a spatial linear saturation adjustment, respectively.

3.1. Improved Color Channel Transfer Method
In the process of image dehazing, in order to avoid the interference of a particular spectrum, the proposed method establishes a reference image by transferring the color channels of the input image, with the formula

$R(x) = G_{0.5} + D(x)$,

where $G_{0.5}$ is the uniform grayscale image (50%) and $D$ is the detail layer of the input image, used to calculate the saliency mapping of the input image. We employ an effective technique proposed by Achanta [24] to introduce a bias against the dominant color between the feature map and the initial image, helping to restore the initial color. The detail layer is obtained by subtracting the Gaussian-blurred image from the input image, as shown in the following equation:

$D(x) = \left\| I_{\mu} - I_{\omega}(x) \right\|_{2}$,

where $I_{\omega}$ is the original image processed by a 5 × 5 Gaussian kernel, $I_{\mu}$ is the mean vector of the initial image, and $\|\cdot\|_{2}$ is the L2 norm.
Color channel transfer is used as dehazing preprocessing, and its effect is most pronounced in extreme conditions such as multiple light sources, underwater images, and night images. To improve its preprocessing effect on daytime images, this paper introduces the k-means algorithm: the standard deviations of the source image and the reference image are adjusted in the color channel transfer, the pixel intensities of each image are clustered in the color space, the Euclidean distance is used to match the centroids of the two most similar clusters, and the statistics are computed only within each region.
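To make this preprocessing step concrete, the following is a minimal sketch of k-means-guided per-cluster statistics transfer. It uses OpenCV's Lab space as a stand-in for the lαβ space described in the paper, and the function and parameter names (kmeans_color_transfer, k) are illustrative assumptions, not the authors' implementation:

```python
import cv2
import numpy as np

def kmeans_color_transfer(src_bgr, ref_bgr, k=5):
    # OpenCV Lab is used here as a stand-in for the lab space of Section 3.1.
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    sp, rp = src.reshape(-1, 3), ref.reshape(-1, 3)

    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, s_lab, s_ctr = cv2.kmeans(sp, k, None, crit, 5, cv2.KMEANS_PP_CENTERS)
    _, r_lab, r_ctr = cv2.kmeans(rp, k, None, crit, 5, cv2.KMEANS_PP_CENTERS)
    s_lab, r_lab = s_lab.ravel(), r_lab.ravel()

    out = sp.copy()
    for c in range(k):
        # Match each source cluster to the reference cluster whose
        # centroid is closest in Euclidean distance.
        m = np.argmin(np.linalg.norm(r_ctr - s_ctr[c], axis=1))
        s_px, r_px = sp[s_lab == c], rp[r_lab == m]
        if len(s_px) == 0 or len(r_px) == 0:
            continue
        mu_s, sd_s = s_px.mean(0), s_px.std(0) + 1e-6
        mu_r, sd_r = r_px.mean(0), r_px.std(0)
        # Reinhard-style mean/std transfer, restricted to the region.
        out[s_lab == c] = (s_px - mu_s) * (sd_r / sd_s) + mu_r

    out = out.reshape(src.shape).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```

Restricting the statistics to matched clusters, rather than the whole image, is what prevents a dominant region (such as sky) from skewing the transfer.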
3.2. Gamma Correction
In computer vision, pixel intensity values are proportional to the exposure level, so gamma correction can adjust the image exposure level by using different coefficients [25]. Gamma correction is defined as

$I_{\text{out}} = \alpha\, I_{\text{in}}^{\gamma}$,

where $\alpha$ and $\gamma$ are the coefficients in gamma correction. When the coefficient $\gamma < 1$, as shown in Figure 2, overexposure makes the hue of high-brightness objects in the image too bright, and the smoothness of object edges is prone to degradation.

When the factor $\gamma > 1$, as shown in Figure 3, the contrast of the underexposed image is enhanced, and more detail can be recovered in the image. Therefore, we choose $\gamma$ values of 2, 3, 4, and 5 to artificially generate underexposed images.
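As a concrete illustration of this step, here is a minimal sketch (the helper name gamma_correct and the [0, 1] normalization are illustrative assumptions):

```python
import numpy as np

def gamma_correct(img, gamma, alpha=1.0):
    # O = alpha * I**gamma on an image normalized to [0, 1];
    # gamma > 1 darkens (artificial underexposure), gamma < 1 brightens.
    x = img.astype(np.float32) / 255.0
    out = alpha * np.power(x, gamma)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# The underexposed series used in this paper's setting:
# exposures = [gamma_correct(hazy, g) for g in (2, 3, 4, 5)]
```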

3.3. Laplace Pyramid Decomposition and Energy Local Features
The Laplacian pyramid is a simple, effective, multiscale, and multiresolution image processing method, built on the Gaussian decomposition of the image; each level contains the information difference between adjacent layers of the Gaussian pyramid. A dehazing algorithm using Laplacian pyramid fusion can better improve the dehazing effect [26] and retain higher spatial resolution and image detail, as shown in the following equation:

$F(x) = \sum_{k=1}^{N} W_{k}(x)\, I_{k}(x)$,

where $N$ is the number of available images $I_{k}$ with different exposures and $F$ is a well-exposed image produced by a combination of different correctly exposed regions in the $I_{k}$. The weights are normalized so that $\sum_{k=1}^{N} W_{k}(x) = 1$.
In this paper, a fusion method based on local energy features is used to assign the weight values in the Laplacian pyramid. The local energy feature is defined as follows: for position $(i, j)$ of the image, the local energy is the sum of squared pixel values in an $m \times m$ window centered on that point,

$E(i, j) = \sum_{(p, q) \in \Omega_{m \times m}} L(i + p,\, j + q)^{2}$.

Local energy features effectively represent the detail-rich areas of an image; in general, areas containing fine details carry high energy. During region fusion, if the energy difference between two corresponding regions is too large, their matching degree is small, and only the higher-energy component is kept. The specific steps are as follows (a sketch of this rule is given after the list):
(a) Choose an appropriate threshold.
(b) Calculate the local energy map of each image after Laplacian pyramid decomposition.
(c) Calculate the local covariance of the images to be fused to represent their similarity (the matching degree).
(d) If the matching degree at a point is less than the threshold, select the image with the higher energy at that point and discard the rest.
(e) If the matching degree at a point is greater than the threshold, assign weights according to the energy sizes: the smaller-energy component receives the smaller weight $w_{\min}$ and the larger-energy component receives $w_{\max} = 1 - w_{\min}$.
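The sketch below illustrates steps (a)-(e) on one Laplacian level and a two-image pyramid fusion. The window size, the threshold, the helper names (local_energy, fuse_level, laplacian_fuse), and the concrete weight rule $w_{\min} = \frac{1}{2} - \frac{1}{2}\frac{1-M}{1-T}$ (a common choice in energy-matching fusion, assumed here) are illustrative rather than the paper's exact implementation:

```python
import cv2
import numpy as np

def local_energy(img, win=3):
    # Sum of squared coefficients in a win x win window around each pixel.
    return cv2.boxFilter(img ** 2, -1, (win, win), normalize=False)

def fuse_level(a, b, thresh=0.7, win=3):
    # Steps (a)-(e): energy matching on one Laplacian level.
    a, b = a.astype(np.float32), b.astype(np.float32)
    ea, eb = local_energy(a, win), local_energy(b, win)
    cross = cv2.boxFilter(a * b, -1, (win, win), normalize=False)
    match = 2 * cross / (ea + eb + 1e-12)        # matching degree in [-1, 1]

    w_min = 0.5 - 0.5 * (1 - match) / (1 - thresh)
    w_max = 1 - w_min
    weighted = np.where(ea >= eb, w_max * a + w_min * b,
                        w_min * a + w_max * b)   # used where match >= thresh
    selected = np.where(ea >= eb, a, b)          # used where match < thresh
    return np.where(match < thresh, selected, weighted)

def laplacian_fuse(img1, img2, levels=4):
    # Build Laplacian pyramids, fuse level by level, then collapse.
    def lap_pyr(img):
        g = [img.astype(np.float32)]
        for _ in range(levels):
            g.append(cv2.pyrDown(g[-1]))
        return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
                for i in range(levels)] + [g[-1]]

    p1, p2 = lap_pyr(img1), lap_pyr(img2)
    fused = [fuse_level(x, y) for x, y in zip(p1[:-1], p2[:-1])]
    fused.append(0.5 * (p1[-1] + p2[-1]))        # average the coarsest level
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```

For more than two exposures, the same per-level rule can be applied pairwise or generalized to per-level weight maps.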
3.4. Local Similarity Fusion Method Based on Adaptive Weights
The fusion method based on local similarity is a multiexposure image fusion method. Under this method, if two pixels have similar local neighborhoods in different images, they can be regarded as the same pixel and fused into a high dynamic range (HDR) image [27, 28]. In this paper, a local similarity fusion method based on adaptive weights is adopted: by adding adaptive weights to the local similarity fusion, the weights can be adjusted adaptively according to the gradient information of different pixels, so as to better balance the contributions of different images and make the HDR image more balanced and natural.
The details of the algorithm are as follows:
(1) For each pixel, the Manhattan distance metric is used to calculate its local neighborhood in the multiple images, and the gradient information of the pixel values is used to calculate the weight, so as to better preserve image details.
(2) The mean square error is used to calculate the similarity of each pixel's local neighborhood across the different images.
(3) The pixels with the highest similarity across images are selected for fusion, and the weighted average method is used to obtain the final pixel value.
The adaptive weighting method takes the gradient information of each pixel into account when calculating the weight. Supposing that the gradient value of the $i$th image at position $j$ is $G_{i}(j)$, the weight is calculated as

$W_{i}(j) = \dfrac{\left(G_{i}(j) + \epsilon\right)^{p}}{\sum_{k=1}^{N} \left(G_{k}(j) + \epsilon\right)^{p}}$,

where $i$ indexes the image, $j$ represents the position of the pixel, and $N$ is the total number of images. $\epsilon$ is a small positive number used to avoid a zero divisor, and $p$ is a hyperparameter that controls the degree of nonlinearity of the weight. The greater the final weight $W_{i}(j)$, the greater the contribution of the $i$th image at position $j$.
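A minimal sketch of this weighting, assuming Sobel gradient magnitudes on grayscale exposures as $G_{i}(j)$ (the function names are illustrative):

```python
import cv2
import numpy as np

def gradient_magnitude(gray):
    # |gradient| via Sobel derivatives on an 8-bit grayscale image.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return np.hypot(gx, gy)

def adaptive_weights(images, eps=1e-6, p=2.0):
    # W_i = (G_i + eps)^p / sum_k (G_k + eps)^p, computed per pixel.
    grads = np.stack([gradient_magnitude(im) for im in images])
    w = np.power(grads + eps, p)
    return w / w.sum(axis=0, keepdims=True)
```

Raising $p$ sharpens the weighting toward the single best-textured exposure; $p \to 0$ approaches a uniform average.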
On the basis of local similarity fusion, adaptive weight method can be introduced to further improve the fusion effect. In this method, gradient information is used to calculate the weight in order to better preserve the image details. In addition, the fusion method based on local similarity can be combined with other extension methods, such as multiscale fusion and local tone mapping, to further improve the fusion effect.
3.5. Multiscale CLAHE Method
In order to retain more detail information in the dehazed image, this paper uses the CLAHE algorithm to process it. In CLAHE, multiscale processing can further improve the enhancement effect: by analyzing the image at different scales and extracting feature information at different levels, the contrast and details of the dehazed image can be effectively improved while avoiding excessive noise amplification [29].
The details of CLAHE are as follows (a compact sketch follows the list):
(1) The original image is divided into multiple scales, which can be layered using methods such as the Gaussian or Laplacian pyramid. At the bottom of the pyramid, the image is largest and the most detail is available; as the number of layers increases, the image shrinks and the details gradually blur:

$I_{i,j} = G_{i,j} * I$,

where $I$ is the original image, $I_{i,j}$ is the $i$th subimage at the $j$th scale, and $G_{i,j}$ is the Gaussian kernel function for subimage $i$ at scale $j$.
(2) CLAHE processing is carried out on each scale image. First, the image is divided into small blocks; then, the pixels within each block are histogram-equalized; finally, the pixel values within each block are interpolated:

$T(v) = K \cdot \mathrm{CDF}(v)$,

where $\mathrm{CDF}$ is the cumulative distribution function of pixel values in the subimage and $K$ is the maximum value of the histogram.
(3) In the locally enhanced subimages, the boundaries of each subimage are smoothed using an interpolation method:

$\tilde{I}_{i,j}(x) = \sum_{s \in \Omega} w_{s}\, I_{i,j}(s)$,

where $\tilde{I}_{i,j}$ is the smoothed subimage, $w_{s}$ is the interpolation weight, and $\Omega$ is the interpolation window.
(4) The final enhanced image is obtained by combining the enhanced results of all scales:

$\hat{I} = \sum_{j=1}^{S} \sum_{i=1}^{n_{j}} \tilde{I}_{i,j}$,

where $\hat{I}$ is the final enhanced image, $S$ is the number of scales, and $n_{j}$ is the number of subimages at each scale.
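The sketch below applies OpenCV's CLAHE per Gaussian-pyramid level and combines the results. The uniform averaging across scales and the parameter values (clip, tiles, levels) are assumptions for illustration, not the paper's exact scheme:

```python
import cv2
import numpy as np

def multiscale_clahe(gray, levels=3, clip=2.0, tiles=(8, 8)):
    # gray: 8-bit single-channel image (apply per channel for color).
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    pyr = [gray]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))  # Gaussian pyramid levels

    h, w = gray.shape[:2]
    acc = np.zeros((h, w), np.float32)
    for level in pyr:
        enh = clahe.apply(level)                       # per-scale CLAHE
        acc += cv2.resize(enh, (w, h)).astype(np.float32)
    return np.clip(acc / len(pyr), 0, 255).astype(np.uint8)
```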
3.6. Spatial Linear Saturation Adjustment
The multiscale CLAHE method can take into account the detail information of images at different scales, making the contrast enhancement more balanced and natural. At the same time, multiscale processing also avoids problems such as over-enhancement or distortion that may occur in the plain CLAHE algorithm, at the cost of increased computational complexity and storage, which we address next. According to the CAP dehazing algorithm [15], as the fog concentration changes, the difference between the brightness and saturation of the image also changes. Based on this theory, Zhu [30] proposed a method to enhance dehazing performance and robustness and to balance color saturation during dehazing, built on the differences

$d_{F}(x) = V_{F}(x) - S_{F}(x)$ and $d_{I}(x) = V_{I}(x) - S_{I}(x)$,

in which $V_{F}(x)$ and $V_{I}(x)$ are the brightness of pixel $x$ in the fused image F and in the foggy image I, respectively, and $S_{F}(x)$ and $S_{I}(x)$ are the saturation of pixel $x$ in the fused image F and in the foggy image I, respectively; $d_{F}$ is the difference between brightness and saturation of the fused image F, and $d_{I}$ is the difference between brightness and saturation of the foggy image I. The saturation of the fused image is then linearly adjusted in the spatial domain according to the gap between $d_{F}$ and $d_{I}$.
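Since the exact linear adjustment of [30] is not reproduced above, the following sketch only illustrates the underlying CAP idea of comparing $d_F$ and $d_I$; the update rule, the strength parameter, and the function name saturation_adjust are assumptions, not the paper's formula:

```python
import cv2
import numpy as np

def saturation_adjust(fused_bgr, hazy_bgr, strength=0.5):
    # Haze raises the brightness-saturation difference d = V - S (CAP),
    # so push saturation up where the fused image's difference still
    # exceeds that of the hazy input. 'strength' is a tuning assumption.
    f = cv2.cvtColor(fused_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h = cv2.cvtColor(hazy_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    d_f = f[..., 2] - f[..., 1]    # V - S of the fused image
    d_i = h[..., 2] - h[..., 1]    # V - S of the hazy input
    f[..., 1] = np.clip(f[..., 1] + strength * np.maximum(d_f - d_i, 0),
                        0, 255)
    return cv2.cvtColor(f.astype(np.uint8), cv2.COLOR_HSV2BGR)
```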
4. Experimental Results and Analysis
4.1. Parameter Settings and Datasets
The experimental computer is configured with an Intel(R) Core(TM) i7-10875U CPU @ 2.30 GHz and 16.00 GB RAM. In the improved color channel transfer algorithm, equation (5) uses a Gaussian kernel, and a k value of 5 is taken to initialize the cluster centers so as to ensure that they lie in the data space. During the gamma correction phase, the artificial exposure values are fixed at $\gamma \in \{2, 3, 4, 5\}$.
It is difficult to collect real pairs of fog-free and foggy images for dehazing research, so foggy images are usually synthesized artificially. In this paper, the artificially synthesized fog dataset [31] and foggy images collected in real scenes are mainly used to test and compare the performance of the algorithm on outdoor images. The dataset [31] contains 35 pairs of foggy images and corresponding fog-free outdoor images (ground truth). The atmospheric light varies in the range 0.8∼1.0, and the scattering parameter varies in the range 0.04∼0.2. To compare with previous state-of-the-art methods, we used the PSNR, SSIM, FSIM, and GMSD indicators for comparison tests on a dataset containing 500 indoor images and 500 outdoor images.
4.2. Subjective Evaluation
In this part, the algorithms of [15], [26], [32], [33], [34], and [35] are compared with the proposed algorithm in this paper.
Comparing rows 2 and 7 of Figure 4, the method shows good dehazing performance in thin haze, but in rows 1, 10, and 11 of Figure 4, its dehazing performance gradually decreases as the fog concentration increases: the texture details of white objects (row 9) become blurred, and some details in the image, such as the texture of branches (rows 3 and 4), are difficult to make out. From rows 2 and 9 of Figure 4, it can be seen that the method is accompanied by color distortion and loss of detail while dehazing, which reduces the visual quality of the image. The and methods can effectively reconstruct sharp images from foggy images. In the sky areas of rows 6 and 8 of Figure 4, the background color after dehazing by the method is closer to the original image than that of the method. Both the and methods achieve better detail visibility and preservation of fog-free areas, but the dehazed images retain residual haze, which increases color artifacts in the region where the house and sky meet.

The algorithm proposed in this paper compensates for the loss between channels through the color channel transfer method before dehazing, effectively reducing inter-channel interference. After dehazing, the essence of the image is clearly restored: distant buildings and vehicles are clearly visible and their details are obvious. Spatial linear saturation adjustment and contrast correction are applied after multiexposure image fusion, so the dehazed image better matches human visual observation.
4.3. Objective Evaluation
In order to analyze the subtle differences in the images, this paper uses the PSNR [36], SSIM [37], FSIM [38], and GMSD [39] for objective evaluation.
Zhang et al. [38] proposed FSIM, arguing that the human visual system mainly understands images based on low-level features; it combines phase consistency, color features, gradient features, and chromaticity features to measure the local structural information of images. GMSD was proposed by Xue et al. [39] in 2014, based on the observation that gradient maps are sensitive to image distortion and that distorted images with different structures suffer different degrees of quality degradation; the resulting full-reference evaluation method offers high accuracy with a low amount of calculation.
PSNR evaluates image quality by calculating the pixel error between the original image and the dehazed image: the smaller the error between them, the larger the PSNR value. PSNR is calculated as

$\mathrm{PSNR} = 10 \log_{10}\!\left(\dfrac{\mathrm{MAX}^{2}}{\mathrm{MSE}}\right)$,

where MSE is the mean squared error and MAX is the maximum pixel value of the original image.
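A direct implementation of this definition for 8-bit images (the function name is illustrative):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher means smaller pixel error.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```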
SSIM measures the similarity between the original image and the dehazed image. SSIM uses the mean to estimate brightness, the standard deviation to estimate contrast, and the covariance to measure structural similarity, as shown in the following equation:

$\mathrm{SSIM}(x, y) = \dfrac{(2\mu_{x}\mu_{y} + C_{1})(2\sigma_{xy} + C_{2})}{(\mu_{x}^{2} + \mu_{y}^{2} + C_{1})(\sigma_{x}^{2} + \sigma_{y}^{2} + C_{2})}$,

where $\mu_{x}$ and $\mu_{y}$ represent the means of x and y, $\sigma_{x}^{2}$ and $\sigma_{y}^{2}$ represent the variances of x and y, $\sigma_{xy}$ is the covariance between x and y, and $C_{1}$ and $C_{2}$ are constant coefficients. The larger the SSIM value, the less distorted the image, indicating a better dehazing result.
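In practice SSIM need not be reimplemented; scikit-image provides a standard implementation (assuming 8-bit color arrays as inputs):

```python
from skimage.metrics import structural_similarity

def ssim_score(ref, test):
    # data_range=255 for 8-bit images; channel_axis=-1 for color inputs.
    return structural_similarity(ref, test, data_range=255, channel_axis=-1)
```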
FSIM is based on phase consistency and gradient amplitude. The larger the value is, the closer the dehazing image is to the original image. GMSD is designed primarily to provide credible evaluation capabilities and use metrics that minimize computational overhead.
We calculate the PSNR of the different methods on the processed images. From Table 1 and Figure 5, it can be seen that both the compared method and the proposed method achieve good results in removing dense fog, and, compared with that method, ours can effectively remove dense fog while also restoring the color information of the sky area. On the other images as well, the method proposed in this paper achieves better results.

The SSIM values for the images in Figure 5 are shown in Table 2. As can be seen from the table, two of the compared methods and the proposed method obtain higher values; the value of the proposed method reaches 0.9073, the best performance. For the Tiananmen image in Figure 5, the value of the method in this paper is 0.9192, second only to CAP.
As shown in Table 3, the method proposed in this paper is superior to the other dehazing methods in recovering image structure. This is because the multiexposure fusion dehazing method fuses images with different exposure levels and thus better preserves the structural features of the image.
The calculation results in Table 4 show that the image dehazed by the proposed method has a high similarity to the original haze-free image, with a score greater than 0.90. This is because we use gamma correction to acquire images with different exposure levels and perform multiscale fusion with the classical Laplacian pyramid method. The proposed method attempts to obtain the best exposure for each area, so the image scores are high.
5. Conclusion
In this paper, an artificial multiexposure image fusion algorithm for single image dehazing is proposed. First, the color channel transfer method based on k-means is used to compensate for the channels with serious information loss. Then, artificial gamma correction produces a series of underexposed images, which are fused into a dehazed image with the improved Laplacian pyramid fusion scheme. Finally, to obtain better visual effects, contrast and saturation corrections are applied to enhance the dehazed image and retain more image details. Comparative experiments with other mainstream dehazing methods show that the proposed method obtains a good dehazing effect on both light-fog and dense-fog images and achieves good results on the various evaluation indicators. In future work, the complexity of the algorithm needs to be further optimized to improve its practicability. In addition, haze images from various scenarios could be treated with targeted dehazing processing to obtain better effects.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors’ Contributions
Shaojin Ma and WeiGuo Pan conceptualized the study; Huaiguang Guan performed data curation; Songyin Dai and Bingxin Xu performed formal analysis; WeiGuo Pan and Hongzhe Liu performed funding acquisition; Shaojin Ma and WeiGuo Pan performed methodology; Shaojin Ma and Cheng Xu provided software; Xuewei Li performed supervision; Shaojin Ma and Huaiguang Guan performed validation and visualization; Shaojin Ma wrote the original draft; Shaojin Ma and WeiGuo Pan wrote, reviewed, and edited the manuscript.
Acknowledgments
This work was supported by the Beijing Natural Science Foundation (4232026), National Natural Science Foundation of China (grant nos. 62272049, 62171042, 61871039, 62102033, and 62006020), Key Project of Science and Technology Plan of Beijing Education Commission (KZ202211417048), the Project of Construction and Support for High-Level Innovative Teams of Beijing Municipal Institutions (no. BPHR20220121), the Collaborative Innovation Center of Chaoyang (no. CYXC2203), and Scientific Research Projects of Beijing Union University (grant nos. ZK10202202, BPHR2020DZ02, ZK40202101, and ZK120202104).