Abstract
Dark channel prior (DCP) has been widely used in single-image defogging because of its simple implementation and satisfactory performance. This paper addresses the shortcomings of DCP-based defogging algorithms and proposes an optimized method using an adaptive fusion mechanism. The proposed method makes full use of the smoothing and “squeezing” characteristics of the Logistic Function to obtain more reasonable dark channels, avoiding further refinement of the transmission map. In addition, a maximum filtering on the dark channels is applied to improve the accuracy of the dark channels around object boundaries and the overall brightness of the defogged clear images. Meanwhile, the location information and brightness information of the foggy image are weighted to obtain a more accurate atmospheric light. Quantitative and qualitative comparisons show that the proposed method outperforms state-of-the-art image defogging algorithms.
1. Introduction
Images captured by outdoor vision systems in bad weather (e.g., fog or haze) usually have low contrast and fidelity due to the absorption and scattering of light by the large number of particles and water droplets suspended in the atmosphere during propagation. Moreover, most machine vision applications, such as image classification, image retrieval, and target recognition, depend strongly on the clarity of the input images, so they are easily affected when the outdoor vision system works abnormally [1]. Therefore, it is of great significance to improve existing image defogging technologies and ensure the stability of outdoor vision systems.
Currently, image defogging algorithms can be classified into deep learning-based methods and image processing-based methods. Deep learning-based methods [2–4] achieve defogging through models established by designing network architectures, optimizing algorithms, and performing end-to-end training on large numbers of manually labelled images. These methods can avoid the manual design of preprocessing features and produce satisfactory defogged clear images, but they depend heavily on the training data sets, manual labelling is time-consuming, and the resulting labels are subjective. Image processing-based methods can be further divided into image enhancement and image restoration. Among them, image enhancement-based defogging algorithms [5–7] do not consider the image degradation model but increase the contrast to obtain a visually defogged effect, resulting in distortion of the defogged images. On the contrary, image restoration-based methods [8–13] model the degradation process of foggy images, so the defogged clear images restored by these methods are closer to the real scene.
Among these image restoration-based defogging algorithms, the DCP-based algorithm proposed by He et al. [14] has achieved a well-accepted defogging effect. However, the algorithm still has two main shortcomings. Firstly, the DCP is ineffective in large sky (bright) regions and in nonsky regions that contain large white objects, which leads to oversaturation in these regions in the defogged clear images; secondly, the transmission map is estimated over window regions, which produces obvious halo artifacts around object edges in the defogged clear images. Many optimized algorithms have been proposed to solve these problems, but they are either less accurate or time-consuming. Besides, the defogged clear images are usually over-dark because of the nature of this method. To overcome the above shortcomings, this paper proposes a novel DCP-based method for achieving an excellent defogging effect. The major contributions of this paper are as follows:
(1) A maximum operation is performed on the dark channels to increase the brightness of the defogged clear images and improve the accuracy of the dark channels around object edges
(2) A nonlinear weighted average is executed using the Logistic Function on two typical dark channels that are calculated with window regions of different radii (described as the structure window and the value window), respectively
(3) Both location information and brightness information of foggy images are weighted to get a more accurate atmospheric light
The rest of this paper is organized as follows. In Section 2, the related work of image defogging is briefly reviewed, and Section 3 describes the DCP-based defogging algorithm in detail. Our proposed method is presented in Section 4. The experimental results and analysis are shown in Section 5. Finally, the conclusions are given in Section 6.
2. Related Work
Deep convolutional neural networks (CNNs) have made great progress in many machine vision tasks, which drives researchers to apply deep learning to image defogging. Deep learning-based defogging methods have two mainstream designs: one estimates the transmission map t(x) and the atmospheric light A with a neural network and then removes fog by combining them with the atmospheric degradation model; the other designs an end-to-end training model, which takes the foggy image as input and directly outputs a defogged clear image. As a typical representative of the former design, Cai et al. [15] proposed a trainable end-to-end system called DehazeNet, which adopts a CNN-based deep architecture. They also proposed a new nonlinear activation function (BReLU) in DehazeNet to further improve the quality of the defogged clear image. Xiao et al. [16] proposed a new haze layer-based single-image defogging algorithm; it first obtains a residual image through an end-to-end mapping from the original foggy images and then designs a CNN-based model to remove the residual image from the given hazy image to obtain a recovered defogged clear image. Different from the former design, the latter one attempts to avoid the time-consuming and notoriously challenging estimation of t(x) and A. Li et al. [17] proposed an end-to-end CNN-based model called AOD-Net, which can restore a clear image directly from a foggy one; its light-weight design also makes it easy to embed into other models. Yin et al. [18] proposed a new variational color-transfer defogging model based on CNN, which identifies the optimal prediction in terms of the mean and standard deviation of a true clean image while refining the textures and details of the recovered image.
Zhang and Patel [19] proposed a novel end-to-end jointly optimizable defogging network, which directly embeds the atmospheric degradation model to jointly estimate the transmission map, the atmospheric light, and the defogged image. To ensure the accuracy of the transmission map estimation, they also proposed an edge-preserving pyramid densely connected encoder-decoder network. Ren et al. [20] proposed an image defogging method based on a gated fusion network, which is composed of an encoding network and a decoding network: the encoding network encodes the characteristics of the hazy image and several transformed images, and the decoding network estimates the weights corresponding to these transformed images. Pang et al. [21] proposed a binocular image defogging network called BidNet, which uses the relationship and correlation between binocular images to complete the fog removal task.
Generally, deep learning-based methods can achieve high-quality defogging results, but the essence of supervised learning determines that they heavily rely on the training data and are not adaptive to dynamic environments. In addition, the large-scale network models make them difficult to apply in embedded systems. Different from deep learning-based methods, image enhancement-based ones directly operate on pixels according to the characteristics of the foggy image. The classical methods are based on image histogram equalization, wavelet transform, Retinex theory, and so on. Histogram equalization-based defogging algorithms [22, 23] equalize the number of pixels in each gray level of the foggy image by replacing the original randomly distributed histogram with one of equal probability in each interval, which increases the contrast. Wavelet transform-based defogging methods [24, 25] obtain the defogged clear image by using the Mallat algorithm to decompose the image matrix and filtering the noise signal according to the characteristics of the wavelet decomposition coefficients. Since Edwin Land put forward the Retinex theory in 1964, it has been widely used in image defogging, mainly including the McCann Retinex algorithm [26, 27], the center/surround Retinex algorithm [28, 29], and Retinex algorithms based on a variational framework [30]. The core idea of Retinex-based defogging is to first decompose the foggy image into incident components, which determine the dynamic range of the image color, and reflected components, which represent the intrinsic properties of the image color; the incident component is then removed, and the reflected component is retained as the final defogged clear image.
Because they do not account for the image degradation process, image enhancement-based defogging algorithms can change the original pixel structure of the foggy images, which results in defogged images accompanied by distortion and color drift. Different from the former, image restoration-based methods estimate the necessary parameters of the image degradation model by putting forward some prior knowledge and then complete the defogging task. Park et al. [31] estimated the atmospheric light by quad-tree subdivision of the transformed image after white balance processing and estimated the transmission map by maximizing an objective function consisting of image entropy and information fidelity. Zhu et al. [32] proposed the color attenuation prior, created a linear model of the transmission map under this prior, and learnt the model parameters with a supervised learning method; as a result, the depth information can be well recovered. Wang et al. [33] proposed a linear transformation-based fast algorithm for single-image defogging by assuming that a linear relationship exists in the minimum channel between the hazy image and the haze-free image. Thanh et al. [34] considered an automatic and fast image defogging approach based on optimal color channels and nonlinear transformations in the HSV color space. Kratz and Nishino [35] modeled the image with a factorial Markov random field to estimate the scene radiance more accurately, and Meng et al. [36] proposed an effective regularization defogging method that restores foggy images by exploring an inherent boundary constraint.
As one of the typical image restoration-based defogging algorithms, the DCP-based one has been well accepted by researchers because of its easy implementation and state-of-the-art results. However, owing to its inherent drawbacks (described in Section 1), a large number of improved algorithms [37–49] have emerged. To reduce the complexity of the soft matting step and improve efficiency, He et al. replaced it with a smooth edge-preserving filter (the guided image filter) in [37], which not only refines the transmission maps but also greatly improves the efficiency, thus meeting real-time requirements. Similar to the guided image filter, Wang et al. [38] and Li et al. [39] used the bilateral filter, Gibson et al. [41] adopted the standard median filter, Tarel et al. [42] applied the “median of median” filter, and Yu et al. [43] utilized a block-to-pixel interpolation method. Most of the abovementioned methods either introduce variable parameters, thus reducing robustness, or adopt complex nonlinear operations, thus increasing the computational time. Consequently, there is still room for optimization in current defogging methods.
3. Problem Description
3.1. Atmospheric Degradation Model
The process of image degradation usually includes two aspects: one is the attenuation of the scene radiance by scattering, and the other is the introduction of noise by the diffuse reflection of atmospheric light. Moreover, the degree of degradation usually increases with the distance between the scene and the camera. The image degradation model derived from the atmospheric scattering model proposed by McCartney in 1976 [50] gives a detailed explanation of this process as follows:

I(x) = J(x)e^(−βd(x)) + A(1 − e^(−βd(x))), (1)

where I(x) is the degraded image, J(x) is the scene radiance representing the defogged image, x is the pixel coordinate, d(x) is the scene depth describing the distance between the scene object and the observer, β is the homogeneous medium attenuation coefficient of the atmosphere, and A is a constant known as the atmospheric light. As described, two factors degrade J(x) into I(x): direct attenuation (reducing the proportion of real scene radiance arriving at the camera, resulting in lower contrast) and indirect pollution by atmospheric light (introducing atmospheric light, causing color distortion). For restoring J(x) from I(x), equation (1) is transformed into the following form:

J(x) = (I(x) − A)/t(x) + A. (2)
According to equation (2), calculating J(x) requires not only I(x) but also t(x) and A, where the transmission is modeled as t(x) = e^(−βd(x)), a scalar value that represents the percentage of the scene radiance arriving at the camera directly without being scattered. Clearly, this is an ill-posed problem, which can only be solved by constraining some of the unknowns.
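As a quick sanity check, equations (1) and (2) can be sketched in NumPy, with the exponential term folded into a transmission array t (a minimal illustration; the function names are ours, not the paper's):

```python
import numpy as np

def degrade(J, t, A):
    """Forward degradation model of equation (1): I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def restore(I, t, A):
    """Inversion of the model, equation (2): J = (I - A)/t + A."""
    return (I - A) / t + A
```

The roundtrip recovers J exactly when t and A are known; the difficulty of defogging lies entirely in estimating those two quantities from a single image.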
3.2. DCP-Based Defogging Method
In 2009, after observing large numbers of outdoor clear images, He et al. proposed the dark channel prior (DCP): in most nonsky patches, at least one color channel has some pixels whose intensities are very low and close to zero. With this prior, they estimate the depth information of the image and restore the clear image through the degradation model. In the DCP-based algorithm, calculating the dark channels on both sides of the degradation model (normalized by A) gives the following equation:

min_{y∈Ω(x)} min_c (I^c(y)/A^c) = t(x) · min_{y∈Ω(x)} min_c (J^c(y)/A^c) + 1 − t(x). (3)
According to the DCP, the first term on the right side of equation (3) is equal to 0, so equation (3) can be simplified to the following equation:

min_{y∈Ω(x)} min_c (I^c(y)/A^c) = 1 − t(x). (4)
After that, the transmission map can be obtained by simply transforming equation (4):

t(x) = 1 − min_{y∈Ω(x)} min_c (I^c(y)/A^c), (5)

where the dark channel is modeled as J^dark(x) = min_{y∈Ω(x)} min_c J^c(y). The atmospheric light A is calculated by the following process adopted in [14]: (1) the dark channel values are sorted, (2) the brightest 0.1% of the pixels are extracted, and (3) the average intensity of these pixels in the input image is taken as A.
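The dark channel and the atmospheric light estimate of [14] can be sketched with SciPy's rank filter (a hedged illustration: a square window of radius r is assumed, the top fraction defaults to the 0.1% used in He et al.'s implementation, and the function names are ours):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, radius=7):
    """Per-pixel minimum over the RGB channels, then a minimum filter
    over a (2*radius+1) square window (a grey-scale erosion)."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=2 * radius + 1)

def estimate_atmospheric_light(img, dark, top=0.001):
    """Average the image intensities of the brightest dark-channel pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]          # brightest dark-channel pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)  # mean RGB intensity -> A
```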
Because there is atmospheric scattering even on sunny days, distant objects can never be observed with complete clarity. In addition, the remaining atmospheric light gives the observed image a certain sense of hierarchy. Therefore, a parameter ω is introduced into equation (5) to retain a small amount of atmospheric light:

t(x) = 1 − ω · min_{y∈Ω(x)} min_c (I^c(y)/A^c). (6)
Thus, we can obtain the defogged clear image by substituting t(x) into equation (1) as follows:

J(x) = (I(x) − A)/max(t(x), t0) + A, (7)

where the parameter t0 is introduced and set to 0.1 as in [14] to increase the overall brightness.
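The transmission estimate and the final recovery step can then be sketched as follows (a minimal illustration: the dark channel of the normalized image I/A is passed in precomputed so the block stays self-contained, and ω and t0 take the values used in [14]):

```python
import numpy as np

def transmission(dark_norm, omega=0.95):
    """Equation (6): t = 1 - omega * dark channel of I/A."""
    return 1.0 - omega * dark_norm

def recover(I, t, A, t0=0.1):
    """Equation (7): J = (I - A)/max(t, t0) + A; clamping t at t0
    avoids over-amplification where the estimated transmission is tiny."""
    t = np.maximum(t, t0)
    return (I - A) / t[..., None] + A
```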
4. The Proposed Method
In this paper, a new method is proposed to obtain the dark channels by adaptively fusing two typical channels calculated with a double-scale window, thereby avoiding further refinement of the transmission map. In addition, a maximum filtering on the dark channels is proposed to improve the accuracy of the dark channels near object boundaries and the overall brightness of the defogged clear images. Meanwhile, the location information and brightness information of the foggy image are weighted to obtain a more accurate atmospheric light.
4.1. Improving the Accuracy of DCP near Object Boundaries by Cascaded Min-Max
The range of t(x) observed from equation (6) is (0, 1), so it is easy to obtain inequality (8), J(x) ≤ I(x), by introducing t(x) into equation (7). Inequality (8) states that the defogged clear image J(x) restored by the degradation model is darker than the foggy image I(x), which we call problem 1.
Recently, most algorithms directly stretch the brightness of the restored defogged clear images to solve problem 1, which makes the defogged clear images unnatural.
Because of the inherent limitation of window region operations, necessary edge information is lost during the calculation of the dark channels, leading to obvious halo artifacts, which we call problem 2. To solve problem 2, various edge-preserving smoothing filters have been proposed. However, these filters either use complex nonlinear operations or introduce additional variable parameters.
For problem 1, equation (7) gives the relationship between the brightness of the defogged clear image J(x) and the transmission map t(x): the smaller t(x) is, the greater the difference between J(x) and I(x). Considering that t(x) directly comes from the dark channels, we execute the max operation on the dark channels, as shown in equation (9), to indirectly reduce this difference. Because maximizing the dark channels is equivalent to raising the dark channels as a whole, it does not violate the DCP.
For problem 2, in addition to filtering with edge-preserving smoothing filters, simple preprocessing of the dark channels can be used. It is well known that pixels around objects usually have obvious intensity gradient differences. When a fixed window region is used, the pixel intensity on the darker side will be taken as the dark channel, blurring the edge information; moreover, the extent of the blur is directly affected by the size of the window region. According to this analysis, the following two ideas can be used to solve problem 2: (1) adopt a variable window region to calculate the dark channels, that is, use a smaller window region around object boundaries to keep the edge information and a larger window region in nonedge regions to obtain more accurate dark channels; (2) expand the edge width to reach or even exceed the size of the window region, so as to reduce the halo artifacts.
Note that the cascaded min-max operation is equivalent to the morphological opening, which can be used to expand the edge width of the image. Therefore, the second idea mentioned above can address problem 1 and problem 2 in a unified way.
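Assuming a square window, the cascaded min-max dark channel can be sketched with SciPy's rank filters (a hedged sketch; applying the minimum filter followed by a maximum filter with the same window is exactly a grey-scale opening):

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_channel_min_max(img, radius=7):
    """Cascaded min-max: erosion followed by dilation with the same
    (2*radius+1) window, i.e., a grey-scale opening. Relative to the
    erosion alone, this lifts the dark channel and restores object
    shapes that the min filter had spread across edges."""
    size = 2 * radius + 1
    eroded = minimum_filter(img.min(axis=2), size=size)
    return maximum_filter(eroded, size=size)
```

By the standard opening properties, the result sits between the plain eroded dark channel and the per-pixel RGB minimum, which is what makes the operation safe with respect to the prior.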
4.2. Obtaining the Dark Channels via Adaptive Fusion
According to the DCP, the larger the window region is, the more likely a true dark pixel is to exist within it. In this case, the calculated dark channels are more accurate and the corresponding defogged clear image is more natural, but the image structure information is reduced. On the contrary, the smaller the window region is, the brighter the calculated dark channels are, which leads to over-recovery but preserves more texture information. Because most previous algorithms use a fixed window region to calculate the dark channels, the defogging effect and the structure information cannot be balanced at the same time. The cascaded min-max operation on the dark channels can only alleviate this problem slightly, because the appropriate window region is a variable parameter that should be determined by the image characteristics, and the cascaded min-max operation cannot cover all edge information. To solve this problem, we use two typical window regions to calculate the dark channels separately and then fuse the two dark channels with the Logistic Function, exploiting its smoothing and squeezing properties. This improves the accuracy of the dark channels while retaining the necessary edge information.
Here, the dark channel is modeled as a linear combination of value information and structure information: a weighted sum of two dark channels, one with higher value accuracy and one with more structure information, whose weights express the proportions of the two parts. According to the relationship analyzed above between the window region size and the defogging effect, we calculate the structure dark channel via the structure window with a small size and the value dark channel via the value window with a large size. The larger the difference between the two dark channels is, the more accurate the value dark channel is, and the larger its weight should be. On the other hand, the larger the difference between the local mean square deviations of the two dark channels (which can be used to represent the image structure information) is, the more structure information there is, and the larger the weight of the structure dark channel should be. Lastly, the Logistic Function, which squeezes its input to (0, 1), is used to calculate the two weights, preventing obvious discontinuities during the linear weighting process. To ensure the two weights are on the same scale, we use the Logistic Function again to compress them to (0, 1); meanwhile, we introduce a scale parameter to avoid excessive “squeezing.” Finally, the local mean square deviations of the two dark channels are calculated over window regions with the corresponding radii.
The proposed method makes full use of the smoothing and “squeezing” characteristics of the Logistic Function, which not only retains the more reasonable dark channels but also avoids obvious discontinuities during the fusion process.
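The adaptive fusion idea can be sketched as follows. This is a schematic only: the paper's exact weight equations are not reproduced here, and the way the value difference and the local mean square deviation difference are combined inside the logistic squeeze (and the parameter k) is our illustrative choice, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def logistic(x, k=1.0):
    """Logistic squeeze of x into (0, 1); k tempers the squeezing."""
    return 1.0 / (1.0 + np.exp(-k * x))

def local_std(img, radius):
    """Local mean square deviation over a (2*radius+1) window."""
    size = 2 * radius + 1
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_dark_channels(d_value, d_struct, r_value=21, r_struct=5, k=2.0):
    """Schematic adaptive fusion: the logistic-squeezed combination of
    the value difference and the local-std difference drives a smooth
    blend of the two dark channels (illustrative weights)."""
    diff = (d_value - d_struct) + (local_std(d_struct, r_struct)
                                   - local_std(d_value, r_value))
    w = logistic(diff, k)                       # smooth weight in (0, 1)
    return w * d_value + (1.0 - w) * d_struct   # convex combination
```

Because the weight stays strictly inside (0, 1), the fused dark channel is always a pointwise convex combination of the two inputs, which is what prevents visible discontinuities.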
4.3. Estimating Atmospheric Light with Intensities and Locations
The method used to estimate the atmospheric light described in Section 3.2 fails to take into account the location of the sky area. So, when the nonsky region contains large white (bright) objects, an inaccurate atmospheric light will be calculated, which leads to over-recovery of the foggy image.
In general, the probability that the sky area lies near the upper side of the image is higher. So, before sorting the dark channels, we use the row location to weight the dark channels, as described in assignment operator (17): the dark channels of each row are weighted according to the row index of the current pixel relative to the height of the image, with rows nearer the top receiving larger weights. Considering that the variation gradient of the weight values is much larger than that of the dark channels, for balancing the intensities and locations of the pixels, we use the Logistic Function to “squeeze” the weight values. At the same time, we introduce a variable parameter to avoid over-compression. Therefore, assignment operator (17) can be optimized to assignment operator (18).
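The row-location weighting can be sketched as follows (a schematic, not the paper's exact assignment operators: the normalization of the row index and the parameter k are our illustrative choices):

```python
import numpy as np

def location_weighted_dark(dark, k=2.0):
    """Weight each dark-channel row by its vertical position: rows
    nearer the top of the image (more likely sky) get larger weights.
    The logistic squeeze keeps the weight gradient comparable to the
    dark channel's; k guards against over-compression."""
    H = dark.shape[0]
    rows = np.arange(H) / max(H - 1, 1)          # 0 at the top, 1 at the bottom
    w = 1.0 / (1.0 + np.exp(-k * (1.0 - rows)))  # logistic, larger near the top
    return dark * w[:, None]
```

Sorting this weighted dark channel (instead of the raw one) before extracting the brightest pixels biases the atmospheric light estimate toward the upper, sky-like part of the image.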
As shown in Figure 1, the comparison results of the estimated sky regions are given before and after using the optimized algorithm. Before the optimization, the white swans are mistakenly taken as the sky area when estimating the atmospheric light, so an overly large atmospheric light is obtained. As a result, the defogged clear images become over-dark as a whole, with oversaturation in the nonsky regions containing large white objects. In contrast, the optimized algorithm comprehensively considers the intensities and locations of the sky area to alleviate this problem.

5. Results and Discussion
This section presents qualitative and quantitative comparisons of the proposed method with classic defogging works [13, 14, 22, 37] on a set of well-known benchmark real-world fog images (characterized by large depth, large depth with sharp edges, and large sky regions, respectively) and on synthetic images where the ground truth solutions are known. All algorithms are implemented in the Anaconda Navigator 1.9.12 environment on a ThinkPad T460 (Core i7, 8 GB RAM, 250 GB SSD) PC. The main parameters of the proposed method are as follows. The radii of the structure window and the value window are 21 and 5, respectively. Correspondingly, the radii of the window regions for calculating the local mean square deviation of the dark channels are also 21 and 5. The two variable parameters are set to 2 and 15, respectively. The main parameters of the other algorithms used for comparison take the optimal values set in the corresponding literature.
5.1. Qualitative Comparison
5.1.1. Qualitative Comparison on Real-World Images
Figure 2 shows the qualitative comparison of the defogged results of images with large scene depth under different defogging algorithms. It can be seen that all of these methods can restore the foggy images to a certain extent. The defogged images restored by Thanh et al.'s method show obvious color drift, such as the houses in the first foggy image, the sky area of the second image, and the white cloud in the third image. Tarel et al.'s algorithm recovers distant objects better, but there is a serious distortion problem in regions with more texture, which can be clearly seen in the area where the sky and mountains meet in the third foggy image and in the houses area of the second foggy image; in addition, the restored images become dark as a whole. Fattal et al.'s algorithm obtains better defogged images but shows excessive recovery in large white areas. Compared with the former three methods, He et al.'s algorithm and ours obtain more reasonable defogged images, but the images restored by He et al.'s algorithm suffer from oversaturation in the sky region. Instead, our algorithm retains the color characteristics and alleviates the oversaturation; the defogged images are more natural and vivid.

To verify that the proposed algorithm can effectively improve the accuracy of the dark channels around edges and eliminate halo artifacts, we select three benchmark images with large depth and sharp edges for experimental analysis (as shown in Figure 3); to observe the experimental results more clearly, we enlarge the main edge areas of the images. It is observed that Tarel et al.'s method does not effectively remove the fog at the edges of the foggy images. On the contrary, Fattal et al.'s method performs excessive defogging around the edges, resulting in the loss of some of the edge information. Thanh et al.'s method can effectively preserve the edge information while achieving the defogging purpose, but because of the serious color drift, a lot of texture information around the edges is lost. He et al.'s algorithm and ours obtain visually acceptable defogged images in which the edge information is retained, though it should be pointed out that this comes at the cost of a reduced defogging effect at the edges.

Because the DCP is invalid in sky (bright) areas and in nonsky regions with large white objects, the corresponding regions in the defogged images are oversaturated and the defogged images are over-dark overall. The images shown in Figure 4(a) contain either large white objects or sky areas. From Figure 4(e), it is evident that Tarel et al.'s results are ideal on the whole; however, because they lack a certain sense of hierarchy, they look unnatural (see the duck image and the mountain range image). In addition, some halo artifacts exist in regions such as the sky area. Fattal et al.'s results have the problem of over-brightness and lose a lot of texture information, especially in the large white areas (see the red border area in Figure 4(d)). Similarly, from Figure 4(b), we note that Thanh et al.'s method stretches the contrast of the foggy image excessively, which removes fog to a great extent but also causes serious color drift. Compared with Fattal et al.'s, Thanh et al.'s, and Tarel et al.'s results, He et al.'s results show an obvious and more natural defogging effect, but their method reduces the overall brightness of the images (see the duck image and the red border area in the mountain range image in Figure 4(b)) and presents some halo artifacts in the large white sky area, such as the red border area in the mountain image in Figure 4(b). Fortunately, as shown in Figure 4(f), the results of our method keep the ideal defogging effect of He et al. while, to a certain extent, increasing the image brightness and alleviating the halo artifacts in the large white areas. It should be noted that the proposed method is still based on the DCP, so it cannot fundamentally eliminate the invalidity of the DCP in some special areas, but we have alleviated these problems to a great extent.

5.1.2. Qualitative Comparison on Synthetic Images
Since the comparison of defogging methods on real images is neither straightforward nor objective, we also test the five algorithms, including the proposed one, on stereo images for which the ground truth images are known.
Figure 5(a) shows the foggy images, which are synthesized from haze-free images with known depth maps [51]. The results of the five algorithms are shown in Figures 5(b)–5(f), and Figure 5(g) gives the ground truth images for comparison. These fog-free images and their corresponding ground truth depth maps are taken from the Middlebury stereo datasets [51–55]. It is obvious that Thanh et al.'s results are quite different from the ground truth images: they are much darker, and many color distortions exist across the three images in Figure 5(b). Compared with Thanh et al.'s results, the brightness of Fattal et al.'s results is excessively increased, which results in a serious loss of texture information in the bright regions of the original images (see the sky areas of the park image and the building image and the road area of the bus stop image in Figure 5(d)). By observing the images in Figure 5(e), we can find that Tarel et al.'s results lack a sense of hierarchy, so they look unnatural. He et al.'s results are more similar to the ground truth images but still show some inaccuracies (see Figure 5(c)); note that the sky areas in the park image and the building image are bluer than they should be. In contrast, our results do not suffer from oversaturation and maintain the original colors of the objects (see Figure 5(f)).

According to the above comparative analysis, we can conclude that the defogging effect achieved by our algorithm matches or even exceeds that of the existing classic algorithms, which proves its effectiveness.
5.2. Quantitative Comparison
To make the experimental results more convincing and objective, we compare and analyze them from two aspects: image quality and the real-time performance of defogging.
5.2.1. Quantitative Comparison on Real-World Images
Because real-world images lack ground truth, we must use blind/referenceless image quality assessment metrics. NIQE [5] is a well-known metric proposed by Mittal et al., based on the principle that natural images obey certain regular statistical properties. It compares a given image with a default model computed from images of natural scenes, so it is most effective on natural (real) images. The NIQE index is computed in three steps:
(1) Building a natural scene statistics model from a corpus of natural images
(2) Extracting statistical features from the image to be assessed
(3) Mapping the extracted features to a quality score called the NIQE index
The smaller the NIQE value, the better the perceptual quality. Note that perceptual quality may not coincide with quality as assessed by human vision.
Figure 6 shows the average NIQE score comparison for all defogging methods on the selected images. As can be seen, the NIQE score of the proposed method is the lowest, followed by Fattal et al.'s method. Although the difference in the average NIQE scores of our method and Fattal et al.'s is small, the differences in the visual defogging results are considerable.

5.2.2. Quantitative Comparison on Synthetic Images
Different from real-world images, synthetic images have clear reference images. For the objective evaluation of image defogging, the mean square error (MSE) and structural similarity (SSIM) [43, 56] are employed. SSIM compares the restored image and the reference clear image in three aspects: intensity, contrast, and structure. The larger the SSIM, the higher the similarity and the more structure information retained. MSE reflects the mean square error between the two images; the lower the MSE, the higher the similarity.
To facilitate presentation, we name the images in Figure 5 from top to bottom as yellow duck, bus stop, park, and building; the final results are shown in Figure 7. It should be noted that MSE is calculated with the norm function provided by NumPy, and SSIM with the structural similarity function provided by scikit-image.
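For illustration, the two metrics can be computed as follows. This is a minimal sketch assuming equally sized 8-bit inputs; it pairs a NumPy-based MSE with a simplified single-window SSIM covering the three components (intensity, contrast, structure), rather than the windowed scikit-image implementation used in the experiments.

```python
import numpy as np

def mse(x, y):
    """Mean square error between two equally sized images."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    return np.mean((x - y) ** 2)

def global_ssim(x, y, data_range=255.0):
    """Simplified global (single-window) SSIM."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizes the intensity term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical images give an MSE of 0 and an SSIM of 1; SSIM decreases and MSE increases as the restored image departs from the reference.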

It can be seen that since He et al.'s guided filter uses a linear transformation of the input image to approximate the coarse transmission map, its refined transmission map retains many fine details. Hence, the defogged images lack contrast and yield a lower SSIM and a higher MSE. On the other hand, the proposed method considers both the accuracy and the structure of the dark channels and can handle sharp depth changes to preserve object edges; as a result, it produces a higher SSIM and a lower MSE than He et al.'s method. In addition, Tarel et al.'s method uses a median filter to achieve smoothing with edge preservation when estimating the atmospheric veil, so its results rank third in SSIM; at the same time, it inevitably loses texture information, which leaves its MSE behind Fattal et al.'s method. Thanh et al.'s method takes no account of the image degradation model, so its results achieve the lowest SSIM and the highest MSE. It can thus be seen that the proposed method achieves a remarkable defogging effect while maintaining similarity to the original image.
5.2.3. Computational Complexity
To verify the speed advantage of the proposed method, images of various sizes were tested. To ensure a fair comparison, all programs of the compared methods were run in the same Python environment, and each defogging run was repeated five times to obtain the average time. Table 1 and Figure 8 show, respectively, the running times of the different methods and how the running time grows with image size. From Table 1, it is evident that He et al.'s method has the highest operational efficiency on a single image, mainly because it replaces complex soft matting with the linear guided filter. In contrast, Thanh et al.'s method is quite slow; moreover, as image size increases, the computational cost of Fattal et al.'s, Tarel et al.'s, and Thanh et al.'s methods rises rapidly. Note that the symbol "‐‐" in Table 1 stands for "out of memory." Compared with these three methods, the complexity of the proposed method is O(nm), where m is the number of pixels in the window region and n is the total number of pixels in the whole image. Since m is much smaller than n, it can be regarded as a constant, so the proposed method has approximately linear complexity and can guarantee real-time performance on relatively small images. However, the proposed method is about two times slower than He et al.'s, because two kinds of dark channels are calculated and used. In addition, neither our method nor He et al.'s can guarantee real-time performance on large images, so further optimization is needed.
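The timing protocol described above (five runs per image, averaged) can be sketched as follows; `defog` stands in for any of the compared methods and is a hypothetical placeholder here.

```python
import time

def average_runtime(defog, image, runs=5):
    """Run `defog` on `image` `runs` times and return the mean wall-clock time."""
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        defog(image)  # result is discarded; only the timing matters
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic clock with the highest available resolution, which matters when individual runs are short.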

6. Conclusions and Future Work
After a deep analysis of the DCP-based defogging algorithm, this paper first points out its main drawbacks: artificial halos caused by operating in a fixed window region, the invalidity of the DCP in sky (bright) areas and in nonsky areas containing white objects, and over-dark restored images. The causes are then discussed in detail. Finally, three optimizations are applied to the dark channels: a cascaded min-max filter, an adaptive weighting algorithm based on the characteristics of the Logistic Function, and an atmospheric light estimation method that incorporates location information. The proposed method matches or even exceeds existing classic algorithms.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the Key Research and Development Program of Anhui Province of China (T04a05020094, 904a05020091, 04a05020091, 04a05020092, and 04a05020093), the Scientific Research Project of Chaohu University (XLY-201611), the Teaching and Research Project of Chaohu University (ch18jxyj18), and the Applied Curriculum Project of Chaohu University (ch18yygc13).