Abstract

During rain, the performance of outdoor vision systems degrades considerably owing to the visibility loss, distortion, and blurring caused by raindrops. It is therefore essential to remove rain from rainy images to ensure the reliability of outdoor vision systems, and several rain removal studies have been performed in recent years. In this view, this paper presents a new Faster Region Convolutional Neural Network (Faster RCNN) with Optimal Densely Connected Networks (DenseNet)-based rain removal technique called FRCNN-ODN. The presented model applies weighted mean filtering (WMF) as a denoising technique, which helps to boost the quality of the input image. In addition, the Faster RCNN technique, comprising a region proposal network (RPN) and the Fast RCNN model, is used for rain detection. The RPN generates high quality region proposals that are exploited by the Faster RCNN to detect raindrops. Also, the DenseNet model is utilized as a baseline network to generate the feature map. Moreover, the sparrow search optimization algorithm (SSOA) is applied to choose the hyperparameters of the DenseNet model, namely, learning rate, batch size, momentum, and weight decay. An extensive experimental validation process is performed to highlight the effectiveness of the FRCNN-ODN model, and the results are investigated with respect to several dimensions. The FRCNN-ODN method produced a higher UIQI of 0.981 on the applied image 1, a maximum UIQI of 0.982 on the applied image 2, and a higher UIQI of 0.998 on the applied image 3. The simulation outcomes showcased the superiority of the FRCNN-ODN model over existing methods in terms of distinct measures.

1. Introduction

Images acquired in rainy weather frequently suffer a considerable reduction in scene visibility, which adversely affects the efficiency of several image processing applications such as video surveillance and smart vehicles. Removing raindrops from rainy images therefore becomes an essential process with several application areas [1, 2]. Numerous single-image rain removal techniques have been proposed in earlier times. Previous studies have utilized distinct filtering techniques for the image decomposition process, namely, guided filtering and L0 smoothing filtering, and afterward reconstructed the rain-free image. In recent times, several research works have investigated the physical characteristics of the rain and background layers and formulated them as distinct prior terms for removing raindrops.

The Gaussian mixture model (GMM) and discriminative sparse coding (DSC) have been employed for modeling rain layers. In addition, joint convolution analysis and synthesis sparse representation (JCAS) is a representative example that explains the rain and background layers with accurate mathematical models. These techniques are found to be effective in particular cases, but they cannot adapt effectively to rainy images with complex raindrop shapes and background scenarios. Presently, deep learning (DL) models are applied to the raindrop removal process [3]. Many of these techniques necessitate collecting massive training data with rainy/rain-free image pairs for learning the nonlinear mapping between the rainy and rain-clean images in a dedicated way. Some of the important techniques involved are the spatial attentive network (SPANet), DerainNet, multistream dense network (DID-MDN), depth attentional feature network (DAFNet), progressive image deraining network (PReNet), deep detail network (DDN), etc.

Although successful in specific contexts, existing DL techniques for removing raindrops from single images pose several drawbacks. In particular, the efficiency of these techniques principally relies on the quality and quantity of the collected training data, which comprises rainy/rain-free image pairs serving as the network input and output. The training data should cover a sufficiently broad range of raindrop shapes to match those encountered during testing. At the same time, the nonlinear mapping represented by a deep network brings a strong fitting ability, approximating the massive variety of rain structures in the training data better than classical model-based approaches. However, the complex network architecture tends to overfit the raindrop patterns in the training data and adjusts poorly to new input rainy images. A DL technique accomplishing better training performance may not accurately filter the raindrops from real-world images with complicated raindrop structures that are not available in the training data.

The contribution of the study is given here. This paper develops a new Faster Region Convolutional Neural Network (Faster RCNN) with Optimal Densely Connected Networks (DenseNet)-based rain removal technique called FRCNN-ODN. A weighted mean filtering (WMF) technique is applied to perform the denoising process. The weighted mean filter differs from the average filter in that the mean value is calculated by repeating designated pixels within a neighborhood a given number of times. A deep neural network is an artificial neural network (ANN) with numerous layers between the input and output layers; the various types of neural networks all share the same components: neurons, synapses, weights, biases, and activation functions. Given that the median filter estimates the median value of a neighborhood rather than the mean, it has a significant advantage over the mean filter: because the median is a more robust average than the mean, a single exceptionally unrepresentative pixel in a neighborhood has little effect on the median value of that neighborhood. Besides, the DenseNet model is utilized as a baseline network to generate the feature map. Furthermore, the sparrow search optimization algorithm (SSOA) is applied to choose the hyperparameters of the DenseNet model, namely, learning rate, batch size, momentum, and weight decay. The SSOA requires few considerations: it has fewer adjustment parameters, requires less programming, and is highly user-friendly, and it outperforms other techniques on many optimization tasks. The basic algorithm nevertheless suffers from drawbacks on complex optimization problems, such as low search accuracy and long convergence times. Following its introduction, the sparrow search algorithm was extensively tested and improved; for example, variants use tent mapping based on random variables to improve each sparrow's initial sequence and then refine it with tent perturbation and Gaussian mutation. To optimize search results, the SSOA imitates the way sparrows find food: the first sparrow to discover food acts as a finder, while the others follow, scan, and inform their companions, typically foraging within a quite small area. A comprehensive simulation analysis is carried out to highlight the proficient results of the FRCNN-ODN model, and the results are investigated with respect to several dimensions. In short, the contributions of the paper can be listed as follows:

(i) Present a novel rain removal technique using the FRCNN-ODN model by incorporating the features of the Faster RCNN and DenseNet models;
(ii) Apply the WMF technique as a denoising process and thereby enhance the image quality;
(iii) Present the DenseNet model as a shared network to generate the feature map;
(iv) Employ SSOA for hyperparameter optimization of the DenseNet model;
(v) Validate the results in terms of MSE, PSNR, and SSIM.

2. Related Works

Based on morphological component analysis, raindrop removal has been formulated as a signal decomposition problem [4]. The researchers then used a bilateral filter, dictionary learning, and sparse coding to acquire the low- and high-frequency components and obtain a clear output image. Besides, Wang et al. [5] employed a screen blending technique and proposed to make use of highly discriminative codes over a learnt dictionary for the sparse approximation of the rain and background layers. For representing the orientation and scaling of raindrops, Liu et al. [6] developed a GMM-oriented patch prior. Through the application of sparsity and gradient statistics of the rain and background layers, three regularization terms are defined in [7] for the progressive extraction of raindrops. Another approach uses analysis sparse representation for representing large-scale image structures and synthesis sparse representation for describing fine-scale image texture. In the meantime, a new method is developed to define the background and rain layers, correspondingly. Although these methods perform well in particular cases, they are not adequate to properly handle the distinct shapes of real-world raindrops owing to their simple and subjective assumptions.

Fu et al. [8] initially developed DerainNet for the prediction of rain-free images. To ease the training procedure, Fu et al. [9] presented the DDN technique to eliminate rain content from the high-frequency parts (HFP). Next, Paulraj [10] developed a conditional generative adversarial deraining network. Considering the complicated nature of raindrops, the researchers in [11] additionally combined a rain-density-aware classification task and realized a rain-density-aware multistream dense network. Moreover, the widely utilized rain model is reformulated in [12] to create a multitask framework that jointly learns the binary raindrop map, the raindrop appearance, and the rain-free background. To obtain high visual quality, the researchers furthermore presented a detail preservation step. Among recent techniques, by repeatedly unfolding a shallow ResNet with a recurrent layer, Liu et al. [13] proposed a simplified baseline network, PReNet. Also, a few studies have focused on modeling the rain imaging process and generating additional real-life rainy/background image sets.

The histogram of drop orientation and the GMM are presented in [14] for detecting and extracting the rain layer, correspondingly. A deraining approach based on minimizing the error between the frame and its phase congruency is established in [15] for detecting and removing raindrops. These methods perform better, although they need temporal video data. In [16], a generalized low-rank method was developed in which raindrops are considered low rank. The method was executed on a single image, and the arrangement of images is characterized by the correlation of raindrops in a spatiotemporal manner. For deraining, Wang et al. [17] applied a guided kernel on the texture element. This technique learned a mapping function between the rain images in the detail layer and the raindrops.

Chen et al. [18] developed a deraining neural network that repeatedly erases raindrops and rain accumulation. Even though it obtained better outcomes in heavy rain, it led to oversmoothing in some cases. In [19], Zhang et al. presented a generative technique in which alternate updates of the generator and discriminator are utilized to produce rain-free images. A multiscale discriminator is utilized to leverage characteristics from various scales; however, the introduced model added undesired haze to the images. In addition, a two-branch network supported by the rain model cooperatively learned the model variables introduced for removing raindrops and haze-like effects. It tuned the rain-free results to control the degree of the haze impact.

Deng et al. [20] showed that combining the high-frequency detail layer content of the image and learning the negative residual information is efficient for deraining. The method reconstructs the structure effectively; however, it is ineffective at recovering perceivable texture. Subbulakshmi and Prakash [21] projected a deep decomposition composition network (DDCN), a single-image deraining model. The decomposition section splits the rain image into clean background and rain layers, and the composition network then recreates the input by separating the rain-free image and the rain transformation.

3. The Proposed FRCNN-ODN Model

The working process of the FRCNN-ODN model for rain removal is discussed here. Initially, the input image is fed into the WMF technique to remove the noise and enhance the image quality. Then, the Faster RCNN model, which contains the RPN and the Fast RCNN model, is applied to detect the rain. Next, the DenseNet model is applied for the generation of the feature map, and the optimal hyperparameters are chosen by the SSOA. The detailed working of these processes is discussed in the subsequent subsections.

3.1. WMF-Based Denoising Technique

The median filter is commonly employed to denoise an image owing to its effective smoothing of noise with a long-tailed probability distribution and its effective preservation of image detail. However, the size of the filtering window has an essential impact on the denoising outcome of the classical median filter.

With smaller filtering windows, the details are effectively preserved but the denoising outcome is weak, whereas a large filtering window yields an improved denoising outcome with insufficient detail preserved. To address these issues, the WMF technique replaces the present pixel with the weighted median of its nearby pixels in a local window. It holds the unique features listed below.

(1) The filtering kernel is nonseparable.
(2) It cannot be approximated through interpolation or downsampling.
(3) No iterative solution exists.

So, the WMF is able to remove noise proficiently from the noisy image without strongly blurring the edges of the image. This feature makes the WMF technique work well for the rain removal process [22]. For determining the noisy pixels, a $3 \times 3$ window is slid over the image. Consider $x_c$ as the value of the center pixel of a local window $W$; then, every pixel in the window is denoted by $x_{m,n}$, $(m, n) \in W$.

The average value of the pixels in the window $W$ is computed as

$$\bar{x} = \frac{1}{|W|} \sum_{(m,n) \in W} x_{m,n}.$$

Assume $x_{\max}$ and $x_{\min}$ are the maximum and minimum pixel values in the local window $W$. Then, for the value $x_c$ of the center pixel, if $x_c = x_{\max}$, or $x_c = x_{\min}$, or $|x_c - \bar{x}| > T$, the pixel is considered a noise pixel. Here, $T$ denotes a threshold computed from the statistics of the local window.
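The following minimal Python sketch illustrates this detect-then-replace scheme. The threshold rule $T = k(x_{\max} - x_{\min})$ and the use of a plain 3 × 3 median as a stand-in for the weighted median are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter, minimum_filter, uniform_filter

def detect_noise_pixels(img, k=0.5):
    # Flag a pixel if it equals the 3x3 local max or min, or deviates from
    # the 3x3 local mean by more than T. The rule T = k * (max - min) is an
    # illustrative assumption; the paper's exact threshold is not reproduced.
    local_max = maximum_filter(img, size=3)
    local_min = minimum_filter(img, size=3)
    local_mean = uniform_filter(img.astype(float), size=3)
    T = k * (local_max - local_min)
    return (img == local_max) | (img == local_min) | (np.abs(img - local_mean) > T)

def wmf_denoise(img):
    # Replace only the flagged pixels; a plain 3x3 median stands in for the
    # weighted median in this minimal sketch.
    noisy = detect_noise_pixels(img)
    return np.where(noisy, median_filter(img, size=3), img)

rainy = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
denoised = wmf_denoise(rainy)
```

Replacing only the flagged pixels is what preserves edges: pixels that pass the test are copied through unchanged.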

3.2. Rain Detection and Removal Process

Faster RCNN is commonly used as a detector that utilizes the RPN as the region proposal approach and Fast RCNN as the detection network. It is an extended version of RCNN designed to achieve quick and accurate performance. Whereas the traditional RCNN and Fast RCNN employ Selective Search to generate object proposals, Faster RCNN makes use of a CNN-based module known as the RPN. The RPN in Faster RCNN makes use of a baseline network (DenseNet) as a feature extractor for the generation of feature maps. Afterward, the feature map is divided into several squared tiles, and a small network slides across all the tiles. The small network allocates a set of object confidence scores and bounding box coordinates at every position of the tiles. Here, DenseNet is applied to extract features and Fast RCNN to detect rain. The RPN takes the feature map as input and generates a collection of object proposals together with their object confidence scores as output. The processes involved in Faster RCNN are listed below:

(i) Firstly, the image is given as input to the DenseNet (CNN) to generate the feature map of the respective image.
(ii) Next, the RPN is applied to the feature map, offering the object proposals with an objectness score.
(iii) Afterward, the RoI pooling layer is applied to the proposals, resizing each proposal to an identical size.

Then, the proposals are fed into an FC layer comprising softmax and linear regression branches for the classification process. Finally, bounding boxes are generated for the raindrops that exist in the input image.
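As a concrete illustration, the sketch below wires a DenseNet feature extractor into a two-class (background/raindrop) Faster RCNN using torchvision's generic detection API. The DenseNet-169 variant, the anchor sizes, and the RoI output size are assumptions chosen for the example; the paper does not specify them.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# DenseNet-169 feature extractor as the shared backbone (an assumption: the
# paper names DenseNet as the baseline network but not a specific variant).
backbone = torchvision.models.densenet169(weights="DEFAULT").features
backbone.out_channels = 1664  # channel count of DenseNet-169's final feature map

# 9 anchors per location: 3 sizes x 3 aspect ratios, as described in the text.
anchor_generator = AnchorGenerator(
    sizes=((64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# RoI pooling resizes each proposal to an identical spatial size.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# Two classes: background and raindrop.
model = FasterRCNN(
    backbone,
    num_classes=2,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)
model.eval()

# A single dummy rainy image (3 x H x W, values in [0, 1]).
image = torch.rand(3, 600, 1000)
with torch.no_grad():
    detections = model([image])  # boxes, labels, scores for raindrop regions
```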

3.2.1. Region Proposal Networks (RPNs)

Here, the RPN starts by taking an image as input to the backbone CNN. The input image is first resized such that the shorter side is 600 px and the longer side does not exceed 1000 px. Therefore, the output features of the backbone network (H × W) are small compared to the input image; DenseNet is employed as the backbone network in the subsequent stages. Two consecutive points in the backbone output features correspond to two points 16 pixels apart in the input image. For every point in the output feature map, the network must learn whether an object is present at its position and estimate its size. This is performed by fixing a group of "anchors" on the input image for every position of the output feature map from the backbone network. Every anchor represents a likely object of a distinct size and aspect ratio at the respective position. In total, 9 feasible anchors, in three distinct aspect ratios and three various sizes, are placed on the input image for each point A on the output feature map.

Primarily, a 3 × 3 convolution with 512 units is applied to the backbone feature map, resulting in a 512-d feature at every location. Then, two sibling 1 × 1 convolutional layers are applied: one with 18 units for object classification and one with 36 units for bounding box regression. The 18 units of the classification branch give a result of size (H, W, 18), from which the probability of an object being present within each of the 9 anchors at every point of the feature map is obtained.
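The 18/36-channel layout falls out directly from 9 anchors × 2 objectness scores and 9 anchors × 4 box offsets. A minimal PyTorch sketch of such an RPN head might look as follows; the 1664 input channel count assumes the DenseNet-169 backbone from the earlier example.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Minimal RPN head sketch: 3x3 conv + two sibling 1x1 convs, matching
    the 18/36-unit layout described in the text (9 anchors per location)."""
    def __init__(self, in_channels: int = 1664, num_anchors: int = 9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        # 2 objectness scores per anchor -> 18 channels
        self.cls_logits = nn.Conv2d(512, num_anchors * 2, kernel_size=1)
        # 4 box offsets per anchor -> 36 channels
        self.bbox_pred = nn.Conv2d(512, num_anchors * 4, kernel_size=1)

    def forward(self, feature_map: torch.Tensor):
        t = torch.relu(self.conv(feature_map))
        return self.cls_logits(t), self.bbox_pred(t)

# Feature map of shape (N, C, H, W) from the DenseNet backbone.
scores, deltas = RPNHead()(torch.rand(1, 1664, 38, 63))
print(scores.shape, deltas.shape)  # (1, 18, 38, 63), (1, 36, 38, 63)
```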

3.2.2. Fast RCNN as Detection Network

In Fast RCNN, the final pooling layer of the CNN is swapped for an "RoI pooling" layer, whereas the last FC layer is replaced with two branches, namely, a (K + 1)-category softmax layer and a category-specific bounding box regression branch.

(i) As said earlier, the input image is fed into the CNN model to obtain a feature map of size 60 × 40 × 512. The main reason for using the RPN to generate proposals is to share the weights between the RPN and Fast RCNN.
(ii) Then, the bounding box proposals obtained from the RPN are used to extract features through the RoI pooling layer.

3.2.3. DenseNet Architecture

DenseNet is developed from the classical ResNet, which comprises identity building blocks in which an additive merge integrates the output of an earlier layer. In ResNet, the additive merge is applied to learn the residuals, namely, the errors. DenseNet instead concatenates the outcomes obtained from the earlier layers rather than employing a summation. Consider an image $x_0$ that is fed as input to a CNN with $L$ layers. Each layer performs a nonlinear transformation $H_l(\cdot)$, where $l$ represents the layer index and $H_l$ denotes a composite function. Figure 1 shows the structure of DenseNet-169.

The output of the $l$th layer is denoted as $x_l$. The FFNN connects the output of the $l$th layer as input to the $(l+1)$th layer, which generates the layer transition $x_{l+1} = H_{l+1}(x_l)$. ResNet includes a skip connection that bypasses the nonlinear transformation by the use of an identity function:

$$x_{l+1} = H_{l+1}(x_l) + x_l.$$

A major benefit of ResNet is that the gradient flows straightforwardly from the present to earlier layers through the identity function. However, the summation of the identity function and the layer output may impede the information flow, which can be improved by the use of a different connectivity pattern: in DenseNet, the layers are linked to one another without interruption. Finally, the $l$th layer receives the feature maps of all earlier layers, $x_0, \ldots, x_{l-1}$, as input:

$$x_l = H_l([x_0, x_1, \ldots, x_{l-1}]),$$

where $[x_0, x_1, \ldots, x_{l-1}]$ denotes the concatenation of the feature maps generated in layers $0, \ldots, l-1$.
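The concatenation rule above can be made concrete with a short PyTorch sketch of a dense block. The BN-ReLU-Conv composite function and the growth rate of 32 follow the standard DenseNet design and are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One composite function H_l: BN -> ReLU -> 3x3 Conv (a minimal sketch)."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class DenseBlock(nn.Module):
    """x_l = H_l([x_0, ..., x_{l-1}]): each layer consumes the concatenation
    of all preceding feature maps."""
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int = 32):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

out = DenseBlock(num_layers=4, in_channels=64)(torch.rand(1, 64, 32, 32))
print(out.shape)  # (1, 64 + 4*32, 32, 32) = (1, 192, 32, 32)
```

Because every layer sees all earlier feature maps, gradients reach early layers directly, which is the property the text contrasts with ResNet's additive shortcut.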

3.3. Parameter Optimization Using SSOA

To choose the hyperparameters of the DenseNet model, namely, learning rate, batch size, momentum, and weight decay, the SSOA is applied with the intention of increasing the detection rate.

The SSOA is a population-based optimization algorithm inspired by the foraging and antipredation behavior of sparrow colonies. During the foraging procedure, the colony is partitioned into finders and entrants. The finders hold the optimal fitness values and offer foraging regions and directions for the whole colony of sparrows, while the entrants utilize the locations of the finders to obtain food; if the colony of sparrows identifies a danger, an alarm is raised. During the search procedure, the finders with optimal fitness values are given priority in obtaining food. Since the finders search for food and direct the movement of the entire population, they search for food over a broader range than the other sparrows. At every round, the position of a finder is updated as follows:

$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot T_{\max}}\right), & R_2 < ST, \\ X_{i,j}^{t} + Q \cdot L, & R_2 \ge ST, \end{cases}$$

where $t$ denotes the present round, $i = 1, \ldots, n$ indexes the sparrows with $n$ the sparrow count, $j = 1, \ldots, d$ indexes the variable dimensions, $T_{\max}$ is the maximum round number, $X_{i,j}^{t}$ denotes the location of the $i$th sparrow in the $j$th dimension at round $t$, $\alpha \in (0, 1]$ is an arbitrary number, $R_2 \in [0, 1]$ indicates the alarm value and $ST \in [0.5, 1]$ is the safety threshold, $L$ is a $1 \times d$ matrix in which every element is 1, and $Q$ is an arbitrary number drawn from a normal distribution. When $R_2 < ST$, the foraging situation is harmless, whereas $R_2 \ge ST$ implies that some individual has encountered predators, and the sparrows are then required to fly rapidly to other, safer regions.

Based on equation (6), it is found that when $R_2 < ST$, the subsequent generation of finders moves around the present position. Equation (7) characterizes the range of variation of the finder location as $y = \exp(-x/(\alpha \cdot T_{\max}))$, where $x$ indicates the number of rounds and $y$ denotes the range of variation of the finder location.

As $x$ increases, $y$ gradually narrows within $(0, 1)$ to approximately $(0, 0.3)$. When $x$ is small, the possibility of $y$ taking a value close to 1 is higher, and as $\alpha$ increases, the distribution of the values of $y$ becomes more even. So, when $R_2 < ST$, the range of variation of every dimension of the sparrow becomes smaller. This search strategy gives the SSOA a strong local search capability; however, it also causes a tendency to settle into local optimum solutions in the final rounds.

The remaining sparrows are entrants, which observe the finders constantly [23]. Once they notice that a finder has located good food, they leave their present location and fly to the better foraging area. The position of an entrant is updated as follows:

$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{\text{worst}}^{t} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2, \\ X_{P}^{t+1} + \left|X_{i,j}^{t} - X_{P}^{t+1}\right| \cdot A^{+} \cdot L, & i \le n/2, \end{cases}$$

where $X_{P}$ refers to the location of the selected individual, that is, the present best location; $X_{\text{worst}}$ implies the present global worst location; and $A$ represents a $1 \times d$ matrix in which every element is arbitrarily assigned 1 or $-1$, with $A^{+} = A^{T}(AA^{T})^{-1}$. If $i \le n/2$, the $i$th entrant is searching near the optimal position, whereas $i > n/2$ implies that the $i$th entrant, with the worst fitness, is required to fly to other places for food. Figure 2 demonstrates the flowchart of the SSOA technique.

Within the colony, some sparrows take on scouting and early-warning duties. In general, the sparrows that are conscious of a hunter account for 15–30% of the colony. Their position update can be defined as

$$X_{i,j}^{t+1} = \begin{cases} X_{\text{best}}^{t} + \beta \cdot \left|X_{i,j}^{t} - X_{\text{best}}^{t}\right|, & f_i > f_g, \\ X_{i,j}^{t} + K \cdot \dfrac{\left|X_{i,j}^{t} - X_{\text{worst}}^{t}\right|}{(f_i - f_w) + \varepsilon}, & f_i = f_g, \end{cases}$$

where $X_{\text{best}}$ is the present global best location, $\beta$ implies the arbitrary step-length control coefficient, $K \in [-1, 1]$ is an arbitrary number, $f_i$ is the fitness of the $i$th sparrow, $f_g$ and $f_w$ are the present global best and worst fitness values, and $\varepsilon$ is a small constant that avoids division by zero. If $f_i > f_g$, the individual is at the population's periphery and is vulnerable to natural enemies. If $f_i = f_g$, the individual is in the population's center; the sparrow then needs to move close to other individuals to minimize the likelihood of being caught.
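To make the three roles concrete, the following NumPy sketch runs the finder, entrant, and scout updates over a bounded search space such as the four DenseNet hyperparameters. The population size, round count, role proportions, and the toy fitness function are illustrative assumptions; in the paper's setting, the fitness would be the validation performance of the trained DenseNet.

```python
import numpy as np

def ssoa(fitness, bounds, n=20, t_max=50, finder_ratio=0.2, scout_ratio=0.2,
         st=0.8, seed=0):
    """Minimal SSOA sketch: finders explore, entrants follow, scouts guard.
    Population size, rounds, and ratios are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    X = rng.uniform(lo, hi, size=(n, d))
    fit = np.array([fitness(x) for x in X])
    for t in range(t_max):
        order = np.argsort(fit)              # sort so the best come first
        X, fit = X[order], fit[order]
        best, worst = X[0].copy(), X[-1].copy()
        n_finders = max(1, int(finder_ratio * n))
        r2 = rng.random()                    # alarm value
        for i in range(n_finders):           # finder update
            if r2 < st:                      # safe: contract around position
                X[i] = X[i] * np.exp(-(i + 1) / ((rng.random() + 1e-12) * t_max))
            else:                            # danger: random flight (Q * L)
                X[i] = X[i] + rng.normal(size=d)
        for i in range(n_finders, n):        # entrant update
            if i > n // 2:                   # worst-off entrants fly elsewhere
                X[i] = rng.normal() * np.exp((worst - X[i]) / (i + 1) ** 2)
            else:                            # others search near the best
                A = rng.choice([-1.0, 1.0], size=d)   # simplified A+ term
                X[i] = best + np.abs(X[i] - best) * A
        scouts = rng.choice(n, max(1, int(scout_ratio * n)), replace=False)
        for i in scouts:                     # scout / early-warning update
            if fit[i] > fit[0]:              # periphery: jump toward the best
                X[i] = best + rng.normal(size=d) * np.abs(X[i] - best)
            else:                            # center: small randomized step
                k = rng.uniform(-1, 1)
                X[i] = X[i] + k * np.abs(X[i] - worst) / (fit[i] - fit[-1] + 1e-12)
        X = np.clip(X, lo, hi)
        fit = np.array([fitness(x) for x in X])
    return X[np.argmin(fit)], fit.min()

# Hypothetical search space: learning rate, batch size, momentum, weight decay.
bounds = [(1e-4, 1e-1), (8, 128), (0.5, 0.99), (1e-6, 1e-3)]
target = np.array([0.01, 64.0, 0.9, 1e-4])  # toy optimum standing in for the
best_params, best_loss = ssoa(              # validation loss of the DenseNet
    lambda v: float(np.sum(((v - target) / (target + 1e-12)) ** 2)), bounds)
print(best_params, best_loss)
```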

4. Experimental Validation

Extensive experimental validation of the FRCNN-ODN model was carried out to determine its effectual performance in terms of distinct aspects. The presented FRCNN-ODN model is tested using a set of rainy images, and some sample images are depicted in Figure 3.

Figure 4 visualizes the results of the FRCNN-ODN model with respect to rainy and rain-removed images. Figures 4(a), 4(c), and 4(e) illustrate the input rainy images, and the equivalent rain-free images generated by the FRCNN-ODN model are displayed in Figures 4(b), 4(d), and 4(f).

Table 1 provides a detailed comparative result analysis of the FRCNN-ODN with existing methods with respect to distinct measures [24, 25].
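For reference, quality measures of this kind can be computed with off-the-shelf routines. The sketch below uses scikit-image for PSNR and SSIM on a hypothetical derained/ground-truth pair; the random arrays are stand-ins for the paper's test images, and `channel_axis` assumes scikit-image 0.19 or later.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical arrays standing in for a derained output and its ground truth,
# with float values in [0, 1] (hence data_range=1.0).
derained = np.random.rand(256, 256, 3)
ground_truth = np.random.rand(256, 256, 3)

psnr = peak_signal_noise_ratio(ground_truth, derained, data_range=1.0)
ssim = structural_similarity(ground_truth, derained, channel_axis=-1,
                             data_range=1.0)
print(f"PSNR: {psnr:.3f} dB, SSIM: {ssim:.3f}")
```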

Figure 5 demonstrates the PSNR analysis of the FRCNN-ODN with existing methods on the different sets of images 1–3. The figure depicts that the FRCNN-ODN model has obtained the maximum PSNR value on all the applied images 1–3. For instance, on the applied image 1, the FRCNN-ODN model has resulted in an improved PSNR of 32.899 dB whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced PSNR of 22.409 dB, 22.302 dB, 22.227 dB, 22.763 dB, 18.045 dB, 23.325 dB, 27.851 dB, and 30.276 dB, respectively [26–29]. Similarly, on the applied image 2, the FRCNN-ODN model has obtained a maximum PSNR of 34.853 dB whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced PSNR of 25.294 dB, 22.219 dB, 23.034 dB, 20.626 dB, 19.215 dB, 23.781 dB, 31.231 dB, and 33.750 dB, respectively. Simultaneously, on the applied image 3, the FRCNN-ODN model has accomplished a higher PSNR of 34.860 dB whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced PSNR of 22.359 dB, 22.519 dB, 22.393 dB, 22.382 dB, 18.793 dB, 24.013 dB, 28.031 dB, and 33.451 dB, respectively [30–32].

Figure 6 showcases the SSIM analysis of the FRCNN-ODN with existing techniques on the distinct sets of images 1–3. The figure exhibits that the FRCNN-ODN model has reached the superior SSIM value on all the applied images 1–3. For instance, on the applied image 1, the FRCNN-ODN model has resulted in an improved SSIM of 0.974 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have shown lower SSIM of 0.889, 0.863, 0.889, 0.909, 0.836, 0.902, 0.957, and 0.963, respectively [33–35]. Likewise, on the applied image 2, the FRCNN-ODN model has obtained a maximum SSIM of 0.971 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced SSIM of 0.851, 0.598, 0.780, 0.641, 0.648, 0.674, 0.957, and 0.962, correspondingly. At the same time, on the applied image 3, the FRCNN-ODN model has accomplished a higher SSIM of 0.973 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have showcased minimal SSIM of 0.865, 0.846, 0.894, 0.913, 0.872, 0.902, 0.966, and 0.964, correspondingly [36–38].

Figure 7 illustrates the FSIM analysis of the FRCNN-ODN with existing models on the different sets of images 1–3. The figure portrays that the FRCNN-ODN approach has obtained the highest FSIM value on all the applied images 1–3.

For instance, on the applied image 1, the FRCNN-ODN algorithm has resulted in an increased FSIM of 0.972 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced FSIM of 0.927, 0.926, 0.919, 0.933, 0.911, 0.921, 0.957, and 0.963, respectively. In line with this, on the applied image 2, the FRCNN-ODN model has attained a maximum FSIM of 0.975 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN methods have demonstrated reduced FSIM of 0.924, 0.835, 0.872, 0.824, 0.851, 0.882, 0.970, and 0.962, correspondingly. Followed by this, on the applied image 3, the FRCNN-ODN model has accomplished a higher FSIM of 0.982 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have shown lower FSIM of 0.813, 0.904, 0.806, 0.784, 0.757, 0.846, 0.930, and 0.978, respectively.

Figure 8 displays the UIQI analysis of the FRCNN-ODN with existing methods on the various sets of images 1–3. The figure showcases that the FRCNN-ODN approach has achieved the maximum UIQI value on all the applied images 1–3. For instance, on the applied image 1, the FRCNN-ODN method has resulted in a superior UIQI of 0.981 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN methodologies have showcased reduced UIQI of 0.967, 0.953, 0.911, 0.911, 0.830, 0.875, 0.978, and 0.979, respectively. Also, on the applied image 2, the FRCNN-ODN model has achieved a maximum UIQI of 0.982 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN techniques have demonstrated reduced UIQI of 0.813, 0.904, 0.806, 0.784, 0.757, 0.846, 0.930, and 0.978, correspondingly. Besides, on the applied image 3, the FRCNN-ODN algorithm has accomplished a higher UIQI of 0.998 whereas the DID, DSC, LP, UGSM, TIP, CVPR, KGCNN, and DDCN models have demonstrated reduced UIQI of 0.975, 0.992, 0.987, 0.985, 0.969, 0.994, 0.997, and 0.997, correspondingly.

5. Conclusion

This paper has presented a novel rain removal technique using the FRCNN-ODN model by incorporating the features of the Faster RCNN and DenseNet models. Initially, the input image is fed into the WMF technique to remove the noise and enhance the image quality. Then, the Faster RCNN model, which contains the RPN and the Fast RCNN model, is applied to detect the rain. The RPN generates high quality region proposals that are exploited by the Faster RCNN model to detect raindrops. Next, the DenseNet model is applied for the generation of the feature map. Finally, to choose the hyperparameters of the DenseNet model, namely, learning rate, batch size, momentum, and weight decay, the SSOA is applied with the intention of increasing the detection rate. A comprehensive simulation analysis is carried out to highlight the proficient results of the FRCNN-ODN model, and the results are investigated with respect to several dimensions. The obtained simulation values ensured that the FRCNN-ODN model surpasses the existing methods in terms of MSE, PSNR, and SSIM. The FRCNN-ODN approach produced an improved UIQI of 0.981 for the applied image 1, a maximum UIQI of 0.982 on the applied image 2, and a higher UIQI of 0.998 on the applied image 3. In the future, the presented FRCNN-ODN model can be extended with haze removal techniques, since light scattering through haze particles degrades the visual quality of an image and dehazing algorithms are employed to improve this quality.

Data Availability

The manuscript contains all of the data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.