Abstract
Aiming at the problems of insufficient image contrast in three-dimensional reconstruction of UAV in low illumination environment and the unstable iteration times of the RANSAC algorithm in the feature matching process, real-time matching method of UAV aerial image is proposed. First, a new image enhancement algorithm is applied to the image to enhance its quality and visibility. Second, the enhanced fast algorithm in ORB extracts the feature points from the preprocessed image, and cross-matching performs rough matching. Finally, the PROSAC algorithm solves the homography matrix by selecting the highest quality interior points from the extracted feature points. To improve the matching accuracy, some exterior points that do not conform to the geometric characteristics of the image are removed based on the homography matrix and the set mismatch threshold. The results show that the improved ORB algorithm is applied to the low illumination environment of UAV aerial photography, the image matching accuracy in 3D reconstruction is improved, and the correct matching rate tends to 97.24~99.39%. The relevant research findings and conclusions provide a fast and effective method for UAV image matching in different low illumination environments.
1. Introduction
Using the images taken by UAV to restore the three-dimensional information of the scene has always been a hot issue in the image field. The image contains a variety of information, so the most basic link to restore the real 3D scene from the image taken by UAV is image matching. At present, image matching is currently used extensively in target tracking [1–4], 3D reconstruction [5–8], visual SLAM [9–11], UAV obstacle avoidance navigation [12, 13], and land surveying and mapping [14]. However, one of the common situations in real life is in a low-light environment photographed by UAV. This type of low illumination image cannot obtain the true color information and texture details of the object, resulting in a degraded image with low image quality, color distortion, signal-to-noise ratio, and contrast, resulting in a poor image matching effect, and cannot be applied to real-time 3D map construction of UAV, image matching, and subsequent 3D mapping [15]. As a result, the enhancement of low illumination images taken by UAV has important research significance for image matching.
Many researchers have proposed image enhancement algorithms to improve image quality in a low illumination environment to address the low visibility of low illumination images. Liao et al. [16] proposed a fusion algorithm of fast guided filtering, single-scale Retinex, and multichannel color-preserving enhancement. The algorithm converts the low illumination image into YUV color space and estimates the Y component using the acceleration guided filter and finally converts the color space between YUV and low illumination image, enhances the feedback of UV color component, and effectively processes the details and color of low illumination image. Lore et al. [17] verified that the stacked sparse noise reduction auto-encoder trained by synthetic data could enhance and denoise low illumination noisy images based on LLNet and S-LLNet network structures. Xu and Jung [18] proposed a brightness adaptive image enhancement algorithm based on Retinex theory, which decomposes the image into illumination and reflection layers and uses brightness adaptive processing to deal with the illumination layer that leads to loss of detail. First, the adaptive Gaussian algorithm is used to obtain the illumination layer filter used to remove the halo defect. The illumination of the illumination layer is then removed using the brightness adaptive Retinex (MSR) process while the details are preserved. Finally, the image’s contrast is enhanced. Fu et al. [19] improved the method of Xu et al. First, adaptive smoothing is used to estimate the illumination component in the image, and brightness adaptation is used to obtain the brightness difference (JND). The illumination attenuation factor extracted from JND is then calculated, and MSR enhancement details are carried out on this basis. Finally, an adaptive gamma correction with weighted distribution is introduced to perform contrast enhancement. Wei et al. [20] proposed a structure-aware total variation constraint for depth image decomposition based on the Retinex model, generated an enhancement network in low light, and trained the synthesis network end-to-end to achieve a good enhancement effect. Gu et al. [21] proposed a method for image enhancement at low light levels, which inputs the image into the scene according to the brightness similarity and enhances the image by using the transmission estimation strategy of the scene and the total variational smoothing method. Although the above methods improve the image quality, there is still serious distortion when processing the real night image taken by the noisy UAV. As a result, the accuracy of image matching decreases, and the 3D information of image scene cannot be recovered in real time. Hossein-Nejad et al. [22] proposed an adaptive RKEM algorithm that considers the type and distortion of the image while adjusting the threshold to solve the problem of low matching accuracy of low illumination images. By determining the corresponding adaptive threshold for each type of image, the complexity of calculating redundant key points is eliminated, and the matching performance of the algorithm is improved. Jiang et al. [23] to simplify the initial match pairs, the AVT method is integrated with previous MST-Expansion algorithm, which is used to extract a match graph by analysing the image topological connection network. For match pair selection of oblique UAV images, the proposed method is an efficient solution. However, when the UAV is in the night environment, the image recognition is fuzzy, resulting in the incomplete construction of 3D map. Zhu et al. [24] proposed an improved method combining Harris corner detection, accelerated robust feature (SURF) feature detection (Harris + SURF) algorithm, and image enhancement algorithm to address the instability of image matching caused by the low quality and efficiency of images collected by illumination and complex terrain. The results show that the number of feature points increases, the original image is retained, and the image quality and matching accuracy are improved. However, the above image enhancement methods combined with the feature matching algorithm have a slow execution time and cannot meet the real-time requirements of image matching in UAV aerial photography restoration and 3D reconstruction.
To address the poor matching accuracy of low illumination images taken by UAV, a real-time matching method of low illumination images taken by UAV is proposed, which not only meets the requirements of real-time and efficiency but also improves the quality of low illumination images and provides a feasible method for restoring three-dimensional scenes. The work is as follows: (1)A novel adaptive weighted multiscale Retinex image enhancement algorithm is proposed to improve the quality of images taken in low illumination captured by UAV. The algorithm introduces an amplitude compensation mechanism, calculates the adaptive weight using both optimal and classical scale parameters, resolves the problems of data truncation and the poor global filtering effect caused by logarithmic change, and realizes the effect of image enhancement(2)Because the traditional ORB and RANSAC [25] algorithms are used to match image features, the RANSAC algorithm extracts a high proportion of interior points from the extracted feature point set and solves the optimal estimation model quickly. However, the number of iterations is unstable, and the efficiency of the RANSAC algorithm is low. Therefore, an improved asymptotic uniform sampling algorithm (PROSAC) is proposed based on the RANSAC algorithm. The PROSAC [26] algorithm performs semirandom sampling from the incremental optimal set compared with the RANSAC algorithm. It removes some external points that do not conform to the geometric features of the image, which saves the amount of calculation and improves the execution speed
The implementation steps of this paper are as follows: first, an improved fast detector is used to detect the corners of the image using the enhanced UAV aerial image. Then, a cross-matching algorithm is used to eliminate the feature points with excellent quality. Finally, the PROSAC algorithm filters the mismatch between feature points to improve the proportion of interior points. PROSAC algorithm ensures the correct matching rate and effectively improves the sampling efficiency and reduces the operation time. The results show that the real-time fast image matching method in UAV aerial photography low illumination environment has high execution efficiency and good robustness. Thus, combining the image enhancement algorithm and ORB algorithm applied to UAV front-end processing can improve the quality and matching accuracy of low illumination images.
2. UAV Aerial Photography Matching Algorithm
First, based on the theoretical basis of the Retinex algorithm, an improved MSR algorithm is proposed, which combines the adaptive weight MSR image enhancement algorithm with the ORB algorithm. Then, the weights are calculated according to the scale parameters of different low illumination images taken by UAV for enhancement. Second, the ORB algorithm is used to extract the enhanced image features, and the PROSAC algorithm is used to filter out mismatches and minimize redundant feature calculation. Finally, the computational efficiency of the algorithm is increased. Figure 1 illustrates the algorithm’s structure.

2.1. Traditional Retinex Algorithm
The traditional Retinex [27] algorithm models are classified as single scale (SSR) [28], multiscale (MSR) [29], and color restoration (MSRCR) [30]. The three models represent a unified process for solving the incident and reflection components using logarithmic operation on an image, reducing calculation and processing. Then, its calculation expression is
where denotes the enhanced output result. Gaussian surround filter is used in the three model algorithms to improve the execution efficiency of the algorithm. The corresponding SSR algorithm is
The multiscale MSR algorithm is to the weighted sum of the results through multiple-scale parameters, and its expression is
where is the th channel of the image, is the convolution operator, is the MSR output of the th channel, and th represents the three channels of R, G, and B colors, and and are the weights corresponding to the scale parameter and Gaussian surround filter, respectively. The calculation formula is
Among them, .
Where is the normalization factor, is the image detail and color information under different scale parameters. For the enhancement effect of different scale parameters, three scale parameters are generally selected: , , and .
While the MSR algorithm combines the advantages of various scales and significantly improves image detail and contrast, it does not address image color distortion in some images. Therefore, MSRCR is improved using MSR, and a color restoration factor is introduced to correct the color dilution of the enhanced image. The mathematical expression is among φ is a mapping function.
Although MSRCR is better than MSR in image color enhancement, it is still poor in color restoration due to many variable parameters.
2.2. Improved Multiscale Retinex Algorithm
When the traditional Retinex algorithm enhances a low illumination image, it adds a nonzero positive number to avoid the parameter of the logarithmic function being zero, preventing the direct calculation of pixels with zero-pixel values in the image and avoid the loss of feature information. However, data mutation occurs in the final stretch data stage, resulting in image distortion. The improved MSR algorithm modifies the traditional logarithmic function first by introducing the variable parameter compensation factor, and then, the corresponding change calculation formula is
In a certain range, increasing the tends to improve the image processing effect. When it continues to increase, the enhancement effect remains relatively constant.
The image is divided into the flat region and nonflat regions based on local characteristics, and the flatness index used to distinguish different regions is
where is a local area, is the standard deviation of the region, and control the flatness index coefficient range, and and Ψs satisfy the inverse relation.
For enhancing the image details of the nonflat region of an image, the image is divided into subblocks , and the optimal scale parameters of the Gaussian surround filter are calculated according to the flatness index in each subblock. The corresponding calculation formula is as follows: among
The obtained optimal scale parameters have a range of [15, 250], and the weight is determined by the adaptation of the optimal scale parameters corresponding to the image details. The following is the calculation formula:
Among them are . The greater the weight, the better the scale parameter; the other two scale parameters are image detail and contrast. Then, the MSR algorithm of adaptive weight can be expressed as follows:
where represents the enhancement effect of different color channels, is the total number of image blocks, is the number of scale parameters, and is the weight of the corresponding scale parameter. The greater the weight, the greater the enhancement effect when the scale parameters are closer to the optimal parameters of the subblock.
Finally, avoid overenhancement in the brightness area, resulting in color distortion and blurred details in the overbright area. Image normalized brightness calculation and the calculation formula are as follows:
The final enhancement result can be expressed as follows:
where is the enhanced result of the improved algorithm and is the original image. The improved MSR algorithm is compared to the traditional MSRCR algorithm for low illumination image enhancement in various environments. The results are shown in Figures 2–4.

(a)

(b)

(c)

(a)

(b)

(c)

(a)

(b)

(c)
In the UAV aerial image enhancement comparisons in Figures 2–4, (a) is the original test image, (b) is the enhancement effect of traditional MSRCR, and (c) is the enhancement effect of improved MSR. As seen from the enhancement results of three different scenes in the figure, while the traditional MSRCR algorithm improves image quality and color information, color correction remains distorted. The improved MSR image enhancement algorithm restores the image color part and processes the image details and image noise, and the overall image quality is clearer.
3. ORB Algorithm
The ORB (Oriented FAST and Rotated BRIEF) algorithm is divided into two parts: feature point extraction and description. Feature point extraction is based on the improved oFAST [31] detection operator of FAST to detect feature points. In contrast, feature point description is calculated using the extremely fast rBRIEF [32] binary descriptor algorithm to improve the speed of image feature extraction. Among them, oFAST determines the corner primarily by comparing the difference between the surrounding and neighborhood pixels. In the process of corner extraction, nonmaximum suppression is used for screening to avoid the phenomenon of “pushing,” and the Harris response value is calculated.
Fast constructs a scale pyramid to down sample the image and realizes scale invariance by calculating different scales. For nondirectionality, it is implemented by intensity centroid. The steps of the gray centroid method are as follows:
Step 1. In the neighborhood image block, the moment of the image block is where is the gray level of the image.
Step 2. The centroid of the image block can be found through the moment, and the centroid is represented by :
Step 3. Connect the geometric center and the centroid of the image block to obtain a direction vector; the direction vector can then be defined as: The above calculation greatly improves the robustness of fast corner representation between different images.
3.1. BRIEF Descriptor
BRIEF is a binary descriptor generated by randomly selecting points and comparing the relationship between their pixel sizes. Each pair of descriptor vectors is composed of a series of 0 and 1 values, making storage simple and quick. Then, the calculation steps of the brief descriptor are as follows:
Step 1. First, Gaussian blurs the smooth image to enhance the image antinoise and make the descriptor more robust.
Step 2. Using image integration, the size near the feature points is , and a pair of random points is generated inside it.
Step 3. Taking the random point as the center, compare the two . The gray value in the subwindow is binary coded.
Then, the calculation formula is as follows:
Among them, is a binary descriptor, randomly set a pair of points .
3.2. Feature Matching
Feature matching is a critical step in resolving image data association. However, some scenes contain many repeated textures, making it difficult to distinguish between feature descriptions, resulting in false matching during feature matching. Therefore, cross matching is introduced into feature matching to perform two feature matching to filter out the false matches generated by the traditional brute force matcher (BF). The Hamming distance [33] is used to determine the distance between each pair of feature points. The pose relationship between image pixels is determined using the XOR operation to improve the correct matching rate.
3.3. PROSAC Algorithm Eliminates Mismatches
The traditional RANSAC algorithm is inefficient when filtering out mismatched pairs due to its unstable iteration times and slow execution speed. Therefore, the PROSAC algorithm is used to eliminate false matching and improve matching accuracy. The PROSAC algorithm presorts the sampling points based on their quality, filters them with a high degree of confidence within the specified iterative threshold, and then marks them as sampling points (internal points). Then, multiple two-dimensional point pairs within the sample points are calculated to determine the optimal homography matrix (homography matrix). The transformation matrix between the two planes is determined using the minimum mean square error method. Finally, the optimal model parameters are obtained. The essence of the algorithm is to estimate high-precision parameters from a large number of data sets containing external points to eliminate mismatches. Figure 5 shows the random sampling process of the PROSAC algorithm.

PROSAC algorithm can efficiently eliminate mismatched points in the process of image matching, so as to obtain high-precision matching interior points. In image matching, the ratio of Euclidean distance is established for each pair of feature points, which is calculated as follows:
where is the minimum Euclidean distance and is the subsmall Euclidean distance. The smaller the ratio, the smaller the distance between them, and the higher the quality of feature point matching. To judge the matching quality, a quality factor is introduced, namely:
The quality factor is introduced into the feature point classification to improve the feature point matching quality, reduce the number of iterations, and reduce the operation time and complexity of the algorithm. The algorithm steps are as follows.
Step 1. Set the initial value of the number of iterations, the upper limit of the number of iterations, the threshold of the number of interior points, and the error limit range of judging the interior points. The initial value is set to 0.
Step 2. Calculate the minimum Euclidean distance of feature points and solve the ratio of Euclidean distance. The quality factor is calculated to measure the quality of the matching point pair.
Step 3. Sort the sample data in descending order according to the matching quality and then select high-quality data.
Step 4. Feature matching is carried out from the selected sample data, the matching points with high matching quality are selected, and the homography matrix is calculated. The projection points corresponding to the unselected alternative points are calculated, and then, the error between them is calculated. If the calculated model parameter error is less than the number of set error range, it is the internal point.
Step 5. If the calculated number of interior points is greater than the set threshold number of interior points, the number of interior points will be updated; otherwise, the number of iterations will be increased by 1 and return to Step 3.
Step 6. After updating the number of interior points, calculate the homography matrix again and obtain new interior points.
Step 7. If the number of iterations is less than the set maximum number of iterations, return to Step 3. On the contrary, the model does not comply.
Figure 6 illustrates the PROSAC algorithm’s flow chart.

4. Experiment and Analysis
4.1. Simulation Environment
Ubuntu 16.04 (Windows 10 i7-6700hq, CPU@2.60GH) is used as the experimental software environment. The experiment is designed as a simulation comparison. The improved ORB algorithm is superior in image matching under various low illumination environments of UAV aerial photography, comparing the traditional ORB algorithm and the SURF algorithm. Three groups of low illumination images in a complex environment with an image size of 640 480 are selected for simulation. These three groups of images are the video of outdoor low illumination image environment taken by UAV aerial photography under different exposure conditions. The experimental image is a group of pictures adjacent to two time series randomly selected from the video as the experimental object, with different global scene, local scene, and environment texture.
4.2. Experimental Results and Analysis
The algorithm uses a 0.7 times Hamming distance threshold to evaluate the SURF, ORB, and improved ORB algorithms in Ubuntu 16.04 using various low illumination images with resolutions of 640 380. The algorithm execution time, the number of false matches, and the matching accuracy are all calculated. The simulation results are evaluated using the above indicators, to verify the efficiency and feasibility of the real-time matching method of UAV aerial low illumination image. The test results are shown in Figures 7–9 and Tables 1–3 of performance comparison of three algorithms.

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

(i)

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

(i)

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

(i)
For Test 1, a single roof scene with low illumination is photographed by UAV, and two images of adjacent frames in time sequence are selected for experiments. The efficiency of method in this paper is verified by setting up comparative experiments. The experiment is as follows. Figure 7 shows Test 1. The SURF+RANSAC, ORB+RANSAC, and ORB+PROSAC algorithms are combined with MSRCR and the improved MSR algorithm, respectively, to compare the feature matching of houses and cars in the local scene of UAV aerial photography. Through the comparison between the experimental results in Figure 7 and the algorithm performance in Table 1, for local scene feature points, this method significantly improves the image quality, detail texture, and color restoration and greatly reduces the false matching during feature matching, and the feature matching rate of the image reaches 97.24%.
For Test 2, a group of low illumination global scenes were taken by UAV, including the features of white clouds, houses, and trees. Similarly, two images of adjacent frames in time series are selected for experiment. Through comparative experiments, it is verified that the proposed method is effective to enhancing the details of object features after enhancing low illumination images and improves the accuracy of feature matching. The experiment is as follows. Figure 8 shows Test 2. The SURF+RANSAC, ORB+RANSAC, and ORB+PROSAC algorithms are combined with MSRCR and improved MSR algorithms, respectively, to compare the feature matching of houses in UAV aerial photography global scenes. By comparing the matching results in Figure 8 with the algorithm performance in Table 2, after enhancing the image, the details and colors of houses, white clouds, and trees in the image are well restored, and the feature points in the image are added. The matching accuracy of this method is 98.67%.
For Test 3, a group of low illumination complex scenes were photographed by UAV, including different characteristics of cars, houses, trees, pedestrians, and motorcycles. Two images with a phase difference of 5 frames in the time series are selected for the experiment. Through comparative experiments, the effectiveness and efficiency of this method for target feature detail enhancement after low illumination image enhancement are verified. The test is shown in Figure 9 and Table 3, which verifies that the improved MSR image enhancement algorithm combined with ORB+PROSAC enhances each feature in the image. During feature matching, the false matching and accuracy are significantly improved, the number of matching interior points is greatly increased, and the matching accuracy rate is 99.39%.
The conclusion is drawn by comparing the feature matching results in Figures 7–9 and the algorithm performance in Tables 1–3 as follows: (1)The traditional ORB algorithm extracts more internal points, has fewer false matches, and takes less time to execute than the SURF algorithm when matching image features in a low illumination environment of UAV aerial photography. The improved ORB algorithm enhances the image’s quality and color information. The improved ORB algorithm outperforms the traditional ORB and SURF algorithms in terms of extracting the number of points in the image, reducing false matches, and increasing the correct matching rate(2)The traditional ORB+RANSAC algorithm in matching the low illumination image taken by UAV has a correct matching rate of 15.86 to 23.81% higher than the SURF+RANSAC algorithm. Compared to the traditional ORB and SURF algorithms, the ORB+PROSAC algorithm improves the correct matching rate by 5.82~29.63%, and the accuracy tends to 86.77~91.09%(3)When MSRCR image enhancement algorithm is used, the experiments show that the matching accuracy of SURF+RANSAC, ORB+RANSAC, and ORB+PROSAC algorithms is significantly higher than that of nonenhanced low illumination images. Through the performance comparison table of the three algorithms, it is concluded that the three algorithms improve by 3.1 ~ 13.05%, 6.64~7.87%, and 1.82~5.54%, respectively, in different three groups of tests. When the improved MSR algorithm is used to enhance the low illumination image, through the matching test results of three different scenes and Tables 1–3, it is concluded that the matching accuracy of ORB+PROSAC is 3.85~17.93% higher than that of SURF+RANSAC and ORB+RANSAC, and the matching accuracy of the method proposed in this paper tends to 97.24~99.39%. Compared with MSRCR image enhancement algorithm, the improved MSR image enhancement algorithm improves 3.47~13.93%, 5.31~5.84%, and 3.87~6.36%, respectively, in different three groups of tests. Experiments show that improved ORB algorithm proposed in this paper is applied to the real-time matching method of UAV aerial image, which improves the low visibility and detail definition of low illumination image and improves the image feature recognition and correct matching rate
In conclusion, the method in this paper outperforms the traditional algorithm regarding feature matching, running time, and matching rate, making it a feasible method of image matching in UAV aerial photography low illumination environment.
5. Conclusion
In order to address the issue of poor image quality in a low-light environment of UAV, which leads to a poor image matching effect, this paper first proposes an improved Retinex image enhancement algorithm to enhance and preprocess this type of image. Second, the PROSAC algorithm is introduced into ORB to find the best quality interior point, solve the homography matrix, and eliminate false matching. Finally, the improved ORB algorithm is used for the image matching experiment in UAV 3D reconstruction. The main conclusions are as follows: (1)An improved MSR algorithm is proposed based on the traditional Retinex model theory. First, the variable parameter amplitude compensation factor is added to modify the logarithmic function, followed by the image being divided into blocks and the optimal scale parameters being calculated based on the characteristics of each image. Second, the adaptive weight is determined through the optimal scale parameters in the subblock. Finally, the image brightness is maintained by normalization to obtain a high-quality image(2)The essence of the PROSAC algorithm is to find points with high confidence and mark them as interior points to estimate the model. Reduce the number of iterations, execution time, and computational complexity of the algorithm. Experiments show that the performance of the PROSAC algorithm is better than the RANSAC algorithm(3)The results of different three groups of aerial photography low illumination image matching experiments indicate that the image matching efficiency increases when the low illumination image is enhanced. The effectiveness of real-time fast image matching method for UAV aerial photography enhancement in different low illumination environments is verified, which provides an effective and feasible method for subsequent 3D reconstruction and target tracking
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
The authors would like to thank the National Natural Science Foundation of China (Grant Nos. 51909245 and 51775518), the Grant from the Open Foundation of Key Laboratory of Submarine Geosciences, MNR (Grant No. KLSG2003), the Natural Science Foundation of Shanxi Province (Grant No. 201901D211244), the High-level Talents Scientific Research of North University of China in 2018 (Grant No. 304/11012305), the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (Grant Nos. 2020L0272 and 2020L0292), and the Natural Science Foundation of North University of China (Grant No. XJJ201908).