Abstract
As interest in probing deep space increases, it is necessary to enhance the autonomous navigation capabilities of spacecraft. Since traditional navigation methods rely on ground-based radiometric tracking, the vehicle suffers a significant communication delay and cannot respond to unexpected situations in time. Image-based optical navigation allows interplanetary spacecraft to determine their orbits autonomously. This paper explores how to accurately extract optical observations from raw images to perform autonomous navigation. First, we introduce a simple and efficient idea for locating valuable contours of the celestial body based on gradient variations. Then, we establish a rough estimation with RANSAC to remove the outliers around the edges. Next, we propose a refined estimation based on a hybrid genetic algorithm to precisely estimate the navigation observations. Lastly, numerous experiments confirm that our method achieves outstanding accuracy and robustness.
1. Introduction
In March 2019, the U.S. announced the Artemis program for returning to the moon. The program aims to achieve a permanent, sustainable human presence in cislunar space and to prepare for crewed missions to Mars, including landing the first woman and the next man on the moon in 2024. To ensure the safety of the astronauts, crewed missions need to be able to operate independently in case of a loss of communication with the ground; that is, the vehicle must perform navigation functions independently of ground support. Therefore, Artemis I, which is uncrewed, has been designed with an optical navigation (OpNav) system. The OpNav system can execute fully autonomous navigation, allowing the crew to return safely to Earth without ground station assistance.
The OpNav system is one of the most important navigation systems currently available in deep space. It can autonomously extract OpNav observations from the images taken by the onboard camera to determine the spacecraft's position. Many OpNav measurement types are commonly used for autonomous navigation, including the line-of-sight (LOS) vector, star horizon, star occultation, and apparent diameter. The idea of extracting the LOS vector is conceptually simple: if the centroid of a celestial body can be calculated correctly in the image, then the LOS direction from the onboard camera to that body is also known. Consequently, the navigation performance of the OpNav system is directly influenced by the accuracy of the navigation observations.
Celestial geometry model fitting is one of the fundamental image processing problems in OpNav systems, and several significant works have sought to improve the performance of ellipse fitting. The random sample consensus (RANSAC) method has attracted the attention of many scholars due to its simple principle. Sugaya [1], Xie and Ohya [2], and Mai et al. [3] proposed their respective two-step RANSAC algorithms: (1) divide the edge pixels into multiple groups, (2) perform ellipse fitting for each group separately, and finally select the optimal ellipse model. However, these approaches are very sensitive to noise and computationally inefficient, wasting onboard computational resources. To improve computational efficiency, Kwon et al. [4] introduced a fast RANSAC ellipse detection method that uses the geometric properties of three points. The approach uses normal and differential equations to solve for the elliptic parameters, requiring three points together with their locations and edge angles. The problem is that the edge angle relationship of the three points is unknown and must itself be calculated, so the algorithm's computational complexity is not actually reduced. Fraga and Dominguez [5] presented a robust RANSAC fitting method that does not eliminate outliers: they use the sum of orthogonal distances (instead of the sum of the squares of the distances) to reduce the influence of noise and utilize differential evolution to solve the nonlinear problem. Nevertheless, when the outliers gather near the edges of the ellipse, the accuracy of the ellipse fitting degrades severely. To decrease the impact of noise, Yu et al. [6] and Shao et al. [7] proposed outlier elimination algorithms for RANSAC ellipse fitting; based on the algebraic distance between edge pixels, they constructed a proximity matrix to remove outliers.
However, these authors only performed outlier elimination experiments and did not test the performance of the ellipse fitting itself. For hardware acceleration, Liu and Yang [8] suggested a real-time GPU-based ellipse-fitting approach, which provides an idea for implementing the RANSAC algorithm in engineering; its disadvantage is that the fitting performance remains very sensitive to noise.
Optimization is an old topic, and common solution families include heuristic algorithms and clustering algorithms. Heuristic algorithms search for the global optimum based on computational experience and include genetic algorithms [9], ant colony optimization algorithms [10–12], and neural network algorithms [13]. The essence of a heuristic algorithm is a greedy strategy with an added stochastic process to avoid falling into a local optimum. Clustering algorithms are unsupervised learning methods widely used for data analysis in many fields; their idea is to gather similar elements into a class [10–12]. For our application, the model of a celestial body corresponds to an individual of the algorithm, and the points used to compute the model correspond to the individual's genes. Therefore, the genetic algorithm is well suited to improving the performance of centroid estimation.
To optimize the performance of the OpNav system, we propose a high-precision centroid estimation algorithm to compute the optical observations from the original images. Our approach includes the following innovations: (1) A fast and efficient idea is proposed to identify valuable contours of the celestial body based on edge gradient. (2) Rough estimation is executed using the RANSAC to remove the outliers. (3) The hybrid genetic algorithm conducts refined estimation to improve the accuracy of centroid estimation. The remainder of this paper is organized as follows. Section 2 introduces the fundamentals of OpNav. Section 3 describes the methods for estimating navigation observations. Section 4 presents the simulation and analysis. Finally, Section 5 gives our conclusions.
2. Fundamentals of OpNav
2.1. Model of the Observed Object
The shape of a celestial body is best described mathematically by a triaxial ellipsoid, so works that model the body as a sphere introduce significant errors [14–16]. In this paper, the navigation object is modeled as a triaxial ellipsoid, whose projection forms a two-dimensional ellipse in the image plane (see Figure 1).

(a)

(b)
The general representation of the ellipse is [17]:
$$Ax^{2} + Bxy + Cy^{2} + Dx + Ey + F = 0$$
where the equation describes an ellipse if $B^{2} - 4AC < 0$, $(x, y)$ is a horizon point of the celestial body, and $(A, B, C, D, E, F)$ are the coefficients of the elliptical model. It can be given in matrix form as
$$\begin{bmatrix} x & y & 1 \end{bmatrix}\begin{bmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = 0$$
It can also be written as $\mathbf{u}^{T}\boldsymbol{\theta} = 0$, where $\mathbf{u} = [x^{2}, xy, y^{2}, x, y, 1]^{T}$ and $\boldsymbol{\theta} = [A, B, C, D, E, F]^{T}$. When all points lie on the ellipse, $\mathbf{u}_{i}^{T}\boldsymbol{\theta} = 0$ holds for every point $i$. Thus, the fitting problem is converted to
$$\min_{\boldsymbol{\theta}} \sum_{i} \left(\mathbf{u}_{i}^{T}\boldsymbol{\theta}\right)^{2} \quad \text{s.t.} \quad \|\boldsymbol{\theta}\| = 1$$
Once the optimal ellipse model is found, we can solve for the other essential parameters of the ellipse, such as the center coordinates $(x_{c}, y_{c})$, the semimajor axis $a$, the semiminor axis $b$, and the orientation $\phi$. In particular, the center is the point where the gradient of the conic vanishes, i.e.,
$$\begin{bmatrix} 2A & B \\ B & 2C \end{bmatrix}\begin{bmatrix} x_{c} \\ y_{c} \end{bmatrix} = -\begin{bmatrix} D \\ E \end{bmatrix}$$
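As an illustrative sketch (not the authors' implementation), the constrained least-squares fit and the center recovery above can be written with NumPy: the conic coefficients are the right singular vector of the design matrix associated with the smallest singular value, and the center solves the small linear system given by the vanishing gradient. The helper names `fit_conic` and `ellipse_center` are ours.

```python
import numpy as np

def fit_conic(points):
    """Fit A x^2 + B xy + C y^2 + D x + E y + F = 0 to edge points by
    minimizing the algebraic distance subject to ||theta|| = 1."""
    x, y = points[:, 0], points[:, 1]
    U = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The minimizer is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(U)
    return vt[-1]

def ellipse_center(theta):
    """Center (xc, yc) from the vanishing conic gradient:
    [2A B; B 2C] [xc; yc] = [-D; -E]."""
    A, B, C, D, E, F = theta
    M = np.array([[2 * A, B], [B, 2 * C]])
    return np.linalg.solve(M, [-D, -E])

# Synthetic check: noiseless points on an ellipse centered at (3, -2)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([3 + 5 * np.cos(t), -2 + 2 * np.sin(t)])
xc, yc = ellipse_center(fit_conic(pts))
```

With perfect horizon points, the recovered center matches the true one to numerical precision; the same routine degrades gracefully under noise, which motivates the RANSAC and genetic refinements described later.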
2.2. LOS Vector Model
The optical sensors of Artemis I include two star trackers and one optical camera. The star trackers are mainly applied to determine the spacecraft's attitude and will not be discussed here; this paper focuses on processing the optical observations captured by the optical camera. Four coordinate frames are used to identify the orbit information in the OpNav system, and data can be converted between them by means of rotation matrices. We need to compute the LOS observation from the original image to perform autonomous navigation.
The imaging principle of a monocular optical camera is the pinhole model. If $f$ denotes the focal length of the camera, the relationship between a point $(X_{C}, Y_{C}, Z_{C})$ in the camera frame and its projection $(x, y)$ in the imaging plane is given by
$$x = f\,\frac{X_{C}}{Z_{C}}, \qquad y = f\,\frac{Y_{C}}{Z_{C}}$$
The LOS observation extracted from the raw image provides critical orientation information for the OpNav system, as shown in Figure 2. If the optical axis of the camera passes through the origin of the image plane, the LOS direction can be expressed by [18]
$$\mathbf{r}_{C} = \frac{1}{\sqrt{x^{2} + y^{2} + f^{2}}}\begin{bmatrix} x \\ y \\ f \end{bmatrix}$$
where $\mathbf{r}_{C}$ is the LOS vector in the camera frame $C$. Then, according to the rotation relation, we can easily obtain the LOS direction in the inertial frame $I$:
$$\mathbf{r}_{I} = T_{IB}\,T_{BC}\,\mathbf{r}_{C}$$
where $T_{BC}$ is the conversion matrix from the camera frame to the body frame, and $T_{IB}$ is the conversion matrix from the body frame to the inertial frame.
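A minimal dependency-free sketch of this step, under assumed example values (the 50 mm focal length and identity rotation matrices are illustrative, not mission parameters):

```python
import math

def los_camera(x, y, f):
    """Unit line-of-sight vector in the camera frame from the
    image-plane centroid (x, y) and focal length f (same units)."""
    n = math.sqrt(x * x + y * y + f * f)
    return (x / n, y / n, f / n)

def rotate(T, v):
    """Apply a 3x3 rotation matrix (row-major nested tuples) to vector v."""
    return tuple(sum(T[i][k] * v[k] for k in range(3)) for i in range(3))

# Example: centroid 2 mm right and 1 mm up on the focal plane, f = 50 mm
r_c = los_camera(2.0, 1.0, 50.0)

# With identity camera-to-body and body-to-inertial rotations,
# the inertial LOS equals the camera-frame LOS.
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
r_i = rotate(I3, rotate(I3, r_c))
```

In flight software the two identity matrices would be replaced by the attitude solution from the star trackers.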

3. Methods for Estimating Navigation Observations
In estimating the models of celestial bodies, the RANSAC method tends to fall into local optima and is very sensitive to anomalous data. The genetic algorithm is well suited to such nonlinear problems, so we propose a hybrid genetic algorithm to improve the performance of centroid estimation. The following subsection first describes the improved estimation method based on the hybrid genetic algorithm.
3.1. Improved Estimation Method Based on Hybrid Genetic Algorithm
In the initialization process, a set of candidate models forms the initial population. Each model is computed from a set of edge points randomly selected from the sample space. The number of inliers included in a model indicates the quality of the estimation; that is, the precision of an ellipse model can be evaluated in terms of its number of inliers. Hence, we define the fitness evaluation function as the combination of a reward function, which evaluates the number of inliers, and a distance scale function, which assesses the distribution of outliers around the model. In the distance scale function, the total distance error of the model from the points in the sample space has a minimum value of 0 and an inflection point set by a tunable threshold. If every point in the sample space is an inlier of the model, the current model is evidently the best solution. When the proportion of inliers contained in the model reaches ninety-six percent of the total sample, we consider the model to be very close to the true celestial body model. Hence, the termination condition of the genetic algorithm is obtained by calculating the corresponding reward and distance-scale values.
In the selection operation, we set the probability of an individual being selected proportional to its fitness. This strategy automatically favors the better individuals without discarding any individual in the population. In this application, the selection probability combines the reward function and the distance scale function through two weighting factors, and we can control the properties of the selection operation by adjusting these two factors.
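Fitness-proportional selection of this kind can be sketched in a few lines of standard Python (a generic roulette-wheel illustration, not the authors' code; the weighting factors are folded into the fitness values here):

```python
import random

def select(population, fitnesses, k):
    """Roulette-wheel selection: each individual's chance of being picked
    is proportional to its fitness, and no individual is excluded outright."""
    return random.choices(population, weights=fitnesses, k=k)

random.seed(0)
pop = ["model_a", "model_b", "model_c"]
fit = [1.0, 10.0, 1.0]            # model_b is ten times fitter
picks = select(pop, fit, k=1000)
share_b = picks.count("model_b") / 1000
```

Over many draws the fitter model dominates the selections while the weaker models still appear occasionally, which preserves genetic diversity.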
The crossover of two chromosomes is manipulated to evolve better individuals: parental chromosomes are mated to produce two offspring chromosomes at a randomly placed crossover node. A crossover factor controls the position of the crossover node within the gene sequence. In each mating, the parental chromosomes are crossed only once.
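A single-point crossover of this form can be sketched as follows; the parameter `alpha` is our illustrative stand-in for the paper's crossover factor, and the chromosomes are shown as lists of edge-point indices:

```python
def crossover(parent1, parent2, alpha):
    """Single-point crossover: alpha in (0, 1) places the crossover node
    as a fraction of the chromosome length."""
    cut = max(1, int(alpha * len(parent1)))
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

# Chromosomes: indices of the edge points used to fit each candidate model
p1, p2 = [0, 1, 2, 3, 4, 5], [10, 11, 12, 13, 14, 15]
c1, c2 = crossover(p1, p2, alpha=0.5)
```

The two offspring jointly carry exactly the parents' genes, so no edge-point information is lost in a mating.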
To prevent the algorithm from converging to a local optimum, we introduce an annealing mutation operator, a two-way and probabilistic operation. A mutation factor controls the probability of performing the mutation operation. If a mutation is executed, we evaluate the outcome according to an improved Metropolis criterion: a mutation that improves the fitness is always accepted, while a degrading one is accepted with a probability computed from the reward function, adapted to the characteristics of the task.
After the annealing mutation operation, the algorithm can effectively avoid convergence to a local optimum.
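The paper's improved criterion is not reproduced here, but the classical Metropolis rule it builds on can be sketched as follows (an assumption-laden illustration: `delta_fitness` and `temperature` are our names, and the acceptance probability uses the standard exponential form):

```python
import math
import random

def accept(delta_fitness, temperature, rng=random.random):
    """Classical Metropolis rule: always accept an improving mutation;
    accept a degrading one with probability exp(delta/T), which shrinks
    as the temperature anneals toward zero."""
    if delta_fitness >= 0:
        return True
    return rng() < math.exp(delta_fitness / temperature)

random.seed(1)
# An improving mutation is always kept.
kept = accept(+5.0, temperature=1.0)
# A strongly degrading mutation at low temperature is almost never kept
# (fixed rng shown so the outcome is reproducible).
risky = accept(-50.0, temperature=0.1, rng=lambda: 0.5)
```

Early in the run (high temperature) degrading mutations are often tolerated, which is exactly what lets the search escape local optima; late in the run the rule becomes nearly greedy.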
The above are the critical steps of the hybrid genetic algorithm, and the complete procedure is shown in Figure 3. The corresponding parameter tests are presented in Section 4.1. The following section presents the complete image processing pipeline for extracting navigation observations, which incorporates our improved hybrid genetic algorithm.

3.2. Image Processing Pipeline for Extracting Navigation Observations
This section describes an image processing pipeline for estimating the centroids of celestial bodies from single-channel grayscale images. In this application, images of the moon captured by the onboard optical camera are taken as the processing targets.
In the first step, we set an adaptive threshold to binarize the images. Since the background (deep space) is much darker than the moon, the grayscale distribution of the foreground (the moon) can be easily separated. We determine the adaptive threshold by maximizing the between-class variance of background and foreground. As shown in Figure 4, this thresholding is not easily affected by brightness and contrast.
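Maximizing the between-class variance of the background/foreground split is Otsu's method; a compact pure-Python sketch (our illustration, not the authors' code, with a synthetic bimodal image):

```python
def otsu_threshold(pixels):
    """Return the gray level maximizing the between-class variance of
    the background/foreground split (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                 # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b              # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark deep-space background (about 10) vs. bright lunar disk (about 200)
image = [10] * 900 + [12] * 50 + [200] * 200 + [210] * 40
t = otsu_threshold(image)
```

Because the threshold is recomputed per image from the histogram, it adapts to global brightness and contrast changes, matching the robustness noted above.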

(a)

(b)

(c)
In the second step, we apply morphological open and close operations to exclude interference objects produced by high contrast, such as background stars, inconsistent lighting conditions, and irregular textures. This procedure helps the algorithm find the target quickly and reduces the computational effort. The choice of thresholds has a severe impact on the performance of the operation: one threshold mainly affects the removal of stars' contours, and the other chiefly controls the removal of irregular textures from the lunar surface. After many tests (presented in Section 4.2), the two thresholds are set to cancel the interference areas (see Figure 5). It can be seen that the operation effectively suppresses the false contours without distorting the lunar outlines. The next step removes the remaining, larger false contours.
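A pure-Python sketch of the open operation with a 3x3 structuring element (closing is the same pair in the reverse order); the kernel size and the tiny test image are illustrative assumptions, not the paper's settings:

```python
def erode(img):
    """3x3 erosion: a pixel stays 1 only if its full neighborhood is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(img[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)))
    return out

def dilate(img):
    """3x3 dilation: a pixel becomes 1 if any in-bounds neighbor is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(any(img[i + di][j + dj]
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                if 0 <= i + di < h and 0 <= j + dj < w))
    return out

def opening(img):
    """Open = erode then dilate: removes objects smaller than the kernel
    while restoring the extent of larger ones."""
    return dilate(erode(img))

# 8x8 binary frame: a 4x4 "lunar disk" plus a single-pixel "star"
img = [[0] * 8 for _ in range(8)]
for i in range(2, 6):
    for j in range(2, 6):
        img[i][j] = 1        # moon disk
img[0][7] = 1                # isolated star pixel
opened = opening(img)
```

The speck-sized star vanishes under erosion and never comes back, while the disk is eroded and then fully re-dilated, which mirrors the star-removal behavior described above.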

(a)

(b)

(c)
In the third step, we locate all the edges in the grayscale image and find the outer contour of the observed object. Based on the grayscale distribution, the Canny algorithm is applied to calculate the gradient (magnitude and direction) and detect the lunar margins. Assuming the optical camera is aimed at the moon, we can identify the planetary contour by comparing the areas enclosed by the detected contours. The performance of the edge detection operation is shown in Figure 6.
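The gradient quantities that Canny thresholds and thins can be illustrated with a plain Sobel operator (a generic sketch on a synthetic step edge, not the pipeline's actual implementation):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient(img, i, j):
    """Sobel gradient (magnitude, direction) at interior pixel (i, j)."""
    gx = sum(SOBEL_X[di + 1][dj + 1] * img[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1))
    gy = sum(SOBEL_Y[di + 1][dj + 1] * img[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return math.hypot(gx, gy), math.atan2(gy, gx)

# Vertical step edge: dark space (0) on the left, bright limb (255) right
img = [[0, 0, 0, 255, 255, 255] for _ in range(4)]
mag_flat, ang_flat = gradient(img, 1, 1)   # inside the dark region
mag_edge, ang_edge = gradient(img, 1, 2)   # on the limb
```

The limb pixel yields a large magnitude pointing across the edge (direction 0, i.e. toward +x), while the flat region yields zero response; Canny keeps only such strong, locally maximal responses.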

(a)

(b)

(c)
In the fourth step, we divide the moon's contour into feature curves to identify the valuable profiles. Depending on the lighting conditions of the observed object, we locate the valuable arc, shown as the blue and red curves in Figure 7. To discover the valuable edges (the blue curve), we develop a quick and efficient procedure: (1) find the two edge points with the steepest gradient variation on both sides and the maximum separation; (2) split the moon's contour into two parts (the blue and red arcs); (3) calculate multiple sets of distances to determine the valuable margins. Finally, we output the image coordinates of the blue arc, which are employed to estimate the ellipse model. Compared with the area-cutting method of [15], our method has lower computational complexity and higher accuracy.

The fifth step performs rough estimation using the RANSAC method to remove the outliers. In OpNav tasks, various noises can easily be introduced into the raw image, for example by platform vibrations or signal interference, and noise near the celestial limbs cannot be eliminated by edge detection alone. Therefore, we perform a rough RANSAC estimation of the ellipse model from the extracted image coordinates and set a distance threshold to control the performance of removing outliers.
For a smaller threshold, the sample tends to form strong connections and outliers are easily ignored; for a larger threshold, inliers are easily misjudged as outliers. Hence, we perform six comparison trials to find the optimal value (see Figure 8). In the tests, we vary the threshold value while keeping the other experimental conditions constant, and select the value at which outlier rejection performs best.
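The structure of this rough estimation can be sketched with a RANSAC circle fit (a circle stands in for the full ellipse model purely for compactness; `d` is the inlier distance threshold discussed above, and all names and data are illustrative):

```python
import math
import random

def circle_from_3(p1, p2, p3):
    """Circle (cx, cy, r) through three points, from the linear system
    given by x^2 + y^2 + D*x + E*y + F = 0 at each point."""
    m = [[p[0], p[1], 1.0, -(p[0] ** 2 + p[1] ** 2)] for p in (p1, p2, p3)]
    for c in range(3):                       # Gauss-Jordan with pivoting
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    D, E, F = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

def ransac_circle(points, d, iters=200, seed=0):
    """Rough estimation: keep the model with the most points within
    distance d of the rim, and return it with its inliers."""
    rng = random.Random(seed)
    best, best_in = None, []
    for _ in range(iters):
        try:
            cx, cy, r = circle_from_3(*rng.sample(points, 3))
        except (ZeroDivisionError, ValueError):
            continue                         # degenerate (collinear) sample
        inliers = [p for p in points
                   if abs(math.hypot(p[0] - cx, p[1] - cy) - r) <= d]
        if len(inliers) > len(best_in):
            best, best_in = (cx, cy, r), inliers
    return best, best_in

# 40 rim points of a radius-10 circle at the origin, plus 5 gross outliers
pts = [(10 * math.cos(2 * math.pi * k / 40),
        10 * math.sin(2 * math.pi * k / 40)) for k in range(40)]
pts += [(30, 30), (-25, 14), (0, 22), (17, -19), (-28, -3)]
model, inliers = ransac_circle(pts, d=0.5)
```

The surviving inliers are exactly the rim points, and the recovered model matches the true circle; the pipeline then hands such a cleaned point set to the refined genetic estimation.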

In the sixth step, we perform refined estimation using the improved hybrid genetic algorithm to extract the moon's centroid. The maximum number of iterations is set to $K = \log(1-p)/\log(1-w^{m})$, where $p$ is the probability that at least one sampled model meets the requirements, $w$ is the probability that a sampled point is an inlier, and $m$ is the number of points per sample. After the iterations converge, we use Equation (7) to calculate the LOS observation for optical navigation from the solved optimal model and the navigation camera's parameters.
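The iteration bound is the standard RANSAC sample-count formula; a short sketch with illustrative parameter values:

```python
import math

def max_iterations(p, w, m):
    """Iterations K ensuring, with confidence p, at least one all-inlier
    sample of size m when each point is an inlier with probability w:
    K = log(1 - p) / log(1 - w**m)."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** m))

k_clean = max_iterations(p=0.99, w=0.9, m=5)   # mostly inliers: few iterations
k_noisy = max_iterations(p=0.99, w=0.5, m=5)   # half outliers: many more
```

The bound grows rapidly as the inlier ratio drops, which is why the rough RANSAC pass that removes outliers beforehand also keeps the refined stage cheap.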
To exhibit the performance of the image processing pipeline, the complete flow of our method takes a very dark moon as the scene, as shown in Figure 9.

(a)

(b)

(c)

(d)

(e)

(f)

(g)
4. Simulation and Analysis
4.1. Genetic Parameter Test
Since the crossover factor and the mutation factor have significant effects on the crossover and annealing mutation operations, respectively, we perform this test to determine their optimal values. First, the inlier function calculated under a given parameter pair is defined as the number of inliers of the model computed at each iteration. Then, the average performance curve is defined as the mean inlier count of the population at each iteration, where the population comprises the models evaluated during that iteration. We must consider the impact of both parameters together because they are coupled. Finally, the relative goodness curve is defined to express the performance graph as the min-max normalization of the average curve, using the minimum and maximum inlier counts over the historical iterations. In the test, both parameters are swept over a fixed range. As shown in Figure 10, the relative goodness rises from 0.3824 at the 15th iteration to a maximum value of 0.6384, from which the optimal parameter pair is identified. With these parameters, the hybrid genetic algorithm prefers to evolve outstanding individuals.
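The two performance curves can be sketched directly from their definitions (an illustrative reconstruction with made-up inlier counts, since the paper's exact formulas are not reproduced here):

```python
def average_performance(inlier_counts_per_iter):
    """Mean inlier count of the population at each iteration."""
    return [sum(gen) / len(gen) for gen in inlier_counts_per_iter]

def relative_goodness(avg_curve):
    """Min-max normalization of the average curve over the history:
    0 is the worst mean performance seen, 1 the best."""
    lo, hi = min(avg_curve), max(avg_curve)
    return [(v - lo) / (hi - lo) for v in avg_curve]

# Inlier counts for a population of 4 models over 3 iterations
history = [[10, 12, 8, 10], [14, 16, 12, 14], [20, 22, 18, 20]]
avg = average_performance(history)
rel = relative_goodness(avg)
```

Normalizing against the historical extremes makes runs with different parameter pairs directly comparable on one graph, which is what the sweep in Figure 10 relies on.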

(a)

(b)
4.2. Morphological Threshold Test
In the open and close operations, appropriate thresholds can effectively remove the interference contours without destroying the celestial limbs. Therefore, we build a dataset of 30 lunar images to find the optimal thresholds. The dataset contains various scenarios, such as lunar images with irregular surface textures and images with many stars in the field of view. As noted above, one threshold mainly affects the removal of stars, and the other chiefly controls the elimination of irregular textures from the lunar surface, so we perform two experiments to determine the best settings for the open and close operations. Figure 11 shows two lunar images randomly selected from the dataset for testing.

(a)

(b)
First, experiments are conducted to determine the optimal star-removal threshold. The stars are very far from the spacecraft, so they appear small in the images. Thus, we test three relatively small threshold values for removing the stellar contours; the results are shown in Figure 12. When the threshold is too small, the operation cannot eliminate all the stellar contours, but when it is too large, the operation damages the moon's limbs. The optimal threshold is therefore the intermediate value, which allows the algorithm to efficiently cancel the stellar contours without disrupting the moon's limbs.

(a)

(b)
Next, we carry out tests to identify the optimal texture-removal threshold. Because of the relative motion between the camera and the moon, the surface textures in the images are irregular, so the proper threshold must be predicted from the size of the craters on the lunar surface. In this application, three threshold values are tested to verify the performance of the operation (see Figure 13). The trials show that sharply increasing the threshold to eliminate irregular textures is undesirable, as the moon's limbs would be destroyed. To protect the limbs of the navigation object, we assign a conservative value as the optimal threshold.

(a)

(b)
4.3. Centroid Estimation Test
To test our method, we conduct a series of experiments on a dataset produced with the open-source space simulation software Celestia v1.5.3. The dataset consists of 100 synthetic grayscale lunar images. The camera's distance from the observed object varies from 20,000 km to 330,000 km, and to cover different lighting conditions and observation locations, the dataset contains various experimental scenarios, such as a bright full moon and a dim partial moon (see Figure 14).

(a)

(b)

(c)

(d)

(e)
Our method processes the 100 lunar images in the dataset to calculate the accuracy of the centroid extraction. For each image, we repeat the computation 100 times to gather the mean error and standard deviation of the centroid estimation. The qualitative results of some samples are shown in Figure 15. However, the pixel error of centroid extraction does not directly express the accuracy of optical navigation, so we also compute the LOS direction used for OpNav according to Equation (7), assuming nominal values for the camera's focal length (in mm), field of view, and scale factors (in pixel/mm) along the two image axes. The quantitative statistics of some samples are given in Table 1. The LOS pointing accuracy indicates that our method can meet the needs of optical navigation in deep space.

(a)

(b)

(c)

(d)

(e)
This section tests the image processing performance of the proposed method on real image data. In the test, the experimental objects are lunar images taken from the International Space Station (ISS), which orbits about 420 km above the Earth. It should be noted that the presented approach can be applied to other ellipsoidal celestial bodies, since the algorithm takes the ellipsoid as its mathematical model. To enrich the diversity of the sample, the experiment is executed on three types of lunar images: full moon, half moon, and crescent (Figure 16). Our method still estimates the lunar model accurately on real data.

(a)

(b)

(c)
To test our method’s accuracy and robustness, we add disturbances to the original images, including Gaussian noise, transmission interference, and motion blur (see Figure 17). These disturbances are designed to reproduce realistic scenarios that spacecraft may encounter in deep space, such as cosmic rays, channel noise, and geometric distortion. Our method is executed on these noisy images to estimate the lunar model, as shown in Figure 18.


In this section, we compare the performance of our method and the least squares (LS) method for estimating the model of the navigation object. Each method performs three experiments on the noisy dataset. For each noisy image, both methods are repeated 50 times to collect the mean error and standard deviation of the centroid estimation (see Figures 19–21). When the noise intensity of the transmission interference is low, the two methods perform similarly; however, as the noise intensity increases, our method exhibits better robustness than the LS method. Among the three disturbances, motion blur most seriously undermines the performance of both methods, which prompts us to mitigate the sources of motion blur in deep space missions, such as platform vibration and slow camera shutter speeds. In short, the comparison experiments indicate that our method generally achieves better centroid estimation accuracy and robustness than the LS method.

(a)

(b)

(a)

(b)

(a)

(b)
Gaussian noise is introduced to mimic disturbances in the image collection process, such as channel noise and camera noise. We control the Gaussian noise intensity by varying the standard deviation from 0 to 1. Figure 19 displays the results of the methods on these noisy images.
There are many cosmic rays in deep space, which are high-energy electromagnetic radiation from the depths of the universe. We simulate the effect of cosmic ray interference by artificially corrupting the images with different levels of interference textures (see Figure 20).
During maneuvering, geometric distortion can seriously degrade the imaging quality of the optical system. It can be caused by various circumstances, such as platform vibration and the camera shutter speed. Therefore, we introduce motion blur ranging from 1 to 10 pixels to simulate the effects of geometric deformation. The estimation results are shown in Figure 21.
This section quantifies the time spent on the critical steps of our approach. The experiments are conducted on a Windows platform with a six-core processor and 8 GB of RAM. Over 50 experiments, Table 2 shows that the most time-consuming step is the refined estimation, which accounts for almost 56.81% of the total time. Specifically, the rough estimation takes about 216.55 ms, and the refined estimation costs about 428.05 ms. The algorithm would be more efficient on higher-performance hardware.
5. Conclusions
In this paper, we propose a robust image processing pipeline to estimate the models of celestial bodies. We improve the accuracy of centroid extraction through rough and refined estimation operations. Through repeated trials, we analyse the accuracy of pixel-level centroid extraction and of the LOS direction under different conditions. In addition, we assess the robustness of our method to various perturbations by processing numerous noisy images with Gaussian noise, data corruption, and motion blur. We conclude that the proposed method provides a simple and efficient way to extract optical observations.
Data Availability
The data supporting this study are not available.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by the Shanghai Sailing Program (No. 21YF1446000). We would like to thank Baojun Lin, Yingchun Liu, and Xia Lin of the Shanghai Engineering Center for Microsatellites for their discussions and contributions to this work.