Abstract

This paper studies staring-imaging attitude tracking and control for video satellites based on image information. An improved spatio-temporal context learning algorithm is employed to extract the image information. Based on this, a hyperbolic tangent fuzzy sliding mode control law is proposed to achieve attitude tracking and control, in which the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller. In the experiments, the improved spatio-temporal context learning algorithm is applied to a space-target video sequence captured in orbit by Jilin-1, and the extracted image information is used as the input of the control loop. The control part is realized through simulation. The image change caused by attitude adjustment is reproduced successfully, and the target image is driven to the center of the image plane, realizing effective gaze tracking control of the space target.

1. Introduction

With the rapid development of remote sensing technology, video satellites have attracted much attention due to their continuous observation ability [1–8]. As a new type of Earth observation satellite, a video satellite carries an optical payload on an agile platform; unlike traditional push-broom imaging, which relies on the orbital motion of the satellite, a video satellite adjusts its attitude in real time during staring imaging so that the optical axis of the optical payload points at the target at all times, continuously acquiring dynamic information of the target area. In Ref. [9], Liu et al. introduce the gaze imaging technique, where this process is generally referred to as staring attitude control. By exploiting agility and attitude control technology, video satellites can realize continuous imaging of ground targets. For this reason, compared with traditional Earth observation satellites, video satellites have been widely applied in many fields, such as real-time vehicle monitoring [10, 11], rapid response to natural disaster emergencies [12], and major engineering monitoring [13].

Up to now, there are mainly two types of staring imaging satellites in orbit: satellites in geostationary orbit and video satellites in low Earth orbit [14–16]. Figure 1 shows the schematic diagram of ground-gazing attitude control for video satellites. The attitude control system of a video satellite adjusts the attitude in real time so that the optical axis of the optical sensor always points to the ground target area for continuous photography. Staring imaging is the main working mode of video satellites [17, 18]. In essence, the staring imaging control problem is a dynamic attitude tracking problem, and it is difficult to keep the optical axis of the satellite optical sensor pointing at the observed object with high stability.

In the last decades, staring-imaging attitude control for video satellites has been regarded as a spacecraft attitude tracking and control problem [19–21]. Much research has been done on attitude tracking control for satellites, and various controllers have been employed for satellite attitude control, such as sliding mode controllers, robust controllers, and intelligent controllers [19–34].

Building on attitude tracking and control methods for spacecraft and satellites, several satellite staring attitude control methods have been introduced to achieve real-time tracking [24–32]. For instance, Lian et al. [27] investigated small-satellite attitude problems for staring operation. Liang et al. [28] designed a fuzzy logic control law for the staring-imaging attitude problem of a satellite in low Earth orbit, which has quick response and excellent robustness. In Ref. [29], Chen et al. present a quaternion-based PID feedback tracking controller with gyroscopic term cancellation, with which a desired target on the Earth can be tracked. Chen et al. [30] investigated a staring-imaging attitude controller based on a double-gimbaled control moment gyroscope (DGCMG), which is simple and effective for agile small satellites. In Ref. [31], Li and Liang proposed a robust finite-time controller for satellite attitude maneuvers and demonstrated its robustness to typical perturbations such as disturbance torque, model uncertainty, and actuator error. Li et al. [32] implemented a neural network controller for staring imaging that achieves real-time performance.

However, the above staring attitude controllers do not consider the image information directly; the image information is separated from the attitude tracking controller. We note that in the satellite staring mode, the optical axis of the camera should point at the target for a long time, while both the video satellite and the object may be moving in the inertial coordinate system, so the relative velocity and position may change over time.

Introducing visual information into the closed-loop control is commonly known as visual servoing, which was first applied in the field of robotics [35–38]. Recently, robot visual servoing has achieved numerous results in both theory and practical application. Robot visual servoing is generally divided into two structures: position-based visual servoing and image-based visual servoing. Position-based servoing requires calibrating the internal parameters of the camera to determine the relative attitude between the target and the camera coordinate system, which increases the computational burden of the system. In contrast, image-based visual servoing directly uses the visual feature error of the target in the image plane and treats the controlled object and the visual system as a whole.

For the above reasons and advantages, visual information is introduced into the control closed loop of the satellite in this paper. An improved spatio-temporal context learning (ISTC) algorithm is employed to extract the image information. Based on this, a hyperbolic tangent fuzzy sliding mode control (HTFSMC) law for small video satellites is designed to achieve attitude tracking and control. Specifically, the related coordinate systems are defined for attitude transformation; subsequently, the sliding mode tracking controller is presented based on the image information from satellite videos; furthermore, the hyperbolic tangent function and a fuzzy logic system are employed in the sliding mode controller.

In summary, the contributions of this paper are threefold.
(1) An ISTC algorithm is employed to obtain the image information. Hence, visual information can be used effectively for visual tracking control based on spatial moving images, without cumbersome and complex calibration of the camera's internal parameters or accurate knowledge of the target and camera motion.
(2) Based on the image information, this paper proposes the HTFSMC law, where the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller.
(3) In the experiments, the image information of a space-target video sequence captured in orbit by Jilin-1 is used as the input of the controller, and the control part is realized through simulation. The image change caused by attitude adjustment is reproduced successfully, and the target image is driven to the center of the image plane, realizing effective gaze tracking control of the space target.

The rest of this paper is arranged as follows. Section 2 describes the staring-imaging attitude dynamics model in detail and then presents the sliding mode controller and the fuzzy sliding mode controller for staring-imaging attitude tracking of small video satellites. Section 3 introduces the experimental results and discussion. Finally, Section 4 concludes this article.

2. Materials and Methods

In this paper, in order to extract the image information, the ISTC algorithm is employed for moving object tracking. Based on this, an attitude controller with image information feedback is designed to realize gaze tracking control of moving targets; the structure diagram is shown in Figure 2. In a nutshell, the method determines the centroid coordinates of the target in the image and calculates its position deviation from the image center. The cumbersome processes of camera calibration and relative pose estimation are thereby avoided, reducing the computational cost.

2.1. An Improved Spatio-Temporal Context Learning Algorithm

For moving-target video tracking, the local context consists of the target and a certain surrounding background area. There is a strong spatio-temporal relationship between the local scenes around the target in consecutive frames. Exploiting this relationship between the target and its surrounding area, the spatio-temporal context (STC) learning algorithm constructs a spatio-temporal context model of the target and its neighborhood based on the gray-level features of the image. A confidence map of the target is then calculated, and the maximum of the confidence map gives the estimated target position. In the following, the ISTC algorithm is described in detail.

Let the confidence map of the target position $\mathbf{x}$ be $c(\mathbf{x}) = P(\mathbf{x} \mid o)$, where $o$ denotes the presence of the target. The luminance feature set of the context is defined as
$$X^c = \{\, c(\mathbf{z}) = (I(\mathbf{z}), \mathbf{z}) \mid \mathbf{z} \in \Omega_c(\mathbf{x}^*) \,\},\qquad(1)$$
where $c(\mathbf{z})$ and $I(\mathbf{z})$ are the luminance feature and the image intensity at position $\mathbf{z}$, respectively, and $\Omega_c(\mathbf{x}^*)$ denotes the context area around the target position $\mathbf{x}^*$. In the following, $X^c$ can be used to calculate the confidence map as follows:
$$c(\mathbf{x}) = \sum_{c(\mathbf{z}) \in X^c} P(\mathbf{x} \mid c(\mathbf{z}), o)\, P(c(\mathbf{z}) \mid o),\qquad(2)$$
where $P(\mathbf{x} \mid c(\mathbf{z}), o)$ is the conditional probability, which represents the spatial relationship between the target position and its context information, and $P(c(\mathbf{z}) \mid o)$ is the prior probability, which models the appearance of the local context. Furthermore, $P(\mathbf{x} \mid c(\mathbf{z}), o)$ can be defined as
$$P(\mathbf{x} \mid c(\mathbf{z}), o) = h^{sc}(\mathbf{x} - \mathbf{z}),\qquad(3)$$
where $h^{sc}(\mathbf{x} - \mathbf{z})$ is a function of the relative distance and direction between the target position $\mathbf{x}$ and its local context position $\mathbf{z}$. Subsequently, $P(c(\mathbf{z}) \mid o)$ can be defined as
$$P(c(\mathbf{z}) \mid o) = I(\mathbf{z})\, \omega_\sigma(\mathbf{z} - \mathbf{x}^*),\qquad(4)$$
where $\omega_\sigma(\mathbf{z}) = a\, e^{-|\mathbf{z}|^2/\sigma^2}$ is a weight function, $a$ is a normalization constant, which makes $P(c(\mathbf{z}) \mid o)$ range from 0 to 1 in (4), and $\sigma$ is a scale parameter. Hence, the confidence map in (2) can be rewritten as
$$c(\mathbf{x}) = b\, e^{-\left|(\mathbf{x}-\mathbf{x}^*)/\alpha\right|^\beta} = \sum_{\mathbf{z} \in \Omega_c(\mathbf{x}^*)} h^{sc}(\mathbf{x}-\mathbf{z})\, I(\mathbf{z})\, \omega_\sigma(\mathbf{z}-\mathbf{x}^*) = h^{sc}(\mathbf{x}) \otimes \big( I(\mathbf{x})\, \omega_\sigma(\mathbf{x}-\mathbf{x}^*) \big),\qquad(5)$$
where $\otimes$ is the convolution operator, $b$ is a normalization constant, $\alpha$ is a scale parameter, and $\beta$ is an important shape parameter. The fast Fourier transform is applied simultaneously to both sides of (5), so (5) can be updated as
$$\mathcal{F}\big( b\, e^{-\left|(\mathbf{x}-\mathbf{x}^*)/\alpha\right|^\beta} \big) = \mathcal{F}\big( h^{sc}(\mathbf{x}) \big) \odot \mathcal{F}\big( I(\mathbf{x})\, \omega_\sigma(\mathbf{x}-\mathbf{x}^*) \big),\qquad(6)$$
where $\mathcal{F}$ is the fast Fourier transform (FFT) and $\odot$ denotes the element-wise product. Subsequently, the spatial context model can be obtained as
$$h^{sc}(\mathbf{x}) = \mathcal{F}^{-1}\left( \frac{\mathcal{F}\big( b\, e^{-\left|(\mathbf{x}-\mathbf{x}^*)/\alpha\right|^\beta} \big)}{\mathcal{F}\big( I(\mathbf{x})\, \omega_\sigma(\mathbf{x}-\mathbf{x}^*) \big)} \right),\qquad(7)$$
where $\mathcal{F}^{-1}$ is the inverse FFT. In this way, based on (7), the spatio-temporal context model can be derived as
$$H^{stc}_{t+1} = (1-\rho)\, H^{stc}_t + \rho\, h^{sc}_t,\qquad(8)$$
where $H^{stc}_t$ is the spatio-temporal context model at the $t$-th frame, $h^{sc}_t$ can be obtained by (7) at the $t$-th frame, and $\rho$ is a learning rate. Thus, the confidence map at the $(t+1)$-th frame is expressed as
$$c_{t+1}(\mathbf{x}) = \mathcal{F}^{-1}\Big( \mathcal{F}\big( H^{stc}_{t+1} \big) \odot \mathcal{F}\big( I_{t+1}(\mathbf{x})\, \omega_{\sigma_t}(\mathbf{x}-\mathbf{x}^*_t) \big) \Big).\qquad(9)$$
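The confidence-map learning step above can be sketched numerically. The following is a minimal single-step sketch, not the paper's implementation: it learns a spatial context model from one frame with FFT division and applies it to the next frame to locate the confidence peak. The window size, weight scale, and shape parameters are illustrative choices, and a small constant is added in the denominator to stabilize the division.

```python
import numpy as np

def stc_track(I_prev, I_curr, x_star, win=32, sigma=20.0, alpha=2.25, beta=1.0):
    """One step of spatio-temporal context tracking (sketch).
    I_prev, I_curr: grayscale frames (2-D float arrays).
    x_star: (row, col) target position in I_prev."""
    r0, c0 = x_star
    # local context windows around the previous target position
    P = I_prev[r0 - win:r0 + win, c0 - win:c0 + win]
    C = I_curr[r0 - win:r0 + win, c0 - win:c0 + win]
    rows, cols = np.mgrid[-win:win, -win:win]
    dist2 = rows**2 + cols**2
    # weight function w_sigma and desired confidence map b*exp(-|x/alpha|^beta)
    w = np.exp(-dist2 / sigma**2)
    conf = np.exp(-(np.sqrt(dist2) / alpha)**beta)
    # spatial context model: h = F^-1( F(conf) / F(I * w) ), regularized
    eps = 1e-6
    Hsc = np.fft.fft2(conf) / (np.fft.fft2(P * w) + eps)
    # confidence map on the new frame and its peak location
    c_new = np.real(np.fft.ifft2(Hsc * np.fft.fft2(C * w)))
    dr, dc = np.unravel_index(np.argmax(c_new), c_new.shape)
    return (r0 + dr - win, c0 + dc - win)
```

In a full tracker, `Hsc` would be blended into the temporal model $H^{stc}$ with learning rate $\rho$ before being applied to the next frame.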

The confidence map is maximized, so the location of the target can be obtained as
$$\mathbf{x}^*_{t+1} = \arg\max_{\mathbf{x} \in \Omega_c(\mathbf{x}^*_t)} c_{t+1}(\mathbf{x}).\qquad(10)$$

Since the target attitude may change in the process of target movement, the size of the target may also change, and the background information may differ from frame to frame. Therefore, a scale update strategy is employed for the target, which is given as
$$s'_t = \sqrt{\frac{c_t(\mathbf{x}^*_t)}{c_{t-1}(\mathbf{x}^*_{t-1})}},\quad \bar{s}_t = \frac{1}{n}\sum_{i=1}^{n} s'_{t-i},\quad s_{t+1} = (1-\lambda)\, s_t + \lambda\, \bar{s}_t,\quad \sigma_{t+1} = s_t\, \sigma_t,\qquad(11a\text{–}11d)$$
where $s'_t$ is the scale estimated between two consecutive frames, $\bar{s}_t$ is the average of the $n$ most recent estimates, $\lambda$ is a learning rate, and $\sigma_t$ is the scale parameter of the weight function at the $t$-th frame.

However, in Equation (11a), the denominator may be close to zero, so the moving-object tracking results may occasionally diverge. For this reason, an improved scale update strategy is introduced to avoid an abrupt change, based on a penalty term $\delta$:
$$s'_t = \sqrt{\frac{c_t(\mathbf{x}^*_t)}{c_{t-1}(\mathbf{x}^*_{t-1}) + \delta}},\qquad(12)$$
where $\delta$ is a small positive constant. In this way, the updated scale can be obtained by substituting (12) into (11b)–(11d).
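The penalized scale update can be sketched as follows. This is a minimal illustration assuming the penalty enters the denominator as described above; the constant values (`lam`, `delta`) are illustrative, not taken from the paper.

```python
def update_scale(conf_curr, conf_prev, s_prev, sigma_prev, lam=0.25, delta=1e-3):
    """Smoothed scale update with a small penalty constant `delta` in the
    denominator, preventing a blow-up when the previous confidence is near zero."""
    s_raw = (conf_curr / (conf_prev + delta)) ** 0.5  # penalized scale estimate
    s_new = (1.0 - lam) * s_prev + lam * s_raw        # low-pass filtered scale
    return s_new, sigma_prev * s_new                  # updated weight-scale parameter
```

Even with `conf_prev = 0`, the estimate remains finite, which is the point of the penalty term.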

2.2. Staring Imaging Attitude Dynamics Model
2.2.1. The Definitions of the Related Coordinate Systems

In this paper, some related coordinate systems are shown in Figure 3. The Earth-centered inertial coordinate system is defined as $O_e x_e y_e z_e$, where the coordinate origin $O_e$ is located at the center of mass of the Earth. The $x_e$-axis lies in the equatorial plane and points at the vernal equinox of the epoch. The direction of the $z_e$-axis is consistent with the Earth's rotation axis. The $y_e$-axis lies in the equatorial plane and completes the right-handed orthogonal frame. The satellite body coordinate system is defined as $O_b x_b y_b z_b$, where the center of mass of the satellite is the origin of the coordinate system. The image coordinate system, the camera coordinate system, and the image pixel coordinate system are also defined, as shown in Figure 3.

2.2.2. The Attitude Solution Based on Satellite Images

In this paper, we assume that the camera coordinate system coincides with the satellite body coordinate system, and that the unit vector in the boresight (optical axis) direction is aligned with the $z_b$-axis. As shown in Figure 4, the coordinates of the target $P$ in the pixel coordinate system, measured from the image center, are set as $(p_x, p_y)$. In the satellite body coordinate system, the target line-of-sight direction can be described as
$$\mathbf{r}_P = [\, p_x l,\ p_y l,\ f \,]^{\mathrm T},$$
where $\mathbf{r}_P$ is the coordinate of the target in the satellite body coordinate system, $f$ is the focal length of the spaceborne camera, and $l$ is the pixel size.
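The line-of-sight construction from pixel coordinates can be sketched as below. This is a minimal illustration under the stated assumption that the boresight is the body $z$-axis; the focal length and pixel size are placeholder values, not the Jilin-1 camera parameters.

```python
import numpy as np

def los_unit_vector(px, py, f=0.5, l=5e-6):
    """Unit line-of-sight vector in the satellite body frame for a target at
    pixel offset (px, py) from the image centre.
    f: focal length [m], l: pixel size [m] (illustrative values)."""
    r = np.array([px * l, py * l, f])  # boresight assumed along +z of the body frame
    return r / np.linalg.norm(r)
```

A target at the image center gives a line of sight along the optical axis, so the pointing error is zero exactly when the target is centered.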

The purpose is to make the target image at the center of the image plane. Hence, it is necessary to bring the target line of sight into coincidence with the optical axis. In the process of staring tracking, the images are also required to remain stable and free of rotation, which is convenient for image observation and analysis. Staring tracking imaging is thus the process of controlling the optical axis to track the target line of sight.

In the satellite coordinate system , the following assumptions are given as

First, we rotate around the $y_b$-axis so that the projection of the line of sight onto the $O_b x_b z_b$ plane coincides with the optical axis. The rotation angle is expressed as
$$\theta_y = \arctan\frac{p_x l}{f}.$$

Then, we rotate around the $x_b$-axis so that the line of sight coincides with the optical axis. The rotation angle is demonstrated as
$$\theta_x = -\arctan\frac{p_y l}{\sqrt{(p_x l)^2 + f^2}}.$$

The attitude quaternion is defined as $\mathbf{q} = [\mathbf{q}_v^{\mathrm T},\ q_0]^{\mathrm T}$, where $\mathbf{q}_v$ is the vector part and $q_0$ is the scalar part. The quaternion $\mathbf{q}_1$ in Equation (19) is obtained through rotating by $\theta_y$ around the $y_b$-axis:
$$\mathbf{q}_1 = [\,0,\ \sin(\theta_y/2),\ 0,\ \cos(\theta_y/2)\,]^{\mathrm T}.\qquad(19)$$

Through rotating by $\theta_x$ around the $x_b$-axis, the quaternion $\mathbf{q}_2 = [\,\sin(\theta_x/2),\ 0,\ 0,\ \cos(\theta_x/2)\,]^{\mathrm T}$ is obtained, and the combined rotation is $\mathbf{q}_m = \mathbf{q}_1 \otimes \mathbf{q}_2$, where $\otimes$ is the rotation multiplication operator of quaternions. Thereby, $\mathbf{q}_m$ gives the expected attitude error quaternion.

Therefore, the expected attitude quaternion relative to the Earth inertial frame can be expressed as $\mathbf{q}_d = \mathbf{q}_b \otimes \mathbf{q}_m$, where $\mathbf{q}_b$ is the attitude quaternion of the satellite body coordinate system relative to the Earth inertial frame.

The attitude kinematics equation is
$$\dot{\mathbf q} = \frac{1}{2}\,\Xi(\mathbf q)\,\boldsymbol\omega,\qquad \Xi(\mathbf q) = \begin{bmatrix} q_0 I_3 + [\mathbf q_v \times] \\ -\mathbf q_v^{\mathrm T} \end{bmatrix},$$
where $[\mathbf q_v \times]$ is the antisymmetric (cross-product) matrix of $\mathbf q_v$ and $I_3$ is the $3\times3$ identity matrix. Then, for a unit quaternion, the following equation can be verified:
$$\Xi(\mathbf q)^{\mathrm T}\,\Xi(\mathbf q) = I_3.\qquad(23a)$$

Using Equation (23a), the expected angular velocity is inversely solved from the kinematics as
$$\boldsymbol\omega_d = 2\,\Xi(\mathbf q_d)^{\mathrm T}\,\dot{\mathbf q}_d.$$

The expected attitude error angular velocity is given as
$$\boldsymbol\omega_e = \boldsymbol\omega - C(\mathbf q_e)\,\boldsymbol\omega_d,$$
where $C(\mathbf q_e)$ is the attitude matrix determined by the error quaternion $\mathbf q_e$.

If $\mathbf q = [\mathbf q_v^{\mathrm T},\ q_0]^{\mathrm T}$ is any unit quaternion, we can obtain the attitude matrix determined by $\mathbf q$ as
$$C(\mathbf q) = (q_0^2 - \mathbf q_v^{\mathrm T}\mathbf q_v)\, I_3 + 2\,\mathbf q_v \mathbf q_v^{\mathrm T} - 2\, q_0\, [\mathbf q_v \times].$$
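The quaternion-to-attitude-matrix conversion above (scalar-last convention) can be sketched as:

```python
import numpy as np

def skew(v):
    """Antisymmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_matrix(q):
    """Attitude matrix C(q) for a unit quaternion q = [qv, q0] (scalar last):
    C = (q0^2 - qv.qv) I + 2 qv qv^T - 2 q0 [qv x]."""
    qv, q0 = np.asarray(q[:3], dtype=float), float(q[3])
    return (q0**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * q0 * skew(qv)
```

With this sign convention the matrix acts as the passive direction-cosine matrix, i.e. it expresses a fixed vector in the rotated frame.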

2.3. Sliding Mode Controller

In this paper, based on three orthogonally mounted reaction flywheels as actuators, the satellite attitude dynamics equation is given as
$$J\dot{\boldsymbol\omega} = -\boldsymbol\omega \times (J\boldsymbol\omega + \mathbf h) + \mathbf u + \mathbf d,$$
where $J$ is the inertia matrix of the satellite, $\boldsymbol\omega$ is the angular velocity of the satellite body coordinate system, $\mathbf h$ is the angular momentum of the flywheels, $\mathbf u$ is the control torque, and $\mathbf d$ is the external disturbance torque.
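The rigid-body dynamics with wheel momentum can be sketched as a one-line right-hand side for numerical integration. This is a generic sketch of the standard model above, not the paper's simulation code; the inertia values in the usage below are illustrative.

```python
import numpy as np

def omega_dot(omega, h, u, d, J):
    """Angular acceleration from J w_dot = -w x (J w + h) + u + d
    (rigid satellite with reaction-wheel angular momentum h)."""
    return np.linalg.solve(J, -np.cross(omega, J @ omega + h) + u + d)
```

A quick sanity check: spin about a principal axis with zero wheel momentum, control, and disturbance gives zero angular acceleration, since the gyroscopic cross product of parallel vectors vanishes.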

In this paper, the sliding mode function is designed as
$$\mathbf s = \boldsymbol\omega_e + c\,\mathbf q_{ev},$$
where $\mathbf q_{ev}$ is the vector part of the error quaternion and $c > 0$. If $\mathbf s = \mathbf 0$, both the attitude error and the angular velocity error converge, so the desired attitude and angular velocity can be tracked.

The reaching law method is used to obtain the sliding mode control law.

The exponential reaching law is used:
$$\dot{\mathbf s} = -\varepsilon\,\mathrm{sgn}(\mathbf s) - k\,\mathbf s,$$
where $\varepsilon > 0$ and $k > 0$. Combining the reaching law with the attitude dynamics and kinematics, the control quantity $\mathbf u$ is obtained, where $\varepsilon$ and $k$ are controller parameters. We notice that the chattering of the sliding mode controller is mainly caused by the switching term $\varepsilon\,\mathrm{sgn}(\mathbf s)$. In order to reduce chattering, $\mathrm{sgn}(\mathbf s)$ is replaced by the hyperbolic tangent function, so the switching term is rewritten as $\varepsilon\tanh(\mathbf s/\delta)$, where the inflection point of the hyperbolic tangent function is determined through the value of $\delta$.
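The effect of the tanh substitution can be sketched with the switching term in isolation. This is an illustrative sketch with placeholder gains, not the paper's controller: near the sliding surface the `tanh` version varies smoothly, while the `sgn` version jumps between $\pm\varepsilon$, which is the source of chattering.

```python
import numpy as np

def smc_switching(s, eps=0.1, k=0.5, delta=0.05, smooth=True):
    """Reaching-law term -eps*sw(s) - k*s, where sw is sgn(s) or its
    smooth replacement tanh(s/delta); delta sets the transition width."""
    sw = np.tanh(s / delta) if smooth else np.sign(s)
    return -eps * sw - k * s
```

For a tiny sliding variable, the smooth term is nearly zero, whereas the discontinuous term still commands the full magnitude `eps`, flipping sign with the noise in `s`.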

2.4. Stability Analysis

In order to ensure that the state of the system moves from any initial point to the sliding surface $\mathbf s = \mathbf 0$ in finite time under the designed sliding mode controller, the following assumptions are made: (1) the disturbance $\mathbf d$ is bounded, with $\|\mathbf d\| \le d_{\max}$, where $d_{\max}$ is the bound of $\mathbf d$; (2) $\varepsilon > d_{\max}$.

The stability of the system is proved as follows. The Lyapunov function is constructed as
$$V = \frac{1}{2}\,\mathbf s^{\mathrm T}\mathbf s.\qquad(33)$$

The derivative of (33) along the reaching law satisfies
$$\dot V = \mathbf s^{\mathrm T}\dot{\mathbf s} = -\varepsilon\,\mathbf s^{\mathrm T}\,\mathrm{sgn}(\mathbf s) - k\,\mathbf s^{\mathrm T}\mathbf s \le 0,$$
where $\dot V = 0$ if and only if $\mathbf s = \mathbf 0$, so $\dot V$ is a negative semidefinite function. Therefore, the system is convergent under the sliding mode control.

2.5. Fuzzy Logic System

A fuzzy logic system (FLS) consists of a fuzzifier, a fuzzy rule base, a fuzzy inference engine, and a defuzzifier, as shown in Figure 5. In this paper, we assume that $x_1, \ldots, x_p \in U$ are the inputs and $y \in Y$ is the output. Furthermore, the fuzzy rule base is composed of $M$ rules of the form
$$R^l:\ \text{IF } x_1 \text{ is } F_1^l \text{ and } \cdots \text{ and } x_p \text{ is } F_p^l,\ \text{THEN } y \text{ is } G^l,$$
where $l = 1, \ldots, M$. Thereby, the FLS can be viewed as a mapping from the fuzzy input sets in $U$ to a fuzzy output set in $Y$, denoted by $F_1^l \times \cdots \times F_p^l \to G^l$.

The rule can be described by the membership function $\mu_{F_1^l \times \cdots \times F_p^l \to G^l}(\mathbf x, y)$, in which the multiple juxtaposed antecedents are connected with the t-norm.

Given the $p$-dimensional input $\mathbf x = (x_1, \ldots, x_p)$, the fuzzifier maps it to a fuzzy set $A_x$ with membership function $\mu_{A_x}(\mathbf x)$.

According to each fuzzy rule $R^l$, the inference engine produces a fuzzy set $B^l$ on the output space $Y$.

Meanwhile, based on the commutativity of the t-norm, the membership function in (41) can be obtained.

Employing the singleton fuzzifier, (41) can be simplified and rewritten accordingly.

Using the centroid (center-average) defuzzifier, the FLS output can be expressed as
$$y(\mathbf x) = \frac{\sum_{l=1}^{M} \bar y^l \prod_{i=1}^{p} \mu_{F_i^l}(x_i)}{\sum_{l=1}^{M} \prod_{i=1}^{p} \mu_{F_i^l}(x_i)},$$
where $\bar y^l$ is the point at which the output fuzzy set $G^l$ attains its maximum membership, and $y(\mathbf x)$ is the crisp output.
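The center-average defuzzification for a single-input FLS can be sketched as below. This is a generic illustration with made-up Gaussian membership functions and rule centers, not the membership functions of Figure 6.

```python
import numpy as np

def fls_output(x, centers, mu_funcs):
    """Center-average defuzzified output of a single-input FLS:
    y(x) = sum_l ybar_l * mu_l(x) / sum_l mu_l(x).
    centers: rule output centers ybar_l; mu_funcs: membership functions mu_l."""
    w = np.array([mu(x) for mu in mu_funcs])  # rule firing strengths
    return float(np.dot(centers, w) / np.sum(w))
```

For two symmetric rules centered at -1 and +1, an input midway between them fires both equally and the output is 0, as expected of a weighted average.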

2.6. Fuzzy Sliding Mode Controller

The input and output fuzzy sets of the system are defined in Equations (44a) and (44b), where the sliding variable $s$ is the input and the variation $\Delta\varepsilon$ of the switching gain is the output. The linguistic values NL, NM, NS, ZO, PS, PM, and PL denote negative large, negative middle, negative small, zero, positive small, positive middle, and positive large, respectively. Therefore, the following seven rules are designed:
(i) R1: IF $s$ is NL, THEN $\Delta\varepsilon$ is NL.
(ii) R2: IF $s$ is NM, THEN $\Delta\varepsilon$ is NM.
(iii) R3: IF $s$ is NS, THEN $\Delta\varepsilon$ is NS.
(iv) R4: IF $s$ is ZO, THEN $\Delta\varepsilon$ is ZO.
(v) R5: IF $s$ is PS, THEN $\Delta\varepsilon$ is PS.
(vi) R6: IF $s$ is PM, THEN $\Delta\varepsilon$ is PM.
(vii) R7: IF $s$ is PL, THEN $\Delta\varepsilon$ is PL.

Besides, Figure 6 shows the input/output membership functions of the fuzzy control system. According to the value of $s$, the centroid defuzzifier is employed and the gain variation $\Delta\varepsilon$ is obtained. Meanwhile, the switching gain is estimated by integrating the variation:
$$\hat\varepsilon(t) = G \int_0^t \Delta\varepsilon\,\mathrm d\tau,$$
where $G$ is a proportional coefficient.
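The gain-adaptation loop can be sketched as an accumulated integral of a defuzzified increment. This is a toy illustration of the assumed structure $\hat\varepsilon = G\int\Delta\varepsilon\,dt$: the increment map below is a crude surrogate (gain grows when the sliding variable is large, decays when small) standing in for the seven-rule fuzzy map, and all constants are illustrative.

```python
import math

def estimate_eps(s_sequence, dt, G=1.0, eps0=0.1):
    """Adaptive switching gain eps_hat accumulated over sampled values of the
    sliding variable s. delta is a toy defuzzified increment in [-0.5, 0.5);
    the gain is clipped at zero to stay nonnegative."""
    eps = eps0
    for s in s_sequence:
        delta = math.tanh(abs(s)) - 0.5      # surrogate for the fuzzy Delta-eps
        eps = max(eps + G * delta * dt, 0.0) # eps_hat = eps0 + G * integral
    return eps
```

Persistently large $|s|$ drives the gain up to overpower the disturbance, while a small $|s|$ lets it decay, which is the chattering-reduction rationale of the fuzzy adaptation.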

Hence, the fuzzy sliding mode controller is designed by replacing the fixed gain $\varepsilon$ in the sliding mode control law with the estimate $\hat\varepsilon$.

3. Results and Discussion

In this section, in order to verify the performance of the proposed method, numerical simulations are conducted for the video satellite, where Jilin-1 is selected. The experiments are implemented in Matlab R2018b on an NVIDIA GeForce RTX 2080Ti GPU. First, the ISTC algorithm is employed to extract the image information; the results of moving-target tracking by ISTC are presented in Figure 7. Based on this, comparative experiments are designed between the traditional sliding mode control and the HTFSMC. The initial conditions of the simulations are listed in Table 1. Note that the image size is 4000 pixels × 4000 pixels, which is very large.

According to the above simulation parameters, we assume that the target is located at (0, 510) in the pixel coordinate system at the initial time. The traditional sliding mode controller and the proposed controller are then applied to obtain the variation curves of the output torque, attitude angle, and angular velocity.

In Figure 8, we can see that the output torques in the x and y directions converge after about 40 s under the sliding mode controller, whereas in Figure 9 they converge after about 25 s under the HTFSMC. Compared with the sliding mode control, the proposed controller improves the convergence speed. Besides, the output torque in the z direction converges faster, after about 5 s, for both controllers.

In Figures 10 and 11, for the traditional sliding mode controller and the HTFSMC, the attitude-angle components converge after about 30 s and 20 s (first component) and about 40 s and 20 s (second component), respectively, while the third component converges after about 20 s for both.

In Figures 12 and 13, for the traditional sliding mode controller and the HTFSMC, the angular-velocity components converge after about 30 s and 23 s, about 40 s and 27 s, and about 30 s and 23 s, respectively.

In Figure 14, the image information of the space-target video is used as the input of the control loop, and the image change caused by attitude adjustment is simulated. The image is scaled to 2000 pixels × 2000 pixels, representing the satellite's visual field, and is shown in black; the actual image size is 4000 pixels × 4000 pixels, and the visual field is embedded in the actual image. The red box marks the target and the green box marks the visual-field center. It can be seen that attitude control based on image information feedback is achieved by the proposed controller, and the target image is driven to the center of the image plane, realizing effective gaze tracking control of the space target. Accordingly, Figure 15 shows the trajectory of the target in the image plane under the fuzzy sliding mode control. The position of the visual-field center is (2000, 2000); the target is initially away from the center and is brought to it by the feedback control based on image information. Figure 16 shows the optical-axis pointing error converging to 0 after about 60 s under the proposed controller.

Figure 17 shows the simulation results of moving-target gaze tracking in a Jilin-1 video, in which the image changes caused by attitude adjustment are simulated and the moving target is an airplane. The airplane is not at the visual-field center at the initial moment and is imaged at the center under the proposed image-information-based controller. Accordingly, Figure 18 shows the trajectory of the target in the image plane under the fuzzy sliding mode control, and Figure 19 shows the optical-axis pointing error converging to 0 quickly under the proposed controller. This means that gaze tracking of a moving space target is simulated effectively.

4. Conclusions

Staring-imaging attitude tracking and control for video satellites based on image information has been studied in this paper. An ISTC algorithm was designed to obtain the image information. Based on this, an HTFSMC law was introduced to achieve attitude tracking and control, with the hyperbolic tangent function and a fuzzy logic system incorporated into the sliding mode controller.

In the experiments, the image information of the space-target video sequence captured in orbit by Jilin-1 was used as the input of the control loop, and the control part was realized through simulation. Compared with the traditional sliding mode controller, the proposed controller reproduces the image change caused by attitude adjustment successfully and more quickly, and the target image can be driven to the center of the image plane, realizing effective gaze tracking control of the space target. In future work, space-target video sequences will be used directly as the input of the control loop.

Data Availability

The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (NSFC) (62133001 and 61520106010) and the National Basic Research Program of China (973 Program) (2012CB821200 and 2012CB821201).