Abstract

It has been discovered that image motion and optical flow usually become much more nonlinear and anisotropic in space-borne cameras with a large field of view, especially when perturbations or jitters exist. The phenomenon arises from the fact that the attitude motion greatly affects the image of the three-dimensional planet. In this paper, exploiting these characteristics, an optical flow inversion method is proposed for high-accuracy measurement of remote sensor attitude motion. The principle of the new method is that angular velocities can be measured precisely by reconstructing certain nonuniform optical flows. The first step of the method is to determine the relative displacements and deformations between the overlapped images captured by different detectors. A novel dense subpixel image registration approach is developed for this purpose. On this basis, the optical flow can be reconstructed and high-accuracy attitude measurement is achieved. In the experiment, a remote sensor and its original photographs are investigated, and the results validate that the method is highly reliable and highly accurate over a broad frequency band.

1. Introduction

For remote sensors in dynamic imaging, one important technology is image motion compensation. In practice, determining image motion velocity precisely is a very hard problem. In [1, 2], optical correlators are utilized to measure image motion in real time based on a sequence of mildly smeared images with low exposure. This technique is appropriate for situations in which the whole image velocity field is uniform. Some other blind motion estimation algorithms in [3–5] have been applied to image postprocessing; they can roughly detect inhomogeneous image motion but lack real-time performance because of their complexity. As for space imaging, in order to avoid motion blurring, image motion velocity needs to be computed in real time according to the current physical information about the spacecraft's orbit and attitude motion, which can be obtained by space-borne sensors such as star trackers, gyroscopes, and GPS. Wang et al. developed a computational model for image motion vectors and presented an error budget analysis in [6]. They focused on small field of view (FOV) space cameras used in push-broom imaging with small attitude angles. In that situation, the nonlinearity of the image motion velocity field does not appear significant. However, for cameras with larger FOV, image motion velocity fields are definitely nonlinear and anisotropic because the geometry of the planet greatly modulates the moving images. Under these circumstances, the detectors need to be controlled separately to keep their time series synchronized with the instantaneous image velocities.

The time-phase relations between the photos belonging to different detectors are affected by optical flows, which are uniquely determined by the behavior of the image velocity field in a specific period. Some phenomena of moving image variation and distortion due to optical flow have been reported [7–10]. References [7, 8] describe the camera on the Mars Reconnaissance Orbiter (MRO) used in NASA's High Resolution Imaging Science Experiment (HiRISE) missions. It takes pictures of Mars at a resolution of 0.3 m/pixel. Fourteen staggered parallel CCDs, overlapping by 48 pixels at each end, cover the entire field of view. Although adjacent detectors overlap by an equal number of physical pixels, their lapped image pixels are unequal and vary with time, because spacecraft jitter causes undulating optical flows within the interlaced areas [8]. In addition, we found that when large FOV remote sensors perform stereoscopic imaging with large pitch angles, the lapped images belonging to marginal detectors are bound to exceed or lose several hundred pixels compared to their physical overlaps. Furthermore, this unexpected quantity decreases significantly for detectors mounted in the central region of the focal plane.

Although nonuniform optical flow brings many difficulties in image processing, such as registration, resampling, mosaicking, and geometrical rectification, it permits us to measure the spacecraft attitude motion with very high accuracy over a broad bandwidth, a goal that is nearly impossible for conventional space-borne sensors. Precise attitude motion measurement is very useful for remote sensing image processing, especially for image restoration from motion blurring as studied in [11, 12]. By combining the measurements with optical flow models, the dynamic point spread functions (PSFs) can be estimated and used as convolution kernels in nonblind deconvolution algorithms.

The behavior of optical flow characterizes the entire two-dimensional flow field of an image's motion and variation. In [13], optical flow estimation from image sequences of the same aurora is used to determine the flow field, providing access to the phase space, which carries important information for understanding the physical mechanism of the aurora. To improve the accuracy of optical flow estimation, a two-step matching paradigm is applied in [14]: first, a coarse distribution of motion vectors is obtained with a simple frame-to-frame correlation technique, the digital symmetric phase-only filter (SPOF), and then subpixel accuracy is achieved using sequential tree-reweighted max-product message passing (TRW-S) optimization. Similarly, Sakaino addressed the difficulties in optical flow determination when moving objects of different shapes and sizes move against a complicated background, where the image intensity between frames may violate the brightness constancy assumption commonly used as a constraint [15]. However, unlike the case of continuous image sequences, if we merely obtain several images of the same moving objects captured by different detectors with long intervals, the former techniques do not work well for optical flow estimation because they lack information about the imaging process of the instrument.

In this paper, a new optical flow inversion method is proposed for precise attitude measurement. Unlike the situations in [13–15], video-like image sequences do not exist for transmission-type remote sensors; instead, we have image pairs of the same earth scene captured by different TDI CCD detectors in push-broom fashion. The time intervals between the independent image formations corresponding to the overlapped detectors are much longer than the interval between sequential video frames, whose frame rates usually exceed tens of frames per second (fps). However, we can model optical flows based on the working mechanism of the instrument and image processing techniques, rather than estimating them from frame sequences of a specific detector. The contents of this paper are organized as follows: in Section 2, an analytical model of the image motion velocity field is established, which is applicable to dynamic imaging of a three-dimensional planet surface by large FOV remote sensors. The phenomenon of moving image deformation due to optical flow is investigated in Section 3. Based on rough inversion of optical flow, a novel method for dense image registration is developed to measure the subpixel offsets between the lapped images captured by adjacent detectors. In Section 4, an attitude motion measuring method based on precise optical flow inversion is studied, and the experimental results support the theory well.

2. Image Velocity Field Analysis

Suppose that a large FOV camera is performing push-broom imaging of the earth; the scenario is illustrated in Figure 1. The planet's surface cannot be regarded as a local plane but must be treated as a three-dimensional ellipsoid, since its geometry greatly influences the image motion and time-varying deformation when complicated relative motion exists between the imager and the earth.

In order to set up the model of space imaging, some coordinate systems need to be defined as follows.
(1) The inertial frame of the earth. For convenience, here we choose the frame. The origin is located at the earth's center.
(2) The frame of the camera. Axis is the optical axis, and the origin is the center of the exit pupil.
(3) The orbit frame. Axis passes through the center of the earth, and axis is perpendicular to the instantaneous orbit plane.
(4) The body frame of the satellite.
(5) The frame of photography. The origin is the center of the photo. Axis points in the column direction, and axis points in the row direction.
(6) The frame of the focal plane. Axes and lie in the focal plane; they are, respectively, parallel to and . Axis coincides with the optical axis.
(7) The Terrestrial Reference Frame (TRF). Axis points to the North Pole, and axis passes through the intersection of the Greenwich meridian and the equator.

According to Figure 1, is the ground track of the satellite, and and are the ground traces corresponding to two fixed boresights in the FOV, which are far away from if the imager holds a large attitude angle. Obviously, the shapes and lengths of and also have notable differences during push-broom imaging, which implies that the geometrical structure of the image is time varying as well as nonuniform. Furthermore, it will be shown later that the deformation rates mainly depend on the planet's apparent motion observed by the camera.

Considering an object point on the earth, its position vector relative to is denoted as . As a convention in the following discussions, represents the vector measured in frame , and accordingly is the same vector measured in frame . We select one unit vector which is tangent to the surface of the earth at . Let be the position vector of relative to ; then and characterize the apparent motion of . Assume that the image point is formed on the focal plane with coordinates in frame . Generally, the optical systems of space cameras are well designed and are free from optical aberrations and the static PSF is approximate to the diffraction limit [16, 17]; thus following [18], we have where is the effective focal length, the lateral magnification of : , is the number of intermediate images in the optical system, and is the base of .

Let be the position vector of the satellite relative to ; then . During imaging, the flight trajectory of the satellite platform in can be treated as a Keplerian orbit, as illustrated in Figure 2. Given the orbit elements ( , inclination; , longitude of ascending node; , argument of perigee; , semimajor axis; , eccentricity; , mean anomaly at epoch), we implement the Newton-Raphson method to solve (2) and obtain the eccentric anomaly from the given mean anomaly , where , is the orbit period [11]: In frame , The coordinate transformation matrix between and is For simplicity, we write .
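As an illustration of this step, the sketch below solves Kepler's equation in its standard form, M = E − e·sin E, with the Newton-Raphson iteration, which is presumably what (2) denotes. The function name, tolerance, and starting guess are illustrative choices, not taken from the paper.

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton-Raphson."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M          # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)           # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# The mean anomaly at time t follows from the mean anomaly at epoch M0 and
# the mean motion n = 2*pi/T, i.e. M = M0 + n*(t - t0), with T the orbit period.
```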

In engineering, the coordinate transformation matrix can also be derived from real-time GPS measurements. Since the base vectors of frame in are , , and , then : Associating the boresight equation with the ellipsoid surface of the earth in yields Here,  km and  km are the lengths of the earth's semimajor and semiminor axes, and are the unit vectors of . We write the solution of (7) as . Hence, , , where is the coordinate transformation matrix from frame to frame , a constant matrix for a fixed installation, and is the attitude matrix of the satellite; according to the 1-2-3 rotation order, we have in which where , , and are, in order, the real-time roll, pitch, and yaw angles at moment . The velocity of in can be written in the following scalar form: Thus, the velocity of the image point of will be Substituting (2)–(9) into (10), the velocity vector of the image point can be expressed as an explicit function of several variables; that is, For conciseness, this analytical expression of is omitted here.
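The sketch below shows one way the roll-pitch-yaw attitude matrix for a 1-2-3 rotation order could be assembled. The explicit matrices are not reproduced in the text, so the composition order and sign conventions used here are assumptions made for illustration only.

```python
import numpy as np

def rot_1(a):  # elementary (frame) rotation about axis 1
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_2(a):  # elementary rotation about axis 2
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def rot_3(a):  # elementary rotation about axis 3
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def attitude_matrix(roll, pitch, yaw):
    """Orbit-to-body attitude matrix for a 1-2-3 (roll-pitch-yaw) sequence.
    The composition order shown here is an illustrative assumption."""
    return rot_3(yaw) @ rot_2(pitch) @ rot_1(roll)
```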

The orbit elements can be determined from instantaneous GPS data; they can also be calculated with sufficient accuracy by celestial mechanics [19]. On the other hand, the attitude angles , and can be roughly measured by the star trackers and GPS. Meanwhile, their time rates , , and satisfy the following relations: , , and are the three components of the remote sensor's angular velocity relative to the orbital frame , resolved in frame . These can be roughly measured by space-borne gyroscopes or other attitude sensors.

It is easy to verify from (11) that the instantaneous image velocity field on the focal plane is significantly nonlinear and anisotropic for large FOV remote sensors, especially when they perform large-angle attitude maneuvers, for example, side looking by swinging or stereoscopic looking by pitching. Under these circumstances, in order to acquire photos with high spatial, temporal, and spectral resolution, image motion velocity control strategies should be executed in real time [20] based on auxiliary data measured by reliable space-borne sensors [21, 22]. In detail, for TDI CCD cameras, the line rates of the detectors must be controlled to synchronize with the local image velocity magnitudes during exposure so as to avoid along-track motion blurring; the attitude of the remote sensor should be regulated in time to keep the detectors' push-broom direction aligned with the direction of image motion so as to avoid cross-track motion blurring.

3. Optical Flow Rough Inversion and Dense Image Registration

Optical flow is another important physical model, carrying the whole energy and information of the moving images in dynamic imaging. A specific optical flow trajectory is an integral curve that is always tangent to the image velocity field; thus we have Since (13) are coupled nonlinear integral equations, we convert them to numerical form and solve them iteratively.

It is evident that the algorithm has sufficient precision so long as the time step is small enough. It can be inferred from (13) that a strongly nonlinear image velocity field may distort optical flows so much that the geometrical structure of the image behaves irregularly. Therefore, if we intend to invert the optical flow information to measure the attitude motion, the general formula of image deformation due to the optical flows should be deduced.
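A minimal sketch of the iterative trajectory solution is given below, assuming a forward-Euler discretization of (13) and a placeholder `image_velocity` function standing in for the analytical velocity field of Section 2; both names are illustrative.

```python
import numpy as np

def trace_optical_flow(x0, y0, t0, t1, dt, image_velocity):
    """Integrate one optical flow trajectory dx/dt = v(x, y, t) on the focal
    plane with a forward-Euler scheme. `image_velocity(x, y, t)` is a
    placeholder for the analytical image velocity field of Section 2."""
    x, y, t = x0, y0, t0
    trajectory = [(t, x, y)]
    while t < t1:
        vx, vy = image_velocity(x, y, t)
        x += vx * dt        # advance the image point along the local velocity
        y += vy * dt
        t += dt
        trajectory.append((t, x, y))
    return trajectory
```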

3.1. Time-Varying Image Deformation in Dynamic Imaging

Firstly, we investigate some differential characteristics of the moving image of an extended object on the earth's surface. As shown in Figure 1, a microspatial variation of along on the curved surface can be expressed as . Its conjugated image is We expand the term : Taking derivatives with respect to the variable on both sides of (15), we have According to (16), we know that . On the other hand, the variation of can be expressed through a series of coordinate transformations; that is, Notice that is a fixed tangent vector of the earth's surface at object point , which is time-invariant and specifies an orientation of the motionless scene on the earth.

Consequently, where the coordinate transformation matrix from frame to is Let be the angular rate of the earth and the longitude of on the earth; then the hour angle of at time is , in which represents the Greenwich sidereal time.
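Assuming the elided relation is the standard one (the Greenwich sidereal angle at the reference epoch advanced by earth rotation, plus the east longitude of the point), the hour angle could be evaluated as in the sketch below; the constant value and the function name are illustrative assumptions.

```python
import math

OMEGA_E = 7.2921150e-5  # earth rotation rate in rad/s (WGS-84 value)

def hour_angle(gst0, longitude, t, t0=0.0):
    """Hour angle of a ground point of given east longitude at time t,
    assuming hour angle = GST(t0) + omega_e*(t - t0) + longitude."""
    return math.fmod(gst0 + OMEGA_E * (t - t0) + longitude, 2.0 * math.pi)
```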

The microscale image deformation of the extended scene on the earth along the direction of during can be written as From (17), we have According to (16), (18), and (19) we obtain the terms in (22): Furthermore, if the camera is fixed to the satellite platform, then .

Consequently, (22) becomes For the motionless scene on the earth's surface, is a time-independent but space-dependent unit tangent vector, which also represents a specific orientation on the ground. Moreover, the physical meaning of the function is the image deformation of a unit-length curve on the curved surface along the direction of per unit time; that is, the instantaneous space-time deformation rate of the image of the object along .

Consequently, in dynamic imaging, macroscopic deformation on the moving image can be derived from the integral of in space and time. Referring to Figure 1, let be an arbitrary curve of the extended object on the earth, let be its image, let two arbitrary points , and let their Gaussian images . Let be a vector-valued function with variable (the length of the arc) which is time-invariant in frame and gives the tangent vectors along the curve.

So, the image deformation taking place during is able to be described as in which .

Now, in terms of (24) and (25), we can see that the image deformation is also anisotropic and nonlinear, depending not only on the optical flow's evolution but also on the geometry of the scene.

3.2. Dense Image Registration through Optical Flow Prediction

As mentioned in the preceding sections, optical flow is the most precise model for describing image motion and time-varying deformation. Conversely, it is possible to invert optical flow with high accuracy if the image motion and deformation can be detected. As we know, the low frequency components of angular velocity are easier to sense precisely with attitude sensors such as gyroscopes and star trackers, but the higher frequency components are hard to measure with high accuracy. In practice, however, perturbations from high frequency jitter are the critical cause of motion blurring and local image deformation, since the influence of the low frequency components of attitude motion is easier to restrain during imaging by regulating the remote sensor.

Since (13) and (25) are very sensitive to the attitude motion, the angular velocity can be measured with high resolution and broad frequency bandwidth so long as the image motion and deformation are determined with a certain precision. Fortunately, the lapped images of the overlapped detectors meet this need, because they are captured in turn as the same parts of the optical flow pass through the adjacent detectors sequentially. Without loss of generality, we will investigate the most common form of CCD layout, in which two rows of detectors are arranged in parallel. The time-phase relations of image formation due to optical flow evolution are illustrated in Figure 3, where the moving image elements (in the left gap) and (in the right gap) are first captured at the same time as their optical flows pass through the prior detectors. However, because of nonuniform optical flows, they will not be captured simultaneously by the posterior detectors. Therefore, the geometrical structures of the photographs will be time varying and nonlinear. It is evident from Figure 3 that the displacements and relative deformations in frame between the lapped images can be determined by measuring the offsets of the sample image element pairs in frame .

Let be the relative offsets of the same object’s image on the two photos; they are all calibrated in or . We will measure them by image registration.

As far as image registration methods are concerned, one of the hardest problems is complex deformation, which tends to weaken the similarity between the referenced images and sensed images, introducing large deviations from the true values or even causing algorithm failure. Some typical methods have been studied in [23–25]. Generally, most of them concentrate on several simple deformation forms such as affine, shear, translation, rotation, or their combinations instead of investigating more sophisticated dynamic deformation models. In [26–30], some effective approaches have been proposed to increase the accuracy and robustness of the algorithms using reasonable models tailored to the specific properties of the target images.

For conventional template-based registration methods, once a template has been extracted from the referenced image, the information about gray values, shape, and frequency spectrum does not increase, since no additional physical information is introduced. In reality, however, this information has changed by the time the optical flows arrive at the posterior detectors. Therefore, the cross-correlations between the templates and sensed images are certainly reduced. So, in order to detect the minor image motions and complex deformations between the lapped images, high-accuracy registration is indispensable, which means that a more precise model should be implemented. We address this with a technique called template reconfiguration. In summary, the method is built on the idea of preserving the completeness of the optical flow information.

In operation, as indicated in Figure 3, we take the lapped images captured by the detectors in the prior array as the referenced images and the images captured by the posterior detectors as the sensed images. Firstly, we rebuild the optical flows based on the rough measurements of the space-borne sensors and then reconfigure the original templates to construct new templates whose morphologies are closer to the corresponding parts of the sensed images. In this way, information about the imaging process is added to the new templates, increasing their similarity to the sensed images. The method can dramatically raise the accuracy of dense registration, so that high-accuracy offsets between the lapped image pairs can be determined.

In the experiment, we examined Mapping Satellite-1, a Chinese surveying satellite operating in a 500 km sun-synchronous orbit and used for high-accuracy photogrammetry [31]; its structure is shown in Figure 4. One of the effective payloads, the three-line-array panchromatic CCD camera, has good geometrical accuracy: its ground pixel resolution is better than 5 m, its spectral range is , and its swath is 60 km. Another payload, the high resolution camera, is designed with a Cook-TMA optical system that provides a wide field of view [16, 17], and its panchromatic spatial resolution can reach 2 m.

In engineering, high-accuracy measurements of jitter and attitude motion are essential for subsequent processing aimed at improving image quality and surveying precision. Thus, here we investigate the images and the auxiliary data of the large FOV high resolution camera to deal with the problem. The experimental photographs were captured with 10° side looking. The focal plane of the camera consists of 8 panchromatic TDI CCD detectors, with physical lapped pixels between adjacent ones.

The scheme of the processing in registering one image element is illustrated in Figure 5.

Step 1. Set the original lapped image strips (the images acquired directly by the detectors, without any postprocessing) in frame .

Step 2. Compute the deformations of all image elements on referenced template with respect to their optical flow trajectories.
We extract the original template from the referenced image, denoted as , which consists of square elements; that is, . Let be its central element and the width of each element; here . Before the moving image is captured by the posterior detector, in terms of (25), its current shape and energy distribution can be predicted from the optical flow based on the auxiliary data of the remote sensor.
In order to simplify the algorithm, a first-order approximation is allowed without introducing significant errors. This approximation means that the shape of every image element remains quadrilateral. Linear interpolations are carried out to determine the four sides according to the deformations along the radial directions of the vertexes, as shown in Figure 5. The unit radial vectors are denoted by in frame : Suppose image point is the center of an arbitrary element in . Let be the area element on the earth surface which is conjugate to . The four unit radial vectors of the vertexes on , are conjugate to and tangent to the earth surface at . From the geometrical relations, we have where is the unit normal vector of at . We predict the deformations along during according to the measurements of GPS, star trackers, and gyroscopes, as explained in Figure 6. is the imaging time on the prior detector and is the imaging time on the posterior detector. The shape of the deformed image can be obtained through linear interpolation with
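A simple sketch of this first-order approximation is given below: each square element is replaced by the quadrilateral spanned by its four displaced vertices, i.e. a linear interpolation between them. The displacement vectors along the vertex radial directions are assumed to be supplied by the optical flow prediction; all names are illustrative.

```python
import numpy as np

def deform_element(center, half_width, radial_deformations):
    """First-order prediction of a square image element's deformed shape.
    `radial_deformations` is a (4, 2) array of displacement vectors along the
    four vertex radial directions, as predicted by the optical flow model
    (an assumed input). The deformed element is the quadrilateral spanned by
    the displaced vertices."""
    cx, cy = center
    vertices = np.array([[cx - half_width, cy - half_width],
                         [cx + half_width, cy - half_width],
                         [cx + half_width, cy + half_width],
                         [cx - half_width, cy + half_width]])
    return vertices + np.asarray(radial_deformations)
```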

Step 3. Reconfigure referenced template according to optical flow prediction, and then get a new template .
Let be the deformed image of computed in Step 2. Let be the central element of ; integers and are, respectively, the row number and column number of . The gray value of each element in is equal to its counterpart in with the same indexes. In addition, we initialize a null template whose shape and orientation are identical to ; the central element of is denoted by .
Then, we cover upon and let their centers coincide; that is, , as shown in Figure 7. Denote the vertexes of as . Therefore, the connective relation for adjacent elements can be expressed by .
Next, we will reassign the gray value to in sequence to construct a new template . The process is just a simulation of image resample when optical flow arrives at the posterior detector, as indicated in Figure 3.
That is, Weight coefficient where is the area of the intersecting polygon of with .
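A sketch of this area-weighted resampling is given below, assuming that the weight of each deformed element is the area of its intersection with the target element (computed here with the third-party shapely library) and that the result is normalized by the total overlapping area; this reading of the elided formula is an assumption.

```python
from shapely.geometry import Polygon

def reconfigure_gray(target_vertices, deformed_elements):
    """Gray value of one element of the new template, computed as the
    area-weighted mean of the deformed referenced elements overlapping it,
    mimicking the resampling at the posterior detector.
    `deformed_elements` is a list of (vertices, gray) pairs from Step 2."""
    target = Polygon(target_vertices)
    total_area, value = 0.0, 0.0
    for vertices, gray in deformed_elements:
        overlap = target.intersection(Polygon(vertices)).area  # weight
        value += overlap * gray
        total_area += overlap
    return value / total_area if total_area > 0 else 0.0
```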

Step 4. Compute normalized cross-correlation coefficients between and the sensed image, and then determine the subpixel offset of relative to the sensed image in frame .
Firstly, for this method, the search space on the sensed image can be greatly contracted, since the optical flow trajectories of the referenced elements have been predicted in Step 2. Assuming that the search space is , . When moves to the pixel on , the normalized cross-correlation (NCC) coefficient is given by where is the mean gray value of the segment of that is masked by and is the mean of . Equation (31) requires approximately additions and multiplications, whereas the FFT algorithm needs about real multiplications and real additions/subtractions [32, 33].
At the beginning, we take and compute the NCC coefficient. When is much larger than , the calculation in the spatial domain will be efficient. Suppose that the peak value occurs at coordinate in the sensed window. We then reduce the search space to a smaller one of dimension centered on . Next, subpixel registration is realized by the phase correlation algorithm with larger and to suppress the systematic errors owing to the lack of detailed textures in the photo. Here we take . Let the subpixel offset between the two registered image elements be denoted as and in frame .
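The coarse, pixel-level stage could look like the sketch below: an exhaustive NCC search over the contracted search space predicted by the optical flow. Array shapes and function names are illustrative assumptions.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation coefficient between a template and an
    equally sized window of the sensed image."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def coarse_search(template, sensed, search_rows, search_cols):
    """Pixel-level search over the reduced search space; returns the
    location and value of the NCC peak."""
    th, tw = template.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(search_rows):
        for c in range(search_cols):
            score = ncc(template, sensed[r:r + th, c:c + tw])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best
```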
The phase correlation algorithm in the frequency domain becomes more efficient as approaches and both have larger scales [28]. Moreover, the Fourier coefficients are normalized to unit magnitude prior to computing the correlation, so that the correlation is based only on phase information and is insensitive to changes in image intensity [27, 29].
Let be the 2D Discrete Fourier Transform (DFT) of the sensed window; then we have Here
The cross-phase spectrum is given by where is the complex conjugate of . By the inverse Discrete Fourier Transform (IDFT) we have Suppose that the new peak appears at ; referring to [27], we have the following relation: The right side represents the spatial distribution of the normalized cross-correlation coefficients. Therefore, can be measured on that basis. In practice, the constant tends to decrease when small noise exists and equals unity in the ideal case.
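A compact sketch of this phase correlation step is given below: the cross-power spectrum is normalized to unit magnitude before the inverse DFT, and the correlation peak gives the integer shift. Subpixel interpolation around the peak is omitted, and the names are illustrative.

```python
import numpy as np

def phase_correlation(reference, sensed):
    """Phase correlation between two equally sized windows. Returns the
    integer peak location of the correlation surface and the surface itself;
    a subpixel estimate can be obtained by interpolating around the peak."""
    F = np.fft.fft2(reference)
    G = np.fft.fft2(sensed)
    cross = F * np.conj(G)
    cross /= np.maximum(np.abs(cross), 1e-12)   # unit-magnitude normalization
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr
```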

Step 5. Dense registration is executed for the lapped image strips.
Repeating Steps 1–4, we register the along-track sample images selected from the referenced images to the sensed images. The maximal sample rate can reach line-by-line. The continuous procedure is shown in Figure 8, in which the image pairs are marked.
The curves of relative offsets in are shown in Figures 9 and 10.
Let be the column and row indexes of image elements on the referenced image, and let , be the indexes of the same elements on the sensed image. The total number of columns of each detector is pix, and the vertical distance between the two detector arrays is  mm. According to the registration results, we get the image offsets at the th gap, (cross track) and (along track) in frame , and  (mm) in frame . Four pixels S11, S12, S31, and S32 are examined; their data are listed in Table 1.
S11 and S31 are the images of the same object captured in order by CCD1 and CCD2 (Gap 1). S12 and S32 were captured, respectively, by CCD3 and CCD4 (Gap 3). Referring to the auxiliary data, S11 and S31 were captured at the same time, while S12 and S32 were captured at different times, which means that the along-track speeds of the two moving images were quite different. Moreover, the cross-track image offsets in Gap 1 and Gap 3 differ greatly, which indicates that the optical flows were distorted unevenly and deflected away from the along-track direction. On the other hand, it can be seen in Figures 9 and 10 that the fluctuation of the image offsets in Gap 1 is greater in magnitude than that in Gap 3. All these facts indicate that the distorted optical flows can be detected from a large number of image offsets. We will see later that the nonlinear distribution of the data strengthens the well-posedness of the optical flow inversion algorithm.

4. Remote Sensor Attitude Motion Measurement

In this section, the attitude velocity of the remote sensor is resolved using the optical flow inversion method. The results of dense registration are applied to provide the fixed-solution conditions for the optical flow equations.

4.1. The Principle of Optical Flow Inversion

For clarity, in frame , the two coordinate components of the image displacement of the th sample element belonging to the th lapped strip pair are written as , . From (13) and (25), it is easy to show that the contributions to optical flow from orbital motion and the earth's inertial movement vary only slightly in the short term, such that the corresponding displacements can be regarded as piecewise constants .

Let be, in order, the two sequential imaging times of the th image sample on the overlapped detectors in the th gap. They are usually recorded in the auxiliary data of the remote sensor. Hence, for every image element, the number of discrete states in optical flow tracing will be where is the number of CCD gaps, is the number of sample groups, and is the time step. We place samples with the same index into the same group, in which the samples are captured by the prior detectors simultaneously.

We expand (11), substitute it into (14) and (13), and then arrange the scalar optical flow inversion equations in terms of the three axial angular velocity components , , and (the variables in the inverse problem), yielding the linear optical flow equations.

For the th group samples,

Suppose that the sampling process stops when groups have been formed. The coefficients are as follows: Here

To reduce the complexity of the algorithm, all possible values of the coefficients are stored in the matrices and . The accuracy is guaranteed because the coefficients for images moving into the same region are almost equal to an identical constant over a short period, as explained in Figure 11.

It has been mentioned that the optical flow is not sensitive to the satellite's orbital motion and earth rotation in the short term; hence, the possible values are assigned by the following functions:

Here is the number of constant-valued segments in the region encompassing all the possible optical flow trajectories. The orbital elements and the integration step size are common to all functions. Furthermore, when long-term measurements are performed, and only need to be updated according to the current parameters.

The coefficient matrix of the optical flow equations for the th group can be written as where .

Consequently, as we organize the equations for all groups, the global coefficient matrix will be given in the following form:

is a quasidiagonal partitioned matrix; every subblock has 2 rows. The maximal number of columns of is .

The unknown variables are as follows:

The constants are as follows:

has been measured by dense image registration. can be determined from the auxiliary data of the sensors. The global equations are expressed by For this problem, it is easy to verify that the conditions

,  are easily met in practical work. To solve (44), well-posedness is the critical issue for the inverse problem. Strong nonlinearity and anisotropy of the optical flow greatly reduce the correlation between the coefficients in , and meanwhile increase the well-posedness of the solution. The least-squares solution of (47) can be obtained: The well-posedness can be examined by applying Singular Value Decomposition (SVD) to . Consider the nonnegative definite matrix whose eigenvalues are given in order: , where and are unit orthogonal matrices and the singular values are . The well-posedness of the solution is acceptable if the condition number .
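The least-squares solution and the SVD-based well-posedness check could be implemented as in the sketch below; the condition number threshold is an illustrative placeholder, not the value used in the paper, and the variable names are assumptions.

```python
import numpy as np

def solve_optical_flow_system(K, b, max_condition=1e8):
    """Least-squares solution of the global optical flow equations K w = b,
    with a well-posedness check based on the singular values of K.
    `max_condition` is an illustrative threshold."""
    s = np.linalg.svd(K, compute_uv=False)       # singular values, descending
    condition = s[0] / s[-1]
    if condition > max_condition:
        raise ValueError(f"ill-posed system, condition number {condition:.3e}")
    w, *_ = np.linalg.lstsq(K, b, rcond=None)    # least-squares solution
    return w, condition
```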

Combining the inverse problem solving process in Section 4 with the preliminary information acquisition process in Section 3, the whole algorithm for remote sensor attitude measurement is illustrated in the flowchart in Figure 12.

4.2. Experimental Results and Validation

In the experiment, 72940 samples on 7 image strip pairs were involved. To keep the values in and nearly invariant, we redistributed these samples into 20 subspaces and solved for the three axial components of the angular velocity. According to Shannon's sampling theorem, the measurable frequency is expected to reach half the line rate of the TDI CCD. For the experiment,  kHz. The ~ curves of ~ are shown in Figure 13.

In this period, , and the signal of fluctuates around the mean value . It is not hard to infer that high frequency jitters were perturbing the remote sensor; moreover, compared to the signals of and , the low frequency components of are higher in magnitude. Actually, for this remote sensor, the satellite yaw angle needs to be regulated in real time to compensate for the image rotation on the focal plane so that the detectors always scan along the direction of image motion. Based on the auxiliary data, the image motion velocity vector of the central pixel in the FOV can be computed. So the optimal yaw motion in principle will be The mean value of . We attribute to the error of satellite attitude control.

To validate the measurement, the technique of template reconfiguration was implemented again to check the expected phenomenon that, based on the high-accuracy information, the correlations between the new templates and should be further improved. In addition, the distribution of near should become more compact, which is easy to understand since much more useful information about the remote sensor's motion is introduced into the template reconstruction, increasing the similarity between the lapped images.

Unlike the processing in dense image registration, larger original templates are selected in the validation phase. Let be the referenced image template centered at the element being examined, the new template reconfigured by rough prediction of the optical flow, the new template reconfigured based on the precise attitude motion measurement, and the template on the sensed image centered at the registration pixel. For all templates, . The distributions of the normalized cross-correlation coefficients corresponding to the referenced template centered on the sample selected in row belonging to CCD with the sensed image belonging to CCD are illustrated in Figure 14.

Panel (a) shows the situation for and , (b) for and , and (c) for and . The compactness of the data is characterized by the peak value and the location variances , where and are, respectively, the column and row numbers of the peak-valued location.

In case (a), , standard deviation , and ; in case (b), , and ; in case (c), ; however, the variance sharply shrinks to . In Table 2, some other samples at intervals of 1000 rows are also examined. These samples can be regarded as independent of each other.
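For reference, the compactness statistics could be evaluated as in the sketch below, which reads off the peak value and an NCC-weighted location variance about the peak. The exact variance definition used in the paper is not reproduced in the text, so this weighted form is an assumed reading.

```python
import numpy as np

def compactness(ncc_map):
    """Peak value and location variance of a normalized cross-correlation
    distribution. The variance is computed here as the NCC-weighted spread
    of locations about the peak (an assumed definition)."""
    peak = np.unravel_index(np.argmax(ncc_map), ncc_map.shape)
    rows, cols = np.indices(ncc_map.shape)
    weights = np.clip(ncc_map, 0.0, None).astype(float)
    weights /= weights.sum()
    var = (weights * ((rows - peak[0]) ** 2 + (cols - peak[1]) ** 2)).sum()
    return ncc_map[peak], var
```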

Judging from the results, the performance in case (c) is better than that in case (b) and far better than that in case (a), since the precise attitude motion measurements enhance the precision of the optical flow inversion and thus improve the similarity between the new templates and the sensed images. Note that, although the variance in case (b) decreases only slightly compared with case (a), as analyzed in Section 3.2, the offsets of the centroids from the peaks have been corrected well by the use of the rough optical flow predictions.

4.3. Summary and Discussions

From the preceding sections, we can see that, compared to ordinary NCC, the precision of image registration is greatly improved, owing to the assistance of the template reconfiguration technique. By applying the auxiliary data from the space-borne sensors to optical flow prediction, the relative deformations between the lapped image pairs can be computed with considerable accuracy. These are then used to estimate the gray values of the corresponding parts of the sensed images and to construct a new template for registration. The space-borne sensors can provide the middle and low frequency components of the imager's attitude motion with excellent precision. Thus, compared to classical direct template-based registration algorithms, the similarity between the reconfigured templates and the sensed images increases greatly. Furthermore, the minor deformations attributable to high frequency jitters can be detected by subpixel registration between the reconfigured templates and the sensed images. This is exactly the basis of high frequency jitter measurement by optical flow inversion.

5. Conclusion

In this paper, optical flows and time-varying image deformation in dynamic space imaging are analyzed in detail. The nonlinear and anisotropic image motion velocity field and optical flows are utilized to strengthen the well-posedness of the inverse problem of precise attitude measurement by the optical flow inversion method. To determine the fixed-solution conditions of the optical flow equations, information-based image registration algorithms are proposed. We apply rough optical flow prediction to improve the efficiency and accuracy of dense image registration. Based on the registration results, the attitude motions of remote sensors during imaging are measured using the precise optical flow inversion method. The experiment on a remote sensor showed that the measurements achieve very high accuracy as well as broad bandwidth. This method can be used extensively in remote sensing missions such as image strip splicing, geometrical rectification, and nonblind image restoration to improve surveying precision and resolving power.

Conflict of Interests

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence their work; there is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this paper.

Acknowledgments

This work is supported by the National High Technology Research and Development Program of China (863 Program) (Grant no. 2012AA121503, Grant no. 2013AA12260, and Grant no. 2012AA120603) and the National Natural Science Foundation of China (Grant no. 61377012).