

Available online at www.sciencedirect.com

ISPRS Journal of Photogrammetry & Remote Sensing 63 (2008) 181–201
www.elsevier.com/locate/isprsjprs

Photogrammetric processing of rover imagery of the 2003 Mars Exploration Rover mission

Kaichang Di a, Fengliang Xu a, Jue Wang a, Sanchit Agarwal a, Evgenia Brodyagina a, Rongxing Li a,⁎, Larry Matthies b

a Mapping and GIS Laboratory, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, OH 43210, USA

b Jet Propulsion Laboratory, California Institute of Technology, Mail Stop 125-209, Pasadena, CA 91109, USA

Received 19 May 2006; received in revised form 25 July 2007; accepted 26 July 2007
Available online 12 September 2007

Abstract

In the 2003 Mars Exploration Rover (MER) mission, the twin rovers, Spirit and Opportunity, carry identical Athena instrument payloads and engineering cameras for exploration of the Gusev Crater and Meridiani Planum landing sites. This paper presents the photogrammetric processing techniques for high-accuracy topographic mapping and rover localization at the two landing sites. Detailed discussions of camera models, reference frames, interest point matching, automatic tie point selection, image network construction, incremental bundle adjustment, and topographic product generation are given. The developed rover localization method demonstrated the capability of correcting position errors caused by wheel slippage, azimuthal angle drift, and other navigation errors. A comparison was also made between the bundle-adjusted rover traverse and the rover track imaged from orbit. Mapping products, including digital terrain models, orthophotos, and rover traverse maps, have been generated for over two years of operations and disseminated to scientists and engineers of the mission through a web-based GIS. The maps and localization information have been extensively used to support tactical operations and strategic planning of the mission.
© 2007 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

Keywords: Mars rover; Topographic mapping; Rover localization; Incremental bundle adjustment; Tie point selection

1. Introduction

On June 10 and July 7, 2003, the Mars Exploration Rover (MER) mission launched the twin rovers Spirit and Opportunity, respectively, both carrying identical Athena instrument payloads and engineering cameras for exploration of the Gusev Crater and Meridiani Planum

⁎ Corresponding author. Tel.: +1 614 292 6946; fax: +1 614 292 2957. E-mail address: [email protected] (R. Li).

0924-2716/$ - see front matter © 2007 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved. doi:10.1016/j.isprsjprs.2007.07.007

landing sites (Squyres et al., 2003). Spirit and Opportunity landed on the Martian surface on January 4 and January 25, 2004, respectively. Since then, the two rovers have been documenting the geology of the landing sites and gathering compositional, mineralogical, and textural information about selected Martian soils and rocks (Squyres et al., 2004). With their extraordinary mobility, the MER rovers are capable of traveling over 100 m across the Martian surface per sol (a sol is a Martian day, which is about 39.5 min longer than an



Earth day). As of April 17, 2006 (Sol 813 of Spirit, Sol 793 of Opportunity), Spirit has traveled 6.16 km and Opportunity has traveled 7.12 km (actual distances traveled instead of odometry measures); these far exceeded the planned traverses of 600 to 1000 m for a nominal mission of three months.

To support science and engineering operations, it is critical to localize the rovers and map the landing and traversing areas with high accuracy (Arvidson et al., 2004). During the MER mission, The Ohio State University (OSU) team, in collaboration with the Jet Propulsion Laboratory (JPL) and the mission's science and engineering teams, has been routinely producing topographic maps, rover traverse maps, and updated rover locations to support tactical and strategic operations. These maps and localization data were provided to MER mission scientists and engineers through a web-based GIS.

Among the various instruments on board the rovers, the Pancam (Panoramic Camera) (Bell et al., 2003) and Navcam (Navigation Camera) (Maki et al., 2003) stereo cameras are the most important for high-precision landing-site mapping and rover localization. These two stereo imaging systems are mounted on the same camera bar of the rover mast. The image sizes of both the Pancam and Navcam are 1024 × 1024 pixels. Navcam has a stereo base of 20 cm, a focal length of 14.67 mm, and an effective depth of field of 0.5 m to infinity. Its best focus is at 1 m with a field of view (FOV) of 45°. Pancam has a wider stereo base (30 cm) and a longer focal length (43 mm), making it more effective for mapping medium-to-far objects in the panoramic images. The effective depth of field for the Pancam is 3 m to infinity and the FOV is 16°.

The rover localization and landing-site mapping technology is based on bundle adjustment (BA) of an image network formed by ground imagery, i.e., Pancam and Navcam stereo images. The developed incremental bundle adjustment model supplies improved rover locations and image orientation parameters, which are essential for the generation of high-quality landing-site topographic mapping products. The overall technology is described in Li et al. (2004). Before the MER mission, the rover localization and mapping technology had been extensively tested and verified with field test data acquired on Earth and actual Mars data from the 1997 Mars Pathfinder mission (Li et al., 2002; Di et al., 2002). Lander localization using rover panoramic images, orbital images, and descent images, as well as radio-science-based localization, was performed shortly after the landing of the two MER rovers. Initial results of lander and rover localization, regional mapping using

orbital and descent images, and detailed landing-site mapping using surface imagery are reported in Li et al. (2005, 2006), Li et al. (2007) and Di et al. (2005). In this paper, a detailed description of the photogrammetric processing techniques for mapping and rover localization using surface imagery is given. Some new and updated mapping and localization results since our last publication are presented for both landing sites. An additional comparison of the bundle-adjusted rover traverse with the actual rover track of Spirit, as observed from a mosaic of MOC NA (Mars Orbiter Camera Narrow Angle) images, is also discussed.

2. Camera model and reference frames

Camera modeling and calibration are crucial for photogrammetric processing and planetary applications. Gruen and Huang (2001) provided results of various camera calibration techniques for close-range cameras. Reviews of camera calibration in a photogrammetric context can be found in Clarke and Fryer (1998) and Fraser (2001). The calibration of the MER cameras was performed at JPL (Maki et al., 2003).

Pancam and Navcam images use the CAHVOR model and its simplified CAHV model (Maki et al., 2003; Alexander et al., 2006). Initiated by Yakimovsky and Cunningham (1978), the CAHV model represents the transformation from the ground space to the image space by using four vectors (C, A, H, and V) which describe the imaging geometry. Gennery (1992, 2006) extended the model to the CAHVOR model by including radial lens distortions with a vector O and a triplet R. He also provided a least-squares solution for camera calibration. The two models have been used in planetary exploration missions for modeling lander and rover images. For example, in the 1997 Mars Pathfinder (MPF) mission, the camera models were directly used in image mosaicking and stereo calculation (LaVoie et al., 1999). A conventional photogrammetric model, which represents the transformation from the ground space to the image space using explicit interior and exterior orientation parameters, was also used for photogrammetric processing of MPF lander images (Kirk et al., 1999).
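The CAHV ground-to-image mapping can be stated compactly: with d = P − C, the image coordinates are x = (d·H)/(d·A) and y = (d·V)/(d·A), where H and V jointly encode focal length, principal point, and pixel axes. A minimal sketch (the function name and conventions are ours, not the mission software's):

```python
import numpy as np

def cahv_project(P, C, A, H, V):
    """Project a 3-D ground point P into image coordinates (sample, line)
    with the CAHV model: C is the camera center, A the unit optical axis,
    and H, V composite horizontal/vertical vectors."""
    d = P - C              # ray from the camera center to the point
    denom = d @ A          # depth of the point along the optical axis
    x = (d @ H) / denom    # image sample (column)
    y = (d @ V) / denom    # image line (row)
    return x, y
```

For a distortion-free camera, H = f·h + cx·A and V = f·v + cy·A (with h, v the pixel axes and cx, cy the principal point), which makes the mapping equivalent to a conventional pinhole projection.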

The CAHVOR model is intrinsically different from the conventional photogrammetric model, but can be converted to the photogrammetric model with sufficient accuracy (Di and Li, 2004). The model is defined in a rover frame in which the X-axis points forward, the Z-axis points down, and the Y-axis is defined to form a right-handed system. The parameters of the CAHVOR model are part of the telemetry data that are stored in the image header. To facilitate rover operations in an extended landing site, a series of site frames (Fig. 1) are defined along the traverse, starting with Site 0. The X-axis of a site frame points to local north. The Z-axis points down in the local normal direction. The Y-axis is defined to form a right-handed system. The relationship between two adjacent site frames is determined by a 3D offset and a quaternion matrix and stored in a separate master file. Further, the origin and orientation of each rover frame, with respect to its current site frame, is defined by another 3D offset with respect to the site frame and an associated quaternion matrix. The first site frame (Site 0), which is at the lander location, is also defined as the Landing Site Local (LSL) frame for mapping and rover localization purposes within the entire region of rover exploration. Site frame parameters are regularly determined and updated by an onboard Inertial Measurement Unit (IMU) and wheel odometer, which are infrequently supported by Sun-sensing techniques to improve azimuth observations (Li et al., 2005). Results of Visual Odometry (VO) for short distances where precise locations are required, for instance for instrument placement, are incorporated in the telemetry data (Biesiadecki and Maimone, 2005; Olson et al., 2003).

Fig. 1. Reference frames: site frame and rover frame.

In addition, epipolar images, also called linear images in computer vision and planetary image processing, are provided to facilitate stereo viewing and fast stereo matching (Alexander et al., 2006). They are sometimes called normalized images in photogrammetry (Schenk, 1999). When camera calibration is performed with high quality, there is practically no parallax in the vertical direction, since lens distortion is corrected as well. Therefore, through the linearization process, the CAHVOR model is reduced to a CAHV model that does not contain lens distortion components.
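One reason epipolar images make stereo matching fast is that conjugate points lie on the same row, so disparity search is one-dimensional and range follows from the standard rectified-stereo relation Z = fB/d. A small sketch using the Navcam base and focal length quoted above; the 12-micron pixel pitch is an assumed value for illustration only, not a figure from this paper:

```python
# Range from disparity for a rectified (linear/epipolar) stereo pair:
#   Z = f * B / d, with disparity d expressed in metric focal-plane units.
B = 0.20         # Navcam stereo base, metres (from the text)
f = 14.67e-3     # Navcam focal length, metres (from the text)
pixel = 12e-6    # assumed pixel pitch, metres (illustrative only)

def depth_from_disparity(d_pixels):
    """Range to a feature given its left-right disparity in pixels."""
    return f * B / (d_pixels * pixel)
```

Under these assumed numbers, a 10-pixel disparity corresponds to a range of roughly 24.5 m, which illustrates why the longer-baseline, longer-focal-length Pancam is preferred for medium-to-far objects.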

Linear Pancam and Navcam images were used in topographic mapping and rover localization. The necessary first step is to convert the CAHV camera model to a photogrammetric model that has explicit orientation parameters and is commonly used for topographic mapping and remote sensing (Di and Li, 2004). The camera model is then transformed from the rover frame to its site frame and subsequently to the LSL frame by sequential 3D rotations and translations. The resultant image orientation parameters are then used as initial approximations in the incremental bundle adjustment.

From the CAHV model of a linear image, we can calculate the position vector C_R and rotation matrix M_R of the camera in the rover frame (Di and Li, 2004). We can directly compute the offset vector T_S and the rotation matrix M_S from the rover frame to its current site frame using the quaternion information. Furthermore, we calculate the offset vector T_0 and the rotation matrix M_0 from the current site frame to the LSL frame through sequential rotations and translations of all previous site frames. In the end, we can calculate the rotation matrix M of the camera in the LSL frame as

M = M_R M_S M_0    (1)

and calculate the camera position vector C in the LSL frame as

C = M_0^T (M_S^T C_R + T_S) + T_0    (2)

M and C of each image are then used in the photogrammetric bundle adjustment and triangulation in the LSL frame. To ensure high precision, the fixed relationship between the left and right cameras at the two ends of the stereo camera bar is represented as constraints on the image orientation parameters in the incremental bundle adjustment (Di et al., 2004). That is, the differences of the relative positions and orientations between the left and right cameras are fixed to reflect the fixed (or "hard") baseline of the stereo camera bar of the rover. The constraints are implemented as virtual observations with very large weights.
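Once the rotation matrices and offsets are assembled, Eqs. (1) and (2) can be applied directly. A minimal sketch (the function name and argument order are ours):

```python
import numpy as np

def camera_in_lsl(C_R, M_R, T_S, M_S, T_0, M_0):
    """Transform a camera pose from the rover frame to the LSL frame,
    following Eqs. (1) and (2) of the text:
        M = M_R M_S M_0
        C = M_0^T (M_S^T C_R + T_S) + T_0
    M_* are 3x3 rotation matrices; C_R, T_S, T_0 are 3-vectors."""
    M = M_R @ M_S @ M_0
    C = M_0.T @ (M_S.T @ C_R + T_S) + T_0
    return M, C
```

With identity rotations the position simply accumulates the offsets, which is a useful sanity check on the chaining of rover frame, site frame, and LSL frame.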

Orbital images were available both pre- and post-landing. After the landing of the two rovers, the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) aimed at the two estimated lander locations and took a high-resolution (1 m) cPROTO (compensated Pitch and Roll Targeted Observations) image at the Gusev site on Sol 16 and a ROTO (Roll-Only Targeted Observation) image at the Meridiani site on Sol 8 (Malin, 2004). Each lander can be seen in the corresponding high-resolution image. Through each lander's position, which is also the origin of its LSL frame, the LSL can be linked to the Mars body-fixed reference system that is used for orbital image mapping (Kirk et al., 2003; Shan et al., 2005; Li et al., 2004, 2005).

Fig. 2. Illustration of a rover traverse and the image network built as the Pancam and Navcam panoramas and traversing images are taken.

3. Rover localization based on bundle adjustment of the surface image network

As indicated above, onboard rover localization is primarily performed by IMU, wheel-odometry, and sun-sensing technologies. In cases where the rover experiences slippage caused by traversing loose soil or steep slopes, particularly in a crater, the onboard visual odometry (VO) technique was applied (Biesiadecki and Maimone, 2005; Olson et al., 2003; Li et al., 2005). In this mission, VO has acquired consecutive Navcam stereo pairs (step < 0.75 m) within short traverse distances (often < 10 m). The VO algorithm estimates the rover motion by tracking interest points between consecutive stereo pairs in both the 2D image space and the 3D ground space (Matthies, 1989; Olson et al., 2003; Cheng et al., 2006). In the MER mission, the onboard VO processing can take two to three minutes per image pair on MER's 20 MHz RAD6000 CPU, which reduces the distance that can be driven each sol when using VO (Cheng et al., 2006). Therefore, VO was only enabled in the more slippery or uncertain terrains, e.g., on the crater wall of Endurance Crater.
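The frame-to-frame motion step of such a VO pipeline is commonly solved by SVD-based absolute orientation on the tracked 3-D points. The sketch below illustrates that step under noiseless assumptions; it is a generic textbook formulation, not the mission's onboard implementation:

```python
import numpy as np

def rigid_motion(P_prev, P_curr):
    """Estimate rotation R and translation t with P_curr ~ R @ P_prev + t,
    from two (N, 3) arrays of the same tracked interest points triangulated
    before and after a rover step (SVD absolute orientation / Kabsch)."""
    a = P_prev - P_prev.mean(axis=0)     # center both point clouds
    b = P_curr - P_curr.mean(axis=0)
    U, _, Vt = np.linalg.svd(a.T @ b)    # cross-covariance decomposition
    # Guard against an improper rotation (reflection) in degenerate cases.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = P_curr.mean(axis=0) - R @ P_prev.mean(axis=0)
    return R, t
```

In practice the tracked points are noisy and contaminated by mismatches, so robust estimation (e.g., RANSAC around this solver) is needed; the essential least-squares step is as above.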

A typical drive distance within each sol is 20 m to 50 m, occasionally around 100 m. A long drive may consist of a blind drive, supported by a priori visualization and analysis of images offline on Earth, and an autonav drive, supported by onboard rover navigation algorithms. The blind drive allows the rover to cover a distance efficiently without consecutive images taken along the traverse. This makes the onboard VO algorithm or other sequence-tracking algorithms inapplicable. The BA method builds an image network containing all panoramas and traversing images along the traverse to

achieve a high-accuracy solution of rover positions along the entire traverse (Fig. 2). BA-based rover localization is performed on Earth. Whenever the rover moves, the rover localization results are reported to the MER science and engineering teams and are used for planning the next sol's rover traverse.

3.1. Image network construction

As shown in Fig. 2, panoramas and traversing stereo images of Pancam and Navcam were taken at different locations. Pancam panoramas were acquired mainly at locations where substantial science exploration activities took place; Navcam panoramas were taken more frequently for navigation and near-rover site characterization. For localization purposes, traversing images (forward and backward stereo pairs) were often acquired approximately at the midpoint of a long drive (e.g., over 70 m). The image network is constructed by linking the panoramic and traversing images with automatically and/or manually selected tie points. The key to the success of BA is to select a sufficient number of high-quality, well-distributed tie points that link the images to form the network. A systematic approach to automatic selection of tie points from the panoramic images taken at one position was developed (Xu et al., 2002; Di et al., 2002; Li et al., 2003; Xu, 2004). This tie point selection method consists of four steps: interest point extraction using the Förstner operator, interest point matching, parallax verification, and, finally, tie point selection by gridding. More details of the algorithms are presented in Xu et al. (2002). Descriptions of interest point matching and parallax verification will be given later in Section 5. In matching interest points between adjacent stereo pairs, an initial Digital Terrain Model (DTM) is generated that can be used to predict the location of conjugate points and to limit the search range in the image space. Fig. 3 shows an example of automatically selected tie points in two adjacent stereo pairs. Fig. 3(a) and (b) are one stereo pair, and (c) and (d) are another stereo pair adjacent to the first pair. The blue crosses are intra-stereo tie points, which are the tie points within one stereo pair. The red crosses are inter-stereo tie points, which are the tie points between adjacent stereo pairs taken at different azimuth and tilt angles at the same site. This method has been successful in selecting tie points within the same site. At most of the sites (>95%), tie points can be selected automatically.

Fig. 3. Automatically selected intra- (blue crosses) and inter- (red crosses) stereo tie points. Images (a) and (b) and images (c) and (d) are two stereo pairs. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
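The final "tie point selection by gridding" step can be read as keeping the best matched interest point in each image grid cell, so that tie points end up evenly distributed over the frame. A sketch under that reading; the cell size and scoring are illustrative assumptions, not the published algorithm:

```python
def select_by_gridding(points, cell=128):
    """Keep at most one tie point per image grid cell to enforce an even
    spatial distribution (a sketch of a 'selection by gridding' step;
    the actual criteria and cell size used in the mission are not
    specified in this section).
    points: iterable of (x, y, strength) matched interest points."""
    best = {}
    for x, y, strength in points:
        key = (int(x) // cell, int(y) // cell)       # grid cell index
        if key not in best or strength > best[key][2]:
            best[key] = (x, y, strength)             # keep strongest point
    return list(best.values())
```

Enforcing one point per cell trades raw match count for geometric coverage, which is what stabilizes the bundle adjustment.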

Selection of cross-site tie points (tie points between images taken at different sites, Fig. 2) is challenging and is performed manually during mission operations. The difficulty comes from the significant differences in looking angles, resolutions, and distances between adjacent sites. Fig. 4 shows an example of manually selected cross-site tie points. Fig. 4(a) and (b) show a stereo pair at one site, and Fig. 4(c) and (d) illustrate a stereo pair at an adjacent site, which is 38 m away. At first sight, it is difficult to identify the same features. To overcome the difficulty, a number of interactive tools


Fig. 4. Manually selected cross-site tie points. Images (a) and (b) are a forward-looking stereo pair, and images (c) and (d) are a rear-looking stereo pair. Note that (a) and (c) are zoomed (200%) and (b) and (d) are at actual pixel size (100%). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)


were developed to assist manual tie point selection. For example, the projection tool can project a feature point (e.g., Rock 1) from one stereo pair (Fig. 4(a) and (b)) to the other ((c) and (d)) by using the initial image orientation parameters to give an approximate location of the feature in the other stereo pair. The orthophoto tool can generate two orthophotos (using the same initial orientation parameters and the initial DTM) of the two sites and overlay them to identify the corresponding features. The anaglyph stereo tool can help identify corresponding features in three dimensions and locate the same point (e.g., a corner of a rock) on the features. Depending on the characteristics of the local terrain, an experienced operator usually uses one tool or a combination of these tools to identify corresponding features. In the end, the cross-site tie points (crosses in Fig. 4) are selected manually. Overall, cross-site tie point selection remains a challenging task. In ongoing research, new algorithms are being investigated to automate the process.

3.2. Rover localization by incremental bundle adjustment

Pointing information of images from telemetry data includes information that can be used to derive exterior orientation parameters. Such pointing information within one panorama at one site is generally consistent. However, ground points of the same feature derived from pointing information of images taken at adjacent sites often have significant inconsistencies, caused by factors such as wheel slippage, IMU angular drift, and other navigation errors. Since these inconsistencies appear to be systematic between two adjacent sites, a transformation (translation and rotation without scale change) is applied to the second site using cross-site tie points to ensure that better initial orientation parameters of the image network are achieved.

The unknowns of the incremental bundle adjustment include the exterior orientation parameters of all the images (Pancam and/or Navcam) involved in the image network and the ground coordinates of all the tie points. The aforementioned constraints representing the fixed relationship between the two stereo cameras at the ends of the camera bar are also implemented. The image network involved is a very special one that extends itself along the traverse sol by sol as the rover moves along and takes images incrementally. For a long traverse, the number of images involved can be extremely large, and a strict solution of the entire network with all the unknowns appears to be too computationally expensive on each sol in a mission operations scenario. Ideally, a sequential computing process would allow addition or deletion of observations and unknown parameters at any location of the network and consider subsequent alterations in the solution vector, so that the final results are exactly the same as a simultaneous solution (Gruen, 1985). Sequential computation for on-line triangulation in photogrammetry was researched in the 1970s and 1980s. The most frequently used methods include the "Kalman form", Triangular Factor Update (TFU), and Givens transformations. The Kalman form updates the inverse of the normal equations (Mikhail and Helmering, 1973). The TFU method directly updates the upper triangle of the reduced normal equations and was found to perform better than the Kalman form in both computational time and storage requirements (Gruen, 1982; Wyatt, 1982). Givens transformations provide a direct way of solving linear least-squares problems without forming normal equations (Blais, 1983); this approach was found to be superior to the TFU method in terms of computational performance and ease of software implementation (Runge, 1987; Holm, 1989). The sequential computational methods have been applied in autonomous vehicle navigation and robot vision (Edmundson and Novak, 1992; Gruen and Kersten, 1992; Kersten and Baltsavias, 1994).

Taking the nature of planetary rover operations and the advantages of sequential computing techniques into account, we developed an incremental bundle adjustment model for this special application (Li et al., 2002; Ma et al., 2001). The observation equations are composed of two parts:

V_{m−1} = A_{m−1} X_{m−1} + B_{m−1} Y_{m−1} − L_{m−1};  P_{m−1}    (3)

V_m = B_m Y_m + C_m Z_m − L_m;  P_m    (4)

where Eq. (3) represents the linearized observation equations involving all images of the previous m−1 sites with inherited unknowns X_{m−1} that contain orientation parameters of these images and coordinates of the tie points linking them. Similarly, Eq. (4) only involves new images taken at the current site (the m-th site) with unknowns Z_m unrelated to the images at the previous sites. The unknowns Y_{m−1} in Eq. (3) and Y_m in Eq. (4) contain orientation parameters and coordinates of tie points involving overlapping images taken at the (m−1)-th site and the m-th site. P_{m−1} and P_m are the weight matrices of the observations, respectively. The advantage of this new model is that the unknowns are separated into three parts: X represents the unknowns prior to the (m−1)-th site, Y expresses the unknowns relevant to both the (m−1)-th site and the m-th site, and Z is related to the m-th site only. For rover localization, the rover status vector information including position and attitude is given in X for previous locations and in Z for the current location, while Y estimates coordinates of the tie points that link images between these locations.

The above incremental bundle adjustment model is implemented using the Kalman form by updating the co-factor matrix (Ma et al., 2001). The rover traverse is an open-ended traverse, and the lander location, which is the origin of the LSL frame, is fixed in the bundle adjustment. At each new rover location, all the unknown parameters are estimated. Those computed previously are updated as more images become available. A special software system was developed to reduce the amount of computation by dealing only with images contributing to the estimation of the tie point coordinates and the orientation parameters of images at the new rover location and the overlapping images. The solution should be very close to a simultaneous solution considering all the images at the same time, provided that there are sufficient well-distributed tie points to link the adjacent sites. As demonstrated in the orbit-ground data comparison, this incremental adjustment model should provide the same solution as the sequential bundle adjustment in Gruen and Kersten (1992).
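The principle behind sequential computation can be illustrated in information form: if the normal equations are accumulated as new weighted observations arrive, the solution after each update equals the simultaneous solution over all observations seen so far. This is a deliberately simplified sketch, not the paper's Kalman-form, partitioned (X, Y, Z) implementation:

```python
import numpy as np

class SequentialLS:
    """Information-form sequential least squares: accumulate the normal
    equations N x = u as new weighted observations arrive, so that the
    solution after each update equals the simultaneous batch solution.
    (A simplified illustration; the mission software instead updates
    the co-factor matrix and partitions the unknowns by site.)"""

    def __init__(self, n):
        self.N = np.zeros((n, n))   # accumulated normal matrix
        self.u = np.zeros(n)        # accumulated right-hand side

    def add(self, A, l, P):
        """Add linearized observation equations v = A x - l, weight P."""
        self.N += A.T @ P @ A
        self.u += A.T @ P @ l

    def solve(self):
        return np.linalg.solve(self.N, self.u)
```

Because the update only touches N and u, adding a new site's observations costs far less than re-forming and re-solving the full system, which is the property the incremental bundle adjustment exploits.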

In the MER mission, a vast amount of image data has been acquired. For the Spirit rover, as of Sol 800, there were over 500 rover locations (stops) and over 10,000 images collected. In general, at each location there are over 300 tie points (inter- and intra-stereo, and cross-site). Thus, there were at least 150,000 tie points in the image network by Sol 800, and the number keeps increasing as the rover drives along. Even an incremental bundle adjustment (or sequential computation technique) involves a very large number of tie points and new and previously overlapping images. It requires considerable manual measurement and computational time in daily operations. With limited manpower and


required timeliness of the mission, this makes the strictimplementation of the above incremental bundleadjustment model unmanageable. We decided to carryout the incremental bundle adjustment in two phases:a) During the mission operation period, BA is preformedin a stepwise manner to warrant timely results; namely,we fix the previous (m−1) location to estimate thecurrent (m) location; often, BA is performed in a lesssequential way where tie points can be found not onlybetween locations m and m−1 but also betweenlocations m and m−2 (e.g., this happens when therover makes a turn or short drives); in this case, BA fixeslocation m−2 and adjusts locations m and m−1; morerover locations can be involved in this alteredadjustment; the multi-site based consistency check andinternal covariance analysis discussed in the followingsections of this paper indicated that this method, whilereducing computational time and meeting the demand ofmission operations, can provide quality mappingproducts. b) An overall bundle adjustment involving

Fig. 5. Spirit rover traverse map up to Sol 700 with inset describing details of Husband Hill and Inner Basin.

the entire network with all images is planned at the end of the mission during the post-mission analysis period. At that time the rovers will have been in operation for over three years. We expect that at the end of operations, the image network may expand into a very sizable network, for which the bundle adjustment algorithm and the software system will need to be improved and optimized in terms of computational efficiency and storage requirements. A combined adjustment of orbital imagery and rover images is also planned.
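The site-selection rule of phase (a) can be codified directly. The sketch below is a minimal illustration in Python; the integer site indices and the `tie_links` pair set are hypothetical conveniences, not structures from the authors' software:

```python
def sites_to_adjust(m, tie_links):
    """Stepwise BA site selection. Given the current site index m and the set
    of site pairs linked by cross-site tie points, return (free, fixed) sites.
    Normally only site m is adjusted with m-1 held fixed; if tie points also
    link m to m-2 (e.g., after a turn or a short drive), both m and m-1 are
    adjusted while m-2 is held fixed."""
    if (m, m - 2) in tie_links or (m - 2, m) in tie_links:
        return {m, m - 1}, {m - 2}
    return {m}, {m - 1}
```

For example, with cross-site ties between sites 10 and 8, the rule frees sites 10 and 9 and fixes site 8.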

Since no absolute ground control is available on Mars at this time, it is impossible to evaluate the accuracy of BA using the conventional method of comparing the bundle-adjusted positions with ground truth. Instead, the accuracy of BA is estimated by checking the consistency of post-BA positions in the 3D ground space and 2D image space. Firstly, for the same features on the ground, their 3D ground positions can be computed from various stereo pairs taken at different rover positions along the rover traverse. The differences



Fig. 6. Spirit rover traverse (Sol 154 to Sol 670) in the Husband Hill area where the rover experienced a great deal of slippage. Blue line is the traverse as computed from telemetry data and red line is the traverse as corrected by the bundle adjustment method. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)


among the computed 3D ground coordinates from the different positions reflect the quality of the BA results. Secondly, for a ground point, its 2D image coordinates in all images that contain the feature can be calculated by back-projection from the ground space to the image space using the bundle-adjusted orientation parameters. Differences between the back-projected image points and the actual image points are used to compute a mean 2D image coordinate difference. In both the 3D and 2D cases, the mean absolute difference of the coordinates is used as the precision measure. We use the BA result of Site 6700 (Sol 155) and Site 6800 (Sol 157) at the Gusev Crater site as an example. The two sites are about 38 m apart. There are 38 images (19 stereo pairs) involved in the stepwise BA, with seven manually selected cross-site tie points, and 99 inter-stereo and 218 intra-stereo tie points that are automatically selected. The adjustment of the two sites was achieved within 30 s of computational time using a desktop PC. Before BA, a 3D inconsistency of 29 m and a 2D inconsistency of 2798 pixels were

Fig. 7. Vertical profile of the Spirit rover traverse (Sol 154 to Sol 670). Blue line is the profile as computed from telemetry data and red line is the profile as corrected by the bundle adjustment method. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

estimated, respectively, by using the bundle-adjusted images of the first site (Site 6700) and the un-adjusted images of the second site (Site 6800). After BA, the 3D inconsistency is 0.8 m and the 2D inconsistency is 1.2 pixels.
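These 2D and 3D consistency measures are mean absolute differences and can be sketched as follows. The 3×4 projection-matrix camera representation below is an assumption for illustration only; the mission itself uses the CAHVOR camera model:

```python
import numpy as np

def project(P, X):
    """Back-project 3-D ground points X (N,3) into an image via a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def consistency_2d(cameras, observations, X):
    """Mean absolute difference between back-projected and measured image points."""
    d = [np.abs(project(P, X) - obs) for P, obs in zip(cameras, observations)]
    return float(np.mean(np.concatenate(d)))

def consistency_3d(triangulations):
    """Mean absolute spread of the same ground points triangulated from different stereo pairs."""
    X = np.stack(triangulations)          # shape (n_pairs, n_points, 3)
    return float(np.mean(np.abs(X - X.mean(axis=0))))
```

Feeding in the adjusted cameras of one site and the triangulated points of another reproduces the kind of cross-site check described above.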

In addition, the internal precision of BA is represented by the standard deviations of the estimates, including camera positions, rotation angles, and 3D positions of the tie points, which are calculated from the BA-derived covariance matrix. For the bundle adjustment of Sites 6700 and 6800, the mean standard deviations of the camera positions are about 2 mm in the X, Y, and Z directions; the mean standard deviations of the camera rotation angles are 2′1″, 1′19″ and 2′37″ for ω, φ, and κ, respectively; and the mean standard deviations of the 3D positions of the tie points are 0.131 m, 0.134 m and 0.057 m in the X, Y and Z directions, respectively. The internal precisions of BA at the other sites along the traverse are at the same level. In general, these internal precision values indicate that the image network has a

line is the traverse as computed from telemetry data and red line is thee references to colour in this figure legend, the reader is referred to the


good geometry and is well controlled by the tie points. Compared to the 0.8 m 3D consistency discussed above, the internal precisions of about 0.1 m for the tie points are also reasonable, because the stepwise BA often adjusts the images of the last two sites only, while the 3D consistency may check points from more than two sites.
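The internal precision figures follow from the covariance of the adjusted parameters. A generic least-squares sketch, assuming a linearized problem with Jacobian J at convergence (not the authors' actual implementation):

```python
import numpy as np

def internal_precision(J, residuals):
    """Standard deviations of adjusted parameters from the BA covariance:
    Cov = sigma0^2 * (J^T J)^-1, where sigma0^2 is the a-posteriori
    variance factor estimated from the residuals and the redundancy."""
    n_obs, n_params = J.shape
    sigma0_sq = residuals @ residuals / (n_obs - n_params)
    cov = sigma0_sq * np.linalg.inv(J.T @ J)
    return np.sqrt(np.diag(cov))
```

The square roots of the diagonal give exactly the per-parameter standard deviations quoted in the text (camera positions, rotation angles, tie-point coordinates).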

Fig. 8. Opportunity rover traverse map up to Sol 680.

In order to compare the rover traverses from the telemetry with those from BA, rover traverse maps were generated whenever a new site was added and adjusted. Up to Spirit Sol 670, the accumulated difference between the traverses from telemetry and the bundle adjustment is 51.8 m, or 1.1% of the distance traveled from the landing center. The maximum is 2.7% (20.6 m over 775.7 m) on



Fig. 9. Rover track visible on MOC NA image (marked in white for better visualization) and bundle-adjusted rover traverse (black line).


Sol 106. Fig. 5 shows the bundle-adjusted Spirit traverse up to Sol 700 superimposed on a MOC base map.

To track rover positions and the errors that occurred as the rover climbed Husband Hill, reached the summit, and descended to the south inner basin, we performed a local comparison of rover traverses starting from Sol 154, when the rover started climbing. Fig. 6 shows the 2D telemetry-derived traverse and the bundle-adjusted traverse from Sol 154 to Sol 670, where the blue line illustrates the traverse generated from telemetry and the red line shows the bundle-adjusted traverse. We observe

Fig. 10. Bundle-adjusted rover traverse on the georeferenced MOC NA image.

a significant error accumulation. Sol numbers are also shown on the map. The locally accumulated difference between these two traverses at the end of the traverse is 67.9 m, or 3.7% of the traveled distance of 1.85 km, with a maximum of 10.5% (56.6 m over 540.6 m) observed on Sol 337. This demonstrates that the BA was able to correct significant localization errors.

In addition to the rover traverse map, a vertical profile was generated and expanded as the rover traveled. Fig. 7 shows a local vertical profile of the Spirit traverse from Sol 154 to Sol 670. The horizontal axis of the figure is the



distance traveled and the vertical axis depicts elevation. Note that the scales of the horizontal and vertical axes are different. Again, the blue line illustrates the profile computed from telemetry, while the red line shows the BA result. The accumulated elevation difference is 36.6 m over a traveled distance of 1.85 km, which may be mainly attributable to wheel slippage and IMU drift.

For the Meridiani Planum site, BA was conducted within Eagle Crater (up to Sol 62), where the Opportunity rover landed. Typically, the 2D consistency of BA is sub-pixel and the 3D consistency is 2 to 15 cm. The details of the Opportunity rover traverse computation up to Sol 62 can be found in Li et al. (2005). The accumulated difference reached 21 m, or 13% of the traveled distance of 164.2 m. A maximum relative error of 21% occurred on Sol 56. The significant localization errors in the telemetry data were caused mainly by wheel slippage when Opportunity traversed the crater wall on loose soil and steep slopes for 56 sols. This again demonstrated that the BA was able to correct significant localization errors. Fig. 8 shows the overall Opportunity rover traverse (up to Sol 680) superimposed on a MOC base map.

On the way from Eagle Crater to Anatolia (Fig. 8), a data gap of 100 m made a BA-based traverse impossible. After this gap, an adjustment of the traverse was done by comparing features identifiable in both ground images and orbiter (MOC NA) images. From the data gap to Endurance Crater, a translation was applied so that large features such as the Fram and Endurance craters, as measured from the ground images, generally matched their positions on the MOC NA image.

Fig. 11. (a) Spirit rover seen on MOC NA image, and (b) bundle-adjusted rover position on Sol 652 and the rover position transferred from (a).

After the rover descended into Endurance Crater and performed investigations on the crater wall, significant slippage occurred once again. Rover localization errors were corrected by comparing the features in rover orthophoto patches locally at each stop with a base orthophoto of the entire crater (156 m diameter) generated using two Pancam panoramas taken at two separate locations on the crater rim. Along the traverse from Endurance Crater to Erebus Crater, whenever a comparison of features between orbital and rover images was possible, a manual adjustment was made. In particular, local orthophotos of craters such as Argo, Jason, and Vostok were generated from rover panoramic images and compared with the MOC NA base map for adjustment. This orbit-ground comparison proved to be a practical method for rover localization at the Meridiani Planum site, where the terrain is generally flat, not much image texture exists, and the data gaps are too significant to overcome.

4. Rover track imaged from orbit and mapped on the ground

On January 3, 2005, almost one year after Spirit landed in Gusev Crater, MSSS (Malin Space Science Systems) released a 1-meter-resolution MOC NA mosaic (No. MOC2-960) on which the rover track from the lander to the Columbia Hills is visible (Malin, 2005). The mosaic (Fig. 9) is a composite of MOC images R15-02643 (acquired on Spirit's Sol 85) and R20-01024 (acquired on Spirit's Sol 223). The bundle-adjusted rover traverse from ground imagery is superimposed on



this mosaic for comparison purposes. This orbit-ground comparison can serve two purposes: a) a check of the ground image-based traverse adjustment computation, and b) ground truth for the orbital images. The following is the preliminary result of this comparison. A strict orbit-ground comparison and integration is planned for post-mission analysis. That will be based on the photogrammetric sensor model of the MOC NA camera

Fig. 12. (a) Interest point matching in a stereo pair. White dots are matched points that passed left-right cross-stereo-verification. Black ones have no matches. (b) Parallax ordered by "row first" (left) and the parallax curve generated from the data (right). (c) Interest points after verification using the parallax curve. White dots are correct matches. Black dots are discarded outliers.

and the use of MOLA data as absolute control (Kirk et al., 2003; Shan et al., 2005).

Most parts of the rover track are distinguishable in the MOC mosaic because the surface materials disturbed by the rover wheels appear darker than the surrounding materials of the dust-coated plain. However, it is very difficult to identify the locations where the rover stopped to take panoramas and which BA computed as rover



positions. We were able to identify and measure nine corresponding rover positions (crosses in Fig. 9) along the track in the orbital image and the BA traverse. The comparison of these positions suggests that there is a rotation of 1.3° between the two tracks. This may be because the MOC image mosaic is "uncontrolled", i.e., the imaging geometry is computed solely from the pointing information of the orbital telemetry without any further photogrammetric processing using ground data.

For further analysis, we georeferenced the MOC NA mosaic to the bundle-adjusted rover traverse using the nine corresponding rover locations as control points (crosses in Figs. 9 and 10). After georeferencing, we translated the MOC NA mosaic by (2 m, −3 m) in the X and Y directions so that the two traverse starting points were at the same lander position. Fig. 10 shows the bundle-adjusted rover traverse overlaid on the georeferenced MOC NA mosaic. The rover track in the MOC NA image and the bundle-adjusted rover traverse now match very well. The difference between the two rover tracks at the last common point is 12 m (about 0.4% of the traveled distance of 3081 m from the lander), while the average difference at the nine locations is 8.8 m. Overall, the differences reflect the inconsistency between the two tracks, which may be attributed to errors in the MOC NA mosaic and residuals of the BA.
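Georeferencing with control points of this kind amounts to estimating a 2D similarity transform (rotation, scale, translation) between the two point sets. The sketch below uses the standard closed-form Procrustes/Umeyama solution; it is an illustration of the general technique, not the exact procedure used by the authors:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares 2-D similarity transform mapping src control points
    onto dst: returns (scale, R, t) such that dst ~ scale * R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(sc.T @ dc)    # SVD of the 2x2 cross-covariance
    R = (U @ Vt).T                         # rotation from src to dst
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
        S = S * np.array([1.0, -1.0])
    scale = S.sum() / (sc ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

Fitting such a transform to corresponding rover positions would expose exactly the kind of small rotation (here, 1.3°) and translation offsets discussed in the text.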

On November 2, 2005 (Spirit's Sol 652), MSSS acquired another MOC NA image centered at the Husband Hill summit. Although the rover track is not visible, the rover itself was identified in the image (Fig. 11a; Malin, 2005). We identified the same position on our georeferenced base map according to the surrounding features (Fig. 11b). Comparing this MOC-imaged rover location with the bundle-adjusted rover position on Sol 652, a difference of about 20 m was found, or 0.4% of the overall traverse of 4559 m from the lander. It is important to note that the 20 m difference does not mean that the absolute accuracy of BA-based rover localization is 20 m. Again, this difference may contain georeferencing errors of the MOC base map and residuals of the BA. In particular, it may be largely caused by the accumulation of map georeferencing errors across a few strips at the landing site.

For the purpose of rover localization, orbital imagery and ground imagery can be employed in a complementary way. The MOC NA image has a resolution of 1 to 3 m. With orbital images, if the rover track is visible (in many cases it is not) and stereo orbital images are available, the 3D rover track may be derived by photogrammetric processing, where the accuracy depends on the orbit-ground pointing errors. Even with a single orbital image, we may see the rover track, which does not have the


Fig. 13. (a) Orthophoto and (b) 3D view of Methuselah outcrop at the Spirit landing site.


cumulative nature of errors as the traverse increases. However, it cannot provide detailed traverse information such as where the rover stopped, where rover images were taken, and what scientific experiments were carried out. These are all important for sol-to-sol operations of the mission. On the other hand, the ground image-based traverse records incremental traverse information, including the above critical information and other details such as the associated mapping products generated from the same data sets. The accuracy of ground image-based rover localization depends on the quality of the

image network. Errors may accumulate along the traverse as the rover explores an extended landing site, although such errors can be reduced by designing an optimal traverse geometry.

The optimal method of rover localization is to use both orbital and ground images in an integrated geometric network. The ground imagery provides detailed information on the 3D rover traverse, while the orbital imagery controls the accumulation of localization errors. The integration of orbital and ground imagery with an extended bundle adjustment model is being researched.


5. Topographic mapping and product dissemination

Topographic products, such as DTMs, orthophotos, and 3D models, were routinely generated using Pancam and Navcam images along the traverse. The orthophotos cover an area of 60 m×60 m (using Navcam images) or 120 m×120 m (using Pancam images) with a resolution of 0.01 m. Contour maps of craters were also produced. The steps of DTM and orthophoto generation include: dense interest-point matching between intra-stereo images, 3D position calculation of the matched points, TIN (triangular irregular network) construction and DTM interpolation, and orthophoto generation through projection between the images and the DTM. Details of the mapping algorithms can be found in Xu (2004). The techniques of interest point matching and parallax-curve verification are given below, since they are the key techniques of intra-stereo image matching for DTM generation.
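The gridding step (scattered triangulated points to a regular DTM) can be illustrated with a much simpler scheme than the TIN-based interpolation of the actual pipeline. The inverse-distance-weighted sketch below is a hypothetical stand-in for that step:

```python
import numpy as np

def grid_dtm(points, cell=1.0, power=2.0):
    """Grid scattered 3-D points (x, y, z) into a regular DTM by
    inverse-distance weighting (a simple stand-in for TIN interpolation)."""
    pts = np.asarray(points, float)
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    nx = int(np.ceil((pts[:, 0].max() - x0) / cell)) + 1
    ny = int(np.ceil((pts[:, 1].max() - y0) / cell)) + 1
    dtm = np.empty((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            d = np.hypot(pts[:, 0] - (x0 + ix * cell), pts[:, 1] - (y0 + iy * cell))
            if d.min() < 1e-9:                     # grid node hits a data point
                dtm[iy, ix] = pts[np.argmin(d), 2]
            else:                                  # inverse-distance weighted mean
                w = d ** -power
                dtm[iy, ix] = (w * pts[:, 2]).sum() / w.sum()
    return dtm
```

A TIN-based interpolator honors surface breaklines better; IDW is used here only to keep the sketch self-contained.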

Interest points are extracted using the Förstner operator (Förstner and Guelch, 1987). There are usually about 5000 interest points extracted from each image. Interest

Fig. 14. (a) DTM generated from multiple panoramas taken in the Husband Hill summit area (230 m×180 m), overlaid with contour lines and rover traverse; (b) 3D view of the DTM.

points are first matched from the left image to the right image. To find a match for an interest point in the left image, all points close to its epipolar line in the right image are checked by the Normalized Cross-Correlation Coefficient (NCCC) method. A window of 15×15 pixels is typically used for the computation. The point with the highest NCCC, if greater than a threshold (e.g., 0.8), is kept as a candidate pair. Then points are matched from the right image to the left image for cross-verification. If a candidate pair (point a in the left image and point b in the right image) is unique, which means a is b's best match in the left image and b is a's best match in the right image, it is kept; otherwise it is discarded. Fig. 12a shows the result of interest point matching of a Pancam stereo pair taken by the Opportunity rover at the rim of Endurance Crater. In the figure, white points are unique candidates and black ones have no matches. Although the left-right cross-verification reduces the possibility of mismatches (outliers), the above matching results often contain some outliers, because the matching considers only local context in a relatively small image window. A parallax-curve



verification method was developed and used to eliminate the remaining outliers.
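The NCCC matching with left-right cross-verification can be sketched as follows. This is a simplified illustration: a same-row search band stands in for the true epipolar constraint, and the helper names are ours:

```python
import numpy as np

def nccc(a, b):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def cross_match(left, right, pts_l, pts_r, half=7, band=2, thresh=0.8):
    """Match interest points by NCCC within a row band (epipolar stand-in);
    keep only pairs that are mutually best in both matching directions."""
    def patch(img, p):
        y, x = p
        return img[y - half:y + half + 1, x - half:x + half + 1]

    def best(src, dst, p, cands):
        scores = [(nccc(patch(src, p), patch(dst, q)), j)
                  for j, q in enumerate(cands) if abs(q[0] - p[0]) <= band]
        scores = [s for s in scores if s[0] >= thresh]
        return max(scores)[1] if scores else None

    pairs = []
    for i, p in enumerate(pts_l):
        j = best(left, right, p, pts_r)
        if j is not None and best(right, left, pts_r[j], pts_l) == i:
            pairs.append((i, j))
    return pairs
```

With `half=7` the correlation window is 15×15, matching the window size quoted above.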

In the left graph of Fig. 12b, the parallaxes (vertical values) of the candidate matched points are displayed in order (order ID in the horizontal direction) along each row of the image from image top to image bottom. We can see that the general trend of the parallax is monotonically decreasing, with some abnormal points deviating far from this trend. The trend represents the general terrain shape, and the abnormal points correspond to outliers. The terrain can be modeled as a parallax curve (right graph in Fig. 12b), which can be extracted from the parallaxes with a median filter. The outliers can then be identified if their distances to the parallax curve are greater than a terrain-roughness threshold. Fig. 12c shows the verification result: the white points are correct matches and the black ones are outliers. After parallax verification, the correct matches from all the stereo pairs in one panorama are used for 3D position calculation, TIN generation, and DTM interpolation.
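The parallax-curve verification can be sketched with a running median; the window size and roughness threshold below are illustrative values, not the paper's:

```python
import numpy as np

def parallax_verify(parallax, win=5, roughness=3.0):
    """Flag matches whose parallax deviates from a running-median
    'parallax curve' by more than a terrain-roughness threshold.
    Returns (keep_mask, curve)."""
    p = np.asarray(parallax, float)
    half = win // 2
    padded = np.pad(p, half, mode='edge')            # replicate edges
    curve = np.array([np.median(padded[i:i + win]) for i in range(p.size)])
    return np.abs(p - curve) <= roughness, curve
```

Gross mismatches stand far from the median curve and are discarded, while the smooth terrain trend passes through unchanged.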

Three-dimensional models were generated for some important features and detailed areas where the rover performed extensive investigations. Fig. 13 shows an orthophoto (a) and a 3D model (b) of the Methuselah

outcrop at the Spirit landing site, where the rover performed close-up investigations using its robotic arm. Topographic products like these are important for stratigraphic analysis and the computation of dips and strikes. As of Spirit's Sol 670, 90 orthophotos and DTMs had been produced. In addition, we produced a large DTM of the Husband Hill summit area using stereo images taken at multiple positions. Extensive imagery, including Navcam and Pancam panoramas, was bundle-adjusted to produce 3D ground points from multiple rover locations in the area. Consistency between these 3D points was found to be at a centimeter to submeter level. Fig. 14a shows the DTM generated from data collected from Sol 576 to Sol 609 in the Husband Hill summit area. Contour lines at 1 m intervals and the rover traverse are overlaid. Fig. 14b shows a 3D view of the DTM. These maps proved very useful in short-term planning along the way to the summit. Some artifacts can be observed in the 3D view. They were generated by interpolation in occluded areas that are also far from the traverse.

Leaving the Husband Hill summit, Spirit traveled south to the southern inner basin, an area that contains a broad range of interesting geological targets such as layered outcrops. To support science and engineering operations, we


Fig. 15. 3D view of the DTM in an extended area of Husband Hill summit and Inner Basin (690 m×640 m).


continued to expand the DTM using additional panoramic images taken along the way. In particular, we designed and implemented a wide baseline of 10 m by having the rover take two sets of stereo Pancam partial panoramas at the positions of Sol 591 and Sols 592 to 597, respectively (see Fig. 14a), to improve the capability for mapping a far-range area (up to 500 m). We built an image network by linking these wide-baseline images and all other available panoramic images from Sol 576 to Sol 696. The resultant DTM covers an area of 690 m×640 m, including the Husband Hill summit area and the inner basin. Fig. 15 is a 3D view of this DTM. Such DTMs are often used to produce north-facing slope maps for the selection of "winter havens" during the southern Martian winter, to achieve maximum solar energy. Again, some observable artifacts were generated by interpolation in occluded areas.

A web-based landing site GIS system was established at the OSU Mapping and GIS Laboratory to update and disseminate localization and topographic information for mission operations. This Web GIS was developed using both HTML and ESRI's ArcIMS. The interfaces of the Web GIS for accessing Spirit and Opportunity traverse information and local topographic products can be found in Li et al. (2005). All mapping and localization products are included and organized in different layers or hyperlinked web pages and can be explored using tools such as

zoom, pan, identification, measurement, and hyperlink. The original rover images can also be retrieved through hyperlinks on the image pointing lines. The 3D interactive model can be viewed and manipulated on the web through an embedded VRML viewer. This internal Web GIS proved to be valuable and efficient for mission scientists and engineers to track the two rovers.

6. Conclusions

This paper presents the photogrammetric processing techniques and application results of the MER mission. Camera models and reference frames used for this mission are discussed. A description of automatic and manual tie point selection for building the image network is given. An incremental bundle adjustment technique developed for processing MER rover images and applied to rover localization and MER landing site topographic mapping is described. The rover position information and map products are disseminated through a Web GIS. The rover localization method was capable of correcting position errors both at the Gusev Crater landing site (Spirit) and in Eagle Crater at the Meridiani Planum landing site (Opportunity). Generation of numerous topographic products, computation of localization information, and their dissemination through a Web GIS greatly supported strategic planning and tactical mission operations.


Acknowledgements

Much of the work was conducted at the Mapping and GIS Laboratory of The Ohio State University. We thank Dr. Xutong Niu, Charles Serafy, Feng Zhou, Ju Won Hwangbo, Jeremiah Glascock and Lin Yan for their assistance in data processing during different periods of mission operations. This work was partially performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (NASA). Funding for this research by the NASA Mars Exploration Program is acknowledged. Collaboration with the MER science and engineering teams is greatly appreciated. The reviewers' constructive comments are acknowledged.

References

Alexander, D.A., Deen, R.G., Andres, P.M., Zamani, P., Mortensen, H.B., Chen, A.C., Cayanan, M.K., Hall, J.R., Klochko, V.S., Pariser, O., Stanley, C.L., Thompson, C.K., Yagi, G.M., 2006. Processing of Mars Exploration Rover imagery for science and operations planning. Journal of Geophysical Research — Planets 111 (E2), E02S02. doi:10.1029/2005JE002462.

Arvidson, R.E., Anderson, R.C., Bartlett, P., Bell III, J.F., Blaney, D., Christensen, P.R., Chu, P., Crumpler, L., Davis, K., Ehlmann, B.L., Fergason, R., Golombek, M.P., Gorevan, S., Grant, J.A., Greeley, R., Guinness, E.A., Haldemann, A.F.C., Herkenhoff, K., Johnson, J., Landis, G., Li, R., Lindemann, R., McSween, H., Ming, D.W., Myrick, T., Richter, L., Seelos IV, F.P., Squyres, S.W., Sullivan, R.J., Wang, A., Wilson, J., 2004. Localization and physical properties experiments conducted by Spirit at Gusev Crater. Science, Special Issue on MER 2003 Mission 305 (5685), 821–824.

Bell III, J.F., Squyres, S.W., Herkenhoff, K.E., Maki, J.N., Arneson, H.M., Brown, D., Collins, S.A., Dingizian, A., Elliot, S.T., Hagerott, E.C., Hayes, A.G., Johnson, M.J., Johnson, J.R., Joseph, J., Kinch, K., Lemmon, M.T., Morris, R.V., Scherr, L., Schwochert, M., Shepard, M.K., Smith, G.H., Sohl-Dickstein, J.N., Sullivan, R.J., Sullivan, W.T., Wadsworth, M., 2003. Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation. Journal of Geophysical Research — Planets 108 (E12), 8063. doi:10.1029/2003JE002070.

Biesiadecki, J.J., Maimone, M.W., 2005. The Mars Exploration Rover surface mobility flight software: Driving ambition. IEEE Aerospace Conference, Big Sky, MT, 5–12 March, vol. 5. 15 pp.

Blais, J.A.R., 1983. Linear least-squares computations using Givens transformations. The Canadian Surveyor 37 (4), 225–233.

Cheng, Y., Maimone, M.W., Matthies, L.H., 2006. Visual odometry on the Mars Exploration Rovers. IEEE Robotics and Automation Special Issue (MER) 13 (2), 54–62.

Clarke, T.A., Fryer, J.G., 1998. The development of camera calibration methods and models. The Photogrammetric Record 16 (91), 51–66.

Di, K., Li, R., 2004. CAHVOR camera model and its photogrammetric conversion for planetary applications. Journal of Geophysical Research — Planets 109 (E4), E04004. doi:10.1029/2003JE002199.

Di, K., Xu, F., Li, R., Matthies, L., Olson, C., 2002. High precision landing site mapping and rover localization by integrated bundle adjustment of MPF surface images. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (Part 4), 733–737.

Di, K., Xu, F., Li, R., 2004. Constrained bundle adjustment of panoramic stereo images for Mars landing site mapping. Proceedings of the 4th International Symposium on Mobile Mapping Technology, Kunming, China, 29–31 March. 6 pp. (on CD-ROM).

Di, K., Xu, F., Wang, J., Niu, X., Serafy, C., Zhou, F., Li, R., Matthies, L.H., 2005. Surface imagery based mapping and rover localization for the 2003 Mars Exploration Rover mission. Proceedings of the ASPRS 2005 Annual Conference, Baltimore, Maryland, 7–11 March. 10 pp. (on CD-ROM).

Edmundson, K.L., Novak, K., 1992. On-line triangulation for autonomous vehicle navigation. International Archives of Photogrammetry and Remote Sensing 29 (Part B5), 916–922.

Förstner, W., Guelch, E., 1987. A fast operator for detection and precise location of distinct points, corners and centers of circular features. ISPRS Intercommission Workshop on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, pp. 281–305.

Fraser, C.S., 2001. Photogrammetric camera calibration: a review ofanalytical techniques. In: Gruen, A., Huang, T.S. (Eds.), Calibrationand orientation of cameras in computer vision. Springer-Verlag,Berlin Heidelberg, pp. 95–121.

Gennery, D.B., 1992. Least-squares camera calibration including lensdistortion and automatic editing of calibration points. Workshop onCalibration and Orientation of Cameras in Computer Vision, XVIICongress of the International Society of Photogrammetry andRemote Sensing, Washington, D.C., 2 August.

Gennery, D.B., 2006. Generalized camera calibration including fish-eye lenses. International Journal of Computer Vision 68 (3),239–266.

Gruen, A., 1982. An optimum algorithm for on-line triangulation.International Archives of Photogrammetry and Remote Sensing24 (Part 3), 131–151.

Gruen, A., 1985. Algorithmic aspects in on-line triangulation.Photogrammetric Engineering & Remote Sensing 51 (4),419–436.

Gruen, A., Huang, T.S. (Eds.), 2001. Calibration and orientation ofcameras in computer vision. Springer-Verlag, Berlin Heidelberg.

Gruen, A., Kersten, T., 1992. Sequential estimation in robot vision.International Achieves of Photogrammetry and Remote Sensing29 (Part B5), 923–931.

Holm, K.R., 1989. Test of algorithms for sequential adjustment in on-line phototriangulation. Photogrammetria 43 (3–4), 143–156.

Kersten, T.P., Baltsavias, E.P., 1994. Sequential estimation of sensororientation for stereo image sequences. International Achieves ofPhotogrammetry and Remote Sensing 30 (Part 5), 206–213.

Kirk, R.L., Howington-Kraus, E., Hare, T., Dorrer, E., Cook, D.,Becker, K., Thompson, K., Redding, B., Blue, J., Galuszka, D.,Lee, E.M., Gaddis, L.R., Johnson, J.R., Soderblom, L.A., Ward, A.W.,Smith, P.H., Britt, D.T., 1999. Digital photogrammetric analysisof the IMP camera images: Mapping the Mars Pathfinder landingsite in three dimensions. Journal of Geophysical Research 104(E4), 8869–8887.

Kirk, R.L., Howington-Kraus, E., Redding, B., Galuszka, D., Hare, T.M., Archinal, B.A., Soderblom, L.A., Barrett, J.M., 2003. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images. Journal of Geophysical Research 108 (E12), 8088. doi:10.1029/2003JE002131.

K. Di et al. / ISPRS Journal of Photogrammetry & Remote Sensing 63 (2008) 181–201

LaVoie, S.K., Green, W.B., Runkle, A.J., Alexander, D.A., Andres, P.M., DeJong, E.M., Duxbury, E.D., Freda, D.J., Gorjian, Z., Hall, J.R., Hartman, F.R., Levoe, S.R., Lorre, J.J., McAuley, J.M., Suzuki, S., Woncik, P.J., Wright, J.R., 1999. Processing and analysis of Mars Pathfinder science data at the Jet Propulsion Laboratory's science data processing system section. Journal of Geophysical Research 104 (E4), 8831–8852.

Li, R., Ma, F., Xu, F., Matthies, L.H., Olson, C.F., Arvidson, R.E., 2002. Localization of Mars rovers using descent and surface-based image data. In: Arvidson, R.E. (Ed.), Journal of Geophysical Research — Planets, FIDO Special Issue, vol. 107 (E11), pp. FIDO 4.1–FIDO 4.8.

Li, R., Di, K., Xu, F., 2003. Automatic Mars landing site mapping using surface-based images. ISPRS WG IV/9: Extraterrestrial Mapping Workshop on Advances in Planetary Mapping, Houston, TX, 22 March. 6 pp.

Li, R., Di, K., Matthies, L.H., Arvidson, R.E., Folkner, W.M., Archinal, B.A., 2004. Rover localization and landing site mapping technology for 2003 Mars Exploration Rover mission. Photogrammetric Engineering & Remote Sensing 70 (1), 77–90.

Li, R., Squyres, S.W., Arvidson, R.E., Archinal, B.A., Bell, J., Cheng, Y., Crumpler, L., Des Marais, D.J., Di, K., Ely, T.A., Golombek, M., Graat, E., Grant, J., Guinn, J., Johnson, A., Greeley, R., Kirk, R.L., Maimone, M., Matthies, L.H., Malin, M., Parker, T., Sims, M., Soderblom, L.A., Thompson, S., Wang, J., Whelley, P., Xu, F., 2005. Initial results of rover localization and topographic mapping for the 2003 Mars Exploration Rover Mission. Photogrammetric Engineering and Remote Sensing 71 (10), 1129–1142.

Li, R., Archinal, B.A., Arvidson, R.E., Bell, J., Christensen, P., Crumpler, L., DesMarais, D.J., Di, K., Duxbury, T., Golombek, M., Grant, J., Greeley, R., Guinn, J., Johnson, A., Kirk, R.L., Maimone, M., Matthies, L.H., Malin, M., Parker, T., Sims, M., Thompson, S., Squyres, S.W., Soderblom, L.A., 2006. Spirit rover localization and topographic mapping at the landing site of Gusev Crater, Mars. Journal of Geophysical Research — Planets, Special issue on MER mission (Spirit) 111, E02S06. doi:10.1029/2005JE002483.

Li, R., Arvidson, R.E., Di, K., Golombek, M., Guinn, J., Johnson, A., Maimone, M., Matthies, L.H., Malin, M., Parker, T., Squyres, S.W., Watters, W.A., 2007. Rover localization and topographic mapping at the landing site of Meridiani Planum, Mars. Journal of Geophysical Research — Planets, Special issue on MER mission (Opportunity) 112, E02S90. doi:10.1029/2006JE002776.

Ma, F., Di, K., Li, R., Matthies, L.H., Olson, C.F., 2001. Incremental Mars rover localization using descent and rover imagery. ASPRS 2001 Annual Conference, St. Louis, Missouri, 23–27 April. 11 pp. (on CD-ROM).

Maki, J.N., Bell, J.F., Herkenhoff, K.E., Squyres, S.W., Kiely, A., Klimesh, M., Schwochert, M., Litwin, T., Willson, R., Johnson, A., Maimone, M., Baumgartner, E., Collins, A., Wadsworth, M., Elliot, S.T., Dingizian, A., Brown, D., Hagerott, E.C., Scherr, L., Deen, R., Alexander, D., Lorre, J., 2003. Mars Exploration Rover Engineering Cameras. Journal of Geophysical Research — Planets 108 (E12), 8071. doi:10.1029/2003JE002077.

Malin, M., 2004. MOC cPROTO and ROTO images with Spirit and Opportunity landers. http://www.msss.com/mer_mission/index.html, Malin Space Science Systems, San Diego, CA (Accessed July 26, 2007).

Malin, M., 2005. MOC view of Spirit's trek to the Columbia Hills. http://www.msss.com/mer_mission/index.html, Malin Space Science Systems, San Diego, CA (Accessed July 26, 2007).

Matthies, L.H., 1989. Dynamic stereo vision. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA, CMU-CS-89-195.

Mikhail, E.M., Helmering, R.J., 1973. Recursive methods in photogrammetric data reduction. Photogrammetric Engineering 39 (9), 983–989.

Olson, C.F., Matthies, L.H., Schoppers, M., Maimone, M.W., 2003. Rover navigation using stereo ego-motion. Robotics and Autonomous Systems 43 (4), 215–229.

Runge, A., 1987. The use of Givens transformations in on-line phototriangulation. Proceedings ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, pp. 179–192.

Schenk, T., 1999. Digital photogrammetry. TerraScience, Laurelville, Ohio.

Shan, J., Yoon, J.-S., Lee, S., Kirk, R., Neumann, G., Acton, C., 2005. Photogrammetric analysis of the Mars Global Surveyor mapping data. Photogrammetric Engineering and Remote Sensing 71 (1), 97–108.

Squyres, S.W., Arvidson, R.E., Baumgartner, E.T., Bell III, J.F., Christensen, P.R., Gorevan, S., Herkenhoff, K.E., Klingelhöfer, G., Madsen, M.B., Morris, R.V., Rieder, R., Romero, R.A., 2003. Athena Mars rover science investigation. Journal of Geophysical Research — Planets 108 (E12), 8062. doi:10.1029/2003JE002121.

Squyres, S.W., Arvidson, R.E., Bell III, J.F., Brückner, J., Cabrol, N.A., Calvin, W., Carr, M.H., Christensen, P.R., Clark, B.C., Crumpler, L., Des Marais, D.J., d'Uston, C., Economou, T., Farmer, J., Farrand, W., Folkner, W., Golombek, M., Gorevan, S., Grant, J.A., Greeley, R., Grotzinger, J., Haskin, L., Herkenhoff, K.E., Hviid, S., Johnson, J., Klingelhöfer, G., Knoll, A., Landis, G., Lemmon, M., Li, R., Madsen, M.B., Malin, M.C., McLennan, S.M., McSween, H.Y., Ming, D.W., Moersch, J., Morris, R.V., Parker, T., Rice Jr., J.W., Richter, L., Rieder, R., Sims, M., Smith, M., Smith, P., Soderblom, L.A., Sullivan, R., Wänke, H., Wdowiak, T., Wolff, M., Yen, A., 2004. The Spirit rover's Athena science investigation at Gusev Crater, Mars. Science, Special Issue on MER 2003 Mission 305 (5685), 794–799.

Wyatt, A.H., 1982. On-line photogrammetric triangulation — an algorithmic approach. Master Thesis, Department of Geodetic Science and Surveying, The Ohio State University, Columbus, Ohio.

Xu, F., 2004. Mapping and Localization for Extraterrestrial Robotic Explorations. Ph.D. Dissertation, The Ohio State University, Columbus, OH.

Xu, F., Di, K., Li, R., Matthies, L., Olson, C., 2002. Automatic feature registration and DEM generation for Martian surface mapping. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (Part 2), 549–554.

Yakimovsky, Y., Cunningham, R., 1978. A system for extracting three-dimensional measurements from a stereo pair of TV cameras. Computer Graphics and Image Processing 7 (2), 195–210.