
Hazard Detection Methods for Lunar Landing

Tye Brady α, Edward Robertson β, Chirold Epp β, Stephen Paschall α, and Doug Zimpfer α

α Charles Stark Draper Laboratory, 555 Technology Square, MS27, Cambridge, MA 02139, [email protected]

β NASA, Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058

Abstract—The methods and experiences from the Apollo Program are fundamental building blocks for the development of lunar landing strategies for the Constellation Program. Each of the six lunar landing Apollo missions landed under near-ideal lighting conditions. The astronauts visually performed terrain relative navigation while looking out of windows, and were greatly aided by external communication and well-lit scenes. As the LM approached the landing site, the astronauts performed visual hazard detection and avoidance, also under near-ideal lighting conditions. The astronauts looked out of the windows, doing their best to avoid rocks, slopes, and craters and to find a safe landing location.

NASA has expressed a desire for global lunar access for both crewed and robotic sortie lunar exploration missions [2][3]. Early NASA architecture studies have identified the lunar poles as desirable locations for early lunar missions. These polar missions present less-than-ideal lighting conditions that will significantly affect the way a crewed vehicle lands at such locales. Consequently, a variety of hazard identification methods should be considered for use by the crew to ensure a high degree of safety. This paper discusses such identification methods applicable to the poorly lit polar lunar environment, better ensuring global access for the soon-to-be-designed Lunar Lander Vehicle (LLV).

TABLE OF CONTENTS

1. INTRODUCTION
2. TEN METHODS
3. HAZARD METHOD TRADE CONSIDERATIONS
4. CONCLUSIONS
REFERENCES
ACKNOWLEDGEMENTS
BIOGRAPHY

978-1-4244-2622-5/09/$25.00 ©2009 IEEE. 2009 IEEEAC paper #1087, Version 4, Updated November 1, 2008.

1. INTRODUCTION

NASA has embarked on a bold vision for space exploration that includes returning to the Moon no later than 2020, and intends to extend human presence across the solar system and beyond by implementing a sustained and affordable human and robotic program. Recently, the Constellation Lunar Lander Project Office (LLPO), also known as the Altair Project Office, was formed within NASA to develop a Lunar Lander Vehicle (LLV) that will be capable of safely transporting four astronauts to the surface of the Moon [1]. NASA has expressed a desire for global lunar access for both crewed and robotic sortie lunar exploration missions [2][3]. Early NASA architecture studies have identified the lunar poles as desirable locations for early lunar missions.

To support the maturation of the technologies necessary for safe and precise lunar landing, the Exploration Technology Development Program Office (ETDPO) chartered the Autonomous Landing and Hazard Avoidance Technology (ALHAT) Project in early FY06. In December 2007, a Customer Supplier Agreement (CSA) was signed between the Altair Project Office and the ALHAT Project to facilitate formal communication and promote the sharing of mission concepts, trajectory analysis and sensor technologies supporting the navigation, hazard detection and precision landing functions required for safe, global lunar landing [4]. In the CSA document, Altair formally requested ALHAT to consider a crewed landing scenario to a landing site with polar lighting conditions and no navigational aids placed at the landing site. The Altair Project Office is interested in finding a minimum suite of sensors to perform this mission while still meeting performance requirements.

2. TEN METHODS

Ten possible safe landing point identification methods are described in Figure 1. Identifying a safe landing point is the culmination of not only sensing the safe landing location (and the associated hazardous locations), but also using either human cognition or computer automation to actually select the safe landing location.

Each of the ten methods is assessed to determine whether or not the method could function successfully independently of the highly variable natural illumination conditions on the Moon. A more robust system will not be sensitive to illumination conditions. The next column specifically asks whether the method could work with the sub-1.5-degree Sun angle expected at the Lunar South Pole (LSP), since this is the specific potential landing location that has been identified. Each method is then qualitatively scored on relative complexity, mass, and power to better illustrate the possible safe site identification trade.

Method 1, Human Eyes and Brain: The eyes and brain need sufficient brightness and contrast to resolve terrain details. Unaided human visual capabilities are severely challenged at the LSP given the extremely shallow Sun angles. The possibility exists that sunlight will be reflected from local slopes and features in some favorable local regions at the LSP. The LSP lighting angle conditions are far below the Apollo limits and the associated landing sites are likely to have regionally poor contrast and significant shadow effects. If the functional limitations can be tolerated, the unaided human visual method offers the simplest, least massive, and least power demanding solution. However, the Apollo missions showed that even under ideal lighting conditions, humans might have difficulty locating and avoiding lander-scale hazards (rocks, slopes, craters) when attempting to identify a safe landing location [5]. At the LSP with its associated poor lighting conditions, one would expect a degraded performance from a LLV pilot in selecting a safe landing location. Additionally, the LLV window geometry restricts the view of the landing site unless proper vehicle orientation is maintained during the landing site approach.

Method Number | Hazard Detection Sensor | Hazard Detection Processor | Method Independent of Natural Illumination? | Method Applicable to LSP Lighting Environment? | Relative Hardware Complexity | Relative Software Complexity | Relative Mass | Relative Power
1 | Human Eyes | Human Brain | No | Limited | Low | Low | Low | Low
2 | Human Eyes + Low Light Goggles | Human Brain | No | Limited | Low | Low | Low | Low
3 | Human Eyes + External Illuminator | Human Brain | Limited | Limited | Medium | Low | Medium | High
4 | Camera | Human Brain | No | Limited | Low | Low | Low | Low
5 | Camera + External Illuminator | Human Brain | Limited | Limited | Medium | Low | Medium | High
6 | Camera | Computer | No | Limited | Low | High | Low | Low
7 | Camera + External Illuminator | Computer | Limited | Limited | Medium | High | Medium | High
8 | Stereo Camera | Computer | No | Limited | Medium | High | Medium | Low
9 | Stereo Camera + External Illuminator | Computer | Limited | Limited | High | High | Medium | High
10 | Lidar | Computer | Yes | Yes | High | High | Medium | Medium

Figure 1: Ten Possible Safe Landing Point Identification Methods with Associated Merits

Method 2, Human Eyes, Low Light Goggles, and Brain: This method augments the human visual capability with low light goggles. The LLV pilot would wear the goggles during the approach phase to detect the hazards. Associated issues with this method are keeping the photonic flux within an acceptable dynamic range (i.e. no bright objects in the LLV cockpit, long integration times, and cognitive acclimation to the scene), and obtaining a reasonable SNR to detect the local craters and rocks using earthshine or scattered sunshine. It is unclear at this time if low light goggles can provide sufficient contrast or sufficient scene resolution. Low light vision goggles are useful on Earth since they sense the sky glow emissions due to OH radicals high in the atmosphere, but this illumination source is absent on the Moon, and the goggles will not be adequate in the presence of starlight alone. If these issues can be overcome, this method offers a relatively simple, low power, and low mass solution to potentially augment a human's ability to detect hazards out the window. If the natural light intensity is too low, this method is not effective.

Method 3, Human Eyes, External Illuminator and Brain: This method actively provides light from the LLV to the landing site such that a human can resolve and identify hazardous areas. One issue is that the light source is collocated with the crew and creates a visual "wash out" effect on the terrain, eliminating or reducing terrain contrast. Another issue is the mass and power required to illuminate the lunar surface during the approach phase. If the associated power and mass are prohibitive for an external illuminator onboard the LLV, this method is not viable. A third issue is that hazard detection and safe site selection are postponed until late in the trajectory; this carries a propellant mass penalty, since any unknown trajectory dispersions must be nulled out late in the descent, and it highlights the inefficiency of delaying hazard detection and landing aim point redesignation.
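To give a feel for the scale of the illuminator problem, a back-of-envelope radiometric estimate follows; the landing-zone size, required irradiance, and efficiency used below are illustrative assumptions, not values taken from the ALHAT or Altair work. If a beam is shaped to just cover a landing zone of area $A$ with delivery efficiency $\eta$, the optical power needed to produce an irradiance $E$ on the surface is approximately

\[
P_{\mathrm{optical}} \approx \frac{E\,A}{\eta}.
\]

Assuming, for illustration, a $100\ \mathrm{m} \times 100\ \mathrm{m}$ zone ($A = 10^{4}\ \mathrm{m^2}$), an irradiance of $E = 1\ \mathrm{W/m^2}$ (roughly $10^{-3}$ of the direct solar irradiance of about $1360\ \mathrm{W/m^2}$), and $\eta = 0.5$ for beam-shaping losses, the illuminator would need on the order of $20\ \mathrm{kW}$ of optical power, before accounting for electrical-to-optical conversion losses. This is why the mass and power of an onboard illuminator can quickly become prohibitive.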

Method 4, Camera and Brain: In this method the LLV pilot looks through a camera in lieu of the window in order to detect hazards. As with the human eye, the scene needs sufficient lighting in order for the human to accurately discern hazards. The long shadows and lack of contrast at the LSP make this similar to method 1, with the exception that the camera could be made more sensitive than the human eye given a large aperture and a moderate lens, and the camera could view areas unrestricted by the LLV window geometry. Associated issues include possibly reduced situational awareness due to the limited resolution of the camera and display monitor.

Method 5, Camera, External Illuminator, and Brain: This method is a combination of methods 3 and 4, with the same camera and illuminator issues, capabilities, and limitations.

Method 6, Camera and Computer Processor: This method substitutes a computer algorithm to detect the hazards and frees the human from this cognitive task. In this method a camera is mounted to the LLV, possibly gimbaled, and the digital image is passed through an algorithm to automatically identify a list of hazards (or associated safe sites). This method is subject to the same contrast and shadow issues previously mentioned, but benefits from a possibly faster safe site identification process. The main issues at the LSP are whether or not sufficient SNR exists to detect hazards and how to infer 3-D hazards from 2-D images.

Method 7, Camera, External Illuminator, and Computer Processor: This method, like method 6, allows for an external camera to see the scene in coordination with an external illuminator. The illuminator could possibly flash/strobe to save energy as long as it is in sync with the receiving camera. An algorithm automatically identifies the safe landing aim points and presents this information to the crew. One issue is how much mass and power are required to illuminate the surface during the approach phase. If the associated power and mass become prohibitive for an external illuminator onboard the LLV, this method is not viable. Additionally, a "wash out" effect could exist that may pose challenges to the autonomous algorithms.

Method 8, Stereo Camera and Computer Processor: In this method a stereo camera determines range for each pixel within the FOV. Issues with this method involve the need for two cameras with a relatively large baseline in order to achieve sufficient resolution. Stereo imaging in a low lighting environment with associated poor contrast and long shadows is very challenging. Stereo cameras involve some complexity, in terms of hardware and software needed, and will require sufficient lighting of the surface.
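The baseline requirement can be seen from the standard stereo range equation; the focal length, baseline, and disparity precision below are illustrative assumptions rather than values from the paper. For a rectified stereo pair with focal length $f$ (in pixels), baseline $B$, and disparity $d$ (in pixels), the range and the range error due to a disparity error $\delta d$ are approximately

\[
Z = \frac{fB}{d}, \qquad \delta Z \approx \frac{Z^2}{fB}\,\delta d .
\]

With, say, $f = 2000$ pixels, $B = 2\ \mathrm{m}$, and $\delta d = 0.5$ pixel, the range error at $Z = 500\ \mathrm{m}$ is roughly $31\ \mathrm{m}$, far coarser than the sub-meter elevation knowledge needed to detect lander-scale hazards. Because $\delta Z$ shrinks only linearly with $B$, a much larger baseline (or a much shorter range) is required, which is the packaging difficulty noted above.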

Method 9, Stereo Camera, External Illuminator, and Computer Processor: This method is very similar to method 8, but uses an external illuminator to brighten the scene. Issues include having the illuminator collocated with the stereo camera onboard the LLV, dealing with very limited shadow information from a small baseline, and the mass and power required to illuminate the surface during the approach phase.

Method 10, Lidar and Computer Processor: This is the only method listed in Figure 1 that functions completely independently of the natural lighting conditions, even at low solar angles. An important consideration not included in the above discussion is the altitude (or slant range) at which the sensor has the ability to identify hazards. The sensor must be able to determine the elevation of hazards relative to the surrounding surface at significant distances (hundreds of meters) in order to provide time to select a safe landing aim point and efficiently alter the LLV trajectory. This must be accomplished in the absence of well-defined reference features. Cameras only provide 2-D images, and human stereo vision is probably not effective until a distance of approximately 100 meters. Lidars are specifically designed for this purpose and have been shown to work effectively beyond 500 meters, so they can provide the information much earlier in the descent than other sensors. This provides considerably more decision time and more time to maneuver to a safe spot without excessive propellant consumption. Lidars also have the advantage of working in any lighting conditions.

A flash lidar sensor can rapidly detect hundreds of hazards across tens of thousands of square meters of lunar surface at significant range (hundreds of meters) and under any lighting condition. The associated hazard detection and avoidance (HDA) software can identify, rank, and track the corresponding safe areas.

Since the lidar is an active system that provides a direct measurement of range to the lunar surface, the resulting 3-D image can be processed through an algorithm to develop a list of safe sites within the scanned area. Issues with this method are the technology advancement required for the lidar unit and the associated hardware and software processing needed to distill the large volumes of data.
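As a concrete illustration of the kind of processing involved, the sketch below screens a gridded digital elevation map (DEM) for cells whose local slope and roughness exceed assumed lander tolerances. The function name, grid layout, plane-fit approach, and threshold values are all illustrative assumptions for this discussion, not the ALHAT flight algorithms.

import numpy as np

def hazard_map(dem, cell_size, lander_radius=2.5,
               max_slope_deg=10.0, max_roughness=0.3):
    """Flag DEM cells that violate assumed slope/roughness tolerances.

    dem        : 2-D array of elevations [m], one value per grid cell
    cell_size  : grid spacing [m]
    The tolerances are illustrative placeholders, not Altair requirements.
    """
    half = max(1, int(round(lander_radius / cell_size)))
    rows, cols = dem.shape
    safe = np.zeros_like(dem, dtype=bool)   # edge cells stay flagged unsafe

    for r in range(half, rows - half):
        for c in range(half, cols - half):
            patch = dem[r - half:r + half + 1, c - half:c + half + 1]

            # Fit a plane z = a*x + b*y + d to the footprint-sized patch.
            ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
            A = np.column_stack((xs.ravel() * cell_size,
                                 ys.ravel() * cell_size,
                                 np.ones(patch.size)))
            coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
            a, b, _ = coeffs

            # Local slope from the fitted plane, roughness from residuals.
            slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
            residual = patch.ravel() - A @ coeffs
            roughness = residual.max() - residual.min()

            safe[r, c] = (slope_deg <= max_slope_deg and
                          roughness <= max_roughness)
    return safe

For example, hazard_map(dem, cell_size=0.5) on a 0.5 m DEM flags cells whose best-fit, footprint-sized plane exceeds 10 degrees of slope or 0.3 m of peak-to-peak roughness; a flight implementation would vectorize this screening and add the ranking and tracking of surviving safe sites described above.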

3. HAZARD METHOD TRADE CONSIDERATIONS

The "minimum" suite of sensors required to meet performance requirements for a crewed mission to the LSP is highly dependent on the descent and landing environments that the LLV and the crew will experience. The needs of the LLV derive from the mission requirements, the physics of lunar landing, the difficulty of the landing site terrain, the natural lighting environment, and the ability of the crew (if present) to perform a safe landing.

Figure 2: Two views of the Apollo 15 lunar lander straddling the rim of a small hazardous crater. The landing resulted in a tilt of about 11°, only 1° away from the maximum allowable limit, and damaged the engine bell.

The methods and experiences from the Apollo Program are fundamental building blocks for the development of lunar landing strategies for the Constellation Program. Each of the six lunar landing Apollo missions landed under near-ideal lighting conditions and in areas known to be less hazardous than the LSP. The astronauts visually performed terrain relative navigation while looking out of windows, and were greatly aided by external communication and well-lit scenes. As the LM approached the landing site, the astronauts performed visual hazard detection and avoidance, also under near-ideal lighting conditions. The astronauts looked out of the windows, doing their best to avoid rocks, slopes, and craters and to find a safe landing location. As shown in Figure 2, even under favorable environmental conditions, the Apollo astronauts were near their human limits in terms of the functions they needed to achieve a safe landing. For the Constellation Program, the design of the LLV and crew interfaces for the descent and landing sequence must be tailored for landing in lunar regions with relatively poor lighting conditions and hazardous terrain, such as the LSP. Some form of sensor augmentation will be required to assist the crew in identifying hazards on-the-fly, and to guide an uncrewed LLV to a safe landing.

The physics of lunar landing have not changed since Apollo. The lunar descent sequence begins with the mated Orion-Altair spacecraft in low lunar orbit. Prior to separation, the LLV GNC system is initialized with an accurate state vector, which is then propagated through the separation and de-orbit maneuvers and the coast phase. Analysis indicates that inertial navigation during the coast phase is sufficient to achieve precision landing, although navigation updates from terrain relative techniques, Earth-based tracking, or lunar navigation infrastructure could be used to improve the timing of the braking burn. The LLV coasts for nearly one-half of an orbit to prepare for the braking phase, during which the majority of orbital velocity is eliminated. Navigation updates during the braking phase can save LLV propellant by efficiently correcting for state errors and associated trajectory dispersions, and by providing a more accurate LLV state at the beginning of the approach phase. Terrain relative navigation techniques have been identified for the braking phase that can significantly reduce inertial state errors and accurately position the LLV for final approach to the landing target using on-board passive or active sensors. The transition from the braking phase to the approach phase is initiated by a pitch-up maneuver that provides the crew and landing sensors with an opportunity to view the landing zone. Active terrain mapping sensors are believed to represent the best technical solution for hazard detection and avoidance and hazard relative navigation during the approach phase. The LLV can then be maneuvered precisely to the landing aim point by navigating relative to local surface features. Obscuration of the lunar surface by dust kicked up during terminal descent was a challenge during several of the Apollo missions. However, when initialized with a highly accurate state (position, velocity and attitude) at the start of the terminal descent phase, a LLV can land quite accurately (within meters) relative to the landing aim point using only inertial propagation. This approach eliminates the requirement for active sensing through the dust field stirred up by the descent engine exhaust plume.
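A simple error-growth expression shows why a short, well-initialized, inertial-only terminal descent can be this accurate; the error magnitudes and descent time below are illustrative assumptions, not Altair numbers. If the state at the start of terminal descent has position error $\delta p_0$ and velocity error $\delta v_0$, and the IMU has an uncompensated acceleration bias $b_a$, the position error after propagating inertially for time $t$ grows roughly as

\[
\delta p(t) \approx \delta p_0 + \delta v_0\, t + \tfrac{1}{2} b_a t^2 .
\]

Assuming, for illustration, $\delta p_0 = 1\ \mathrm{m}$, $\delta v_0 = 0.03\ \mathrm{m/s}$, $b_a = 10^{-4}g \approx 10^{-3}\ \mathrm{m/s^2}$, and a terminal descent lasting $t = 60\ \mathrm{s}$, the accumulated error is about $1 + 1.8 + 1.8 \approx 4.6\ \mathrm{m}$, consistent with the "within meters" landing accuracy cited above.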

Five natural illumination sources are available at the Moon: sunlight, reflected sunlight (earthshine and light scattered from lit lunar surface features), starlight, zodiacal light, and thermal emissions from the surface. The intensity of natural illumination at the lunar surface varies widely, depending on the latitude of the lunar landing site and the timing of the mission, as well as the illumination source and the frequency bands of interest within the electromagnetic spectrum. Sunlight was employed very effectively during the Apollo Program to support hazard detection and avoidance using unaided human vision by constraining lighting conditions to provide favorable levels of shadowing and contrast. However, these lighting constraints imposed a narrow monthly landing window, which could not be supported at polar latitudes. An example of the rugged terrain at high lunar latitudes is shown in Figure 3. In addition, it may not be practical to apply Apollo lighting constraints in combination with other constraints that might exist for Constellation missions to other lunar latitudes.

Earthshine or scattered illumination from lit lunar surface features may support hazard detection and avoidance using unaided human vision or passive sensors, but further analysis is required. The analysis will be a function of specific sites given unique local terrain and resultant scattering. Starlight, zodiacal light, and thermal emissions from the surface are fainter and more diffuse illumination sources that are not considered to be viable illumination options for hazard detection and avoidance at the Moon.

Human vision combined with human judgment results in a highly adaptable and effective sensor package. However, some Apollo crew members reported difficulty in judging slope and identifying shallow craters during a landing. They found that they could not detect many of the hazards until they were close to the surface. After Apollo 16, it was reported that it was not possible to judge slope for shallow craters without a clear shadow visible. Apollo crews reported varying levels of obscuration of the lunar surface during terminal descent due to the dust field stirred up by the descent engine exhaust plume. Apollo crews also reported that despite the dust obscuration they could indeed make out boulders, but most craters disappeared. Dust conditions during terminal descent for Constellation missions will be driven by the thrust level (mass flow rate) and exhaust velocity (propellant type) of the LLV descent engine as well as the regolith conditions (particle size distribution and depth) at the landing location.

Although the absolute dynamic range of the eye is quite large (up to 90-100 dB), at any given time the eye is chemically and optically (via the dilation of the iris) adapted to a useful dynamic range of 25-30 dB, with anything dimmer seen as black and anything brighter seen as glare. Within the center of this range, contrast differences of about 0.1 dB (2%) can be distinguished. Higher contrasts are needed for perception at the upper and lower limits of the adapted dynamic range, and the eye takes significant time to acclimate to changes in lighting conditions. The range of lighting and associated contrast for the human eye is expected to be problematic for LSP landings because of brightly lit areas due to direct sunlight and more dimly lit areas due to earthshine, scattered light, and shadow.
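For reference, and assuming the decibel figures above refer to intensity ratios (i.e., $10\log_{10}$ of the ratio), the numbers convert as follows:

\[
90\text{-}100\ \mathrm{dB} \;\Rightarrow\; 10^{9}\text{-}10^{10}, \qquad
25\text{-}30\ \mathrm{dB} \;\Rightarrow\; \approx 300\text{-}1000, \qquad
0.1\ \mathrm{dB} \;\Rightarrow\; 10^{0.01} \approx 1.023 ,
\]

so the instantaneously adapted range spans roughly three orders of magnitude in intensity, and the 0.1 dB just-noticeable difference corresponds to the quoted contrast of about 2%.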

Figure 3: Lunar highlands surface near the LSP. The large crater in the center of the image is Daedalus (93 km in diameter, 3 km deep). Area is located on the far side at 5.9°S, 179.4°E. AS11-44-6609.

Based on the information in this paper, one can conclude that human vision without augmentation would not be considered safe for landing at the LSP. Lighting conditions can vary widely in the area of the LSP, which may result in contrast problems. The ability of the crew to identify safe landing sites on-the-fly will be limited at higher altitudes and, in the absence of sensor augmentation, may require extended flight time, or even hover capability, as the LLV approaches the surface. The earlier in the descent trajectory that safe landing areas can be identified, the more fuel-efficient it will be to reach a safe site. Early identification of safe landing sites also increases the divert range of the LLV, thus providing the capability to reach a larger number of safe landing areas.

Night vision goggles, which have been used for many years on Earth, are based on image intensification. They operate on the whole image field simultaneously and are susceptible to an overall loss of sensitivity if there are bright spots in the field of view. Even though these devices are sometimes called starlight goggles or scopes, in the terrestrial environment they rely on emissions from the Earth's upper atmosphere which are absent on the Moon. Use of existing night vision goggles at the LSP will be problematic due to the large range of illumination they will encounter and the lack of illumination from atmospheric emissions.

A number of options for artificial illumination are available, including both onboard and deployed sources. The advantage of an onboard illuminator is that it offers the potential for beam pointing to ensure that the entire surface area of interest is lit. The disadvantage is the associated power needed to illuminate the scene. Off-board sources offer the potential to illuminate the lunar surface at an angle to the LLV line of sight, which may enhance contrast in lit areas and enable obstacles to be detected from cast shadows. Off-board sources may not be highly controllable, however, and much of the illumination may fall outside of the surface area of interest. The integration of an illuminator requires mass, volume and power resources, and may also increase the complexity of the LLV.

Estimates of radiation emitted from a LO2/LH2 descent engine plume across the visual and infrared bands indicate that the surface illumination provided by the plume will be too faint to be of value for the hazard detection and avoidance function.

Cameras are generally of low mass, power, and volume, and could fill a similar role to human eyes in observing the lunar terrain. A basic camera system would have constraints similar to those of the human eye (i.e., constrained perspective and lighting to provide sufficient contrast to resolve features). The major differences are that cameras can zoom, offer the flexibility to "see" frequencies outside of the visible band, and can be made very sensitive to low light intensity and slight variations in intensity. Analysis has shown that illumination due to reflected sunlight, either from earthshine or from high points on the lunar terrain, can create twilight conditions in local regions on the lunar surface and enable imaging by sensors in the visual and NIR bands. To avoid interference due to lines of sight through the plume, the visual band is preferable. For a surface location on the lunar far side that is not illuminated by the Sun, reflected solar illumination will not be available, and the only applicable passive sensors that may be of value are cooled long wavelength infrared cameras viewing thermally emitted radiation.

Terrain imaging sensors, such as simple cameras, can be used to enhance the situational awareness of the crew via gain, zoom and pointing control. Modern terrain mapping sensors and data processing hardware, which produce a three-dimensional map of the landing area, offer an even greater opportunity to augment the navigation, hazard detection, hazard avoidance and precision landing capabilities of the crew. An active terrain mapping sensor, such as a flash lidar, can rapidly identify hundreds of hazards across tens of thousands of square meters of lunar terrain at significant range and under any natural lighting conditions. Software algorithms can utilize these digital elevation maps to identify, rank, and track safe landing areas.

The potential benefits of terrain mapping sensors include:

• Increased LLV autonomy – reduced reliance on mission control and/or lunar infrastructure

• Hazard avoidance and precision landing capability for automated lunar missions

• Enhanced ability to address off-nominal situations during descent and landing

• Relaxation of mission constraints (e.g., ambient lighting at the landing site), especially for active sensors such as radar and lidar

• Access to landing sites in more challenging and complex terrain (e.g., polar regions, far side, highlands)

• Reduction of crew workload, enabling the crew to focus on tasks that will benefit the most from human judgment and input

• Rapid and accurate identification of landing site surface features (rocks, holes and slopes) at long range (possibly greater than 1 km) that exceed the hazard tolerances of a LLV

• Decreased propellant (delta-V) consumption due to a more efficient descent and landing trajectory achieved via rapid hazard detection and safe landing site selection as well as reduced GNC errors/uncertainties

• Compensation for potential window field of view limitations that might exist on a LLV that is large in scale relative to the Apollo LM

• Precision navigation to a pre-defined safe landing site

The integration of a terrain mapping system will impact the mass, volume, power, and complexity of a LLV. In addition, the terrain mapping sensor will likely be mounted on a gimbal to provide precision pointing control and, if needed, rapid scanning capability to enable the generation of a large terrain map from a mosaic of smaller sections. However, it appears likely that the mass and volume penalties of a terrain mapping sensor will be more than offset by propellant savings resulting from a more efficient approach and landing trajectory.


4. CONCLUSIONS

The ALHAT Project is investigating a range of lunar landing hazard detection methods by developing reference GNC architectures, trajectories, trade study reports, and analysis software in an effort to pursue and field test the relevant technologies that require advancement for lunar landing. In addition, a comprehensive multi-year trade study was completed in 2007 that established top level requirements and a concept of operations [6]. The advancement of these technologies through the ALHAT Project will validate a system capable of enabling the LLV to land precisely and safely in areas such as the LSP.

There are a number of interdependencies driving the design of a lunar landing system including LLV size and hazard robustness, landing site conditions (terrain and natural lighting), pre-mission landing site knowledge and maps, trajectories, sensors, and crew involvement. A practical system for lunar landing must represent a logical compromise among these factors to support an overall LLV solution that is simple, reliable, robust and efficient in terms of mass, power and volume. The LLV design, in turn, must address the needs and drivers for the overall lunar exploration architecture including flexibility and extensibility for future missions.

Given the uncertainty in the performance of passive sensing systems for safe site identification under low lighting conditions, an active-sensor hazard detection method should be developed that can quickly trade among cost maps defining the performance, cost, and risk at landing while safely avoiding hazards and minimizing divert fuel costs. Coupled with an on-board hazard detection system, such a method would provide a powerful means for safely redesignating the LLV landing aim point during the approach phase, better achieving the goals of the mission. A modern LLV can leverage state-of-the-art sensor hardware and software to enable landing at more challenging locations than Apollo. These beneficial technologies are being pursued with a modest investment through NASA's technology development program.

Uncertainty exists regarding the environment at the LSP. Uncertainties also exist regarding the effectiveness, cost, benefit, and risk of new technologies to support lunar landing. As a result, the optimal blend of technological and human capability required to land at the LSP remains unclear. Defining with confidence the "minimum suite of landing sensor equipment" required is problematic without reduction in these uncertainties. At this time, however, it would seem prudent to continue investigations of active methods of hazard detection that are insensitive to the poor lighting conditions known to exist at the LSP. Uncertainty can be reduced via continued scientific research, simulation, technological maturation and field testing, clear definition of mission objectives, and documentation of the LLV design and capabilities.

REFERENCES

[1] NASA. (2007). NASA Broad Agency Announcement: Constellation Lunar Lander Development Study.
[2] Cook, D. (2007). Lunar Architecture Update. AIAA Space Conference, San Diego.
[3] Dale, S. (2006). Implementing the Vision. 2nd Space Exploration Conference.
[4] Hansen, L. (2007). "CxP Lunar Landing Project Office (LLPO) Exploration Technology Development Program Development for Autonomous Precision Landing and Hazard Detection and Avoidance (ALHAT)," NASA internal memo.
[5] Major, L., Brady, T., & Paschall, S. (2008). Apollo Looking Forward: Crew Task Challenges. IEEE Aerospace Conference, Big Sky, MT.
[6] Brady, T., & Schwartz, J. (2007). ALHAT System Architecture and Operational Concept. IEEE Aerospace Conference, Big Sky, MT.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the entire ALHAT project team consisting of outstanding and distinguished members from NASA's Johnson Space Center, Jet Propulsion Laboratory, Langley Research Center, C.S. Draper Laboratory, and the Applied Physics Laboratory.

The NASA technology development research described in this paper is being carried out in part by Draper Laboratory under a contract with the Johnson Space Center.

BIOGRAPHY

Tye Brady is a Principal Member of the Technical Staff and a Group Leader in the Space Systems Engineering Group at Draper Laboratory. He has worked over the past 18 years on spacecraft instrumentation, design, and integration on a wide variety of programs. He holds a BS in Aerospace Engineering from Boston University and an SM in Aeronautics and Astronautics from MIT.


Ed Robertson is a senior systems engineer in the Aeroscience and Flight Mechanics Division in the Engineering Directorate at the Johnson Space Center. He joined NASA in 1989 and spent his first six years working lunar/Mars mission architecture as well as launch vehicle and spacecraft conceptual design and sizing. Notable accomplishments during that period include the First Lunar Outpost study and the Liquid Flyback Booster study. He then spent eight years on the X-38 Project gaining hands-on experience with the design, fabrication, assembly and testing of atmospheric and space flight test vehicles. In 2004 he led the Focused Trade Study for the Constellation Program, and subsequently joined the Vehicle Integration Office of the Orion Project. He is currently the Lead Systems Engineer for the ALHAT Project. He earned an M.S. in aerospace engineering in 1989 and a B.S. in aerospace and ocean engineering in 1987, both from the Virginia Polytechnic Institute and State University.

Chirold Epp is a technical project manager in the Aeroscience and Flight Mechanics Division in the Engineering Directorate at the Johnson Space Center. He joined NASA in 1980, has worked on crew training and real-time Space Shuttle flight support, and was the Operations Manager of the International Space Station Program for 6 years. He is currently the Project Manager for the ALHAT Project and is also coordinating some Mars activities for the Directorate. He earned a Ph.D. in physics in 1969 from the University of Texas, an M.S. in physics from the University of Oklahoma in 1965, and a B.S. in physics and mathematics from Northwestern Oklahoma State College in 1960.

Stephen Paschall II is a Senior Member of the Technical Staff in the GNC Mission Design Group at Draper Laboratory. He has worked on GNC system analysis and design for Earth, Moon, and Mars projects for the last 4 years. He currently serves as GNC System Engineer on the ALHAT project. He received his B.S. in Mechanical Engineering from Texas A&M University and his S.M. in Aeronautics and Astronautics from MIT.

Doug Zimpfer is the Program Lead for Human Space Exploration and Operations at Draper Laboratory. He has worked in human spaceflight for over twenty years in the areas of GN&C, avionics, and flight software. He leads Draper's efforts for the STS, ISS, Orion, Ares, Altair, ALHAT and Lunar Surface Systems. He holds a BS in Electrical Engineering from The Ohio State University and an MSEE from the University of Houston.