Abstract

Improving aviation safety is a primary mission of the National Aeronautics and Space Administration (NASA). One technology currently being developed at NASA Langley to support this mission is the Enhanced Vision System (EVS). This system uses multi-sensor/multi-spectral image fusion to provide enhanced images of the flight environment. The enhanced images assist pilots in poor-visibility conditions such as fog, rain, snow, or haze.

Extensive digital image processing is used to generate the enhanced images, and such processing is inherently computationally intensive: large amounts of data must be stored, processed, and shuttled between processor and memory. Even simple point operations on a standard 640 x 480 8-bit color image require the manipulation of nearly a megabyte (MB) of data. Other algorithm implementation issues, such as non-sequential data access or multiple-line processing, exacerbate the data-handling problem. When video is considered, the processing requirements become enormous. A single second of video in the format mentioned above, recorded at 30 frames per second, requires nearly 28 MB of storage, and each frame must be processed within 33.3 milliseconds (ms) to sustain the frame rate.

The digital image processing algorithm chosen to perform image enhancement for the EVS is the Retinex algorithm. The Retinex is a general-purpose image enhancement algorithm for producing good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. It was developed and patented at NASA Langley. The algorithm was initially targeted at processing multi-spectral satellite data but has found applicability in areas as diverse as medical radiography, forensic investigation, consumer photography, and aviation safety.
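The data-rate figures above, and the spatial/spectral transform at the heart of the Retinex, can be sketched concretely. The following is a minimal Python/NumPy illustration of the published single-scale and multi-scale Retinex form (the log of each pixel minus the log of a Gaussian-blurred surround), not the EVS implementation itself; the surround scales in `sigmas` are illustrative assumptions, not NASA's tuned parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0):
    """Single-scale Retinex: log(I) - log(Gaussian surround of I).

    `image` is a float array of shape (H, W) or (H, W, 3); `sigma`
    sets the surround scale (the value here is an assumption).
    """
    img = image.astype(np.float64) + 1.0  # offset to avoid log(0)
    if img.ndim == 3:
        # Blur each color channel independently, not across channels.
        surround = np.stack(
            [gaussian_filter(img[..., c], sigma) for c in range(img.shape[-1])],
            axis=-1)
    else:
        surround = gaussian_filter(img, sigma)
    return np.log(img) - np.log(surround)

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: equal-weight average over several surround scales."""
    return sum(single_scale_retinex(image, s) for s in sigmas) / len(sigmas)

# Data-rate arithmetic from the abstract: a 640 x 480 frame with three
# 8-bit color channels is 640 * 480 * 3 = 921,600 bytes (~0.9 MB); at
# 30 frames/s that is ~27.6 MB of video per second, and each frame must
# finish within 1/30 s ~ 33.3 ms to sustain the frame rate.
frame_bytes = 640 * 480 * 3
bytes_per_second = frame_bytes * 30
frame_budget_ms = 1000.0 / 30
```

The log-difference formulation is what gives the Retinex its dynamic range compression: bright and dark regions are normalized against their local surround rather than against a single global scale.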
The dynamic range compression, color constancy, and sharpening provided by the Retinex make it an ideal algorithm for enhancing poor-weather images. A real-time, video-frame-rate implementation of the Retinex is required to meet the needs of the EVS and other potential users. Retinex processing involves a relatively large number of complex computations that, for video, exceed the performance capabilities of most general-purpose computers. In addition, several applications require an embedded, low-cost processor for the computation. To meet the goal of embedded, real-time Retinex processing, we are currently studying and evaluating digital signal processors (DSPs) and reconfigurable logic devices such as field programmable gate arrays (FPGAs) as implementation platforms. Preliminary results obtained using a DSP system indicate that real-time operation is achievable. This discussion will describe the EVS, the Retinex algorithm, and the DSP system currently under study. Algorithm optimizations and architecture trades used to achieve current performance levels will also be presented.

Biography

Glenn Hines is a senior electronics engineer at NASA Langley Research Center. He holds B.S. and M.S. degrees in Electrical Engineering from Old Dominion University and an M.S. degree in Computer Science from the College of William and Mary, where he is currently a Ph.D. candidate in Computer Science. He is responsible for the design, development, and test of electronic systems for space and aeronautics flight research instruments. He has developed application-specific integrated circuits (ASICs), multi-chip modules (MCMs), digital signal processor (DSP) systems, and field programmable gate array (FPGA) designs for over 12 years. He is currently performing research in the development and optimization of real-time image enhancement algorithms and architectures.