CCD image sensors have been used successfully in orbital platforms for many years. However, CCD image sensors require several high speed, high voltage clock drivers as well as analog video processors to support their operation. These support circuits must be shielded and placed in close proximity to the image sensor IC to minimize the introduction of unwanted noise. The end result is a camera that weighs more and draws more power than desired.

CMOS image sensors, on the other hand, allow clock drivers, timing generators and signal processing to be incorporated onto the same integrated circuit as the image sensor photodiodes. This keeps noise to a minimum while providing high functionality at reasonable power levels. CMOS Sensor Inc. employs its proprietary advanced CTIA structure and buffered MOS readout method to eliminate fixed pattern noise. Owing to these performance breakthroughs, CMOS Sensor Inc. has supplied Visible and Near Infrared (VNIR) CMOS sensors for the following space projects.

 

The C640 was designed and developed for the Chandrayaan-1 project; the Chandrayaan-1 satellite was launched on October 22, 2008. The C640 is a 4000-element linear image sensor designed to provide high resolution and low power consumption for space applications. This device uses CMOS Sensor's proprietary advanced APS technology and readout structure to reduce fixed pattern noise, increase dynamic range and improve linearity. The device consists of 4000 photodiode elements; the pixel size is 7 μm square on an element pitch of 7 μm.

The Terrain Mapping Camera (TMC) operates in the visible spectrum band from 500 nm to 750 nm, utilizing three separate C640 linear image sensors facing fore, nadir and aft of the lunar surface. Used in push-broom mode to provide along-track stereo viewing, the TMC covers a swath of 20 km, with the fore and aft imagers placed at +/- 25 degrees from nadir. The end result is a ground resolution of 5 meters at an altitude of 100 km. Figure 1 presents an Earth image taken by the C640 sensor through the nadir view of the TMC, from an Earth orbit of about 70,000 km before the satellite reached the moon.
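The stereo geometry quoted above can be sanity-checked with a short sketch. The ±25 degree tilt and 100 km altitude come from the text; the flat-surface model and the function name are my own simplification:

```python
import math

def along_track_offset(altitude_km: float, tilt_deg: float) -> float:
    """Ground distance between the nadir point and the footprint of a
    camera tilted by tilt_deg, assuming a locally flat surface."""
    return altitude_km * math.tan(math.radians(tilt_deg))

altitude = 100.0      # km, TMC mapping orbit
fore_aft_tilt = 25.0  # degrees from nadir

offset = along_track_offset(altitude, fore_aft_tilt)  # ~46.6 km ahead/behind nadir
base_to_height = 2 * offset / altitude                # stereo base-to-height ratio
```

The fore and aft footprints therefore lead and trail the nadir track by roughly 47 km each, giving a generous stereo baseline for elevation recovery.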

 



Image data from the TMC was used to create 3D images of the lunar surface. These digital elevation models (DEMs) were generated from combinations of images produced by the three CMOS image sensors. The process is to obtain the same scene from two or three of the CMOS sensors taken at different angles, owing to the orientation of the image sensors within the satellite. These stereoscopic views are then processed to identify matching points between the images. Having identified as many matching points as possible, these points are fit to a triangular mesh from which 3D coordinates are interpolated for all data points.
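The mesh-interpolation step can be illustrated with a minimal sketch: once matched points with known 3D coordinates form a triangle of the mesh, the height anywhere inside that triangle follows from barycentric weights. This is a toy stand-in for one triangle of the full DEM pipeline, not the mission software:

```python
def barycentric_height(p, a, b, c):
    """Interpolate the height (z) at 2-D point p inside the triangle
    with vertices a, b, c, each given as (x, y, z). Mirrors the
    mesh-interpolation step of DEM generation, one triangle at a time."""
    (x, y) = p
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (x - cx) + (cx - bx) * (y - cy)) / det
    w_b = ((cy - ay) * (x - cx) + (ax - cx) * (y - cy)) / det
    w_c = 1.0 - w_a - w_b
    return w_a * az + w_b * bz + w_c * cz
```

Each weight is the fractional area of the sub-triangle opposite its vertex, so the interpolated surface is exact at the matched points and linear between them.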

Draping the image features over this 3D coordinate surface results in very effective imagery, as shown below. A measure of the effectiveness of this procedure is the fraction of pixels, or data points, that can be matched between two or three viewing angles. As one might expect, the worst correlation (27%) occurs using images from the fore and aft cameras (the largest angle between images), while the best correlation (100% point matching) is obtained by using all three images: aft, nadir and fore. Figure 2 displays a 3D view of a mountain on the lunar surface; a 3D crater view is presented in Figure 3.

 





The C650 is the second sensor made for the Chandrayaan-1 project. It is a 256 x 512 pixel area array active pixel sensor (APS) with the large pixel size, slow scan rate and low power consumption needed for space-based, scientific and medical applications. The device block diagram is displayed in Figure 3. The C650 chip integrates a 256 x 512 active pixel sensor array; a PGA for row-wise gain setting; an I2C interface; SRAM; a 12-bit analog-to-digital converter (ADC); a voltage regulator; low voltage differential signaling (LVDS) drivers; and a timing generator.

This device uses CMOS Sensor's proprietary advanced APS technology and readout structure to reduce fixed pattern noise, increase dynamic range and improve linearity. The sensor has an active image array of 256 columns x 512 rows. However, the full array contains 286 columns and 516 rows, with 30 extra Optical Block (OB) pixels in each row and 4 extra OB rows. The optical block pixels are designed to provide a dark reference voltage and eliminate edge effects. In each row, 20 OB pixels are placed before the 1st active pixel and 10 OB pixels after the 256th active pixel. The OB rows are arranged as 2 rows before the 1st active row and 2 rows after the 512th active row. Figure 3 shows the pixel arrangement of the sensor array and the APS unit. The optical block pixels are identical to the active pixels except that an opaque light-shielding element covers them.
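The pixel bookkeeping above can be captured in a few lines. This is a sketch, assuming a zero-based column/row index over the full 286 x 516 array; the constant and function names are my own:

```python
# Layout of the C650 sensor array as described in the text.
OB_LEFT, ACTIVE_COLS, OB_RIGHT = 20, 256, 10
OB_TOP, ACTIVE_ROWS, OB_BOTTOM = 2, 512, 2

total_cols = OB_LEFT + ACTIVE_COLS + OB_RIGHT   # 286 columns
total_rows = OB_TOP + ACTIVE_ROWS + OB_BOTTOM   # 516 rows

def is_optical_block(col: int, row: int) -> bool:
    """True if pixel (col, row) in the full array is an optical block
    (dark-reference) pixel rather than an active imaging pixel."""
    in_active_cols = OB_LEFT <= col < OB_LEFT + ACTIVE_COLS
    in_active_rows = OB_TOP <= row < OB_TOP + ACTIVE_ROWS
    return not (in_active_cols and in_active_rows)
```

During readout, the OB samples bracketing each line give the dark reference level that is subtracted from the active pixels of that line.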

 



The 256 x 512 element C650 image sensor area array was designed for the hyperspectral imager (HySI) in a space-borne application. As part of the Chandrayaan-1 mission to map the lunar surface, gaining an understanding of the mineralogical makeup of the moon is critical. The camera was designed such that one row of 256 elements images a spatial swath of 20 km on the lunar surface, and each of the 512 rows images a different spectral band.

Spectral separation is achieved through the use of a wedge filter. By operating the camera in push-broom mode, each of the 512 rows gets an opportunity to image the same 20 km spatial swath, collecting 512 different spectral images of the same surface geometry. The wedge filter is oriented such that its cross-track dimensions are uniform, while its along-track dimensions carry the varying thickness produced by the wedge. Hence, the spectral bands appear in the along-track direction.
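The push-broom reassembly described above can be sketched as follows, assuming the ground track advances by exactly one detector row per frame time (my simplification); with the wedge filter, detector row r always sees spectral band r, so the ground line under row r at frame t is (t - r). The tiny dimensions and the `build_cube` name are illustrative, not from the flight software:

```python
def build_cube(frames, n_rows):
    """Reassemble a cube[band][ground_line] from push-broom frames.
    frames[t][r] is the reading of detector row r at frame time t.
    Ground line g appears under row r at frame t = g + r, so only
    lines seen by every band are kept."""
    n_frames = len(frames)
    n_lines = n_frames - n_rows + 1
    return [[frames[g + r][r] for g in range(n_lines)]
            for r in range(n_rows)]
```

With the real sensor, n_rows would be the number of spectral rows read out, and each complete ground line accumulates one sample in every band as the satellite sweeps over it.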

The HySI camera has produced impressive images, as shown in Figure 4. This series of images was taken from a 40 km by 20 km section of the moon near the equatorial region, from an altitude of 100 km. The majority of the images show only subtle variations in shades of gray, because the lunar surface is devoid of many of the color-producing features we are accustomed to. However, by subtracting and/or taking the ratio of one image to another, these subtleties become evident. This slight variation across the 64 bands produces the chemical signatures of the lunar surface.
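Band ratioing of this kind can be sketched in a few lines. This is illustrative only; the `band_ratio` name and the epsilon guard against division by zero are my own choices:

```python
def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two spectral band images (lists of rows).
    Ratioing cancels brightness shared by both bands (illumination,
    topographic shading) and highlights subtle spectral differences."""
    return [[a / (b + eps) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]
```

A pixel whose ratio departs from the scene average marks a spot whose reflectance differs between the two wavelengths, i.e. a candidate compositional difference.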

 


Figure 5 is a hypercube of a single crater. A hypercube is obtained by stacking all 64 images (one from each band) of the same surface topography. In this case, the information has been color coded to make it easier to detect changes in the chemical makeup of the surface.

 




The C468 five-band image sensor array consists of five independent sensor lines: one PAN band and four MS bands, designated MS1, MS2, MS3 and MS4, packaged on a ceramic substrate. The PAN band has a total of 12,000 pixels; the pixel size is 10 μm square on a pixel pitch of 10 μm. Each of the multi-spectral (MS) bands (MS1 ~ MS4) has 6,000 pixels; the pixel size is 20 μm square on a pixel pitch of 20 μm. The five bands are arranged in the sequence MS1, MS2, PAN, MS3, MS4. The spacing between each band and its neighbor (MS1 to MS2, MS2 to PAN, PAN to MS3, and MS3 to MS4) is 4 mm. Thus, the focal plane (image sensor area) measures 120 mm x 16.02 mm. A 132-pin ceramic Pin Grid Array (PGA) package houses the silicon chip. A space-qualified, radiation-hard glass window with double-sided AR coating seals the silicon sensor.
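The focal plane dimensions quoted above are easy to reproduce, under the assumption (mine, not stated in the text) that the 4 mm band spacing is measured centre-to-centre:

```python
UM = 1e-3  # one micrometre, expressed in millimetres

pan_length = 12000 * 10 * UM   # PAN line: 120 mm
ms_length = 6000 * 20 * UM     # each MS line: also 120 mm

# Assuming the 4 mm spacing is centre-to-centre, the focal plane
# height is four gaps plus half a pixel at each outer edge
# (MS1 and MS4 both use 20 um pixels).
height = 4 * 4.0 + 2 * (20 * UM / 2)   # 16.02 mm
```

Both the PAN line (12,000 x 10 μm) and each MS line (6,000 x 20 μm) come out to the same 120 mm length, which is what allows all five bands to share one butted focal plane.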

The device uses our proprietary technologies (e.g., wafer butting, multi-chip butting and multiple readout) to achieve the requirements of a gapless image pixel line and very short integration time. The array is designed to provide high resolution and low power consumption for a high altitude (~720 km) earth orbit RSI application. The C468 is a mixed-mode MSOC IC that integrates an active pixel sensor (APS) array, a programmable gain amplifier (PGA), a 12-bit analog-to-digital converter (ADC), a voltage regulator, low voltage differential signaling (LVDS) drivers and a timing generator. The C468 is also built with a power-down mode that consumes a very small amount of power while the focal plane array (FPA) is not active.

The device responds over the spectral range of 450 to 900 nm with five different bands. With external multi-mode filters, the bands are defined as: PAN (450 ~ 700 nm), MS1 (455 ~ 515 nm), MS2 (525 ~ 595 nm), MS3 (630 ~ 690 nm) and MS4 (762 ~ 897 nm). A scribe line is designed between each pair of neighboring bands; therefore, all five bands are fully isolated, independent chips. All five sensor bands are also electrically isolated, so the user can power on any band of the sensor array independently. This functionality allows the user to read different colors from the imager.