November 21, 2017
Image sensors have to cope with the very high dynamic range of captured scenes. Objects in the shadow on a summer day at noon can easily be 100 000 times darker than the brightest parts of the scene. In night-vision automotive applications the difference can be even larger: the signal of a pedestrian behind the high beams of a car can be 10 million times weaker than the car's lights or the street lights. In spectroscopy for medical and remote-sensing applications the dynamic range can also be very large, as the spectrum of a lamp or of the sun varies greatly with wavelength. Likewise, in stimulated-emission imaging such as Raman or fluorescence imaging, the signal levels can vary over several orders of magnitude.
Like most analog systems, image sensors have a dynamic range on the order of 60 to 80 dB. Several methods exist to increase this dynamic range, e.g. capturing a sequence of pictures with different integration times or sensitivities; these images can then be combined in software to yield a high dynamic range image.
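The multi-exposure approach mentioned above can be sketched in a few lines of NumPy. The `merge_exposures` helper, its weighting scheme and the saturation threshold are illustrative assumptions, not a description of any particular camera's pipeline.

```python
import numpy as np

def merge_exposures(frames, t_int, sat=0.9):
    """Merge frames taken with different integration times into one
    linear HDR image (hypothetical helper for illustration only).

    frames : list of 2-D arrays, normalized so full scale = 1
    t_int  : integration time of each frame (same arbitrary units)
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, t_int):
        w = (img < sat) * t            # trust only unsaturated pixels,
        num += w * img / t             # scale each frame to one radiance axis,
        den += w                       # weight long exposures more (less noise)
    return num / np.maximum(den, 1e-12)

# Two synthetic exposures of the same scene, exposure ratio 100:1
scene = np.array([[1e-3, 0.5], [5e-3, 8.0]])   # "true" radiance
short = np.clip(scene * 0.1, 0, 1)             # t = 0.1
long_ = np.clip(scene * 10.0, 0, 1)            # t = 10
hdr = merge_exposures([short, long_], [0.1, 10.0])
```

For each pixel, the merge uses the long exposure in the shadows and falls back to the short exposure where the long one saturates, which is exactly where the motion-artefact risk discussed next comes from: the two frames are taken at different moments.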
For fast-moving objects, however, it is very important that the method used does not introduce motion-induced artefacts, as can be the case in the above example. At the IISW workshop, Gaozhan Cai, Senior Design Engineer at Caeleste, presented an image sensor with a very high linear dynamic range (HDR) obtained in-pixel by a unique method: a three-level transfer gate in the pixel is combined with dual- or triple-gain charge storage, and a column-level automatic gain selection (AGS) is implemented. The AGS picks one out of three linear ranges, each having a largely different conversion gain. The data rate remains the same as without high dynamic range, thus preserving the maximal frame rate.
An example of high dynamic range image processing is shown below. The raw image (at the medium gain setting) is shown on the left; some parts of the image are clearly over-illuminated while others are under-illuminated. In the middle, the segmented image is shown, where the gain setting for each part of the image is indicated. The rightmost image is the HDR image, in which the different gain images are combined and the image content is compressed to fit the display range.
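A software analogue of this gain selection and combination step might look as follows. The gain ratios, the saturation threshold and the logarithmic display compression are all assumptions chosen for illustration, not the sensor's actual parameters.

```python
import numpy as np

# Hypothetical conversion gains of the three ranges (high, medium, low),
# expressed relative to the high-gain range; the real values are not public.
GAINS = np.array([1.0, 1 / 8, 1 / 64])

def combine_gain_images(readouts, sat=0.9):
    """Per pixel, select the highest gain that is not saturated,
    rescale to one common linear axis, and log-compress for display."""
    r = np.stack(readouts)                        # shape (3, H, W)
    ok = r < sat                                  # unsaturated readouts
    sel = np.argmax(ok, axis=0)                   # first unsaturated = highest gain
    lin = np.take_along_axis(r, sel[None], axis=0)[0] / GAINS[sel]
    disp = np.log1p(lin) / np.log1p(lin.max())    # compress into [0, 1]
    return sel, lin, disp

# Synthetic scene spanning all three ranges (arbitrary linear units)
scene = np.array([[0.5, 20.0], [3.0, 50.0]])
readouts = [np.clip(scene * g, 0, 1) for g in GAINS]
sel, lin, disp = combine_gain_images(readouts)
```

Here `sel` plays the role of the segmented image in the middle panel, `lin` the combined linear HDR image, and `disp` the display-compressed result on the right.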
How to hand-calculate the MTF in frontside- and backside-illuminated image sensors
The spatial resolution of a camera, and of an imaging system in general, is one of the key performance parameters besides the temporal pixel noise. It determines how small objects can be and still be separated by the system. Not only the lens limits the spatial resolution; the image sensor does so as well. The macroscopic crosstalk, which is at the root of this resolution reduction, has several components that require complicated modeling, taking into account the pixel geometry and the material properties. Prior models for this resolution or MTF prediction were based on 'brute-force' solving of the diffusion equations on a finite-element mesh detailing the pixel's geometry.
Bart has now proposed a closed-form analytical MTF-at-Nyquist model, suitable for integration in a spreadsheet-like calculator, thus enabling quick surveys and design-parameter trade-offs in the presence of many other image sensor parameters. The method yields an analytical expression for the Line Spread Function (LSF) as an intermediate result. In this way, much faster system optimizations can be made.
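As a sketch of the final step of such a model: once an LSF is available, the MTF at any spatial frequency follows from the magnitude of its normalized Fourier transform. The two-sided exponential LSF, the diffusion length and the pixel pitch below are placeholder assumptions for illustration, not values or expressions from Bart's model.

```python
import numpy as np

def mtf_at(f, x, lsf):
    """MTF at spatial frequency f (cycles/um) from a sampled LSF:
    magnitude of its Fourier transform, normalized so MTF(0) = 1."""
    dx = x[1] - x[0]
    num = np.abs(np.sum(lsf * np.exp(-2j * np.pi * f * x)) * dx)
    den = np.sum(lsf) * dx
    return num / den

pitch = 5.0                          # um, assumed pixel pitch
L = 2.0                              # um, assumed carrier diffusion length
x = np.arange(-5000, 5001) * 0.01    # fine grid out to +/- 50 um
lsf = np.exp(-np.abs(x) / L)         # stand-in two-sided exponential LSF
f_nyq = 1.0 / (2.0 * pitch)          # Nyquist frequency of the pixel array
mtf_nyq = mtf_at(f_nyq, x, lsf)
```

For this particular LSF the transform is known in closed form, 1 / (1 + (2*pi*f*L)^2), which is exactly the kind of spreadsheet-friendly expression the model aims for; the numerical version above is only a cross-check.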
Idealized pixel cross section as used in the model, and the analytical expression for the distribution of electrons as arriving at the collection photodiodes