3D Flash LIDAR
About LIDAR

LIDAR is an optical remote sensing technology that measures the distance to, or other properties of, a target by illuminating it with light, often using pulses from a laser. The term is an acronym for Light Detection And Ranging, based on a laser-radar paradigm. RADAR (Radio Detection And Ranging) is the process of transmitting, receiving, detecting, and processing an electromagnetic wave that reflects from a target, a technology developed in the mid-1930s. All ranging systems function by transmitting and receiving electromagnetic energy; the primary difference between RADAR and LIDAR is the frequency band used.
Advantages Of LIDAR

The term 3D Flash LIDAR refers to the 3D point cloud and intensity data captured by a 3D Flash LIDAR imaging system. 3D Flash LIDAR enables real-time 3D imaging, capturing 3D depth and intensity data characterized by the absence of platform-motion distortion or scanning artifacts. When used on a moving vehicle or platform, blur-free imaging is expected. This is the result of using a single laser pulse to illuminate the field of view and capture an entire frame. Because the integration time is short (e.g. at 100 meters range, capture requires about 660 nanoseconds), the ability to produce real-time 3D video streams consisting of absolute range and co-registered intensity (albedo) makes the technology a natural fit for autonomous applications.
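
As a rough check on the capture-window figure above, the integration time is set by the round-trip travel time of the laser pulse. The sketch below is plain Python with nothing camera-specific assumed:

```python
# Round-trip travel time of a single laser pulse: one flash captures the
# whole frame in roughly the time light needs to reach the scene and return.
C = 299_792_458.0  # speed of light, m/s

def integration_time_ns(range_m: float) -> float:
    """Two-way travel time, in nanoseconds, to a target at range_m."""
    return 2.0 * range_m / C * 1e9

# At 100 m the pulse returns in about 667 ns, consistent with the
# ~660 ns capture window quoted above.
print(round(integration_time_ns(100.0)))  # → 667
```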

Because 3D Flash LIDAR systems are solid state (no moving parts), they carry less mass than alternative LIDAR cameras (i.e. scanners). Capture distances range from 5 cm to 5 km.
How Does It Work?

A 3D Flash LIDAR camera’s readout semiconductors enable each pixel in the focal plane array to act independently, measuring the range and intensity of every pixel (point) in the camera's field of view. Using an Avalanche Photodiode Detector (APD) hybridized with a CMOS focal plane array, the 3D Flash LIDAR camera operates like a 2D digital camera with “smart 3D pixels” in an array, each recording the time the camera’s laser pulse requires to travel to and from the objects illuminated in the scene.

The image capture process using 3D Flash LIDAR is:
1. A short-duration pulsed laser illuminates the area in front of the camera. This illumination is "back-scattered" to the receiver by the objects in front of the camera (the scene).

2. The laser’s aperture uses a diffuser to shape and convert the laser output to a square “top hat” pattern, increasing the efficiency and uniformity of the illumination while matching the field of view. This creates a number of opportunities to enhance specific illumination attributes that result in better imaging of challenging objects such as wires at long range.

3. The photonic energy back-scatter is collected by an optical lens and focused onto the hybrid focal plane array of 3D pixels.

4. In one use model, the solid-state APD detector pixels produce an avalanche of photo-electrons from the incoming photons at each pixel. The gain of the APD detector determines how many electrons are produced per photon.

5. A second use model substitutes PIN diodes for the APD detector, using the same laser wavelengths.

6. A third model provides for using CMOS detectors, but these involve using lasers in the visible spectrum for illumination.

In all use cases, the 3D focal plane:
A. images the scene,
B. has independent triggers and counters to record the time-of-flight of the laser pulse to and from the objects,
C. records a time sample of the returned pulse, and
D. records the intensity (albedo) of the reflection.

Each pixel in the detector array captures independently and is connected to an amplifying and threshold circuit on the CMOS read-out IC (ROIC). A counter measures the time-of-flight (TOF) of the reflected light imaged on the APD/CMOS hybrid array. The range to any object or surface in the scene is computed along with the intensity of the reflected light; both are used to produce 3D video output at various frame rates (e.g. 1–60 Hz).
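
The counter described above reduces to a simple conversion: ticks of the ROIC clock become a round-trip time, and half that time multiplied by the speed of light is the range. The 1 GHz clock rate below is an illustrative assumption, not a specification of any particular camera:

```python
# Converting a per-pixel time-of-flight counter value into range.
# The 1 GHz counter clock is an illustrative assumption.
C = 299_792_458.0  # speed of light, m/s

def ticks_to_range_m(ticks: int, tick_hz: float) -> float:
    """Range implied by a TOF counter reading at the given clock rate."""
    tof_s = ticks / tick_hz      # round-trip time of flight
    return C * tof_s / 2.0       # halve it: the light travels out and back

# One tick of a 1 GHz counter corresponds to about 15 cm of range,
# which sets the native range resolution of that counter.
print(round(ticks_to_range_m(1, 1e9), 3))  # → 0.15
```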

3D Flash LIDAR sensor engines can capture laser wavelengths ranging from 500 nm to 1700 nm, depending on the semiconductor detector material used. The prevalent use model favors InGaAs because of the efficiencies achieved at 1.06 to 1.7 micron laser wavelengths. Because 3D Flash LIDAR focal planes combine dissimilar materials (e.g. InGaAs and CMOS), they are created by hybridizing the detector and read-out semiconductors with indium "bumps" that form the inter-chip connections of the focal plane array.

The typical laser wavelength for 3D Flash LIDAR cameras used in proximity to humans is the eye-safe wavelength of 1.57 microns, preferred because it is blocked by the cornea, preserving the integrity of the retina.

Objects in the field of view at the same range may return differing numbers of photons depending on their reflective characteristics, resulting in distinct differences in intensity values (the 2D image). This intensity difference is useful when imaging diverse environments such as city streets with distinctive lane markings and street signs (e.g. road surface vs. markings).

Because the velocity of light is a universal constant, accurate range data is a direct and simple calculation, as opposed to non-time-of-flight imaging systems, whose range is interpolated.

A different time-of-flight 3D camera has been invented that does not require counters to measure the time-of-flight of the illuminating pulse; instead, it measures this time indirectly by synchronous gating of the received pulse, detecting the received photons with an ordinary image sensor[5], and requires sequential gating to obtain 3D images/depth. This approach is less functional in that it is limited in range and depth acquisition: a reflected pulse does not trigger the 3D depth pixels; instead, the acquisition gate must be open when the reflected pulse arrives. Under this paradigm, the range resolution depends on the gate spacing, and if there is no knowledge of object positions, the entire depth space must be gated.

Interrogating the entire space of interest slice-by-slice is an inefficient use of laser energy and a time-intensive process requiring multiple laser pulses. Because the gated system uses a visible-light sensor, a non-eye-safe laser wavelength is required. For longer-range, more intense laser applications, the signal for each gate must sum the results of many laser pulses. In general, the gating and the signal summing severely limit time resolution. By contrast, 3D Flash LIDAR uses only a single eye-safe laser pulse to interrogate the entire depth space with each frame; only the laser repetition frequency limits the time resolution (number of frames).
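
The pulse-budget difference between the two approaches can be made concrete. The sketch below, with an assumed depth window and gate width, counts how many pulses a slice-by-slice gated system needs to cover the space a flash system interrogates with one pulse:

```python
import math

# Pulse budget for gated (slice-by-slice) acquisition versus flash.
# Depth window, gate width, and pulses-per-gate are illustrative assumptions.
def gated_pulses(depth_window_m: float, gate_width_m: float,
                 pulses_per_gate: int = 1) -> int:
    """Pulses needed to sweep the whole depth window one gate at a time."""
    return math.ceil(depth_window_m / gate_width_m) * pulses_per_gate

# Sweeping a 150 m depth window in 1 m gates takes 150 pulses - more if
# each gate must sum several pulses - while a flash system interrogates
# the same window with a single pulse.
print(gated_pulses(150.0, 1.0))  # → 150
```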

LIDAR Applications

3D Flash LIDAR video cameras provide accurate 3D representations (models) of the scene, including measurements and real-time imaging through obscurants such as dust, cloud, or smoke. The "framing camera" nature of 3D Flash LIDAR cameras and their real-time 3D video output make them ideal for fast-moving-vehicle solutions such as automotive or aviation.

Supporting various data capture modes, the 3D data streamed from a 3D FLVC is presented in three forms: RAW data, Range & Intensity (R&I) data, or SULAR data. RAW data, as the name suggests, is raw time-of-flight and intensity data for each pixel of each frame, including 20 or more pulse-shape samples per laser pulse. RAW mode allows various algorithms to be applied to the data after capture. R&I mode processes the RAW data with on-camera algorithms in real time, producing an entire frame of x, y, depth (z), and intensity (i) data for all the pixels in the frame.
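
As a sketch of how RAW data might be reduced to R&I, the toy code below picks the strongest of a pixel's pulse-shape samples as the return, converts its time to range, and keeps its amplitude as intensity. The 20-sample waveform, the sample spacing, and the peak-picking rule are illustrative assumptions, not the camera's actual algorithm:

```python
# Reducing one pixel's RAW pulse-shape samples to Range & Intensity.
C = 299_792_458.0  # speed of light, m/s

def raw_to_ri(samples, t0_s, dt_s):
    """Return (range_m, intensity) from a sampled return pulse.

    t0_s is the time of the first sample; dt_s is the sample spacing.
    """
    peak = max(range(len(samples)), key=lambda i: samples[i])
    tof_s = t0_s + peak * dt_s           # time of the strongest return
    return C * tof_s / 2.0, samples[peak]

# A toy 20-sample waveform with its peak at sample 7.
wave = [0, 1, 2, 4, 9, 20, 45, 80, 44, 19, 8, 4, 2, 1, 0, 0, 0, 0, 0, 0]
rng_m, intensity = raw_to_ri(wave, t0_s=600e-9, dt_s=5e-9)
print(round(rng_m, 1), intensity)  # → 95.2 80
```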

SULAR data is captured using a gated mode for scenes where individual pixels could be triggered by an obscurant such as dust or smoke. In this mode, the initial trigger is suppressed and pulse sampling occurs in all pixels simultaneously at specified increments. Hard-target triggering is suppressed to avoid imaging just the outer edge of the obscurant, and a sequence of variable-sized range gates can be applied at predetermined depths. Within the camera's field of view, the gated volume can automatically be moved deeper into the obscurant with each successive laser pulse.

SULAR range gating is effective because it captures only gated light rather than integrating all the light reflected by the obscurant's surface. In this way the noise is greatly reduced and the signal-to-noise ratio is raised to the level of detection. Some photons get through the obscurant, reflect from the targets within it, and pass back out through the obscurant to be collected by the receive aperture and focused on the focal plane array.

A diffuse attenuation length characterizes this process: the number of photons transported through one attenuation length is about 1/3 of those that entered the attenuating or scattering medium. Objects can typically be 3D imaged at 3 to 5 attenuation lengths.
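
The "about 1/3" figure above is the usual 1/e of exponential attenuation; the short sketch below tabulates the surviving photon fraction at several attenuation lengths:

```python
import math

# Exponential photon loss through a scattering medium: after n diffuse
# attenuation lengths, a fraction exp(-n) of the photons survive.
def surviving_fraction(n_lengths: float) -> float:
    return math.exp(-n_lengths)

print(round(surviving_fraction(1), 2))   # → 0.37, roughly the 1/3 above
print(round(surviving_fraction(3), 3))   # → 0.05
print(round(surviving_fraction(5), 4))   # → 0.0067
# At the 3-5 attenuation lengths where 3D imaging is still feasible, only
# a few percent down to well under 1% of the photons get through (and the
# return trip through the obscurant attenuates them again).
```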
3D Flash LIDAR Developers

3D Flash LIDAR camera systems are available from Advanced Scientific Concepts, Inc. (ASC 3D) and Raytheon Vision Systems (RVS), both based in Santa Barbara, California. ASC has shipped various iterations of its products for space (STS-127 and STS-133), unmanned air and ground vehicles, and surveillance, and remains the key contributor to the technology’s development; RVS has limited its activity to the NASA Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective (DTO) tested on STS-134. Little public information is available about the RVS solution.

ASC 3D Flash LIDAR cameras currently have the equivalent of 16,384 range-finders on each sensor chip, allowing the sensor to act as a 3D video camera with functionality well beyond range finding. Mapping roughly half a million points per second, with on-camera processing generating 128 × 128 range maps at 30 Hz, ASC has demonstrated single-pulse 3D Flash LIDAR imagery over physical ranges from centimeters to kilometers.
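
The figures in this paragraph are mutually consistent, as a quick check shows:

```python
# Quick arithmetic check on the quoted figures: a 128 x 128 array at 30 Hz.
pixels = 128 * 128
points_per_second = pixels * 30

print(pixels)             # → 16384, the "range-finders" per sensor chip
print(points_per_second)  # → 491520, i.e. roughly half a million points/s
```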

NASA Langley Research Center has published papers suggesting enhancements to 3D Flash LIDAR data models applicable to all 3D Flash LIDAR products. Analysis of processed Flash LIDAR data indicates that an 8-times resolution ("super resolution") enhancement is feasible. Study of the processed data also shows a reduction in random noise as multiple image frames are blended to create a single high-resolution DEM.
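
The noise-reduction effect follows from averaging: blending N frames with independent random noise shrinks the noise standard deviation by a factor of sqrt(N). The per-frame sigma below is an illustrative assumption, not a measured camera figure:

```python
import math

# Random-noise reduction from blending N frames into one DEM:
# independent noise averages down by a factor of sqrt(N).
sigma_single_m = 0.15  # assumed per-frame range noise, metres

def blended_sigma(n_frames: int) -> float:
    return sigma_single_m / math.sqrt(n_frames)

for n in (1, 4, 16, 64):
    print(n, round(blended_sigma(n), 5))
# Under these assumptions, blending 64 frames cuts the random
# noise by a factor of 8.
```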

3D Flash LIDAR Characteristics

• Full frame time-of-flight data collected with single laser pulse

• Unambiguous direct calculation of absolute range

• Full frame rates with area array technology

• Blur-free images without motion distortion

• Co-registration of range and intensity for each pixel

• Pixels are perfectly registered within a frame

• Ability to represent objects that are oblique to the camera

• Non-mechanical (no need for precision scanning mechanisms)

• Calibration done at manufacturing time

• Smaller and lighter than point scanning systems

• Low power consumption

• Ability to “see” into obscuration (range-gating)

• Eye-Safe laser assembly

• Combine 3D Flash LIDAR with 2D cameras (EO and IR) for 2D texture over 3D depth

• Possible to combine multiple 3D Flash LIDAR cameras for a full volumetric 3D scene

Sample Applications for 3D Flash LIDAR

Automotive
  • Autonomous Navigation
  • Collision avoidance

Aviation
  • Helicopter brownout
  • Wire detection
  • Landing zone mapping

Defense
  • Unmanned ground vehicles
  • Unmanned air vehicles (UAV/UAS)
  • Laser trackers

Marine

Robotics
  • Unmanned Ground Vehicles

Industrial

Space
  • 3D FLVC have been tested on STS-127, STS-133 and STS-134, and are planned for deployment by SpaceX for Autonomous Rendezvous & Docking on their Dragon vehicle
  • Autonomous Rendezvous & Docking
  • On-orbit Satellite Servicing
  • Entry Descent and Landing
  • Autonomous Landing and Hazard Avoidance, Collision Avoidance, Situational Awareness

Surveillance
  • Threat Detection
  • Object ID

Topographical Mapping

Transportation
  • Train collision avoidance (through fog/obscuration), track profiling
  • Airport bird detection
  • Traffic control and monitoring system

The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 