1
3D Time of Flight Sensors for Robot Navigation
  • Mohammed Rizwan Adil, Chidambaram Alagappan,
    and Swathi Dumpala Basaveswara

2
Robots
  • Robots are gaining immense importance.
  • The presence of robots is being felt in all
    walks of life.
  • Image detection has become a prerequisite for
    effective navigation.
  • The robot should be able to extract all the
    necessary information from its sensors.

3
Image detection
  • Conventional 2D images capture brightness but
    not depth.
  • Therefore 3D time-of-flight (ToF) cameras are
    being used.
  • The depth information is depicted using color
    codes.
  • 3D ToF cameras combine accurate distance
    measurement with a camera-based system.
  • The presentation ends with a discussion of PMD
    and the pseudo-four-phase-shift algorithm.

4
Introduction
  • Four building blocks of navigation:
  • 1. Perception - the robot must be able to
    interpret meaningful data from its sensors
  • 2. Localization - the robot must be able to
    determine its position with respect to the
    environment
  • 3. Cognition - the robot must be able to plan
    its path
  • 4. Motion control - the mechanical traversal
    along the planned path

5
Simultaneous Localization and Mapping (SLAM)
  • In most cases, exploring an unknown environment
    by building a map and determining the robot's
    position relative to it are performed at the
    same time, through a process known as
    Simultaneous Localization and Mapping (SLAM).

6
Several methods to obtain 3D images
  • An image from a stereo vision camera, which
    provides 3D details of an object, can be fused
    with the measurements of a 2D laser range
    finder.
  • Stereo vision requires complicated algorithms
    and powerful sensors to construct its occupancy
    grid, and despite all this it is prone to error.

7
SfM - Structure from Motion
  • Works on the assumption that the object moves
    relative to the camera.
  • Trajectories of image points across frames are
    used to estimate dimensions.
  • The technique will not work if the object is
    non-rigid or dynamic (like flowing water).

8
Stereo vision vs. kinetic depth technique
  • In stereo vision, the image and the laser range
    finder data corresponding to the same instant
    have to be overlapped to obtain a 3D view.
  • In the kinetic depth technique, images of the
    same object have to be taken at two different
    times - either way, both techniques require
    data fusion, which costs computing power.

9
Laser Range Scanners
  • A laser range scanner works on the principle of
    calculating the distance from the observer to a
    particular point, as in the relation below.
  • Laser range scanners provide sparse data sets,
    rely on mechanical components, and do not
    deliver a full 3D image in a single capture.
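
For a pulsed scanner this is the classic round-trip relation (standard physics, not spelled out on the slide):

    d = \frac{c \, \Delta t}{2}

where \Delta t is the time between emitting the laser pulse and receiving its reflection.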

10
ToF cameras
  • Time-of-flight cameras combine the features of
    active range sensors and camera-based
    approaches, providing a combined image that
    contains both the intensity and the distance of
    every point.
  • There is no fusion of data from two separate
    sources, and the data is gathered continuously.

11
Principle behind time-of-flight cameras
  • Light reflected from points that are farther
    from the camera takes longer to return to it.
  • The distance to the object is calculated using
    properties of light and the phase shift of the
    modulation envelope of the light source.
  • The phase and amplitude of the reflected light
    can be detected using various signal processing
    techniques. Usually, to get high resolution,
    CCD-based sensors are employed.
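
The slides leave the relation implicit; for continuous-wave modulation at frequency f_mod, the standard phase-to-distance formula is

    d = \frac{c \, \Delta\varphi}{4 \pi f_{mod}}, \qquad
    d_{max} = \frac{c}{2 f_{mod}}

where \Delta\varphi is the measured phase shift and d_{max} is the unambiguous range (7.5 m at a typical f_mod of 20 MHz).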

12
CMOS ToF camera
  • CMOS-chip-based cameras appear most widely in
    the literature.

13
  • CMOS sensors usually have a 64x64 pixel array
    and are implemented on a single chip using an
    ordinary, low-cost CMOS process.
  • The chip also needs an ADC and a mechanism to
    generate high-speed modulation signals.
  • The main part of the sensor design is the unique
    pixel structure.

14
Unique pixel structure
15
  • The differential structure accumulates
    photogenerated charges in two collection nodes
    using two modulated gates.
  • The gate modulation signals are synchronized
    with the light source, and hence, depending on
    the phase of the incoming light, one node
    collects more charge than the other, as modeled
    in the sketch below.
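
A toy numerical model of this differential two-tap behavior, assuming sinusoidally modulated illumination and ideal complementary square-wave gates (all names and parameter values are illustrative, not taken from the slides):

    import numpy as np

    def two_tap_pixel(phase, f_mod=20e6, amp=1.0, offset=1.0, n_samples=4096):
        # Received optical power over one modulation period, delayed by 'phase'.
        t = np.linspace(0.0, 1.0 / f_mod, n_samples, endpoint=False)
        light = offset + amp * np.cos(2.0 * np.pi * f_mod * t - phase)
        # Node A integrates while the in-phase gate is open; node B is
        # complementary, mimicking the two modulated gates described above.
        gate_a = np.cos(2.0 * np.pi * f_mod * t) >= 0.0
        return light[gate_a].sum(), light[~gate_a].sum()

    # The differential charge q_a - q_b varies as cos(phase): the node
    # aligned with the returning light collects more charge.
    for phase in (0.0, np.pi / 2, np.pi):
        q_a, q_b = two_tap_pixel(phase)
        print(f"phase={phase:4.2f}  q_a - q_b = {q_a - q_b:+8.1f}")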

16
Calculating the depth resolution
17
Resolution (cont'd)
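
The equations on these two slides were not transcribed. For reference, a widely cited shot-noise-limited expression for phase-based ToF depth resolution (after Lange) is

    \Delta L = \frac{L}{\sqrt{8}} \cdot \frac{\sqrt{B}}{A}, \qquad
    L = \frac{c}{2 f_{mod}}

where A is the demodulation amplitude and B the measured offset (background light plus mean signal): resolution improves with modulation frequency and signal amplitude, and degrades with background light.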
18
Enhancement of Depth Images
  • Optical noise, unmatched object boundaries, and
    temporal inconsistency are the three critical
    problems a ToF depth image suffers from.
  • Techniques like Gaussian smoothing and quadratic
    Bezier curves are used for static 3D images.
  • However, for enhancement of dynamic images,
    newly designed joint bilateral filtering
    (sketched below), color-segmentation-based
    boundary refinement, and motion-estimation-based
    temporal consistency are used.
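
A minimal sketch of the joint (cross) bilateral filtering idea, assuming a registered grayscale guidance image; this is the generic textbook formulation, not necessarily the authors' exact filter:

    import numpy as np

    def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=10.0):
        # Smooth a noisy ToF depth map; the range weights come from the
        # registered color/intensity image, so depth edges stay aligned
        # with object boundaries instead of being blurred away.
        h, w = depth.shape
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
        pad_d = np.pad(depth.astype(float), radius, mode="edge")
        pad_g = np.pad(guide.astype(float), radius, mode="edge")
        out = np.empty((h, w))
        for y in range(h):
            for x in range(w):
                win_d = pad_d[y:y + 2*radius + 1, x:x + 2*radius + 1]
                win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
                # Range kernel evaluated on the guide image, not on depth.
                rng = np.exp(-(win_g - guide[y, x])**2 / (2.0 * sigma_r**2))
                wgt = spatial * rng
                out[y, x] = (wgt * win_d).sum() / wgt.sum()
        return out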

19
Bilateral Filter
  • Constructed using both color and depth
    information at the same time.
  • After color-segmenting the color image, the
    segment set is extracted to detect object
    boundaries.
  • To minimize temporal depth flickering artifacts
    on stationary objects, the previous and current
    frame color images are matched, as sketched
    below.
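
A sketch of that frame-matching idea (the threshold tau and blend weight alpha are illustrative assumptions, not values from the work):

    import numpy as np

    def suppress_depth_flicker(depth_prev, depth_cur, gray_prev, gray_cur,
                               tau=5.0, alpha=0.5):
        # Where consecutive color frames agree, the scene is likely
        # stationary, so blend in the previous depth to damp flickering.
        static = np.abs(gray_cur.astype(float) - gray_prev.astype(float)) < tau
        out = depth_cur.astype(float).copy()
        out[static] = (alpha * depth_prev.astype(float)[static]
                       + (1.0 - alpha) * out[static])
        return out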

20
Review of latest developments
  • These cameras are able to provide registered
    dense depth and intensity images, complete image
    acquisition at a high frame rate, and a small,
    compact design.
  • They don't need any moving parts and provide
    their own illumination.

21
(No Transcript)
22
(No Transcript)
23
Errors and Compensations for ToF cameras
  • Systematic errors:
  • 1. Depth distortion
  • 2. Integration-time-related errors
  • 3. Built-in pixel-related errors
  • 4. Amplitude-related errors
  • 5. Temperature-related errors

24
  • Non-systematic errors:
  • 1. Low signal-to-noise ratio (SNR)
  • 2. Multiple light reception
  • 3. Light scattering
  • 4. Motion blurring

25
Photonic Mixer Devices
26
PMD (cont'd)
  • Photonic Mixer Devices are also based on the ToF
    principle and can realize a 3D image without
    complex electronics, similar to a CMOS device.
  • In a PMD, instead of a single laser beam (which
    would have to be scanned over the scene to
    obtain 3D), the entire scene is illuminated with
    modulated light.

27
(No Transcript)
28
Pseudo-Four-Phase-Shift Algorithm for Performance
Enhancement of 3D-TOF Vision Systems
29
  • Only two image captures instead of four are
    required to calculate the phase difference φ,
    as sketched below.
  • The frame rate of PMD ToF sensors is thereby
    doubled without changing the integration time
    T_int.
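
A sketch of why two two-tap captures can replace four, reusing the differential pixel model from slide 15 (function names are mine, not the paper's):

    import numpy as np

    C = 3.0e8  # speed of light, m/s

    def phase_four_captures(a0, a90, a180, a270):
        # Classic four-phase shift: one capture per gate offset
        # of 0, 90, 180, and 270 degrees.
        return np.arctan2(a90 - a270, a0 - a180) % (2.0 * np.pi)

    def phase_two_captures(d0, d90):
        # Pseudo-four-phase: a two-tap pixel delivers a0 and a180 together
        # as the differential d0 = q_a - q_b; a second capture with gates
        # shifted by 90 degrees yields d90 = a90 - a270. Same estimate,
        # half the captures, hence double the frame rate at fixed T_int.
        return np.arctan2(d90, d0) % (2.0 * np.pi)

    def phase_to_distance(phi, f_mod=20e6):
        # Same phase-to-distance relation as on slide 11.
        return C * phi / (4.0 * np.pi * f_mod)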

30
Thanks