A 3DTV IMPLEMENTATION FOR E2E COMMUNICATION
Provided by: Ramana1
Slides: 18
1
A 3DTV IMPLEMENTATION FOR E2E COMMUNICATION
2
REQUIREMENTS
  • Free-viewpoint TV (FTV) implementation for a meeting
    room scenario consisting of at least two actors and
    the environment.
  • Must synthesize virtual views given
  • Motion of actors and of parts of the environment.
  • Possibility of new actors entering the scene from the
    environment.
  • Occlusion of actors and environment for one or more
    sensors sampling the scene.
  • Free to use sensors other than visual cameras.

3
Possible approaches
  • Sample parts of the environment a priori.
  • Environment map updated using a real-time mosaic,
    with visual hulls for the actors. Needs the scene to
    be simple (unsuitable if the room contains a large
    number of objects).
  • A detailed 3D reconstruction map using ToF depth
    sensors for static environment parts and visual hulls
    for mobile objects and live actors.

4
Possible approaches
  • For occlusions, given a desired viewpoint, compute
    the best camera view for texture mapping and fill
    holes using the other camera images.
  • Sometimes none of the camera images can give complete
    information: camera panning and recalibration using
    structured lighting?
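The "best camera view" step above could be sketched as picking the camera whose viewing direction deviates least from the desired virtual viewpoint. A minimal sketch; the function name and the three-camera rig below are hypothetical, not from the slides:

```python
import numpy as np

def best_camera_for_view(virtual_dir, camera_dirs):
    """Pick the camera whose viewing direction is closest to the
    desired virtual viewpoint's direction (smallest angular
    deviation). Directions are vectors from the scene toward each
    camera; they need not be pre-normalized."""
    virtual_dir = virtual_dir / np.linalg.norm(virtual_dir)
    scores = []
    for d in camera_dirs:
        d = d / np.linalg.norm(d)
        scores.append(np.dot(virtual_dir, d))  # cos(angle); larger = closer
    return int(np.argmax(scores))

# Hypothetical rig: three cameras around a meeting table.
cams = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.7, 0.7, 0.0])]
print(best_camera_for_view(np.array([0.6, 0.8, 0.0]), cams))  # 2
```

Hole filling would then blend pixels from the remaining cameras, ranked by the same score.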

5
PAST WORK: REAL-TIME MOSAICS
  • Real-time Mosaic for Multi-camera Videoconferencing
    (Anton Klechenov, Wee Kheng Leow et al., 2002).
  • Video mosaic generated using multiple camcorders;
    changes in the environment detected in real time and
    the mosaic updated.
  • Changes detected through image subtraction, and the
    corresponding part of the mosaic updated.
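The subtraction-and-update loop could look like this. A minimal grayscale sketch assuming the incoming frame is already aligned to the mosaic; the threshold and array sizes are arbitrary:

```python
import numpy as np

def update_mosaic(mosaic, frame, threshold=25):
    """Detect changed pixels by absolute difference against the
    current mosaic, then copy only those pixels into the mosaic
    (a Klechenov et al.-style update, sketched)."""
    diff = np.abs(mosaic.astype(np.int16) - frame.astype(np.int16))
    changed = diff > threshold          # boolean change mask
    mosaic[changed] = frame[changed]    # update only changed regions
    return mosaic, changed

mosaic = np.full((4, 4), 100, dtype=np.uint8)
frame = mosaic.copy()
frame[1, 1] = 200                       # a scene change
mosaic, mask = update_mosaic(mosaic, frame)
print(mosaic[1, 1], mask.sum())         # 200 1
```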

6
REAL-TIME MOSAICS
  • Real-Time Panorama Generation and Display in
    Tele-immersive Applications (Wai-Kwan Tang et
    al., 2005)
  • To obtain a large FOV and avoid occlusions from
    supporting structures for air-traffic control.
  • Real-time panoramas created using groups of flexible
    cameras arranged so that their centers of projection
    roughly coincide.

7
REAL-TIME MOSAICS
  • Real-time mosaicing using PTZ cameras for motion
    detection through background subtraction (Azzari
    et al. 2005).
  • To generate real-time, high quality mosaic from
    images captured using a PTZ camera.
  • Consecutive frames captured by the camera are
    registered after detecting and removing foreground
    objects.
  • Errors corrected while blending the current frame
    onto the mosaic through frame registration.

8
FTV implementations
  • View interpolation for dynamic scenes (Xiao, Rao,
    Shah, EG 2002).
  • Morphing between images where scene objects have
    undergone motion.
  • Images divided into several layers, where each layer
    consists of objects undergoing the same rigid-body
    transformation and is mapped using a fundamental
    matrix.
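The per-layer mapping rests on estimating a fundamental matrix from point correspondences. Below is a generic (unnormalized) eight-point sketch, not the paper's own code, exercised on a synthetic pure-translation rig with made-up 3D points:

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Unnormalized eight-point algorithm: estimate F such that
    x2^T F x1 = 0 for homogeneous correspondences (Nx3 arrays).
    For real images a normalized variant is preferable."""
    A = np.stack([
        x2[:, 0]*x1[:, 0], x2[:, 0]*x1[:, 1], x2[:, 0],
        x2[:, 1]*x1[:, 0], x2[:, 1]*x1[:, 1], x2[:, 1],
        x1[:, 0],          x1[:, 1],          np.ones(len(x1)),
    ], axis=1)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)         # enforce the rank-2 constraint
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

# Synthetic rig: identity intrinsics, second camera translated along x,
# so the true F is the skew-symmetric matrix of t = (1, 0, 0).
X = np.array([[0.1, 0.2, 2.0], [0.4, -0.3, 3.0], [-0.5, 0.1, 2.5],
              [0.2, 0.5, 4.0], [-0.2, -0.4, 3.5], [0.6, 0.0, 2.2],
              [0.0, 0.3, 5.0], [-0.1, -0.2, 2.8]])
x1 = np.c_[X[:, 0]/X[:, 2], X[:, 1]/X[:, 2], np.ones(len(X))]
x2 = np.c_[(X[:, 0]+1)/X[:, 2], X[:, 1]/X[:, 2], np.ones(len(X))]
F = fundamental_8pt(x1, x2)
residuals = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))
print(residuals.max() < 1e-6)           # epipolar constraint holds
```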

9
FTV implementations
  • Tri-view morphing (Xiao, Shah 2004): navigating a
    scene based on three wide-baseline uncalibrated
    images.
  • Trifocal plane (instead of epipolar plane) determined
    to morph within the 2D space of the cameras.
  • With multiple moving objects, images are segmented
    into layers, each with one moving object, as earlier.

10
FTV implementations
  • Virtual view synthesis of people from multiple
    view video sequences (Starck, Hilton 2005).
  • Synthesizing virtual views of moving humans with the
    same quality as the footage captured from multiple
    cameras.
  • Initial estimate of scene geometry from multiple-view
    silhouette images, refined using view-dependent scene
    optimization.
  • High-quality virtual views and 3D geometry for videos
    captured in a blue-screen studio.

11
FTV implementations
  • Depth map creation and IBR for advanced 3DTV
    services (Kauff et al., 2006).
  • Video-plus-depth representation: scene captured using
    an array of n cameras placed in a straight line so as
    to form n-1 stereo pairs.
  • Video-plus-depth obtained through rectification,
    disparity matching, depth-map creation, and
    de-rectification of the input images.
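The disparity-matching and depth-map stages can be illustrated with a single-pixel block matcher and the pinhole stereo relation Z = f·B/d. A sketch; the block size, search range, focal length, and baseline are illustrative values, not from the paper:

```python
import numpy as np

def disparity_sad(left, right, x, y, block=3, max_disp=8):
    """Find the disparity at (x, y) by sliding a block along the
    rectified scanline and minimizing the sum of absolute
    differences (SAD). A single-pixel sketch of disparity matching."""
    h = block // 2
    ref = left[y-h:y+h+1, x-h:x+h+1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        if x - d - h < 0:
            break
        cand = right[y-h:y+h+1, x-d-h:x-d+h+1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(d, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / d

rng = np.random.default_rng(0)
left = rng.integers(0, 255, (16, 32), dtype=np.uint8)
true_d = 4
right = np.roll(left, -true_d, axis=1)   # features shift left by 4 px
d = disparity_sad(left, right, x=16, y=8)
print(d, depth_from_disparity(d, focal_px=800, baseline_m=0.1))  # 4 20.0
```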

12
FTV implementations
  • Using multiple stereo pairs ensures that
    information about occluded regions (for one pair)
    can be derived from other camera pairs.
  • Parallax is rendered with respect to a virtual
    camera at the display.
  • Results demonstrated for real-life scenes; however,
    the baseline distance between camera pairs is small.

13
FTV implementations
  • Viewpoint entropy (VPE) measure for obtaining good
    views (Vazquez et al., TCVG 2002). VPE: the amount of
    scene information that can be captured from a point.
    "Good" is defined to be application-specific.
  • No work on evaluating goodness of view for maximizing
    scene-coverage quality.
  • Automatic selection of reference views for IBR
    (Hlavac et al., 1996): a set of reference views
    around an object chosen so that the error is within a
    threshold for intermediate views.
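Viewpoint entropy itself is straightforward to compute once the projected area of each visible face is available from a renderer. A sketch; the area numbers below are made up:

```python
import math

def viewpoint_entropy(projected_areas, total_area):
    """Viewpoint entropy in the spirit of Vazquez et al.:
    -sum p_i log2 p_i, where p_i is the fraction of the image
    covered by face i (plus background). Higher entropy means the
    view spreads coverage more evenly over the scene's faces."""
    e = 0.0
    for a in projected_areas:
        p = a / total_area
        if p > 0:
            e -= p * math.log2(p)
    return e

# Two candidate viewpoints over four faces: uniform coverage scores higher.
print(viewpoint_entropy([25, 25, 25, 25], 100))  # 2.0 bits
print(viewpoint_entropy([70, 10, 10, 10], 100))  # lower
```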

14
FTV implementations
  • Method consists of interval growing (for maximizing
    scene coverage between adjacent views) and selection
    (to select the minimum number of views) from a large
    number of views.
  • Visual appearance dissimilarity (VAD) measures the
    error between reference and interpolated views.
  • Plausible viewing interval (PVI): views within the
    interval can be interpolated from the PVI endpoints
    with acceptable VAD, even for two very different
    projections.

15
Non-visual sensors for 3D scene reconstruction
  • Burak et al. (2007) propose a Time-of-Flight (ToF)
    depth-sensor-based camera system that generates
    real-time depth images.
  • The system consists of a laser or LED source and an
    array of pixels sensing the phase of the incoming
    light.

16
Non-visual sensors for 3D scene reconstruction
  • The distance of the object is linearly proportional
    to the phase shift of the received light. Depth
    resolution of a few millimeters.
  • Parvizi and Wu (2007): 3D head tracking using a ToF
    depth sensor.
  • Zhu et al. (2008) combine ToF and stereo imaging,
    which have complementary characteristics.
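The linear phase-to-distance relation can be written directly. A sketch; the 20 MHz modulation frequency is an illustrative value, not from the slides:

```python
import math

def tof_distance(phase_rad, mod_freq_hz, c=299_792_458.0):
    """Continuous-wave ToF: light travels out and back (2d), so the
    measured phase shift gives d = c * phase / (4 * pi * f_mod),
    i.e. distance is linear in phase for a fixed modulation
    frequency."""
    return c * phase_rad / (4 * math.pi * mod_freq_hz)

# At 20 MHz modulation the unambiguous range is c / (2 f) ~ 7.49 m;
# a phase shift of pi corresponds to half of that.
print(round(tof_distance(math.pi, 20e6), 3))  # 3.747
```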

17
Structured lighting