3D Video Surveillance with Augmented Virtual Environments

Transcript and Presenter's Notes
1
3D Video Surveillance with Augmented Virtual Environments
  • Ismail Oner Sebe, Jinhui Hu, Suya You, Ulrich Neumann
  • Integrated Media Systems Center
  • University of Southern California

2
Problem Statement
  • Imagine dozens of video/data streams from people,
    UAVs, and robot sensors distributed and moving
    through a scene
  • Problem: visualization as separate
    streams/images provides no integration of
    information, no high-level scene comprehension,
    and obstructs collaboration

3
A Simple Example: USC Campus
(Three separate video streams of the campus, labeled 1-3)
Visualization as separate streams provides no
integration of information, no high-level scene
comprehension, and obstructs collaboration
4
Motivation
  • Current Surveillance Monitoring Center
  • Overwhelmed with data fusion and comprehension of
    multiple image streams.
  • Limited number of displays
  • Waste of Resources
  • Better Surveillance System
  • Better understanding of streams
  • Better use of resources
  • Additional capabilities: tracking, statistics

USC Public Security Surveillance Center
5
Outline
  • Augmented Virtual Environment (AVE)
  • Surveillance with AVE
  • Dynamic Object Detection
  • Visualization
  • Conclusion and Future work

6
AVE: Fusion of 2D Video/Image and 3D Model
  • A VE captures only a snapshot of the real world,
    and therefore lacks any representation of dynamic
    events and activities occurring in the scene
  • The AVE approach uses sensor models and 3D models
    of the scene to integrate dynamic video/image
    data from different sources
  • Visualize all data in a single context to
    maximize collaboration and comprehension of the
    big picture
  • Address dynamic visualization and change
    detection

7
AVE vs. Others
  • Augmented Virtual Environment [1]
  • Fusion of dynamic imagery with 3D models in a
    real-time display to help observers comprehend
    multiple streams of temporal data and imagery
    from arbitrary views of the scene
  • Related Work
  • Distributed Interactive Video Array (DIVA) at
    UCSD [2]
  • VideoFlashlight at Sarnoff Corporation [3]
  • Video Surveillance and Monitoring (VSAM) at CMU [4]

[1] Neumann U., You S., Hu J., Jiang B., and Lee J., "Augmented Virtual Environments (AVE): Dynamic Fusion of Imagery and 3D Models," Proc. IEEE Virtual Reality (VR) 2003, March 2003.
[2] Hall B. and Trivedi M., "A novel graphical interface and context aware map for incident detection and monitoring," 9th World Congress on Intelligent Transport Systems, October 2002.
[3] Kumar R., Sawhney H.S., Guo Y., Hsu S., and Samarasekera S., "3D manipulation of motion imagery," Proc. ICIP 2000, September 2000.
[4] Kanade T., Collins R., Lipton A., Burt P., and Wixson L., "Advances in cooperative multi-sensor video surveillance," Proc. of DARPA Image Understanding Workshop, Vol. 1, pp. 3-24, 1998.
8
AVE System Components
  • Accurate 3D models
  • Scene model as substrate
  • Accurate 3D sensor models
  • Sensor calibration & tracking
  • Image analysis
  • Detection & tracking of moving objects (people,
    vehicles) and pseudo-models
  • Dynamic visualization
  • Data fusion & video projection

9
AVE Requires a 3D Scene Model (Substrate)
  • Approach
  • Model reconstruction
  • Input: LiDAR point cloud
  • Output: 3D mesh model (see the sketch below)
  • Automated
  • Building extraction
  • Vegetation removal
  • Building detection
  • Model fitting
  • Semi-automated

3D model of USC campus
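
The reconstruction step is only named on the slide; as a rough illustration, the Python/numpy sketch below grids LiDAR returns into a height map and triangulates it into a mesh. The cell size and the keep-the-highest-return rule are assumptions, and the semi-automated building extraction and model fitting are not shown.

import numpy as np

def lidar_to_mesh(points, cell=1.0):
    """Grid an (N, 3) LiDAR point cloud into a height map and triangulate
    it into a simple mesh (vertices, triangle indices). Illustrative
    substrate model only, not the paper's building-extraction pipeline."""
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    rows, cols = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    height = np.full((rows, cols), np.nan)
    # Keep the highest return per cell (roofs dominate ground returns).
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(height[j, i]) or z > height[j, i]:
            height[j, i] = z
    height = np.nan_to_num(height, nan=float(np.nanmin(height)))
    # One vertex per grid cell.
    jj, ii = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    verts = np.stack([xy_min[0] + ii * cell,
                      xy_min[1] + jj * cell,
                      height], axis=-1).reshape(-1, 3)
    # Two triangles per grid square.
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            v0 = r * cols + c
            tris.append([v0, v0 + 1, v0 + cols])
            tris.append([v0 + 1, v0 + cols + 1, v0 + cols])
    return verts, np.array(tris)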
10
AVE Requires Sensor Models (Tracking)
  • Tracking is the key
  • Need accurate tracking information for image
    projection and fusion
  • (where am I, where am I looking?)
  • 6DOF measurement
  • High precision
  • Approach
  • Combines geometric and intensity constraints to
    establish accurate 2D-3D correspondence
  • Hybrid GPS/vision tracking strategy
  • An Extended Kalman Filter framework (sketched
    below)
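
The slide cites an Extended Kalman Filter framework without equations. The sketch below assumes a 12-state constant-velocity model and a fused 6DOF pose measurement (which makes this particular update step linear); the state layout, noise levels, and measurement model are illustrative, not the authors' filter.

import numpy as np

class PoseFilter:
    """Minimal constant-velocity pose filter:
    state = [x, y, z, roll, pitch, yaw] plus their rates (12 values).
    GPS/vision data are assumed to arrive already fused into a 6DOF
    pose measurement z with covariance R."""
    def __init__(self, dt=1 / 30.0):
        self.x = np.zeros(12)                  # pose + rates
        self.P = np.eye(12)                    # state covariance
        self.F = np.eye(12)                    # constant-velocity transition
        self.F[:6, 6:] = dt * np.eye(6)
        self.Q = 1e-3 * np.eye(12)             # process noise (assumed value)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        H = np.hstack([np.eye(6), np.zeros((6, 6))])   # observe pose only
        y = z - H @ self.x                             # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(12) - K @ H) @ self.P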

11
AVE Requires Image/Video Texture Projection
(Video Texture Mapping)
  • Update sensor pose and image to paint the
    scene each frame
  • Compute the texture transformation during
    rendering of each frame (sketched below)
  • Dynamic control during visualization session to
    reflect most recent information
  • Supports up to 4 real-time video streams
  • Real-time rendering: graphics hardware produces
    28 fps on a dual 2-GHz PC at 1280x1024 screen
    resolution
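
As a rough sketch of that per-frame texture transformation, the Python function below projects model vertices through the tracked sensor pose to obtain texture coordinates. The pinhole and texture-space conventions are assumptions; the actual system performs this projective texture mapping on graphics hardware.

import numpy as np

def video_texture_coords(vertices, K, R, t, width, height):
    """Project 3D model vertices (N, 3) into the current video frame and
    return per-vertex (u, v) texture coordinates in [0, 1], plus a mask of
    vertices in front of the camera. K: 3x3 intrinsics; R, t: world-to-camera
    pose from the sensor tracker. Recomputed each frame as the pose changes."""
    cam = R @ vertices.T + t.reshape(3, 1)   # (3, N) camera-space points
    pix = K @ cam                            # homogeneous pixel coordinates
    uv = pix[:2] / pix[2]                    # perspective divide
    u = uv[0] / width
    v = 1.0 - uv[1] / height                 # flip y to texture convention
    in_front = cam[2] > 0
    return np.stack([u, v], axis=1), in_front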

12
Dynamic Event Analysis & Modeling
  • Video analysis
  • Segmenting and tracking moving objects (people,
    vehicles) in the scene
  • Event modeling
  • Creating a pseudo-3D animated model (placement
    sketched below)
  • Improving visualization and situational awareness
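
One plausible way to place the pseudo-3D animated model, assuming tracked objects stand on a known ground plane: back-project the 2D track's foot point through the calibrated sensor and intersect the ray with that plane. The flat-ground assumption and the function below are illustrative, not necessarily the authors' exact method.

import numpy as np

def place_pseudo_model(foot_pixel, K, R, t, ground_z=0.0):
    """Back-project a tracked object's 2D foot point (u, v) through the
    calibrated sensor and intersect the ray with the ground plane
    z = ground_z; the returned world point is where the dynamic polygon
    (billboard) model is positioned."""
    ray_cam = np.linalg.inv(K) @ np.array([foot_pixel[0], foot_pixel[1], 1.0])
    ray_world = R.T @ ray_cam                 # rotate ray into world frame
    cam_center = -R.T @ t                     # camera position in world frame
    s = (ground_z - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world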

13
Tracking and Modeling Approach
  • Object detection
  • Background subtraction
  • A variable-length time average background model
  • Morphological filtering
  • Object tracking
  • SSD correlation matching (background modeling and
    matching are sketched after this list)
  • Object modeling
  • Dynamic polygon model
  • 3D parameters (position, orientation and size)
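
A compact sketch of the detection and tracking steps named above: an exponential running average stands in for the slide's variable-length time average, scipy provides the morphological opening, and the thresholds and search window are illustrative. Frames are assumed to be single-channel (grayscale).

import numpy as np
from scipy import ndimage

class BackgroundSubtractor:
    """Running-average background model with morphological cleanup."""
    def __init__(self, first_frame, alpha=0.05, thresh=25):
        self.bg = first_frame.astype(np.float32)
        self.alpha, self.thresh = alpha, thresh

    def apply(self, frame):
        diff = np.abs(frame.astype(np.float32) - self.bg)
        mask = diff > self.thresh
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        # Update the background only where the scene looks static.
        self.bg[~mask] += self.alpha * (frame[~mask] - self.bg[~mask])
        return mask

def ssd_track(prev_frame, frame, box, search=8):
    """Re-locate the object in box = (row, col, h, w) by minimizing the
    sum of squared differences over a small search window."""
    r, c, h, w = box
    template = prev_frame[r:r + h, c:c + w].astype(np.float32)
    best, best_rc = np.inf, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if rr < 0 or cc < 0:
                continue                      # candidate leaves the image
            patch = frame[rr:rr + h, cc:cc + w].astype(np.float32)
            if patch.shape != template.shape:
                continue
            ssd = float(np.sum((patch - template) ** 2))
            if ssd < best:
                best, best_rc = ssd, (rr, cc)
    return (*best_rc, h, w)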

14
Tracking and Modeling Results
  • Tracking in 2D and modeling in pseudo-3D

15
Integrated AVE Environment
  • An integrated visualization environment built in
    the IMSC laboratory
  • 8x10 foot acrylic back-projection screen
    (Panowall) with stereo glasses interface
  • Christie Mirage 2000 stereo cinema projector with
    HD SDI
  • 3rdTech ceiling tracker
  • A dual 2-GHz CPU computer (Dell) with an NVIDIA
    Quadro FX 400 graphics card
  • Supports multiple DV video sources (<4) in
    real time (28 fps)

16
DEMO
17
Future Work
  • Texture Management - texture retention,
    progressive refinement
  • System Architecture - scalable video streams
  • Dynamic modeling - detection, tracking, and
    modeling of moving objects

18
Conclusion
  • A novel visualization system for video
    surveillance based on tracking and 3D display of
    moving objects in an Augmented Virtual
    Environment (AVE) is presented
  • Adaptive background-subtraction method combined
    with a pseudo-tracking algorithm for dynamic
    object detection
  • Visualization of the dynamic objects in the AVE
    system
  • Fusion of all the video, image and 3D models

19
Acknowledgement
  • Integrated Media Systems Center, USC
  • NIMA
  • MURI team Avideh Zakhor (UC Berkeley), Suresh
    Lodha (UC Santa Cruz), Bill Ribarsky (Georgia
    Tech), and Pramod Varshney (Syracuse)