1
CSE 291 - Introduction to Virtual Environments
  • Collaborative systems, spatialized sound
  • Jürgen P. Schulze
  • California Institute for Telecommunications and
    Information Technology (Calit2)
  • jschulze@ucsd.edu

November 9, 2006
2
Overview
  • Part 1: Collaborative systems
  • Part 2: Spatialized sound

3
Collaborative Systems - Overview
  • Multiplayer games
  • Platforms: SPLINE, Blaxxun
  • VR systems: SPLINE, DIVE, MASSIVE, NPSNET

4
Off-line Multiplayer Games
  • Players take turns vs. players play concurrently
  • Support for multiple input devices: game console controllers, different areas of the keyboard

5
On-line Multiplayer Games
  • Connection via network (Ethernet)
  • LAN vs. Internet
  • Run on desktop PC with high-end graphics card and
    monitor/mouse
  • Often 3D virtual environments, but no 3D display
    or 3D input devices
  • Support hundreds of users
  • Software runs 24/7
  • Often users can join any time
  • Connection to the real world via pay schemes and virtual currencies

6
Types of Collaboration
  • Collaboration in the same VE
  • display environment must support multiple users
  • Collaboration between VEs
  • single-user display environments can be used
  • Hybrid
  • different kinds of display environments at the various ends

7
SPLINE
  • SPLINE: Scaleable Platform for Large Interactive Network Environments
  • Developed in the mid-1990s by Barrus and Waters
  • Uses peer-to-peer communication
  • Has evolved from a pure multicast approach to a
    mixed client-server and multicast approach, in
    order to cope with low-bandwidth networks
  • Divides the universe, called the world model, into sub-regions called locales, each associated with a multicast group
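Illustration (my own sketch, not SPLINE code): a toy Python sketch of the locale idea, where the world is split into a grid of locales, each mapped to a hypothetical multicast group, and a client joins only the groups of nearby locales.

    # Toy sketch of SPLINE-style locales; addresses and grid size are made up.
    LOCALE_SIZE = 100.0  # metres per locale (arbitrary choice)

    def locale_of(position):
        """Map a 2D position to the integer grid cell (locale) it falls into."""
        x, y = position
        return (int(x // LOCALE_SIZE), int(y // LOCALE_SIZE))

    def multicast_group(locale):
        """Hypothetical mapping from a locale to a multicast address."""
        return "239.10.%d.%d" % (locale[0] % 256, locale[1] % 256)

    def groups_to_join(position):
        """A client subscribes to its own locale and the 8 neighbouring ones."""
        cx, cy = locale_of(position)
        return {multicast_group((cx + dx, cy + dy))
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

    print(sorted(groups_to_join((250.0, 30.0))))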

8
DIVE
  • DIVE: Distributed Interactive Virtual Environment
  • Developed in 1993 by C. Carlsson and O. Hagsand
    (Swedish Institute of Computer Science)
  • Heterogeneous distributed VR system based on UNIX
    and Internet networking protocols
  • Each participating process has a copy of a replicated database, and changes are propagated to the other processes with reliable multicast protocols (see the sketch at the end of this slide)
  • Provides a dynamic virtual environment where
    applications and users can enter and leave the
    environment on demand
  • Supports coarse-grained partitioning of the whole
    universe by introducing worlds and gateways
    between worlds
  • Several user-related abstractions have been
    introduced to ease the task of application and
    user interface construction
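Illustration (my own sketch, not DIVE code): a toy Python model of the replicated-database idea, assuming some reliable multicast hook send_to_all that delivers a message to every other peer.

    class ReplicatedWorld:
        """Toy model: every peer holds a full copy of the world state and
        forwards each local change to all other peers."""
        def __init__(self, send_to_all):
            self.objects = {}                 # object id -> attribute dict
            self.send_to_all = send_to_all    # assumed reliable multicast hook

        def local_update(self, obj_id, attrs):
            """Apply a change locally, then propagate it to the peers."""
            self.objects.setdefault(obj_id, {}).update(attrs)
            self.send_to_all(("update", obj_id, attrs))

        def on_remote_update(self, message):
            """Apply a change received from another peer."""
            _, obj_id, attrs = message
            self.objects.setdefault(obj_id, {}).update(attrs)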

9
MASSIVE
  • "Model, Architecture and System for Spatial
    Interaction in Virtual Environments"
  • Developed in 1995 at the Department of Computer Science, University of Nottingham, UK
  • Distributed virtual reality system
  • Provides facilities to support user interaction and cooperation via text, audio, and graphics media
  • Focus on large-scale multi-user virtual
    environments

10
NPSNET
  • Developed in mid-90s by M. Zyda et al. at Naval
    Postgraduate School, Monterey, CA
  • Branches:
  • Techno: developing network and software technology for large-scale virtual environments (LSVEs) with 1000 players (human and NPC)
  • Interact: focus on human-computer interaction technology for LSVEs and on evaluation of LSVEs for training
  • Apps: development of LSVEs useful for the Department of Defense
  • Based on SGI Performer and C

11
Spatialized Sound - Overview
  • Acoustics
  • Home theater solutions
  • VR solutions
  • 3D sound specification

12
Sound Waves
  • Propagate at 340 m/s in air at 21 degrees Celsius (about 1450 m/s in water)
  • Generally sound waves propagate in all directions
  • Propagation affected by interference, reflection,
    diffraction, refraction -- much like light
  • Can interact with other media, e.g., can be
    transmitted from solids into fluids
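Illustration (not from the slides): in a spatialized sound renderer, the finite propagation speed becomes a per-source delay; a minimal Python sketch using the 340 m/s figure above.

    import math

    SPEED_OF_SOUND = 340.0  # m/s in air at 21 degrees Celsius

    def propagation_delay(source_pos, listener_pos):
        """Seconds a wavefront needs to travel from source to listener."""
        return math.dist(source_pos, listener_pos) / SPEED_OF_SOUND

    # A source 17 m away is heard about 50 ms after it is emitted.
    print(propagation_delay((17.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # ~0.05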

13
Acoustics (1)
  • Room acoustics
  • Depend on many parameters like room size,
    materials used for floor, walls, ceiling,
    location and type of windows, curtains,
    furniture, plants, etc.
  • Sound engineers can measure acoustical properties of a room: resonance frequencies, reverberation, echoes
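Illustration (not from the slides): one such measurable property is reverberation time; a small Python sketch of Sabine's approximation RT60 = 0.161 * V / A, where V is the room volume in cubic metres and A the total absorption area.

    def rt60_sabine(volume_m3, surfaces):
        """Sabine reverberation time in seconds.
        surfaces: list of (area_m2, absorption_coefficient) pairs."""
        absorption = sum(area * alpha for area, alpha in surfaces)
        return 0.161 * volume_m3 / absorption

    # Hypothetical 5 m x 4 m x 3 m room: reflective walls/ceiling, carpeted floor.
    room = [(2 * (5*4 + 5*3 + 4*3) - 5*4, 0.05),   # walls + ceiling
            (5 * 4, 0.30)]                         # carpeted floor
    print(rt60_sabine(5 * 4 * 3, room))            # roughly 1 second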

14
Acoustics (2)
  • Psycho-acoustics
  • Describes the effects specific to human hearing.
  • Human hearing is affected by parameters like
    frequency, direction, loudness, etc.
  • Every person has a different perception of sound.
  • Mathematical models of the ear help understand
    how hearing works and how sound must be modified
    to give the best possible reproduction for
    binaural hearing.
  • Recording sounds with different properties through microphones on a dummy head can generate an 'ear print'.
  • The set of functions depending on frequency and direction of sound is called the Head-Related Transfer Function (HRTF); it is specific to each ear of a specific person.
  • A subset of the HRTF is interchangeable across most people and is therefore included in HRTF processors for 3D sound over headphones.
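Illustration (not from the slides): such an HRTF processor essentially convolves the dry source signal with one measured impulse response per ear; a minimal NumPy sketch with hypothetical placeholder impulse responses.

    import numpy as np

    def binauralize(mono_signal, hrir_left, hrir_right):
        """Convolve a dry mono signal with left/right head-related impulse
        responses for one source direction -> 2-channel (binaural) output."""
        left = np.convolve(mono_signal, hrir_left)
        right = np.convolve(mono_signal, hrir_right)
        return np.stack([left, right], axis=0)

    # Placeholder data; real HRIRs come from dummy-head measurements.
    fs = 44100
    mono = np.random.randn(fs)                                   # 1 s of noise
    hrir_l = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)
    hrir_r = np.random.randn(256) * np.exp(-np.arange(256) / 64.0)
    stereo = binauralize(mono, hrir_l, hrir_r)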

15
Surround Sound
  • Surround sound is a general term for reproducing sound that comes, or seems to come, from different directions around the listener.
  • Few surround sound systems can reproduce a 3D sound environment; mostly, sounds are located on a horizontal plane (2D sound).
  • A mono signal cannot carry surround sound information; at least two channels are needed for spatialized sound.
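Illustration (not from the slides): the simplest way two channels carry direction is amplitude panning between a left and a right speaker; a sketch of standard constant-power panning in Python.

    import math

    def constant_power_pan(sample, pan):
        """pan in [-1, 1]: -1 = hard left, 0 = centre, 1 = hard right.
        Returns (left, right) so that left**2 + right**2 stays constant."""
        angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] to [0, pi/2]
        return sample * math.cos(angle), sample * math.sin(angle)

    print(constant_power_pan(1.0, 0.0))   # centre: about 0.707 on both channels
    print(constant_power_pan(1.0, 1.0))   # hard right: (~0.0, 1.0)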

16
Ambisonics
  • Invented by Michael Gerzon and Peter Fellgett in the early 1970s.
  • Not a regular audio signal but a mathematical representation of the recorded surround sound; not compatible with other sound representations.
  • Using an array of 4 microphones, acoustical
    environments can be recorded in 3D and later
    encoded to the Ambisonics B-Format.
  • Four-component signal: mono sound signal (sound pressure) W, and three difference signals X (front-back), Y (left-right), and Z (up-down); see the encoding sketch at the end of this slide.
  • Decoder reproduces surround sound with at least 4
    speakers located in a quad array on a circular
    plane or more speakers located on a sphere around
    the listener.
  • http://www.ambisonic.net
  • Peter Fellgett. Ambisonics. Part One: General System Description. Studio Sound, pages 20-22, 40, August 1975.
  • Michael A. Gerzon. Ambisonics. Part Two: Studio Techniques. Studio Sound, pages 24-30, October 1975. Correction in Oct. 1975 issue on page 60.
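Illustration (not from the slides): encoding a mono signal into traditional first-order B-format uses standard textbook formulas, with W weighted by 1/sqrt(2); a minimal Python sketch.

    import math

    def encode_bformat(sample, azimuth, elevation):
        """Encode one mono sample for a source direction given in radians
        (azimuth 0 = front, elevation 0 = horizontal plane)."""
        w = sample / math.sqrt(2.0)                               # pressure
        x = sample * math.cos(azimuth) * math.cos(elevation)      # front-back
        y = sample * math.sin(azimuth) * math.cos(elevation)      # left-right
        z = sample * math.sin(elevation)                          # up-down
        return w, x, y, z

    # A source straight ahead on the horizon contributes only to W and X.
    print(encode_bformat(1.0, 0.0, 0.0))   # (0.707..., 1.0, 0.0, 0.0)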

17
Home Theater Speaker Setup
Common speaker setups: mono, stereo, quad, 5.1
18
ITU Recommendation
ITU (International Telecommunications Union)
recommendation 775 for 5.1 speaker setup
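For reference (summarized from ITU-R BS.775, not from the slide text): the nominal speaker azimuths of the 5.1 layout, written as a small Python table.

    # Degrees of azimuth, 0 = straight ahead, positive = to the right.
    # The LFE (sub-bass) channel has no prescribed direction.
    ITU_5_1_LAYOUT = {
        "C": 0,
        "L": -30, "R": 30,
        "LS": -110, "RS": 110,   # recommended range roughly 100-120 degrees
        "LFE": None,
    }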
19
Sound APIs
20
The VR Challenge
  • In a single-user, head-tracked, screen-based environment:
  • virtual sound sources move relative to the speakers
  • the listener moves relative to the speakers
  • all movements are in 3D
  • Existing movie theater solutions are designed for a non-moving listener in a 'sweet spot' or 'sweet area', with speakers in a plane (2D sound)

21
Headphones vs. Speakers
  • Headphones
  • work well with head-coupled visual displays
  • easier to implement spatialized 3D sound fields
  • mask real-world noise
  • greater portability
  • private
  • Speakers
  • work well with stationary visual displays
  • don't require sound processing to create a world-referenced sound stage
  • greater user mobility
  • little encumbrance
  • multiuser access

22
Virtual Sound Source
23
VRML Sound Support
  • In VRML, objects are declared as nodes (scene
    graph approach)
  • Relevant nodes for audio are:
  • Sound
  • AudioClip
  • MovieTexture
  • Since VRML 2.0, sounds can have locations in 3D

24
VRML Sound Node
  • The Sound node holds properties of a sound source: position, orientation, and minimum/maximum listening distances for the back and front sides of the sound position
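Illustration (not from the VRML spec): a rough Python sketch of how the minimum/maximum distances could translate into a gain value, assuming a simple linear falloff; the exact attenuation law and the ellipsoidal regions are defined in the VRML97 specification.

    def sound_gain(distance, min_dist, max_dist):
        """Simplified: full volume inside min_dist, silent beyond max_dist,
        linear falloff in between (stand-in for the VRML97 attenuation law)."""
        if distance <= min_dist:
            return 1.0
        if distance >= max_dist:
            return 0.0
        return 1.0 - (distance - min_dist) / (max_dist - min_dist)

    print(sound_gain(5.0, 1.0, 10.0))   # about 0.56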

25
VRML Sound Node Geometry
26
VRML AudioClip and MovieTexture
  • AudioClip describes a sound file to use with a
    Sound node.
  • While the Sound node is the emitter, like a loudspeaker, the AudioClip node is the generator of the sound.
  • MovieTexture describes a movie file; it can be used as a sound generator if the movie file contains an audio channel

27
Conclusions
  • No standard solution for VR-compatible true 3D
    sound yet
  • Various solutions are available from research labs
  • Most current solutions ignore room acoustics and psycho-acoustics

28
This Week's Class Paper
  • M. Pinho, D. Bowman, C. Freitas: Cooperative Object Manipulation in Immersive Virtual Environments: Framework and Techniques. In Proceedings of Virtual Reality Software and Technology (VRST), 2002, pp. 171-178
  • Questions? Comments?

29
Announcements
  • Next week: guest lecture by Prof. Thomas DeFanti (UCSD/Calit2), topic: high-resolution video transmission over fast networks. Location: Atkinson Hall, room 5004
  • Paper to read and summarize: Shimizu et al., International real-time streaming of 4K digital cinema, Future Generation Computer Systems, Volume 22, Issue 8, October 2006, Pages 929-939