1
Implementing Avatars within Virtual Environments
  • Krist Norlander, LT, USNR
  • MV4473 Virtual Worlds Simulation Systems

2
Making a believable scenario!!!
  • "...they need to sweat; without that we are
    wasting our time."
  • George Solhan, on the VIRTE program

3
Outline
  • Definition
  • Description: intent of the avatar
  • Advantages: why we want avatars
  • Limitations: why we have problems
  • Implementation: how we make it work
  • Applications: what is available
  • Problems yet to solve!

4
Points of Reference
  • Look back at these class topics
  • HMD
  • Gloves
  • Motion tracking
  • Haptic feedback
  • Gesture recognition
  • Locomotion

5
Avatar Definition
  • Merriam-Webster Online Dictionary
  • An incarnation in human form
  • An embodiment (as of a concept or philosophy)
    often in a person
  • Virtual Reality Domain
  • Symbolic representation of a real person within
    a virtual world.

6
Avatar Intent
  • Looks
  • Moves
  • Interacts..
  • similar to a human

7
False Implementation
  • Looks
  • Moves
  • Interacts
  • not quite like a human

8
Surrealistic Avatars
  • Chat rooms
  • Multi-User Domains (MUD)
  • Games

9
Advantages
  • Humans naturally interact with humans
  • Abnormal reaction to machines
  • Provide familiarity to VE user
  • Potential for improved proprioception
  • User can project self into VE

10
Limitations
  • Limited articulation of body segments
  • No dynamic sizing to meet user dimensions
  • Limited tracking of user articulation
  • Difficult translation of articulation to avatar
  • No dynamic representation of user appearance
  • Limited synchronization of user expressions
  • Limited tactile feedback from avatar to user
  • Computationally expensive to render avatars

11
Major Players
  • Humanoid Animation Working Group
  • Part of Web3D Consortium
  • H-Anim Standard
  • Center for Human Modeling and Simulation
  • University of Pennsylvania
  • Dr. Norman I. Badler, Director
  • Jack
  • Boston Dynamics, Inc.
  • DI-Guy

(Click pictures to see more)
12
Tackling Body Tracking
  • Divide human body into movable segments
  • Define rotation limitations of segments
  • Track user segments
  • Translate user movements into data
  • Inject movement data into avatar
  • Standardization through the H-Anim 1.1
    specification

h-anim.org/Specifications/H-Anim1.1
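The steps above (segment the body, define rotation limits, inject tracked movement) can be sketched as a small joint hierarchy. The joint names and limit values are illustrative placeholders, not the actual H-Anim 1.1 definitions:

```python
# A minimal H-Anim-style joint hierarchy; names and rotation limits
# are illustrative placeholders, not the H-Anim 1.1 values.
class Joint:
    def __init__(self, name, min_deg=-180.0, max_deg=180.0, children=None):
        self.name = name
        self.min_deg, self.max_deg = min_deg, max_deg  # rotation limits
        self.rotation_deg = 0.0                        # current angle
        self.children = children or []

    def inject(self, tracked):
        """Push tracked rotations (name -> degrees) into the skeleton,
        clamping each value to the joint's defined limits."""
        if self.name in tracked:
            self.rotation_deg = max(self.min_deg,
                                    min(self.max_deg, tracked[self.name]))
        for child in self.children:
            child.inject(tracked)

# A tiny right-arm chain: shoulder -> elbow -> wrist.
skeleton = Joint("r_shoulder", -90, 180, [
    Joint("r_elbow", 0, 150, [Joint("r_wrist", -70, 80)])])

# One frame of (hypothetical) tracker output; the elbow reading of 170
# exceeds its limit and is clamped to 150 degrees.
skeleton.inject({"r_shoulder": 45.0, "r_elbow": 170.0, "r_wrist": -10.0})
```

Clamping at injection time keeps a noisy or miscalibrated sensor from driving the avatar into impossible poses.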
13
Body Segment Diagram
H-Anim 1.1
DI-Guy
14
Tracking User
  • Ascertain posture through inertial/magnetic
    sensors attached to limbs
  • Passive measurement of physical quantities
    directly related to motion and attitude (i.e.,
    sourceless)
  • Segments oriented independently
  • Segments positioned relative to each other by
    adding rotated vectors
  • Position data needed for reference point to place
    avatar in VE
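A minimal sketch of positioning segments relative to each other by adding rotated vectors, in 2-D for brevity; the segment lengths and orientations are invented sensor readings, not real MARG data:

```python
import math

def rotate2d(vec, angle_deg):
    """Rotate a 2-D vector counter-clockwise by angle_deg."""
    a = math.radians(angle_deg)
    x, y = vec
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def segment_positions(reference, segments):
    """Each segment is (length_m, absolute_orientation_deg), as an
    inertial/magnetic sensor might report it; joint positions follow
    by adding the rotated segment vectors to the reference point."""
    positions = [reference]
    x, y = reference
    for length, orientation in segments:
        dx, dy = rotate2d((length, 0.0), orientation)
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

# Upper arm hanging straight down (-90 deg), forearm horizontal (0 deg),
# starting from a shoulder reference point at (0, 1.5).
joints = segment_positions((0.0, 1.5), [(0.30, -90.0), (0.25, 0.0)])
```

Note that only the reference point needs absolute position data (from the optical or ultrasonic tracker); every other joint location falls out of the summed, rotated segment vectors.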

15
NPS Prototype
  • Wireless full body tracking system based on
    inertial/magnetic orientation sensing
  • MARG sensors are used to determine posture
  • Reference point obtained through an optical or
    ultrasonic tracking system

16
User Representation
  • Creating an avatar that resembles user
  • Obtain user dimensions
  • Limb segment lengths, widths, density, hardness,
    etc.
  • Obtain user attributes
  • Colors, joint limits, behaviors, etc.
  • Create avatar using available data
  • Significant technology limitations exist!
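One crude way to apply the obtained dimensions is to scale a generic model uniformly from the user's height; the segment names and lengths below are hypothetical placeholders, not values from any real avatar library:

```python
# Hypothetical generic-model data; a real system would measure the user
# per segment (e.g. via laser scan) rather than scale from height alone.
DEFAULT_SEGMENT_LENGTHS = {"upper_arm": 0.30, "forearm": 0.25, "thigh": 0.45}
GENERIC_MODEL_HEIGHT = 1.75  # metres

def scale_avatar(user_height_m, segments=DEFAULT_SEGMENT_LENGTHS):
    """Scale the generic model's segment lengths to the user's height."""
    factor = user_height_m / GENERIC_MODEL_HEIGHT
    return {name: round(length * factor, 3)
            for name, length in segments.items()}
```

Uniform scaling is exactly the kind of shortcut the slide's last bullet warns about: real bodies do not scale proportionally, which is why per-segment measurement matters.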

17
The Laser Scan
  • Laser-triangulation method
  • Capable of capturing textures
  • Full scan in
  • Hair/masking issues

18
Graphical Scan Output
19
Putting It Together
  • Creation of avatars that
  • represent people in a believable way
  • allow for user-controlled actions
  • can be used for CGF
  • provide feedback to user

20
Issues/Difficulties
  • Linking bipedal motion between user and avatar
  • Linking avatar interaction with VE to user
  • Solutions often use scripted gestures/actions
  • Believability
  • Animation LOD
  • Not just polygon LOD

21
Examples
  • FPS Gaming
  • Unreal, Half-life, Quake, Rainbow Six
  • Multi-User Domains
  • Commercial Simulations
  • Vega, BDI
  • NPS Research
  • Bachmann, Dutton, Storms, Norlander, VIRTE

22
Military Application
Team coordination
Building clearing
Initiative based tactics
Images taken from Rogue Spear from Red Storm,
Inc.
23
Highest Potential Use
  • Jack
  • Generic Human model
  • Articulation supports ergonomic data analysis
  • DI-Guy
  • Human Simulation
  • Scripted behaviors
  • Support for
  • DIS/HLA
  • Input control

(Click picture to see more about Jack)
24
BDI DI-Guy
  • What is DI-Guy?
  • DI-Guy is software for adding life-like human
    characters to simulated environments. Each
    character moves realistically, responds to simple
    high-level commands, and travels about the
    environment as directed. DI-Guy characters make
    seamless transitions from one activity to the
    next, moving naturally like a real person. DI-Guy
    is fully interactive, with all behavior occurring
    in real time.

Quoted from BDI DI-Guy documentation
25
BDI DI-Guy (continued)
  • What is the Design Goal of DI-Guy?
  • DI-Guy is designed to simplify the task of adding
    life-like human characters to real-time
    interactive simulations. The goal is to allow
    users to concentrate on telling DI-Guy where to
    go and what to do, while freeing them from
    low-level details such as joint angle control,
    motion generation, graphics hierarchy management,
    model and texture creation, and animation.
  • DI-Guy is designed to provide a set of integrated
    and encapsulated human figures that provide
    versatile and visually engaging behavior.

Quoted from BDI DI-Guy documentation
26
DI-Guy Characters
  • What Characters Does DI-Guy Include?
  • DI-Guy includes a whole family of characters and
    behavior. There are soldiers, flight deck crew,
    chem/bio characters, and ordinary men and women,
    and we are adding new characters all the time.
    Starting with the 4.0 release of DI-Guy, you can
    modify the appearance of characters to suit your
    needs. If we don't have the characters you need
    and you do not want to make them yourself, give
    Boston Dynamics a call! We can create new
    characters to your specifications.

Quoted from BDI DI-Guy documentation
27
DI-Guy Graphic LOD
  • Adjust polygon rendering based on viewed object
    distance
  • Avatars inherently have high polygon count

Quoted from BDI DI-Guy documentation
28
Motion Level of Detail
  • Reduce computational needs by reducing avatar
    motions
  • DI-Guy soldier characters support motion LODs
  • Characters viewed from a long distance do not
    display so much detail in their motion as
    characters viewed up close
  • Fewer joint positions are calculated and updated
    for display
  • Motion LODs are not switched automatically by
    the graphics library

Quoted from BDI DI-Guy documentation
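Because motion LODs are not switched automatically, the application must select one each frame, typically from viewing distance. The thresholds below are assumptions for illustration, not DI-Guy values:

```python
# Distance-based motion LOD selection; thresholds are invented.
MOTION_LODS = [            # (max distance in metres, LOD level)
    (25.0, 0),             # full articulation up close
    (100.0, 1),            # fewer joint updates at mid range
    (float("inf"), 2),     # coarse motion for distant characters
]

def select_motion_lod(distance_m):
    """Return the motion LOD level to use for a character at the
    given viewing distance."""
    for max_dist, lod in MOTION_LODS:
        if distance_m <= max_dist:
            return lod
```

The same pattern applies to graphic LOD; the point of keeping them separate is that joint-update cost and polygon cost can be traded off independently.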
29
DI-Guy Motion LOD
Quoted from BDI DI-Guy documentation
30
Realistic Motion
  • Used to best advantage, a DI-Guy character will
    move with realism, make smooth transitions from
    one action to the next, and maintain accurate
    position and orientation in the synthetic
    environment, all at the same time.
  • There must be consistency among the speed,
    heading, position, and desired action. Such
    simulations should
  • Allow sufficient time for each transition from
    one activity to the next,
  • Limit accelerations to the human range, and
  • Specify travel rates that are consistent with
    each gait.
  • Humans (and animals) normally travel at a narrow
    range of speeds for each gait.
  • This is different from cars, airplanes, and
    tanks.

Quoted from BDI DI-Guy documentation
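The gait/speed consistency rule can be sketched as picking the gait whose speed range best matches the commanded travel rate, clamping the rate into that range; the speed ranges here are illustrative, not motion-capture values:

```python
# Illustrative gait speed ranges in m/s; real values would come from
# the character library or motion-capture data.
GAITS = {"stand": (0.0, 0.0), "walk": (0.8, 1.8), "run": (2.5, 6.0)}

def consistent_gait(speed):
    """Return (gait, clamped_speed) so the travel rate stays within
    the chosen gait's plausible range."""
    best, best_speed = "stand", 0.0
    for gait, (lo, hi) in GAITS.items():
        clamped = min(max(speed, lo), hi)
        if abs(clamped - speed) < abs(best_speed - speed):
            best, best_speed = gait, clamped
    return best, best_speed
```

A commanded speed of 2.0 m/s falls between gaits, so it is pulled back to the walk ceiling rather than rendered as an unnaturally slow run.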
31
DI-Guy LOD Demo
  • Indirect control of
  • Graphic LOD
  • Motion LOD
  • Gaze
  • Aim
  • State
  • or direct limb control

32
User Controlled Motion
  • When DI-Guy is driven by data from a live human
    (e.g. tread-Port, I-Port, joysticks,
    Omni-Directional Treadmill, etc.), the interface
    device should have output filters that provide
    realistic transition rates, accelerations, and
    maximum travel speeds.
  • The same kinds of output filters should be used
    for CGF/SAF.
  • Variation will provide an appropriate trade-off
    between smoothness of motion and travel
    precision, depending on the requirements of your
    application.

Quoted from BDI DI-Guy documentation
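A sketch of such an output filter: it clamps the commanded speed to a maximum and limits acceleration per update. The limits are assumed values, not defaults of any particular interface device:

```python
class OutputFilter:
    """Smooths raw device commands so avatar motion stays in the human
    range; limits here are assumptions, not any device's defaults."""
    def __init__(self, max_speed=6.0, max_accel=3.0):
        self.max_speed = max_speed  # m/s
        self.max_accel = max_accel  # m/s^2
        self.speed = 0.0

    def step(self, commanded_speed, dt):
        """Return a realistic speed for this frame of length dt seconds."""
        target = min(max(commanded_speed, 0.0), self.max_speed)
        max_delta = self.max_accel * dt
        delta = min(max(target - self.speed, -max_delta), max_delta)
        self.speed += delta
        return self.speed

# A treadmill suddenly commands 10 m/s; the filter ramps up instead.
filt = OutputFilter()
first = filt.step(10.0, 0.1)  # acceleration-limited, well below 10 m/s
```

Tightening max_accel favors smoothness; loosening it favors travel precision, which is the trade-off the slide describes.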
33
Networking Avatars
  • Interactive avatars imply distributed networks
  • Every human is unique; shouldn't avatars be?
  • Avatars have graphical description
  • Graphical model
  • Graphical LOD
  • Avatars have state description
  • Motion model
  • Motion LOD
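A hypothetical per-avatar network update combining the two descriptions (graphical and motion state); the field names and JSON encoding are illustrative stand-ins, not the DIS PDU layout:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical per-avatar network update; field names and JSON encoding
# are illustrative, not the DIS Entity State PDU format.
@dataclass
class AvatarState:
    entity_id: int
    position: tuple       # reference point in world coordinates
    heading_deg: float
    gait: str             # drives the receiver's motion model
    motion_lod: int       # receiver may lower this for distant avatars

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode()

state = AvatarState(42, (10.0, 0.0, 5.0), 90.0, "walk", 0)
decoded = json.loads(state.encode())  # what a receiving host would see
```

Sending a high-level gait rather than per-joint angles keeps the packet small; each receiving host regenerates the joint motion locally at whatever motion LOD its view of the avatar warrants.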

34
DIS Lifeform States
  • DIS provides limited description for avatar
    control

35
DI-Guy DIS States
  • DI-Guy enhances DIS protocol

36
Other Networked Apps
  • H-Anim and VRML provide standards
  • User can define models
  • Animation is model independent
  • No automated LOD
  • No motion LOD standard

(Click pictures to see more)
37
Goals
  • Virtual human avatars with articulated joint
    structure allowing for both scripted movement and
    real-time networked control
  • Produce an avatar that is as realistic as
    possible, but can still be rendered efficiently
    on today's computers
  • Source code that is platform-independent, open
    source, and worldwide deployable
  • Visually compelling for acceptance

38
Related Work
  • NPS Thesis
  • Miller, Bachmann, Dutton, Norlander
  • Links
  • MV4473 Avatar Web (AvatarsOverview.htm)
  • Other Web Sites (resourceLinks.htm)

39
Questions