Visuo-gestural interaction with a video wall
Frédérick Gianni, Patrice Dalle
IRIT TCI, 118 route de Narbonne, 31064 Toulouse
Applicative context: command of a device (for example a video wall) with gestures observed by a vision system.
- Two axes
  - Determination of the command language
  - Method of video analysis
- Context
  - Meeting rooms
  - Background may change
  - Low user constraints (clothes, moves)
- Method
  - 2D visual cues to instantiate a jointed model
Interaction Context
Interaction purpose: organisation of the display surfaces of a room composed of a video wall and several spatially distributed sources of information.
Constraints
- Loosely controlled environment
- Presence of natural-light wells
- Natural gestures
- Command gestures mixed with co-verbal gestures
- The user may move around a lot
- Need for real-time response
Elaboration of the gestural language
Elaboration in four steps, using Wizard-of-Oz experiments to produce video corpora.
1. 1st corpus: free gestures. Commands (A, B, C...) performed in the 1st corpus.
2. Elaboration of a gestural command language.
3. 2nd corpus: presentation scenario with the command language incorporated. Commands validated in the 2nd corpus.
4. Validation of the gestural language.
Architecture of the vision system
[Architecture diagram — Models: background model, skin model, joint model, realisation-of-gestures model. Treatments: image → (1) background subtraction → silhouette → (2) skin areas in the silhouette → angular values of joints → command.]
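The treatment chain can be read as one function per stage. A minimal Python sketch follows, in which all model objects and method names (subtract, find_areas, solve, identify) are hypothetical placeholders, not the authors' API:

def process_frame(image, background_model, skin_model, joint_model, gesture_model):
    """One pass of the vision pipeline sketched in the diagram above.
    All *_model objects and their methods are hypothetical placeholders."""
    # (1) Background subtraction: pixels outside the background model's
    #     domain of variation form the silhouette.
    silhouette = background_model.subtract(image)
    # (2) Skin-coloured areas are searched only inside the silhouette.
    skin_areas = skin_model.find_areas(image, silhouette)
    # The jointed model turns 2D skin areas into joint angles.
    joint_angles = joint_model.solve(skin_areas)
    # The realisation-of-gestures model maps area activations and angular
    # trajectories to a command of the language (or None).
    return gesture_model.identify(skin_areas, joint_angles)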
1. Background subtraction
Background model: from a sequence of images I_n, the domain of variation of each pixel is defined. If the pixel value V_p belongs to this domain, the pixel belongs to the background, otherwise to the silhouette.
Skin model: skin colour can be modelled, in the chromatic space, by a bivariate normal distribution. Definition of its domain.
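A minimal Python sketch of this per-pixel background model, assuming the domain of variation is taken as the [min, max] range observed over the training sequence I_n (the margin parameter is an added assumption):

import numpy as np

def build_background_model(frames):
    """Per-pixel domain of variation learned from a background sequence I_n.
    Assumption: frames has shape (N, H, W, 3) with nobody in the scene."""
    stack = np.asarray(frames, dtype=np.float32)
    return stack.min(axis=0), stack.max(axis=0)

def silhouette_mask(image, model, margin=10.0):
    """True where the pixel value V_p leaves its learned domain of variation,
    i.e. the pixel belongs to the silhouette rather than to the background."""
    low, high = model
    img = image.astype(np.float32)
    in_domain = np.all((img >= low - margin) & (img <= high + margin), axis=-1)
    return ~in_domain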
2. Skin-colour area search and tracking
The skin distribution is N_skin(m, Σ), with mean m = (m_h, m_s) and covariance Σ = (σ_hh, σ_hs; σ_sh, σ_ss).
Search: if P(pixel | N_skin) > ε, the pixel is skin coloured; areas are then formed by connexity.
Tracking relies on the conservation of the size and the speed of each area.
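A possible Python sketch of the skin test, assuming hue-saturation as the chromatic space and scipy's connected-component labelling for the connexity step; tracking by conservation of area size and speed is not shown:

import numpy as np
from scipy import ndimage

def fit_skin_model(hs_samples):
    """Bivariate normal N_skin(m, Sigma) fitted on (hue, saturation) samples
    taken from labelled skin pixels (hypothetical training data)."""
    m = hs_samples.mean(axis=0)               # m = (m_h, m_s)
    sigma = np.cov(hs_samples, rowvar=False)  # 2x2 covariance Sigma
    return m, sigma

def skin_areas(hs_image, model, eps=1e-4):
    """Pixels with P(pixel | N_skin) > eps (the threshold epsilon) are skin
    coloured; areas are then formed by connexity (connected components)."""
    m, sigma = model
    inv = np.linalg.inv(sigma)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(sigma)))
    d = hs_image - m                                   # (H, W, 2)
    maha = np.einsum('...i,ij,...j->...', d, inv, d)   # squared Mahalanobis distance
    prob = norm * np.exp(-0.5 * maha)
    labels, n = ndimage.label(prob > eps)
    return labels, n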
Joint model
Instantiation of the jointed model.
Initialisation: from a known position of the speaker, the model can be matched to the image.
Tracking: knowing the motion of the skin areas, identified as the hands, and using inverse kinematics, we compute the angular values of the arm joints.
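As an illustration of the inverse-kinematics step, here is a deliberately simplified planar two-link arm (shoulder and elbow), not the full jointed model used here; the link lengths are made-up values:

import numpy as np

def two_link_ik(hand_xy, l1=0.30, l2=0.25):
    """Analytic inverse kinematics for a planar two-link arm:
    given the tracked hand position (a skin area), return joint angles."""
    x, y = hand_xy
    d2 = x * x + y * y
    # Elbow angle from the law of cosines, clipped for numerical safety.
    c2 = np.clip((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2), -1.0, 1.0)
    elbow = np.arccos(c2)
    # Shoulder angle = direction to the hand minus the offset due to the elbow.
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow),
                                             l1 + l2 * np.cos(elbow))
    return shoulder, elbow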
Realisation-of-gestures model
Two models are used: in the image, sequences of activation of specific areas of the image; in joint space, angular trajectories of the joints.
Command identification.
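A toy sketch of the image-side model: commands described as ordered sequences of activated areas, matched as subsequences of the observed activations (all zone names and commands below are invented examples, not the corpus commands):

def identify_command(activations, commands):
    """Match the observed sequence of activated image areas against the
    command language; `commands` maps a command name to the ordered list
    of areas whose activation realises it."""
    for name, pattern in commands.items():
        it = iter(activations)
        # The pattern must appear as an ordered subsequence of the activations.
        if all(any(area == step for area in it) for step in pattern):
            return name
    return None

# Hypothetical usage: zones of the video wall activated by the hand track.
commands = {"move_left": ["zone_right", "zone_center", "zone_left"],
            "select":    ["zone_center", "zone_center"]}
print(identify_command(["zone_right", "zone_center", "zone_left"], commands))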
Here, the model of the arm is defined by the Denavit-Hartenberg parameters.
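For reference, the standard Denavit-Hartenberg transform and a forward-kinematics chain in Python; the 3-joint table below is an illustrative guess, not the arm model of the paper:

import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one joint from its Denavit-Hartenberg
    parameters (theta, d, a, alpha), standard convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def arm_forward_kinematics(joint_angles, dh_table):
    """Position of the hand given the joint angles and a D-H table
    (one row of fixed d, a, alpha per joint; values here are illustrative)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical 3-joint arm: shoulder (2 d.o.f.) and elbow.
dh_table = [(0.0, 0.0, np.pi / 2), (0.0, 0.30, 0.0), (0.0, 0.25, 0.0)]
print(arm_forward_kinematics([0.3, 0.4, 0.5], dh_table))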