Title: Laser-spot centers
1. Lecture 5: What is the best way to use camera-detected features on the target and manipulator bodies in order to exploit the asymptotic-limit region?
2. Laser-spot centers in a differenced image.
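The differenced image on this slide can be sketched as follows. This is a minimal illustration, assuming two grayscale frames held as NumPy arrays (the function name and the synthetic 5x5 scene are mine, not the lecture's):

```python
import numpy as np

def difference_image(frame_on, frame_off):
    """Isolate laser spots by subtracting a laser-off frame from a
    laser-on frame; only the illuminated spots survive the subtraction."""
    # int16 avoids uint8 wraparound before clipping back to [0, 255]
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Tiny synthetic example: a 5x5 scene with one bright "spot" added.
scene = np.full((5, 5), 40, dtype=np.uint8)
lit = scene.copy()
lit[2, 2] = 240                      # laser spot
d = difference_image(lit, scene)
print(d[2, 2], d[0, 0])              # prints: 200 0
```

The subtraction removes everything common to both frames, which is why the laser spots stand out so cleanly in the differenced image.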
3. How should we locate them?
4. And is their camera-space center location influenced by color?
5. Although our camera registers only grayscale intensity, refraction is still affected by the extreme position of red within the visible spectrum.
6. Can we exploit our asymptotic-limit region with cues that, on average, are incident on the lenses at different frequencies?
8. Will blue light fall on the same place as red?
9. Same place as red? Not with this lens.
10. Same place as red? With this one, yes.
11. n (index of refraction) for various materials.
12. How good a job does the compound lens do in placing this feature onto the image plane, consistently with the end-member cue feature?
14. Note that the cues are black and white, reflecting the entire visible spectrum roughly equally ...
15. ... whereas the laser spots are red.
16. Sam Chen recently completed a number of experiments to investigate this matter, using laser spots combined with the cue-bearing plate below.
17. Before reporting his results, let's return to the question of the algorithm used to locate a laser-spot center in camera space.
18. One possibility: the brightest pixel.
20. Another possibility: a weighted average of (say) the ten brightest.
21. In such a case, we need to be careful of extraneous, non-spot pixels.
22. Due to the distances involved, these can drag the assessed center far from the actual one.
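The weighted-average idea of slides 20-22 can be sketched as below. This assumes a NumPy grayscale differenced image; the distance-based rejection threshold (`max_dist`) is an illustrative guard against extraneous pixels, not necessarily the lecture's exact method:

```python
import numpy as np

def spot_center(img, k=10, max_dist=5.0):
    """Weighted average of the k brightest pixels, discarding any that
    lie far from the brightest one (extraneous, non-spot pixels can
    drag the assessed center far from the actual spot)."""
    flat = np.argsort(img, axis=None)[-k:]          # indices of the k brightest
    rows, cols = np.unravel_index(flat, img.shape)
    r0, c0 = rows[-1], cols[-1]                     # the single brightest pixel
    keep = np.hypot(rows - r0, cols - c0) <= max_dist
    w = img[rows[keep], cols[keep]].astype(float)
    return (np.sum(w * rows[keep]) / w.sum(),
            np.sum(w * cols[keep]) / w.sum())

img = np.zeros((20, 20))
img[9:12, 9:12] = [[50, 80, 50], [80, 100, 80], [50, 80, 50]]
img[0, 0] = 90                                     # extraneous bright pixel
r, c = spot_center(img)
print(round(r, 2), round(c, 2))                    # prints: 10.0 10.0
```

Without the distance guard, the lone bright pixel at (0, 0) would pull the assessed center well away from the true spot at (10, 10).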
23. Note that there is no single "right" definition of the feature-center coordinates.
24. There are, however, a couple of ideal attributes of these coordinates as identified in software.
25. Recall our two-camera criterion for positioning a dot onto the surface.
26. Image of the spot in camera 1.
27. Image of the spot in camera 2.
28. The assessed spot center ideally locates the same physical juncture in the actual mappings of physical space into each of the participating camera spaces.
29. Or, at least on average over several accumulated spots, there is no bias in this regard.
30. It is interesting to note that real cameras have their own pixel-brightness manufacturing quirks.
32. These flaws are used in forensics to fingerprint images from individual, as-built cameras.
33. Our spot-center-detection method ideally produces results that are robust to such variations.
34. Also, as discussed previously, there should be no relative bias due to frequency-dependent refraction in the compound lens vis-à-vis the paper cues.
35. For Sam Chen's experiments, the following mask was applied to identify each spot center.
36. Mask.
37. Mask; remaining elements are zero.
38. Mask overlay onto the differenced image.
39. Differenced image: raw pixel data.
40. Image conditioned with the mask overlay.
41. Possibility for subpixel identification of the spot center.
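The mask-based, subpixel idea of slides 35-41 can be sketched as below. The lecture's actual mask values are not reproduced here, so the 3x3 center-weighted kernel and the centroid refinement step are illustrative assumptions:

```python
import numpy as np

def subpixel_center(img, mask):
    """Cross-correlate a center-weighted mask with the differenced image,
    take the best integer location, then refine to subpixel precision with
    an intensity-weighted centroid of the local patch."""
    mh, mw = mask.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(img.shape[0] - mh + 1):
        for c in range(img.shape[1] - mw + 1):
            s = np.sum(img[r:r+mh, c:c+mw] * mask)
            if s > best:
                best, best_rc = s, (r, c)
    r, c = best_rc
    patch = img[r:r+mh, c:c+mw].astype(float)
    rr, cc = np.mgrid[0:mh, 0:mw]
    return (r + np.sum(rr * patch) / patch.sum(),
            c + np.sum(cc * patch) / patch.sum())

mask = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float)          # illustrative kernel
img = np.zeros((10, 10))
img[4:7, 4:7] = [[10, 20, 10], [20, 40, 30], [10, 20, 10]]  # right-heavy spot
r, c = subpixel_center(img, mask)
print(round(r, 2), round(c, 2))                    # prints: 5.0 5.06
```

The centroid step is what allows a fractional, subpixel answer: the slightly right-heavy intensity pulls the column estimate past the integer peak at 5.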
42. Here is a plot of densely packed laser spots accumulated over a 100 mm x 100 mm flat region.
43. Even with the smoothed, subpixel spot-center-location strategy, the data are rough.
44. Based on CSM (camera-space manipulation, discussed later), the raw nominal physical coordinates are mapped here.
45. Although the plate on which they fall is flat, random variation causes more than a millimeter of average deviation from a common flat plane.
46. Averaging about 50 individual spot-center results per 20 mm x 20 mm region, over 25 separate regions, reveals the flatness of the actual plate.
47. So redundancy is our friend.
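The averaging effect of slides 45-47 can be illustrated numerically. The 25-regions-of-50-spots layout mirrors the slide, but the Gaussian scatter model and its sigma are assumptions chosen to roughly reproduce the "more than a millimeter" per-spot deviation:

```python
import numpy as np

# Illustrative model of slides 45-47: 25 regions x ~50 spots each, with
# (assumed) Gaussian scatter of the assessed spot height about a flat plane.
rng = np.random.default_rng(0)
scatter_mm = 1.4                                    # assumed per-spot sigma
spots = rng.normal(0.0, scatter_mm, size=(25, 50))  # z-deviation from plane, mm
per_spot = np.abs(spots).mean()                     # over 1 mm on average
per_region = np.abs(spots.mean(axis=1)).mean()      # shrinks roughly as 1/sqrt(50)
print(f"per-spot {per_spot:.2f} mm, per-region {per_region:.2f} mm")
```

Averaging 50 independent results shrinks the random error by about a factor of seven, which is why the regional averages recover the flatness of the actual plate.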
48. This is one advantage that machines have: we can accumulate and match, among the participating cameras, as many laser spots as desired in advance of the introduction of the manipulator.
49. This ability is due to the pan/tilt re-directability of the laser-pointing base, together with the ability to acquire multiple differenced images in a very short period of time.
50. A similar averaging effect, though for different reasons, is applied on the positioning or CSM side.
51. Sam Chen's tests of the precision of the whole system occurred over the indicated, large region of physical space.
52. The target plate was located throughout the region, and its orientation was varied.
53. The mean error normal to the plate surface (the only component that could be assessed precisely) was 0.0 mm.
54. The std. dev. of the error was about 0.1 mm and, importantly, the range was
55. ±0.3 mm.
56. ±0.3 mm, about 1/10 pixel.
57. Return to the discussion of the asymptotic-limit region of camera-space mapping that makes this level of precision possible.
59. Any point (x, y, z) that is in focus and in view of our camera will have a mapping: an actual position, or pair of coordinates, in camera space. The actual relationship here depends upon the lens and electronics, and is complex, almost impossible to determine globally.
60. xc = fx(x, y, z), yc = fy(x, y, z)
61. As a cue enters the camera's field of view, its (x, y, z) coordinates move as a function of the robot's internal joint angles.
62. Consider any physical-space point (xo, yo, zo) that happens to lie along the camera's focal axis.
65. xc = fx(xo, yo, zo) = 0, yc = fy(xo, yo, zo) = 0
68. Consider x = xo + Δx, y = yo + Δy, z = zo + Δz.
72. For sufficiently small Δx, Δy, Δz: xc = A11 Δx + A12 Δy + A13 Δz, yc = A21 Δx + A22 Δy + A23 Δz.
74. The constraints are due to the radial symmetry of the lenses.
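The linearization of slides 59-74 can be sketched numerically. A simple pinhole model (with an assumed focal scale f = 1000) stands in here for the camera's actual, globally unknowable mapping; the A matrix is recovered by finite differences about the on-axis point:

```python
import numpy as np

# A pinhole model stands in for the camera's actual (complex, globally
# unknowable) lens/electronics mapping; f is an assumed focal-length scale.
f = 1000.0
def fx(x, y, z): return f * x / z
def fy(x, y, z): return f * y / z

# A point on the focal axis: x = y = 0, so xc = yc = 0 there.
xo, yo, zo = 0.0, 0.0, 500.0

# Build A_ij by central finite differences about (xo, yo, zo).
h = 1e-4
A = np.zeros((2, 3))
for j, e in enumerate(np.eye(3)):
    p_plus = np.array([xo, yo, zo]) + h * e
    p_minus = np.array([xo, yo, zo]) - h * e
    A[0, j] = (fx(*p_plus) - fx(*p_minus)) / (2 * h)
    A[1, j] = (fy(*p_plus) - fy(*p_minus)) / (2 * h)

# For small displacements the linear model matches the full mapping closely.
d = np.array([0.3, -0.2, 0.5])                     # small Dx, Dy, Dz
lin = A @ d
full = np.array([fx(xo + d[0], yo + d[1], zo + d[2]),
                 fy(xo + d[0], yo + d[1], zo + d[2])])
print(np.round(A, 4))   # on-axis: A13 = A23 = 0 and A11 = A22 for this
print(np.round(lin, 4), np.round(full, 4))   # radially symmetric model
```

Note that for this radially symmetric model the on-axis A matrix is constrained (A13 = A23 = 0, A11 = A22), echoing slide 74's remark about constraints from the radial symmetry of the lenses.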
82. Rigid body.
83. Nominal kinematics.
84. Actual kinematics.
85. EP / AP.
92. Example of the development of a homogeneous transformation matrix for a 3-axis robot.
93. Note that the first axis of rotation is vertical.
94. Stationary base frame.
95. Moving end-member frame.
97. This is the displacement vector to point P, with respect to and referred to the stationary 0 frame.
100. This is the displacement vector to point P, with respect to and referred to the end-most 3 frame.
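The construction of slides 92-101 can be sketched as below: a homogeneous transformation for a 3-axis robot with a vertical first axis, used to refer the displacement vector to point P to either the stationary 0 frame or the end-most 3 frame. The joint axes, link lengths (L1-L3), and angles here are made-up illustrations, not the lecture's robot:

```python
import numpy as np

def rot_z(th):
    """Homogeneous rotation about a vertical (z) axis."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def rot_y(th):
    """Homogeneous rotation about a horizontal (y) axis."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1.0]])

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Illustrative 3-axis arm: vertical first axis, then two horizontal joints.
L1, L2, L3 = 0.5, 0.4, 0.3                # made-up link lengths
th1, th2, th3 = np.deg2rad([30, 45, -20]) # made-up joint angles
T01 = rot_z(th1) @ trans(0, 0, L1)        # frame 1 after the vertical joint
T12 = rot_y(th2) @ trans(L2, 0, 0)
T23 = rot_y(th3) @ trans(L3, 0, 0)
T03 = T01 @ T12 @ T23                     # base (0) to end-most (3) frame

# Displacement vector to point P referred to the end-most 3 frame ...
p3 = np.array([0.1, 0.0, 0.0, 1.0])
# ... and the same point referred to the stationary 0 frame:
p0 = T03 @ p3
print(np.round(p0[:3], 3))
```

Chaining the per-joint transforms is what lets the same physical point P be expressed in either frame: T03 carries 3-frame coordinates to 0-frame coordinates, and its inverse carries them back.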