Transcript and Presenter's Notes

Title: Removing Shadows From Images


1
Removing Shadows From Images
• G. D. Finlayson1, S. D. Hordley1 & M. S. Drew2

1School of Information Systems, University of East Anglia, UK
2School of Computer Science, Simon Fraser University, Canada
2
Overview
• Introduction
• Shadow-free grey-scale images
  - Illuminant invariance at a pixel
• Shadow-free colour images
  - Removing shadow edges using illumination invariance
  - Re-integrating edge maps
• Results and future work
3
The Aim: Shadow Removal
We would like to go from a colour image with
shadows, to the same colour image, but without
the shadows.
4
Why Shadow Removal?
• For computer vision: improved object tracking, segmentation, etc.
• For image enhancement: creating a more pleasing image
• For scene re-lighting: to change, for example, the lighting direction
5
What is a shadow?
(Figure: a region lit by sunlight and sky-light, and a shadowed region lit by sky-light only.)
A shadow is a local change in illumination
intensity and (often) illumination colour.
6
Removing Shadows
So, if we can factor out the illumination locally
(at a pixel) it should follow that we remove the
shadows.
So, can we factor out illumination locally? That
is, can we derive an illumination-invariant
colour representation at a single image pixel?
Yes, provided that our camera and illumination
satisfy certain restrictions.
7
Conditions for Illumination Invariance
(1) If the sensors can be represented as delta functions (they respond only at a single wavelength),
(2) and the illumination is restricted to the Planckian locus,
(3) then we can find a 1-D co-ordinate, a function of the image chromaticities, which is invariant to illuminant colour and intensity.
(4) This gives us a grey-scale representation of our original image, but without the shadows (it takes us a third of the way to the goal of this talk!)
8
Image Formation
Camera responses depend on three factors: the light (E), the surface (S), and the sensors (R, G, B).
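The equation itself is an image on the slide; a standard statement of this image-formation model, as used in the related literature, is roughly

\rho_k = \sigma \int E(\lambda)\, S(\lambda)\, Q_k(\lambda)\, d\lambda, \qquad k \in \{R, G, B\}

where \sigma is a shading/intensity factor, E the illuminant spectral power distribution, S the surface reflectance and Q_k the k-th sensor sensitivity (these symbols are assumptions; the slide's exact notation is not recoverable from the transcript).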
9
Using Delta Function Sensitivities

Delta functions select single wavelengths
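Modelling each sensitivity as a delta function, Q_k(\lambda) = q_k\, \delta(\lambda - \lambda_k), the integral above collapses to a single wavelength; a sketch of the standard simplification:

\rho_k = \sigma\, E(\lambda_k)\, S(\lambda_k)\, q_k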
10
Characterising Typical Illuminants
Most typical illuminants lie on, or close to, the
Planckian locus (the red line in the figure)
So, let's represent illuminants by their
equivalent Planckian black-body illuminants ...
11
Planckian Black-body Radiators
Here I controls the overall intensity of the light, T is the temperature, and c1, c2 are constants.
But, for typical illuminants, c2 ≫ λT, so Planck's equation can be approximated as follows.
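The formulas themselves are images on the slides; Planck's law and the Wien-style approximation referred to here are, roughly,

E(\lambda, T) = I\, c_1\, \lambda^{-5} \left( e^{c_2/(\lambda T)} - 1 \right)^{-1} \;\approx\; I\, c_1\, \lambda^{-5}\, e^{-c_2/(\lambda T)} \quad \text{when } c_2 \gg \lambda T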
12
How good is this approximation?
(Figure: Planck's law compared with its approximation at 2500 K, 5500 K and 10000 K.)
13
Back to the image formation equation
For delta-function sensors and Planckian illumination we have a response that factors into a surface term and a light term.
Taking the log of both sides separates these terms additively (see the equations below).
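The slide's equation is an image; combining the delta-function and approximated Planckian expressions above gives, roughly,

\rho_k = \sigma\, \underbrace{q_k\, S(\lambda_k)}_{\text{surface (and sensor)}}\; \underbrace{I\, c_1\, \lambda_k^{-5}\, e^{-c_2/(\lambda_k T)}}_{\text{light}}

and, taking logs,

\log \rho_k = \log\big(\sigma\, q_k\, c_1\, \lambda_k^{-5}\, S(\lambda_k)\big) + \log I - \frac{c_2}{\lambda_k T}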
14
Summarising for the three sensors
Here the subscript s denotes dependence on reflectance, k, a, b and c are constants, and T is the illuminant temperature.
Each log response splits into a constant independent of the sensor, a variable dependent only on reflectance, and a variable dependent on the illuminant.
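The slide's exact a, b, c notation cannot be recovered from this transcript, but the three-way split it describes corresponds roughly to

\log \rho_k = \underbrace{\log(\sigma I)}_{\text{independent of sensor}} + \underbrace{s_k}_{\text{depends only on reflectance}} - \underbrace{\frac{c_2}{\lambda_k T}}_{\text{depends on illuminant}}, \qquad s_k \equiv \log\big(q_k\, c_1\, \lambda_k^{-5}\, S(\lambda_k)\big)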
15
Factoring out the illumination
First, let's calculate the log-opponent chromaticities.
Then, with some algebra, we find that there exists a weighted difference of log-opponent chromaticities that depends only on the surface reflectance.
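The algebra appears only as images on the slides; in the notation above, the log-opponent chromaticities and the invariant combination are roughly

\chi_1 = \log\frac{\rho_R}{\rho_G} = (s_R - s_G) - \frac{c_2}{T}\Big(\frac{1}{\lambda_R} - \frac{1}{\lambda_G}\Big), \qquad
\chi_2 = \log\frac{\rho_B}{\rho_G} = (s_B - s_G) - \frac{c_2}{T}\Big(\frac{1}{\lambda_B} - \frac{1}{\lambda_G}\Big)

so the weighted difference

\chi_1 - \frac{1/\lambda_R - 1/\lambda_G}{1/\lambda_B - 1/\lambda_G}\, \chi_2

cancels the temperature term and depends only on the surface terms s_R, s_G, s_B.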
16
An Example: Delta-Function Sensitivities
Narrow-band (delta-function sensitivities)
Log-opponent chromaticities for 6 surfaces under
9 lights
17
Deriving the Illuminant Invariant
Log-opponent chromaticities for 6 surfaces under
9 lights
Rotate chromaticities
This axis is invariant to illuminant colour
18
A real example with real camera data
Normalized sensitivities of a SONY DXC-930 video
camera
Log-opponent chromaticities for 6 surfaces under
9 different lights
19
Deriving the invariant
Log-opponent chromaticities for 6 surfaces under
9 different lights
Rotate chromaticities
The invariant axis is now only approximately
illuminant invariant (but hopefully good enough)
20
Some Examples
21
A Summary So Far
With certain restrictions, from a 3-band colour image we can derive a 1-D grey-scale image which is illuminant invariant, and so shadow free.
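A minimal sketch of how such an invariant grey-scale image could be computed, assuming the camera's invariant direction is already known as an angle theta (the value used in the comment is made up; the paper describes how to calibrate it):

import numpy as np

def invariant_greyscale(rgb, theta):
    """Project log-opponent chromaticities onto the illuminant-invariant direction.

    rgb   : float array, shape (H, W, 3), linear camera RGB, values > 0
    theta : invariant angle (radians) for this camera, found by calibration
    """
    eps = 1e-6                                  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    chi1 = np.log((r + eps) / (g + eps))        # log-opponent chromaticity 1
    chi2 = np.log((b + eps) / (g + eps))        # log-opponent chromaticity 2
    # 1-D coordinate along the direction orthogonal to the illuminant variation
    return chi1 * np.cos(theta) + chi2 * np.sin(theta)

# Example (hypothetical angle):
# grey = invariant_greyscale(img.astype(np.float64) / 255.0, theta=2.7)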
22
What's left to do?
To complete our goal we would like to go back to
a 3-band colour image, without shadows
We will look next at how the invariant
representation can help us to do this ...
23
Looking at edge information
Consider an edge map of the colour image ...
And an edge map of the 1-d invariant image ...
These are approximately the same, except that the
invariant edge map has no shadow edges
24
Removing Shadow Edges
From these two edge maps we can remove shadow
edges thus
Edges ∈ ∇Iorig ∧ ∈ ∇Iinv
(Valid edges are those present both in the original image and in the invariant image; edges in the original but not in the invariant are shadow edges.)
25
Using Shadow Edges
So, now we have the edge map of the image we would like to obtain (the edge map of the original image with shadow edges set to zero).
Can we go from this edge information back to the image we want? That is, can we re-integrate the edge information?
26
Re-integrating Edge Information
Of course, re-integrating a single edge map will give us a grey-scale image.
So, we must apply the procedure to each band of the colour image separately.
(Figure: the red, green and blue colour channels, their edge maps with shadow edges removed, and the re-integrated result alongside the original.)
27
Re-Integrating Edge Information
The re-integration problem has been studied by a number of researchers:
- Horn
- Blake et al.
- Weiss, ICCV '01 (least-squares)
- Land et al. (Retinex)
- ...
The aim is typically to derive a reflectance image from an image in which illumination and reflectance are confounded.
28
Weiss Method
Weiss used a sequence of time-varying images of a fixed scene to determine the reflectance edges of the scene.
His method works by determining, from the image sequence, edges which correspond to a change in reflectance (Weiss' definition of a reflectance edge is an edge which persists throughout the sequence).
Given the reflectance edges, Weiss re-integrates the information to derive a reflectance image.
In our case, we can borrow Weiss' re-integration procedure to recover our shadow-free image.
29
Re-integrating Edge Information
Let Ij(x,y) represent the log of a single band of the colour image.
We first calculate its derivatives with the shadow edges removed, where:
∇x is the derivative operator in the x direction,
∇y is the derivative operator in the y direction, and
T is the operator that sets shadow edges to zero.
These thresholded derivatives (sketched below) summarise the process of detecting and removing shadow edges.
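The formula itself is an image on the slide; with the operators just defined, the calculated quantity is roughly the pair of thresholded gradient fields

T\big(\nabla_x I_j(x,y)\big), \qquad T\big(\nabla_y I_j(x,y)\big)

where T zeroes the gradient wherever a shadow edge has been detected (using the invariant image's edge map).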
30
Re-integrating Edge Information
To recover the shadow-free image we want to invert this equation.
To do this, we first form the Poisson equation (sketched below), which we solve subject to Neumann boundary conditions.
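The equations on this slide are likewise images; a Poisson formulation consistent with the surrounding text is roughly

\nabla^2 \tilde{I}_j = \nabla_x\, T\big(\nabla_x I_j\big) + \nabla_y\, T\big(\nabla_y I_j\big)

where \tilde{I}_j is the shadow-free log channel to be recovered, solved subject to Neumann boundary conditions.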
31
Re-integrating Edge Information
We solve by applying the inverse Laplacian.
Note that the inverse operator involves no threshold.
Applying this process to each of the three channels recovers a log image without shadows.
32
A Summary of Re-integration
1. Iorig = original colour image, Iinv = invariant image
2. For j = 1, 2, 3: Ij,orig = jth band of Iorig
3. Remove shadow edges: keep Edges ∈ ∇Ij,orig ∧ ∈ ∇Iinv
4. Differentiate the thresholded edge map
5. Re-integrate the image
6. Repeat steps 3-5 for each band j (see the code sketch below)
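A rough, self-contained sketch of this loop, assuming a homogeneous-Neumann Poisson solve via the discrete cosine transform as the re-integration step and a hypothetical detect_shadow_edges helper playing the role of the thresholding operator T; this is one possible implementation, not the authors' code:

import numpy as np
from scipy.fft import dctn, idctn

def solve_poisson_neumann(div):
    """Solve lap(u) = div with homogeneous Neumann boundary conditions via the DCT."""
    h, w = div.shape
    d = dctn(div, type=2, norm='ortho')
    kx = 2.0 * np.cos(np.pi * np.arange(w) / w) - 2.0
    ky = 2.0 * np.cos(np.pi * np.arange(h) / h) - 2.0
    denom = kx[None, :] + ky[:, None]
    denom[0, 0] = 1.0          # the solution is defined up to a constant; pin the mean
    u = d / denom
    u[0, 0] = 0.0
    return idctn(u, type=2, norm='ortho')

def reintegrate_band(log_band, shadow_mask):
    """Zero the band's gradients on shadow edges (operator T), then re-integrate."""
    gx = np.diff(log_band, axis=1, append=log_band[:, -1:])   # forward differences
    gy = np.diff(log_band, axis=0, append=log_band[-1:, :])
    gx[shadow_mask] = 0.0
    gy[shadow_mask] = 0.0
    div = (np.diff(gx, axis=1, prepend=gx[:, :1]) +            # divergence of the
           np.diff(gy, axis=0, prepend=gy[:1, :]))             # thresholded gradients
    return solve_poisson_neumann(div)

# Steps 2-6: for each colour band j, detect shadow edges against the invariant
# image, remove them from the gradients, and re-integrate:
# for j in range(3):
#     band = np.log(image[..., j] + 1e-6)
#     mask = detect_shadow_edges(image[..., j], invariant)     # hypothetical helper
#     shadow_free_log[..., j] = reintegrate_band(band, mask)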
33
Some Remarks
The re-integration step is unique only up to an additive constant (a multiplicative constant in linear image space).
Fixing this constant amounts to applying a correction for illumination colour to the image; thus we choose suitable constants to correct for the prevailing scene illuminant.
In practice, the method relies upon having an
effective thresholding step T, that is, on
effectively locating the shadow edges.
As we will see, our shadow edge detection is not
yet perfect
34
Shadow Edge Detection
Shadow edge detection consists of the following steps (a code sketch follows the list):
1. Edge-detect a smoothed version of the original image (per channel) and of the invariant image, using Canny or SUSAN.
2. Threshold to keep strong edges in both images.
3. Shadow edge = edge in the original that is NOT in the invariant.
4. Apply a suitable morphological filter to thicken the edges resulting from step 3.
This typically identifies the shadow edges plus some false edges.
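A minimal sketch of these four steps using OpenCV's Canny edge detector; the blur size, Canny thresholds and dilation kernel are illustrative assumptions, not the authors' settings:

import cv2
import numpy as np

def detect_shadow_edges(channel, invariant, lo=50, hi=150):
    """Steps 1-4: Canny edges of smoothed images; keep edges of the original
    that have no counterpart in the invariant image; thicken the result."""
    to_u8 = lambda x: np.uint8(255 * (x - x.min()) / max(np.ptp(x), 1e-6))
    ch = cv2.GaussianBlur(to_u8(channel), (5, 5), 0)      # step 1: smooth
    inv = cv2.GaussianBlur(to_u8(invariant), (5, 5), 0)
    e_orig = cv2.Canny(ch, lo, hi) > 0                    # steps 1-2: strong edges
    e_inv = cv2.Canny(inv, lo, hi) > 0
    shadow = e_orig & ~e_inv                              # step 3: in original, not invariant
    kernel = np.ones((5, 5), np.uint8)                    # step 4: morphological thickening
    return cv2.dilate(shadow.astype(np.uint8), kernel) > 0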
35
An Example
(Figure: original image, invariant image, detected shadow edges, shadow removed.)
36
A Second Example
(Figure: original image, invariant image, detected shadow edges, shadow removed.)
37
More Examples
(Figure: original image, invariant image, detected shadow edges, shadow removed.)
38
More Examples
(Figure: original image, invariant image, detected shadow edges, shadow removed.)
39
A Summary
We have presented a method for removing shadows from images.
The method uses an illuminant-invariant 1-D image representation to identify shadow edges.
From the shadow-free edge map we re-integrate to recover a shadow-free colour image.
Initial results are encouraging: we are able to remove shadows even when the shadow edge definition is not perfect.
40
Future Work
We are currently investigating ways to more reliably identify shadow edges, or to derive a re-integration which is more robust to errors (Retinex?).
Currently, deriving the illuminant-invariant image requires some knowledge of the capture device's characteristics. We show in the paper how to determine these characteristics empirically, and we are working on making this process more robust.
41
Acknowledgements
The authors would like to thank Hewlett-Packard
Incorporated for their support of this work.