Title: Towards Lightweight Camera Networks
1 Towards Lightweight Camera Networks That Operate on Symbolic Information
Andreas Savvides (UCLA graduate), EE, Yale
2 High-level Overview
Software: a sensory grammar hierarchy whose output is in semantic form.
Hardware: a custom imager architecture.
3 Hardware
How are normal imagers designed? Let's take a look at our good friend Cyclops.
4 [figure: Cyclops imager with numbered component callouts]
5 CMOS vs CCD
A CCD (charge-coupled device) is a type of semiconductor that can store packets of electrical charge in tiny surface regions called potential wells. The charges on each row are "coupled" to those on the row above, so when one row moves down, the next moves down to fill its old space.
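As a rough illustration of that row-by-row coupling, here is a minimal Python sketch of bucket-brigade readout; the charge values and array size are made up for the example.

```python
# Minimal sketch of CCD bucket-brigade readout: the bottom row of charge packets
# is transferred out, and every remaining row shifts down to fill the vacated wells.
# Charge values (arbitrary units) are illustrative assumptions.
charge_rows = [
    [5, 9, 2],   # top row of photosite charge packets
    [7, 1, 4],
    [3, 8, 6],   # bottom row, adjacent to the readout register
]

readout_order = []
while charge_rows:
    readout_order.append(charge_rows.pop())  # bottom row is read out first
    # the rows still in the list now sit one position lower, mimicking the shift

print(readout_order)  # [[3, 8, 6], [7, 1, 4], [5, 9, 2]]
```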
6 It is technically feasible but not economical to use the CCD manufacturing process to integrate other camera functions, such as the clock drivers, timing logic, and signal processing, on the same chip as the photosites. These are normally put on separate chips, so CCD cameras contain several chips, often as many as 8 and not fewer than 3.
CMOS refers to the manufacturing technique; it is not to be confused with the sensor type, the active pixel sensor (APS).
It is important that the active circuitry in a pixel take up as little space as possible to leave more room for the photodetector (the fill factor). A typical APS pixel contains three transistors as well as a photodetector.
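To make the fill-factor idea concrete, here is a tiny worked example in Python; the 10 µm pixel pitch and photodiode dimensions are assumptions chosen only for illustration.

```python
# Hypothetical 3T active-pixel geometry, purely illustrative numbers.
pixel_area = 10.0 * 10.0        # um^2: an assumed 10 um x 10 um pixel
photodiode_area = 4.0 * 7.0     # um^2 left over after the three readout transistors
fill_factor = photodiode_area / pixel_area
print(f"fill factor = {fill_factor:.0%}")   # -> fill factor = 28%
```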
7 So what are the people at Yale ENALAB proposing? Address-Event Representation (AER) is a way of representing data as an ordered train of addresses, each of them related to the occurrence of an event. It is biologically inspired (neural networks, massive parallelism). We are no longer getting a full image but rather a sequence of address-events over time, in particular the address of each pixel that met a certain criterion. The exact criterion that triggers an event is application dependent. The frequency of recurring addresses indicates the intensity of that particular event.
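A minimal sketch of what such an address-event stream could look like in software, assuming a simple (x, y, timestamp) record; the field names and values are mine, not the ENALAB format.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AddressEvent:
    x: int       # column address of the pixel that fired
    y: int       # row address of the pixel that fired
    t_us: int    # arrival time in microseconds, assigned by the receiver

# Instead of a full frame, the imager delivers an ordered train of addresses over time.
stream = [
    AddressEvent(x=12, y=40, t_us=105),
    AddressEvent(x=12, y=40, t_us=322),   # the same address recurring quickly
    AddressEvent(x=3,  y=7,  t_us=990),
]

# The recurrence frequency of an address reflects the intensity of that event source.
rates = Counter((e.x, e.y) for e in stream)
print(rates.most_common(1))   # [((12, 40), 2)]
```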
8 Benefits
- AER imagers consume µW of power: only events of interest are detected, so there is no need to poll the imager for information; pixels generate events, think of them as hardware triggers/interrupts.
- Image features are extracted at the pixel level, yielding simpler data processing: a fundamentally different computation model that is faster and more lightweight.
- Data is automatically prioritized according to relevance, and irrelevant data is filtered out of the stream, providing compression.
- Due to the nature of the data, reconstructing an image can be made substantially hard, so AER imagers are privacy preserving.
9 AE image sensor
- Caveat: too expensive to fabricate, so for now they are using emulation, with commercial, off-the-shelf (COTS) cameras providing a detailed behavioural model of an AER imager architecture.
- So what do they have?
- A modified version of their XYZ sensor node
- The ALOHA AER imager, with a COTS camera for emulation
10 ALOHA Imager
Pixels act as capacitive tanks that generate an integration voltage. Once the voltage reaches a set threshold, an event is signaled and the (X, Y) coordinate of the triggered pixel is output.
Why ALOHA? For the networking people in the crowd: it uses a channel access protocol similar to ALOHA to transmit triggered events, mainly to simplify circuitry and reduce communication latency.
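Below is a small Python sketch of the integrate-and-fire behaviour described above, with an arbitrary threshold and illumination values; it models a single pixel and ignores the ALOHA channel-access side.

```python
THRESHOLD = 1.0   # assumed trigger voltage (arbitrary units)

def integrate_and_fire(light, dt, steps):
    """Yield event times for one pixel under constant illumination `light`."""
    v = 0.0
    for step in range(steps):
        v += light * dt           # the capacitive tank integrates photocurrent
        if v >= THRESHOLD:
            yield step * dt       # event signaled: the pixel's (X, Y) address would be sent
            v = 0.0               # the tank resets after the event

bright = list(integrate_and_fire(light=0.5, dt=0.01, steps=1000))
dim    = list(integrate_and_fire(light=0.1, dt=0.01, steps=1000))
print(len(bright), len(dim))      # the brighter pixel fires events more often (5 vs 1)
```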
11 Image Reconstruction
Histogram reconstruction: a frequency counter per pixel; stronger lighting makes a pixel generate higher event rates. Inter-event reconstruction: the time difference between consecutive events is inversely proportional to light intensity.
In general, the goal of AE imagers is not the image itself; there is no notion of a frame in AER. Reconstruction of images is difficult and is only used to gain insight into the operation of the AER sensor, NOT for processing.
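As a toy illustration of both reconstruction approaches, here is a Python sketch over a made-up event stream of (address, time) pairs; the numbers are assumptions, not measured data.

```python
from collections import defaultdict

# (pixel address, event time) pairs; values are illustrative only.
events = [((1, 1), 10), ((2, 2), 15), ((1, 1), 20), ((1, 1), 30), ((2, 2), 95)]

# Histogram reconstruction: count events per address; brighter pixels fire more often.
histogram = defaultdict(int)
for addr, _ in events:
    histogram[addr] += 1

# Inter-event reconstruction: the gap between consecutive events at an address is
# inversely proportional to intensity, so estimate intensity as 1 / mean gap.
last_seen, gaps = {}, defaultdict(list)
for addr, t in events:
    if addr in last_seen:
        gaps[addr].append(t - last_seen[addr])
    last_seen[addr] = t
intensity = {addr: 1.0 / (sum(g) / len(g)) for addr, g in gaps.items()}

print(dict(histogram))   # {(1, 1): 3, (2, 2): 2}
print(intensity)         # (1, 1) has shorter gaps, hence the higher estimate
```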
12 Sample Application: Pattern Recognition
OLD WAY: capture a frame into memory, process the frame pixel by pixel, compare the data against a database.
NEW WAY: process each event, compare the feature to the database.
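A small Python sketch contrasting the two pipelines; the tiny "database" of known addresses and the toy frame are placeholders of my own, not the actual recognition method.

```python
DATABASE = {(2, 1), (0, 3)}   # stands in for stored features/patterns

def old_way(frame):
    """Capture a full frame, walk it pixel by pixel, compare against the database."""
    matches = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):          # every pixel is touched, even empty ones
            if value > 0 and (x, y) in DATABASE:
                matches.append((x, y))
    return matches

def new_way(events):
    """Process each address-event as it arrives and compare the feature directly."""
    return [addr for addr in events if addr in DATABASE]

frame = [[0, 0, 0, 0],
         [0, 0, 9, 0],
         [0, 0, 0, 0],
         [7, 0, 0, 0]]
events = [(2, 1), (0, 3)]     # only the pixels that actually triggered
print(old_way(frame), new_way(events))   # same matches, far less data handled
```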
13 Anyone heard of the game Battleship?
[figure: Battleship-style grid with cells labeled "hit", "hit", "no hit", "event"]