Title: Gaigen talk
1. A Geometric Algebra Implementation Generator
Daniel Fontijne, University of Amsterdam
2.
- Part I: How to implement GA (the straightforward way)
- Part II: Gaigen optimizations
- Discussion, Conclusion
3. Goal of this talk
To take you from using old-school linear algebra for solving and implementing geometric problems to using geometric algebra.
For that goal to succeed, you should at least believe something like this:
"GA is nice for mathematicians, but it is too impractical for real-world programming applications. It uses too many coordinates (2^n for n-dimensional space). Maybe it can help me reason about geometric problems, but when it comes to programming, I'll use what I already had."
4. What I want you to believe after this talk
- GA is efficient. It gives us the ability to automatically generate implementations of every possible geometric construct in an optimized way, saving you lots of time.
- By exploiting the sparseness of the grade parts and using some other common optimization techniques, you can achieve computing performance that is somewhat lower than, but comparable to, traditional LA approaches.
5. Design goals of Gaigen
- A general implementation of geometric algebra, for scientific research and experimentation.
- But also for real-world applications, so:
- Good memory/time performance.
Limitations:
- Only low-dimensional (< 8) algebras are optimized; high-dimensional algebras maybe later.
- No extravagant mathematical stuff, just the basics.
6. Part I: How to implement GA (the straightforward way)
7. Representation of a multivector (the general object of computation in GA)
As an array of coordinates. These coordinates refer to a basis. An example of such a basis in 3D is:
- 1 : scalar part (grade 0)
- e1, e2, e3 : vector part (grade 1)
- e2e3, e3e1, e1e2 : bivector part (grade 2)
- e1e2e3 : trivector part (grade 3)
An array of 8 floats like [1, 2, 3, 4, 5, 6, 7, 8] would then represent the 3D multivector
A = 1 + 2e1 + 3e2 + 4e3 + 5e23 + 6e31 + 7e12 + 8e123.
Symbolically, we will label the coordinates of a multivector A as A0, A1, A2, A3, A23, A31, A12, A123.
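A minimal sketch of this representation in C (illustrative names, not Gaigen's actual types): a 3D multivector is simply 8 floats, stored in the basis order listed above.

/* Coordinates of a 3D multivector, in the basis order
   1, e1, e2, e3, e23, e31, e12, e123. */
typedef struct {
    float c[8];
} mv3d;

/* Example: A = 1 + 2e1 + 3e2 + 4e3 + 5e23 + 6e31 + 7e12 + 8e123. */
static const mv3d A = { { 1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f } };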
8. How to compute a product of two multivectors (1/4)
We will show how to simplify:
- computing an arbitrary (geometric, outer, inner, scalar) product of two multivectors, to
- computing the geometric product of two multivectors, to
- a matrix-vector multiplication, to
- expanding a multivector into a product matrix, to
- computing the geometric product of two unit orthogonal basis blades.
9. How to compute a product of two multivectors (2/4)
- First of all, which products do we want to implement?
- Geometric product
- Outer product
- Scalar product
- A bunch of inner products
- All these products can be derived from the geometric product using simple rules (see the sketch below).
- E.g. the outer product of two blades of grade a and grade b is the grade (a+b) part of the geometric product of those blades.
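As a sketch of such rules (standard grade-selection identities, written here in LaTeX; Gaigen's exact set of inner products may differ), for blades A_a of grade a and B_b of grade b:

\begin{aligned}
A_a \wedge B_b &= \langle A_a B_b \rangle_{a+b} && \text{(outer product)}\\
A_a \rfloor B_b &= \langle A_a B_b \rangle_{b-a} && \text{(left contraction)}\\
A_a * B_b &= \langle A_a B_b \rangle_{0} && \text{(scalar product)}
\end{aligned}

where \langle \cdot \rangle_k selects the grade-k part of a multivector.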
10. How to compute a product of two multivectors (3/4)
An n-dimensional geometric algebra is actually just a 2^n-dimensional linear algebra. What is really going on is that geometric algebra is simply a straitjacket on that linear algebra: it restricts you to operations which make sense geometrically in the original n-dimensional space. Thus a GA product is a linear transformation of one multivector by another, according to rules defined by geometric algebra. The product of two multivectors can be computed using an appropriate 2^n x 2^n matrix expansion of one of the arguments.
11. How to compute a product of two multivectors (4/4)
We denote the geometric product matrix expansion of a multivector A by A_G; its entries are the coordinates of A (labelled symbolically as above), with appropriate signs. We can then compute C = AB as the ordinary matrix-vector multiplication
C = A_G B
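A minimal sketch of this matrix-vector view in 3D (illustrative names, not Gaigen's API), where the 8x8 expansion matrix aG of A is assumed to have been filled in already (see the next slides):

/* Compute C = A B as the matrix-vector product C = A_G B,
   where aG is the 8x8 geometric product expansion of A. */
void mv3d_matrix_vector_gp(const float aG[8][8], const float b[8], float c[8]) {
    for (int g = 0; g < 8; g++) {
        c[g] = 0.0f;
        for (int col = 0; col < 8; col++)
            c[g] += aG[g][col] * b[col];
    }
}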
12. How to construct a matrix representation of a multivector to compute a product (1/2)
If we can compute the geometric product matrix expansion, we can compute the matrix expansions for all other products using simple selection rules; compare, for example:
(figure: geometric product matrix expansion)
(figure: outer product matrix expansion)
13. How to construct a matrix representation of a multivector to compute a product (2/2)
Suppose we can compute the geometric product of two arbitrary basis elements: e_a e_b = ±e_g (the result is again a basis element, up to sign). This implies that (A_G)_{g,b} = ±A_a: coordinate A_a of A ends up, with that sign, at row g, column b of the expansion matrix.
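A sketch of how such an expansion matrix could be filled in (illustrative code, not Gaigen's generator). It assumes a hypothetical helper basis_blade_gp(a_idx, b_idx, &g_idx) that returns the sign of e_a e_b and stores the index of the resulting basis element; such a helper can be built from the bit-representation trick on the next slides plus a mapping between coordinate indices and bitmaps.

#include <string.h>

/* Hypothetical helper: returns the sign (+1, -1 or 0) of the geometric product
   of basis elements a_idx and b_idx, and stores the index of the resulting
   basis element in *g_idx (see the bitmap trick on the next slides). */
int basis_blade_gp(int a_idx, int b_idx, int *g_idx);

/* Fill the 8x8 geometric product expansion matrix aG of a 3D multivector a,
   so that the coordinate vector of (a b) equals aG times the coordinate vector of b. */
void mv3d_expand_gp_matrix(const float a[8], float aG[8][8]) {
    memset(aG, 0, 8 * 8 * sizeof(float));
    for (int a_idx = 0; a_idx < 8; a_idx++)
        for (int b_idx = 0; b_idx < 8; b_idx++) {
            int g_idx;
            int sign = basis_blade_gp(a_idx, b_idx, &g_idx);
            aG[g_idx][b_idx] += (float)sign * a[a_idx];
        }
}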
14. How to compute the geometric product of unit orthogonal basis blades (1/3)
- Basis blades can be factored back into basis vectors.
- Each of these basis vectors can be treated independently because of orthogonality.
- The basis vectors either survive the product, or annihilate each other.
- Annihilation results in multiplication of the result by +1, 0 or -1, depending on the signature of the basis vector.
15. How to compute the geometric product of unit orthogonal basis blades (2/3)
An example: compute the geometric product of e12 with e23:
(e1e2)(e2e3) = e1e3
Explanation: e1 survives the multiplication because it is only present in e12. e2 gets annihilated because it is present in both e12 and e23; the result is multiplied by the signature of e2 (+1). And like e1, e3 survives because it is only present in e23. So e12 e23 = e13.
16. How to compute the geometric product of unit orthogonal basis blades (3/3)
- If we represent each basis vector by a specific bit in a binary number (e1 = 001b, e2 = 010b, e3 = 100b), computing the geometric product of basis blades is exactly the xor operation on binary numbers:
- (e1e2)(e2e3) = e1e3
- 011b xor 110b = 101b
- We have to take care of the signs though:
- basis vectors have to be rearranged into a specific order before they can annihilate each other; this rearranging causes a sign change in the result, which can also be computed from the binary representation (see the sketch below).
- the signature of annihilated basis vectors can change the sign as well.
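A minimal sketch of this bit trick in C (illustrative, not Gaigen's actual code), assuming a Euclidean signature so that annihilated basis vectors always contribute a factor +1; for other signatures one would also multiply the sign by the signature of each basis vector in (a & b):

/* Geometric product of two unit orthogonal basis blades, represented as bitmaps
   (bit i set means basis vector e_(i+1) is present). Returns the sign (+1 or -1);
   the resulting basis blade is the xor of the two bitmaps. */
int basis_blade_gp_bitmap(unsigned int a, unsigned int b, unsigned int *result) {
    int sign = 1;
    /* Count the swaps needed to bring the basis vectors into canonical order;
       each swap flips the sign. */
    unsigned int a_shifted = a >> 1;
    while (a_shifted != 0) {
        unsigned int common = a_shifted & b;
        while (common != 0) {
            sign = -sign;
            common &= common - 1;   /* clear lowest set bit */
        }
        a_shifted >>= 1;
    }
    *result = a ^ b;
    return sign;
}

For example, basis_blade_gp_bitmap(0x3, 0x6, &r) returns +1 with r == 0x5, i.e. e12 e23 = e13, matching the example above.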
17. End of part I
- What have we accomplished so far?
- We can now implement a basic geometric algebra.
- We represent multivectors as arrays of coordinates.
- We can expand these to appropriate matrices to compute the products using matrix-vector multiplications.
- Other basic GA operations, such as addition and reversion, are either trivial to implement or too elaborate to treat in detail here.
18. Part II: Gaigen optimizations
19. Observation 1
- Often we use GA objects that occupy only specific grade parts:
- a vector is only grade 1
- a rotor is grade 0/2
- In most problems we deal with, we are interested only in objects that use at most half of the grade parts and coordinates (i.e. versors and blades).
Hypothesis 1
- If we can deal with the sparseness of the coordinates and grade parts efficiently, we might get a large memory/time performance boost. A sketch of how such grade part usage can be encoded follows below.
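For illustration (these names are not Gaigen's), grade part usage can be encoded as a small bitmask, one bit per grade:

/* Grade part usage of a 3D multivector, one bit per grade. */
#define GRADE_0 (1 << 0)   /* scalar part */
#define GRADE_1 (1 << 1)   /* vector part */
#define GRADE_2 (1 << 2)   /* bivector part */
#define GRADE_3 (1 << 3)   /* trivector part */

/* Sketch of a sparse multivector: only the coordinates of the grade parts
   flagged in 'gu' are stored, packed consecutively in 'c'. */
typedef struct {
    unsigned int gu;   /* grade usage, e.g. GRADE_1 for a vector,
                          GRADE_0 | GRADE_2 for a rotor */
    float c[8];        /* at most 8 coordinates needed in 3D; usually fewer are used */
} mv3d_sparse;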
20. Observation 2
Most computer programs spend >90% of their time in <10% of their code. This suggests that most programs using geometric algebra will spend >90% of their time on <10% of the product/multivector combinations.
Hypothesis 2
If we implement this small set of product/multivector combinations efficiently, we could get a large performance boost.
21. How to deal with sparseness (1/2)
In general, GA objects like versors and blades occupy all coordinates of certain grade parts and none of the others. Gaigen keeps track of the grade part usage of every multivector variable. E.g.
v = α e1 + β e2 + γ e3, grade part usage: 1 (vector).
R = s + α e1e2 + β e2e3 + γ e3e1, grade part usage: 0 (scalar) and 2 (bivector).
22. How to deal with sparseness (2/2)
Every time an operation is performed on multivector(s) and a new multivector is created as the result, Gaigen computes the grade part usage of this result. This can be done using lookup tables, simple binary functions or explicit computation, whatever is most efficient for the operation; a sketch of such a binary function is shown below. When we know which grade parts a multivector variable occupies, we don't require 2^n floating point values to store it.
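As an illustration of such a binary function (a sketch, not Gaigen's actual rule set): for the outer product, a grade-a part wedged with a grade-b part can only contribute to grade a+b, so the result's grade usage follows directly from the argument bitmasks.

/* Conservative estimate of the grade part usage of the outer product of two
   multivectors, given their grade usage bitmasks (bit g set = grade g present). */
unsigned int outer_product_grade_usage(unsigned int gu_a, unsigned int gu_b, int max_grade) {
    unsigned int gu_c = 0;
    for (int a = 0; a <= max_grade; a++) {
        if (!(gu_a & (1u << a))) continue;
        for (int b = 0; b <= max_grade; b++) {
            if (!(gu_b & (1u << b))) continue;
            if (a + b <= max_grade)
                gu_c |= 1u << (a + b);   /* grade a wedge grade b lands in grade a+b */
        }
    }
    return gu_c;
}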
23. How to optimize the products (1/4)
Gaigen does not explicitly expand the multivectors to their matrix representation. Instead, it generates specific source code which multiplies and adds the coordinates as required by the product. Of course, Gaigen has to take into account that it stores only the coordinates of non-zero grade parts when computing a product. The next slide shows an example of code generated by Gaigen, which computes the scalar product of any two multivectors from a 2D algebra.
24. How to optimize the products (2/4)
void e2gai_general_scp(const float **a, const float **b, float *c) {
    const float *an, *bn;
    memset(c, 0, sizeof(float) * 4);
    if ((an = a[0]))               /* grade 0 (scalar) part of a present? */
        if ((bn = b[0]))
            c[0] += an[0] * bn[0];
    if ((an = a[1]))               /* grade 1 (vector) part */
        if ((bn = b[1]))
            c[0] += an[0] * bn[0] + an[1] * bn[1];
    if ((an = a[2]))               /* grade 2 (bivector) part */
        if ((bn = b[2]))
            c[0] -= an[0] * bn[0];
}
25. How to optimize the products (3/4)
- We want to generate optimized functions for a specific set of product/multivector combinations, e.g. the outer product of a vector and a bivector.
- The user can specify this set of product/multivector combinations, depending on the application. This set should be the 10% of the combinations that is used 90% of the time.
- Gaigen recognizes what type of multivectors it has to multiply by looking at their grade part usage. It uses a lookup table to jump to an optimized function (if available); a sketch of such a dispatch follows below. The index into the lookup table is based on:
- the product (geometric, outer, scalar, etc.)
- the grade part usage of the two multivector arguments
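A sketch of what such a dispatch could look like (illustrative; the actual table layout and function names in Gaigen's generated code differ):

/* Signature shared by general and optimized product functions. */
typedef void (*product_func)(const float *a, const float *b, float *c);

enum { GP = 0, OP, SCP, IP, NUM_PRODUCTS };   /* example product set */
#define NUM_GRADE_USAGE 16                    /* 2^4 grade usage bitmasks in 3D */

/* Table filled by the code generator: an optimized function where one was
   generated, a general fallback everywhere else. */
static product_func lookup[NUM_PRODUCTS][NUM_GRADE_USAGE][NUM_GRADE_USAGE];

static void dispatch_product(int prod, unsigned int gu_a, const float *a,
                             unsigned int gu_b, const float *b, float *c) {
    lookup[prod][gu_a][gu_b](a, b, c);   /* jump to the (possibly optimized) implementation */
}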
26. How to optimize the products (4/4)
It is easy to derive optimized product functions from general product functions: simply leave out the parts you don't need. Here is an example of an optimized function Gaigen has generated. It multiplies a grade 0/2 object (like a rotor) with a grade 1 object (a vector):
void e3gai_opt_05_gp_02(const float *a, const float *b, float *c) {
    c[0] =  a[0] * b[0] + a[3] * b[1] - a[2] * b[2];
    c[1] = -a[3] * b[0] + a[0] * b[1] + a[1] * b[2];
    c[2] =  a[2] * b[0] - a[1] * b[1] + a[0] * b[2];
    c[3] =  a[1] * b[0] + a[2] * b[1] + a[3] * b[2];
}
Optimized functions do not use conditional jumps; they consist of one long list of floating point operations.
27. The outermorphism
Many transformation functions (e.g. rotations) are outermorphisms. Gaigen can compute a matrix representation of an outermorphism. Applying such a function to a multivector is then reduced to what one would do in the 'traditional' LA approach, but without the hassle, mistakes and misunderstanding. This is most useful when the same function is applied many times: then it is just as fast as the traditional LA approach. The outermorphism matrices can also be optimized for SIMD instruction sets more easily than the general products.
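As a sketch (illustrative, not Gaigen's API): once the 3x3 matrix of an outermorphism on the grade-1 part is known, applying it to a vector is an ordinary matrix-vector product; the matrices for the higher grade parts follow from the outermorphism property f(a^b) = f(a)^f(b).

/* Apply the grade-1 (vector) part of an outermorphism, given as a 3x3 matrix m,
   to a vector v. Higher-grade parts would be handled by per-grade matrices
   derived from m via the outermorphism property. */
void apply_om_vector(const float m[3][3], const float v[3], float result[3]) {
    for (int row = 0; row < 3; row++) {
        result[row] = 0.0f;
        for (int col = 0; col < 3; col++)
            result[row] += m[row][col] * v[col];
    }
}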
28. End of part II
What have we accomplished?
- We store only the coordinates of non-zero grade parts (exploiting sparseness).
- We store our product matrix expansion tables in the code itself.
- We compute products using only non-zero grade parts (again exploiting sparseness).
- We can use optimized functions for frequently used products.
- We can use the outermorphism to speed up certain transformation functions.
29. Gaigen user interface demo movie
Not available in this web version of the talk. Download it yourself from the Gaigen website.
30. Part IV: Applications
Applications of Gaigen so far:
- Optical motion capture camera calibration (Joan Lasenby). Used in modified form by Wannes van der Mark for ego-motion estimation of a vehicle.
- Finding singularities in a vector field (Alyn Rockwood, Steve Mann). 6000 times faster than the GABLE/Matlab implementation, according to Mann.
- Raytracer: compare performance of various algebras. Current / future work.
31. Ray tracer movie
Not available in this web version of the talk. Download it yourself from the Gaigen website.
32. Decisions in the design of Gaigen, some results, discussion, conclusion, future work.
33. Template class which supports all algebras vs. specific classes for each algebra
It is unlikely that a template class (like CLU) written for all algebras can ever be as fast as a class written specifically for one algebra (like Gaigen), if they are based on the same ideas and algorithms.
Hand-written code vs. code generation
Hand-written code will probably be faster than generated code, but it is infeasible for us to implement by hand every possible GA we want to use. Thus code generation is a good compromise between speed (hand-written) and generality (template class).
Gaigen vs. CLU
The raytracer written with Gaigen is about 60 times faster than the one written with CLU. So creating Gaigen was worth the effort.
34. Why can GA perform comparably to LA for geometry?
- We exploit the sparseness of grade parts.
- We use optimized products.
- The outermorphism: use LA where suitable, but with all the advantages of GA.
35. Will GA ever perform as fast as LA for geometry?
We believe so. The next version of Gaigen will probably generate code that is a superset of an ordinary fast linear algebra implementation: just as fast, but with a lot more features. Backwards compatibility in terms of both efficiency and functionality will make it easier to convince people to start using GA.
36. So what should you use?
It's like assembly vs. C vs. Fortran vs. C++ vs. Perl vs. Java vs. Matlab: over time people decided to use more powerful languages because it allowed them to do more work in less time. The same is true for math. You can start using GA and the conformal model of Euclidean geometry right now in applications which are not performance critical. In the near future, we will have implementations of GA that are just as fast as traditional LA implementations, but a lot more powerful.
37. A Geometric Algebra Implementation Generator
Homepage: http://carol.science.uva.nl/fontijne/gaigen