Crude matrix multiplier


30 Oct 2012 Code on GitHub

Introduction

[Update: A slightly more sophisticated Javascript version of my program is available to use here.]

For part of my Pygame 3D graphics program I wanted to work out the matrix required to rotate an object around a point other than the origin. To rotate an object around an arbitrary point $(cx, cy, cz)$ you first translate the object along the vector $(-cx, -cy, -cz)$, so that $(cx, cy, cz)$ is now at the origin; then use the standard rotation matrix to rotate the object around the origin; then translate the object back along the vector $(cx, cy, cz)$. Working with column vectors, the transformation applied first goes on the right of the product, so the combined matrix is:

$ \begin{bmatrix} 1 & 0 & 0 & cx \\ 0 & 1 & 0 & cy \\ 0 & 0 & 1 & cz \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 & 0 \\ \sin(\theta) & \cos(\theta) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -cx \\ 0 & 1 & 0 & -cy \\ 0 & 0 & 1 & -cz \\ 0 & 0 & 0 & 1 \end{bmatrix} = \text{ ?} $

Naturally I wanted to work out what the final matrix was, but after several attempts to do it by hand, I wasn't convinced that I hadn't made a mistake. So I decided to write a program to calculate matrix multiplications for me. It's easy enough to multiply matrices of numbers with numpy, but I wanted to keep the variables as variables.
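For comparison, this is roughly what the numpy version of the calculation looks like - a minimal sketch with made-up values for $\theta$ and the centre of rotation - which shows why the numeric approach doesn't give the formula I was after:

import numpy as np

theta = 0.5                      # example angle (made up)
cx, cy, cz = 2.0, 3.0, 4.0       # example centre of rotation (made up)

translate_to_origin = np.array([[1, 0, 0, -cx],
                                [0, 1, 0, -cy],
                                [0, 0, 1, -cz],
                                [0, 0, 0, 1]])
rotate_z = np.array([[np.cos(theta), -np.sin(theta), 0, 0],
                     [np.sin(theta),  np.cos(theta), 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])
translate_back = np.array([[1, 0, 0, cx],
                           [0, 1, 0, cy],
                           [0, 0, 1, cz],
                           [0, 0, 0, 1]])

# Matrices apply right to left: translate to the origin, rotate, translate back.
print(translate_back.dot(rotate_z).dot(translate_to_origin))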

The program

The first program I made simply converted the values to strings, then multiplied and added the relevant parts. This gave a very long, messy answer. For example, the last entry in the first row of the matrix was:

(1.(c.-cx + -s.-cy + 0.-cz + 0.1) + 0.(s.-cx + c.-cy + 0.-cz + 0.1) + 0.(0.-cx + 0.-cy + 1.-cz + 0.1) + cx.(0.-cx + 0.-cy + 0.-cz + 1.1))

Note that c is short for $\cos(\theta)$ and s for $\sin(\theta)$. Dots indicate multiplication, so 1.0 is 1 times 0, not a decimal.
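A rough reconstruction of what that first version did (a sketch, not the original code): every entry is a string, and multiplication just glues strings together with dots and plus signs, with no simplification at all.

def multiply(A, B):
    # Multiply two matrices whose entries are strings.
    n = len(A)
    result = []
    for i in range(n):
        row = []
        for j in range(n):
            terms = []
            for k in range(n):
                b = B[k][j]
                if '+' in b:            # bracket compound entries
                    b = '(' + b + ')'
                terms.append(A[i][k] + '.' + b)
            row.append('(' + ' + '.join(terms) + ')')
        result.append(row)
    return result

rotation = [['c', '-s', '0', '0'],
            ['s',  'c', '0', '0'],
            ['0',  '0', '1', '0'],
            ['0',  '0', '0', '1']]
translate_to_origin = [['1', '0', '0', '-cx'],
                       ['0', '1', '0', '-cy'],
                       ['0', '0', '1', '-cz'],
                       ['0', '0', '0', '1']]

print(multiply(rotation, translate_to_origin)[0][3])
# (c.-cx + -s.-cy + 0.-cz + 0.1)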

I added some code to cancel anything multiplied by 0 and make any variable multiplied by 1 equal itself. This simplified the answer significantly:

((c.-cx + -s.-cy) + cx.(1))

Then I changed the code to only add parentheses where necessary:

c.-cx + -s.-cy + cx
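The cancellation and bracket rules together amount to something like this (again a sketch rather than the original code): products involving 0 vanish, multiplying by 1 does nothing, zero terms are dropped from sums, and an operand is only bracketed if it is itself a sum.

def multiply_terms(a, b):
    # Anything multiplied by 0 vanishes; 1 is the identity.
    if a == '0' or b == '0':
        return '0'
    if a == '1':
        return b
    if b == '1':
        return a
    # Only bracket an operand if it is itself a sum.
    if '+' in b:
        b = '(' + b + ')'
    return a + '.' + b

def add_terms(terms):
    # Drop zero terms from a sum; an empty sum is just 0.
    terms = [t for t in terms if t != '0']
    return ' + '.join(terms) if terms else '0'

print(add_terms([multiply_terms('1', 'c.-cx + -s.-cy'),
                 multiply_terms('0', 's.-cx + c.-cy'),
                 multiply_terms('0', '0.-cx + 0.-cy + 1.-cz'),
                 multiply_terms('cx', '1')]))
# c.-cx + -s.-cy + cx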

Finally I added some code to keep track of minus signs so I could cancel them where possible and otherwise move minus signs to the front, giving a final answer for the whole matrix:

c  -s   0   -c.cx + s.cy + cx
s   c   0   -s.cx - c.cy + cy
0   0   1   -cz + cz
0   0   0   1

As you can see, it's still not perfect: one of the entries is $-cz + cz$, which should cancel to 0. It is also unable to multiply numbers that aren't 1 or 0. Now I would like to make a more complete program that can properly parse expressions, then add and multiply them. I think I will try to make an online Javascript version so anyone can quickly and easily use it [See top of page].
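For comparison, the same product can be checked with the SymPy library; this is just a sketch of a cross-check, not the program described above. SymPy keeps the variables symbolic and cancels the $-cz + cz$ entry automatically.

from sympy import symbols, cos, sin, Matrix

cx, cy, cz, theta = symbols('cx cy cz theta')

translate_back = Matrix([[1, 0, 0, cx],
                         [0, 1, 0, cy],
                         [0, 0, 1, cz],
                         [0, 0, 0, 1]])
rotate_z = Matrix([[cos(theta), -sin(theta), 0, 0],
                   [sin(theta),  cos(theta), 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]])
translate_to_origin = Matrix([[1, 0, 0, -cx],
                              [0, 1, 0, -cy],
                              [0, 0, 1, -cz],
                              [0, 0, 0, 1]])

# The [2, 3] entry (cz - cz) comes out as 0 without any extra work.
print(translate_back * rotate_z * translate_to_origin)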

Comments (4)

neurofuzzy on 21 Nov 2012, 1:11 a.m.

Very cool. Something I've been wanting to do for a while is make a tensor multiplier. A tensor - despite its intimate link to really complicated calculus - is pretty simple to handle. It has a set of indices, so A_{mn} (to use the LaTeX notation for subscript m n) would basically be a 2D array. You also have upper indices, so you could have B^{mn} (LaTeX for superscript m n), which would also just be a 2D array. Then you can contract indices, with Einstein shorthand notation. A_{m n} * B^{n p} (note the n appearing as both an upper and a lower index) represents the sum A_{m 1}*B^{1 p} + A_{m 2}*B^{2 p} + ... + A_{m q}*B^{q p} (where q is the rank of the tensor, i.e. the size of the array). So we would say that A_{m n} * B^{n p} = C_{m}^{p} (subscript m, superscript p), and that we have contracted the indices. As you might notice, this is exactly the same as a matrix multiplication.
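To illustrate that contraction in code (just a quick numpy sketch with arbitrary 3x3 arrays): summing over the shared index n with einsum gives exactly the ordinary matrix product.

import numpy as np

A = np.random.rand(3, 3)   # A_{mn}
B = np.random.rand(3, 3)   # B^{np}

# Contract over the shared index n: C_m^p = sum_n A_{mn} * B^{np}
C = np.einsum('mn,np->mp', A, B)

# Exactly the same as an ordinary matrix multiplication.
assert np.allclose(C, A.dot(B))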

There's also something really cool called Penrose graphical notation (http://en.wikipedia.org/wiki/Penrose_graphical_notation) which allows you to draw matrix equations really easily. The tensor delta shown on that page, d^{m}_{n}, is just 1 when m=n and 0 otherwise (i.e. the identity matrix), and the tensor g shown on that page is... one of the complicated tensor things [called the metric], but basically g^{p q} and g_{p q} are inverse matrices of each other.

This has potential to be really awesome.

If you're interested in carrying this out further, there's lots you can do with equation editing. I did this for http://mathandcode.com/programs/graph/. You can use the shunting-yard algorithm (http://en.wikipedia.org/wiki/Shunting-yard_algorithm) to parse it into RPN notation, or, more usefully, into an object-oriented tree of operators, and then with that tree you can define simplification rules (like replacing each node of the form "- <something> <something>", i.e. something minus itself, with zero).
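For instance, a tiny sketch of that operator-tree idea (purely illustrative, not code from the program above): a node holds an operator and its children, and a simplification rule rewrites subtrees that match a pattern.

class Node:
    # A binary operator node; leaves are plain strings.
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

def simplify(node):
    if not isinstance(node, Node):
        return node                      # leaves are unchanged
    left, right = simplify(node.left), simplify(node.right)
    if node.op == '-' and left == right:
        return '0'                       # something minus itself is zero
    return Node(node.op, left, right)

print(simplify(Node('-', 'cz', 'cz')))   # -> 0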

Peter on 21 Nov 2012, 10:48 p.m.

I'm afraid most of that went well over my head. I only learnt about matrices last spring (on Khan Academy) when trying to work out how 3D images are created. It would be nice to learn about tensors (and quaternions, which are also something I've heard of and which seem to be useful for 3D thinking, but which I only vaguely understand), but I suspect I won't unless Sal makes some videos about them - too many other things to learn!

Thanks for the link to the shunting-yard algorithm. I've been thinking a bit recently about writing a simple equation parser (for a Khan Academy program to help people visualise simple equations), so that could be very helpful.

MAXXtreme on 30 Jan 2013, 1 a.m.

Wow, I just have to say you are an INCREDIBLE programmer, Peter. I sure wish I could just sit down beside you sometime and watch you work - I'd love to learn how you do all that you do. You're an awesome programmer and keep up the superb work that you do!

mark ptak on 2 Jul 2013, 3:02 p.m.

Wow, great to see someone set out how straightforward translations and rotations can be once you put things in collections of column vectors. You might enjoy Martin Reinhardt's book on the web called "Edges to Rubies", designed to get people up and running in SketchUp animations. Thanks for your passion and insights!