Category Archives: Scientific Computation

Multiplying 70-by-70 Matrices

Strassen’s algorithm for matrix multiplication is a classic example of how a straightforward algorithm can be beaten asymptotically by a clever one. But there is a problem if you use this example in teaching: how did Strassen come up with the seven intermediate submatrix products?
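For reference, here is the scheme itself: a minimal recursive sketch in NumPy, assuming the matrices are square with power-of-two size (the cutoff of 64 is an arbitrary choice of mine, not anything from Strassen).

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices A and B via Strassen's seven-product recursion.

    Minimal sketch: assumes A and B are n-by-n with n a power of two.
    """
    n = A.shape[0]
    if n <= 64:                      # below an (arbitrary) cutoff, plain multiplication wins
        return A @ B

    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    # The seven intermediate products (the "magic" the question above is about).
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)

    # Reassemble the four blocks of the product from the seven M's.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

Seven recursive multiplications instead of eight is what gives the O(n^log2(7)) ≈ O(n^2.807) bound; the mystery is how anyone would guess those particular seven combinations in the first place.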

If you dig up Strassen’s 1969 paper, you’ll see that he didn’t give us any hint. Others have attempted to offer plausible explanations, and you can find one in CLR(S), or a similar and apparently earlier one by Brent on the web.

Now I don’t know about you, but I was never quite convinced when I was reading CLR as a student. But the real kicker comes later, when you are told that Victor Pan has a way to multiply two 70-by-70 matrices with only 143,640 multiplications… Seventy!?
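To see why that number is such a kicker, compare the exponents the two recursions give you. This is just plain arithmetic on the figures quoted above:

```python
import math

# Recursing on 70-by-70 blocks, with 143,640 multiplications per block product,
# yields an O(n ** log_70(143640)) algorithm; Strassen's is O(n ** log_2(7)).
print(math.log(143640, 70))  # about 2.795
print(math.log(7, 2))        # about 2.807
```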

So I decided to come up with a fairy tale to explain this magical number. But as it turns out, there is not much need for my creativity. :P

In fact, back in 1984, Victor published a monograph that contains all the gory details that led to his algorithm. Basically, he designed a class of matrix multiplication algorithms parameterized by how finely the matrices are split in the recursion, and the number of multiplications happens to be minimized when each dimension is split into 70 pieces, i.e. when the submatrices are 70 times smaller. It’s just that. No fairy tale is necessary.
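If you want to see the minimization without tracking down the monograph, here is a small sketch. The counting formula I use, (n³ − 4n)/3 + 6n² for even n, is my recollection of Pan’s count rather than a quote from the book, but it does reproduce the 143,640 figure at n = 70:

```python
import math

def pan_count(n):
    """Multiplication count I'm attributing to Pan's scheme for even n.

    Treat this formula as an assumption/recollection, not a quote from the
    monograph; it does give pan_count(70) == 143640.
    """
    return (n**3 - 4 * n) // 3 + 6 * n**2

# Recursing on n-by-n blocks gives exponent log_n(count); minimize it over
# even block sizes.
best = min(range(20, 201, 2), key=lambda n: math.log(pan_count(n), n))
print(best, pan_count(best), math.log(pan_count(best), best))
# prints: 70 143640 2.795...
```

So, at least under that counting, seventy really is just where the curve bottoms out.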

Incidentally, the first chapter of his book is more or less available in an article that appeared in SIAM Review; see page 6, where he talks about it.

Matrix Action

Don’t let Keanu Reeves steal the show!

This is an excellent tutorial on Singular Value Decomposition (SVD) by Todd Will from UW-La Crosse. I highly recommend it to everybody who deals with linear algebra.

http://www.uwlax.edu/faculty/will/svd/index.html

Among other excellent insights, once you read the page on Perpframes, Aligners and Hangers (the third page on that site), you will never see a matrix the same way again. It’s like the, erh, blue pill. (Or is it the red one? :P )
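If you want to poke at that picture numerically, here is a tiny NumPy sketch of the same idea: the SVD factors a matrix into an aligner (Vᵀ), a diagonal stretch (the singular values), and a hanger (U). The wording is my paraphrase of the tutorial’s framing; the code is not from the site.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])          # any matrix will do

# NumPy returns A = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reading the action in three stages: Vt aligns the input onto a perpframe,
# diag(s) stretches along those axes, and U hangs the result onto a perpframe
# in the output space.
x = np.array([1.0, -2.0])
print(A @ x)
print(U @ (np.diag(s) @ (Vt @ x)))   # same vector, action done stage by stage
```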