I’ve asked Alex to contribute today’s article to give people a taste of what kinds of problems IZENTE helps to solve. I first met Alex when I saw him give a talk on semi-separable matrices. He is one of the brightest programmers I have ever worked with. I think this article really shows the value that IZENTE can offer to companies. — David Lee
I was asked to write up a bit about the kinds of things I have been working on. One of the more interesting problems that came up during the last year was preconditioning a large, sparse-ish system. The client was solving a very large system of equations in 3-D that was only weakly coupled across the dimensions. The big problem they were running into was that their solver wouldn't always converge, or would take far too long (multiple days).
Introduction
I had actually dealt with a similar issue as part of my studies in grad school, so I had some ideas about how to solve it. The basic setup was that they had an operator of the form

$$M = A \otimes I \otimes I + I \otimes B \otimes I + I \otimes I \otimes C,$$

where $\otimes$ denotes the Kronecker product and $I$ is the usual identity matrix. (Unfortunately, WordPress doesn't work well with LaTeX, so I apologize for any formatting issues.)
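To make this concrete, here is a minimal NumPy sketch of what assembling such an operator looks like. The matrices A, B, and C below are random stand-ins for the client's actual operators, and N is kept tiny so the full matrix fits in memory:

```python
import numpy as np

N = 4  # illustrative only; the real systems were far larger
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((N, N)) for _ in range(3))
I = np.eye(N)

# M = A (x) I (x) I  +  I (x) B (x) I  +  I (x) I (x) C
M = (np.kron(np.kron(A, I), I)
     + np.kron(np.kron(I, B), I)
     + np.kron(np.kron(I, I), C))

print(M.shape)  # (64, 64): an N^3 x N^3 matrix, so it grows very fast with N
```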
Now, this is basically a sum of matrices that each look like

$$A \otimes B \otimes C,$$

and if we had a single matrix of that form we could invert it very quickly, because

$$(A \otimes B \otimes C)^{-1} = A^{-1} \otimes B^{-1} \otimes C^{-1}.$$
If we take all of the matrices to be $N \times N$, the full operator is $N^3 \times N^3$, so inverting it naively takes $O(N^9)$ operations; because of the property above, we only need to invert three $N \times N$ matrices, which takes $O(N^3)$ operations. So this is tremendously quicker. If $N = 10$, it would be a million times quicker. It would go from being impossible to being possible.
One quick note for those of you not well versed in linear algebra: a sum of Kronecker products is not equally easy to invert, since the inverse of a sum is not the sum of the inverses.
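Here is a quick sanity check of that inversion identity, along with the point that actually matters in practice: you never form the $N^3 \times N^3$ matrix at all, since solving $(A \otimes B \otimes C)x = f$ only requires solves with the three small factors, applied along each axis of $f$ viewed as an $N \times N \times N$ tensor. (Again, random well-conditioned matrices as stand-ins.)

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
# shift by N*I so the random factors are comfortably invertible
A, B, C = (rng.standard_normal((N, N)) + N * np.eye(N) for _ in range(3))

# (A (x) B (x) C)^-1 == A^-1 (x) B^-1 (x) C^-1
big = np.kron(np.kron(A, B), C)
fac = np.kron(np.kron(np.linalg.inv(A), np.linalg.inv(B)), np.linalg.inv(C))
print(np.allclose(np.linalg.inv(big), fac))  # True

# Solve (A (x) B (x) C) x = f using only N x N solves, one per tensor axis
# (row-major flattening: A acts on axis 0, B on axis 1, C on axis 2).
f = rng.standard_normal(N**3)
X = f.reshape(N, N, N)
X = np.linalg.solve(A, X.reshape(N, -1)).reshape(N, N, N)      # A^-1 on axis 0
X = np.linalg.solve(B, X.transpose(1, 0, 2).reshape(N, -1)) \
      .reshape(N, N, N).transpose(1, 0, 2)                     # B^-1 on axis 1
X = np.linalg.solve(C, X.transpose(2, 0, 1).reshape(N, -1)) \
      .reshape(N, N, N).transpose(1, 2, 0)                     # C^-1 on axis 2
print(np.allclose(X.reshape(-1), np.linalg.solve(big, f)))     # True
```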
Okay, so we know what form our problem takes, and we know what form we want our solution to take. How can we find the optimal approximation of that type?
Approximation of Sums of Kronecker Products
One of the very first questions we should ask is "optimal in what norm?" There are infinitely many norms we could pose this problem in. The good news is that very few of them make much sense, and if the error in the approximation goes to zero in any reasonable norm, it goes to zero in most of them. For this case we will use the Frobenius norm, which is induced by an inner product; that fact will be very useful. Let us write

$$M = A \otimes I \otimes I + I \otimes B \otimes I + I \otimes I \otimes C$$

for the operator we seek to approximate, and

$$K(a, b) = (A + aI) \otimes (B + bI) \otimes C$$

for the approximation we are looking for. So we seek the optimal scalars $a$ and $b$ that solve

$$\min_{a,\,b}\; \| M - K(a, b) \|_F.$$
Now, in the Frobenius norm we have

$$\| M - K(a,b) \|_F^2 = \| M \|_F^2 - 2\,\langle M,\, K(a,b) \rangle_F + \| K(a,b) \|_F^2.$$

Using the fact that

$$\langle A_1 \otimes A_2 \otimes A_3,\; B_1 \otimes B_2 \otimes B_3 \rangle_F = \langle A_1, B_1 \rangle_F\, \langle A_2, B_2 \rangle_F\, \langle A_3, B_3 \rangle_F$$

(see appendix), we can compute $\langle M,\, K(a,b) \rangle_F$ quickly, since $K(a,b)$ is just the Kronecker product of three matrices. For example, with every matrix assumed to be $N \times N$,

$$\langle A \otimes I \otimes I,\; (A + aI) \otimes (B + bI) \otimes C \rangle_F = \big( \| A \|_F^2 + a\, \mathrm{tr}\, A \big) \big( \mathrm{tr}\, B + b N \big)\, \mathrm{tr}\, C.$$

Expanding every term this way, the objective becomes a low-degree polynomial in $a$ and $b$ whose coefficients are traces, norms, and inner products of the small $N \times N$ factors. Now it should be obvious that we just have an optimization problem in two unknowns; those coefficients can be computed numerically once, up front.
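Here is a sketch of the whole fit in a few lines of Python. To be clear about the assumptions: the parameterization K(a, b) = (A + aI) ⊗ (B + bI) ⊗ C mirrors the form above, the matrices are random stand-ins, and at this toy size I evaluate the objective by forming M − K directly; at production sizes you would evaluate the same polynomial through the trace and inner-product formulas so that M is never assembled.

```python
import numpy as np
from scipy.optimize import minimize

N = 4
rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((N, N)) + N * np.eye(N) for _ in range(3))
I = np.eye(N)

M = (np.kron(np.kron(A, I), I)
     + np.kron(np.kron(I, B), I)
     + np.kron(np.kron(I, I), C))

def K(a, b):
    # a single Kronecker product of three matrices -- cheap to invert
    return np.kron(np.kron(A + a * I, B + b * I), C)

def objective(p):
    a, b = p
    return np.linalg.norm(M - K(a, b), "fro") ** 2

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
a_opt, b_opt = res.x
rel_err = np.sqrt(res.fun) / np.linalg.norm(M, "fro")
print(a_opt, b_opt, rel_err)
```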
Applying $K(a, b)^{-1}$ factor by factor gives a cheap preconditioner for the original system. In the end this approach wound up stabilizing the solver and dramatically improving the running time of the simulation.
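For completeness, this is roughly how such an approximation gets used: wrap the factor-by-factor application of K(a, b)⁻¹ as a preconditioner for an off-the-shelf Krylov solver. This continues the sketch above (A, B, C, M, N, a_opt, b_opt in scope), and SciPy's GMRES stands in for whatever solver the client actually ran.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Pre-invert the three small factors once; in production you would
# factor them (LU/Cholesky) rather than invert explicitly.
Ai = np.linalg.inv(A + a_opt * np.eye(N))
Bi = np.linalg.inv(B + b_opt * np.eye(N))
Ci = np.linalg.inv(C)

def apply_K_inv(f):
    # Apply K(a, b)^-1 one tensor axis at a time: O(N^4) work in total,
    # and the N^3 x N^3 preconditioner is never formed.
    X = f.reshape(N, N, N)
    X = np.einsum("ia,abc->ibc", Ai, X)   # (A + aI)^-1 along axis 0
    X = np.einsum("jb,abc->ajc", Bi, X)   # (B + bI)^-1 along axis 1
    X = np.einsum("kc,abc->abk", Ci, X)   # C^-1 along axis 2
    return X.reshape(-1)

precond = LinearOperator(M.shape, matvec=apply_K_inv)
rhs = np.random.default_rng(3).standard_normal(N**3)
x, info = gmres(M, rhs, M=precond)        # keyword M is the preconditioner
print(info, np.linalg.norm(M @ x - rhs))  # info == 0 means converged
```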

Appendix
I used the fact that

$$\langle A \otimes B,\; C \otimes D \rangle_F = \langle A, C \rangle_F\, \langle B, D \rangle_F$$

(and its extension to three factors), which I have not seen a proof of, so I will give one below.
Lemma 1. Given matrices of compatible dimensions, such that the inner products below make sense,

$$\langle A_1 \otimes \cdots \otimes A_r,\; B_1 \otimes \cdots \otimes B_r \rangle_F = \prod_{s=1}^{r} \langle A_s, B_s \rangle_F.$$

Proof. It is enough to prove it true for $r = 2$, because

$$A_1 \otimes A_2 \otimes A_3 = A_1 \otimes (A_2 \otimes A_3),$$

and so the two-factor property can be applied recursively. Let $A = [a_{ij}]$ and $C = [c_{ij}]$ be $m \times n$ matrices, and let $B = [b_{kl}]$ and $D = [d_{kl}]$ be $p \times q$ matrices, where $i$ ranges from $1$ to $m$, $j$ ranges from $1$ to $n$, $k$ ranges from $1$ to $p$, and $l$ ranges from $1$ to $q$. By the definition of the Kronecker product,

$$(A \otimes B)_{(i-1)p+k,\,(j-1)q+l} = a_{ij}\, b_{kl}, \qquad (C \otimes D)_{(i-1)p+k,\,(j-1)q+l} = c_{ij}\, d_{kl},$$

so every entry of $A \otimes B$ is matched against the corresponding entry of $C \otimes D$. Then we have

$$\langle A \otimes B,\; C \otimes D \rangle_F = \sum_{i,j,k,l} a_{ij} b_{kl}\, c_{ij} d_{kl} = \Bigg( \sum_{i,j} a_{ij} c_{ij} \Bigg) \Bigg( \sum_{k,l} b_{kl} d_{kl} \Bigg) = \langle A, C \rangle_F\, \langle B, D \rangle_F. \;\square$$
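If you would rather trust a computer than my index bookkeeping, the identity is easy to spot-check numerically (random rectangular matrices, real-valued):

```python
import numpy as np

rng = np.random.default_rng(4)
A, C = rng.standard_normal((2, 3, 4))   # A and C are 3 x 4
B, D = rng.standard_normal((2, 5, 2))   # B and D are 5 x 2

def frob(X, Y):
    # Frobenius inner product <X, Y> = sum of elementwise products
    return np.sum(X * Y)

lhs = frob(np.kron(A, B), np.kron(C, D))
rhs = frob(A, C) * frob(B, D)
print(np.allclose(lhs, rhs))            # True
```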
Hopefully that is useful for someone.