Friday, October 19, 2012

An unexpectedly delightful application of the Singular Value Decomposition



We begin this entry with a bit of commentary on numerical analysis. If there's anything I think should be taught more in math classes, it is the Singular Value Decomposition of a matrix. Every real n × m matrix A can be factored as A = UΣV* where U and V are orthogonal and Σ is "diagonal" (but can be rectangular). It is in fact more useful than the much more often discussed eigenvalues and eigenvectors (which are often computed for symmetric matrices, where the two concepts coincide anyway). It would probably do a great deal of good for people's intuition in linear algebra classes. For some reason, it is completely ignored in a lot of abstract linear algebra books, somehow having acquired a reputation of being "too applied," as if it would sully the purity of their most precious subject! I never heard it mentioned in any upper-division or graduate math class, until I somewhat belatedly discovered that numerics was really the best place to go to explore my interests (better late than never). But the SVD is absolutely ubiquitous in science. The first place I heard of it was a linguistics class, where the prof talked about some paper on latent semantic analysis. As I understood it, the SVD was essentially used to compute axes of word meanings(!!), and that information was used to categorize the words. That was really a long time ago, so forgive me, linguists, if this is a gross mischaracterization. In any case, the SVD is also intimately connected with least squares (briefly mentioned in the last entry).
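Concretely, here is a minimal numpy sketch of the factorization in action (the 3 × 2 matrix A is just a made-up example):

import numpy as np

# A made-up 3-by-2 real matrix; any real n-by-m matrix works.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)        # s holds the singular values
Sigma = np.zeros_like(A)           # "diagonal" but rectangular, like A
Sigma[:s.size, :s.size] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))   # True: A = U Sigma V*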

At this point, the sufficiently attention-spanned reader (all too rare these days!) is most certainly scratching their head (yes, this blog will use singular they) wondering what any of that has to do with the very out-sized picture. First of all, the picture actually depicts a trivial case in which the SVD isn't needed. But if it weren't for the SVD, I would never have been inspired to make a picture of my findings, so it deserves the tribute. It comes from trying to explore the concept of orientation, and to generalize constructions like the famous Möbius strip:


We can generalize the Möbius strip by the use of vector bundles and identification spaces. We get the topological twist in a Möbius strip by gluing the ends of the usual strip with a flip. If we allow ourselves some imagination, one way to generalize is to first imagine the Möbius strip as being constructed from a strip that is infinitely tall ([0,1] × ℝ), giving the Möbius vector bundle. Now when we "glue" the ends together, we can still flip (identify (0, x) with (1, −x)). Or we can stretch and identify, say, (0, x) with (1, 2x), the "strained cylinder" (as Roger Penrose calls it in The Road to Reality). If we have to restrict ourselves to a finite picture, what happens here is that the strip goes around in a loop and reconnects to itself twice as high, so it looks discontinuous. But it's not: it turns out to be a cylinder, and it is in fact orientable.
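If a picture helps, here is a rough matplotlib sketch of that finite picture (the embedding is my own choice, purely for illustration): the fiber coordinate gets scaled by 2ᵗ as the strip goes once around, so the surface reconnects to itself at double the height.

import numpy as np
import matplotlib.pyplot as plt

# Go once around the unit circle (t from 0 to 1) while the fiber
# coordinate x in [-1, 1] is stretched to 2**t * x, so the strip meets
# itself at twice its height at t = 1. The seam only *looks*
# discontinuous; the identified space is an honest cylinder.
t, x = np.meshgrid(np.linspace(0.0, 1.0, 200),
                   np.linspace(-1.0, 1.0, 20))
X = np.cos(2 * np.pi * t)
Y = np.sin(2 * np.pi * t)
Z = 2.0**t * x

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, alpha=0.7)
plt.show()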

Taking a cue from a problem in Spivak's excellent 5-volume series, A Comprehensive Introduction to Differential Geometry, we can imagine that instead of starting out with just a line for the vertical dimension, we could have a whole space ℝⁿ, and identify the initial and final spaces by an invertible linear transformation T : ℝⁿ → ℝⁿ, namely, identifying (0, v) with (1, Tv). If T has positive determinant, then we have not made anything topologically new: it's S¹ × ℝⁿ. But actually (constructively) proving this is a more difficult task. One way it can be done is by choosing a global frame: n continuous vector fields that form a basis at every point. And one way to do that is to somehow connect the transformation T to the identity. This can be done any number of ways, since, in fancy-schmancy speak, the orientation-preserving elements of the general linear group form the path-connected component containing the identity. But one quickly implementable realization is to use the SVD! Namely, factor the matrix T as UΣV* with U and V orthogonal and Σ the diagonal matrix of singular values, which are all strictly positive since T is invertible. Since it is a diagonal matrix of positive numbers, Σ can be raised to any real power t. And because det T > 0 forces det U · det V = 1, we can arrange (flipping the signs of a matching pair of columns if necessary) for U and V to be rotations, which are respectively exponentials of some antisymmetric matrices ξ and η (which are not unique). So given some basis vectors vᵢ at 0, we consider the section

t ⟼ (t, exp(tξ) Σᵗ exp(−tη) vᵢ).

At time t = 0 it is just (0, vᵢ). At time t = 1, it is (1, UΣV*vᵢ) = (1, Tvᵢ), which is identified with (0, vᵢ). (Warning: although we could make things more notationally uniform by writing Σᵗ = exp(tσ), where σ is the diagonal matrix consisting of the logarithms of the original singular values, we must resist the temptation to compute the result as exp(t(ξ + σ − η)), despite it being called an "exponential" map, as that only works when all the matrices commute!!) The path is continuous, and so we have proved that there is indeed a continuous, globally defined frame, and thus our bundle is trivial.

Beautiful? Well, where are the pictures? The picture above is given by considering the case n = 2, that of a plane, with the transformation being scaling by 2 (here U and V are the identity and Σ is just twice the identity). If we take a circular set in the initial plane and make it go 'round and 'round (without identifying), we can see how this transformation acts: we get something reminiscent of nested tori (but not really, as it is just one torus spiraling into itself!). We're not done with the awesomeness of this example quite yet, though; I've got some other math to get back to! But there are more interesting visualizations from this example, so stay tuned!
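For anyone who wants to play along at home, here is a rough numpy/scipy sketch of the construction above. The example matrix T is arbitrary (any positive-determinant matrix works), and scipy's logm just picks out one choice of ξ and η:

import numpy as np
from scipy.linalg import expm, logm

# An arbitrary example T with positive determinant (det = 2 here).
T = np.array([[2.0, 1.0],
              [0.0, 1.0]])

U, s, Vt = np.linalg.svd(T)
if np.linalg.det(U) < 0:      # det U * det V = +1 since det T > 0, so we
    U[:, -1] *= -1            # may flip a matching column pair to make
    Vt[-1, :] *= -1           # both factors rotations (U Sigma V* unchanged)

xi  = logm(U)                 # U = exp(xi), with xi antisymmetric (one choice)
eta = logm(Vt.T)              # V = exp(eta), so V* = exp(-eta)

def frame(t):
    """The path t -> exp(t xi) Sigma^t exp(-t eta) from I to T; applying
    it to the basis vectors v_i gives the continuous global frame."""
    return expm(t * xi) @ np.diag(s**t) @ expm(-t * eta)

print(np.allclose(frame(0.0), np.eye(2)))   # True: starts at the identity
print(np.allclose(frame(1.0), T))           # True: ends at U Sigma V* = T

# The warning above, numerically: the factors don't commute here, so the
# single-exponential shortcut disagrees with the true path in between.
sigma = np.diag(np.log(s))
print(np.allclose(frame(0.5), expm(0.5 * (xi + sigma - eta))))   # False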
