
Singular Values

A singular value and corresponding singular vectors of a rectangular matrix A are, respectively, a scalar σ and a pair of vectors u and v that satisfy

Av = σu
A^H u = σv,

where A^H is the Hermitian transpose of A. The singular vectors u and v are typically scaled to have a norm of 1. Also, if u and v are singular vectors of A, then -u and -v are singular vectors of A as well.

The singular values σ are always real and nonnegative, even if A is complex. With the singular values on the diagonal of a matrix Σ and the corresponding singular vectors forming the columns of two unitary matrices U and V, you obtain the equations

AV = UΣ
A^H U = VΣ

Since U and V are unitary matrices, multiplying the first equation by V^H on the right yields the singular value decomposition equation

A = UΣV^H
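As a quick sanity check on the defining equations, here is a small pure-Python sketch (the document's own examples are MATLAB; this stand-in matrix and its closed-form singular triplets are illustrative choices, not taken from the text):

```python
# Sketch: verify A v = sigma*u and A^T u = sigma*v for a real matrix
# whose SVD is known in closed form. (Illustrative matrix, not from
# the text; for real matrices A^H is just the transpose.)

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[0.0, 2.0],
     [3.0, 0.0]]

# Singular triplets (sigma, u, v) of this particular A.
triplets = [(3.0, [0.0, 1.0], [1.0, 0.0]),
            (2.0, [1.0, 0.0], [0.0, 1.0])]

for sigma, u, v in triplets:
    Av = matvec(A, v)              # should equal sigma * u
    Atu = matvec(transpose(A), u)  # should equal sigma * v
    assert all(abs(a - sigma * b) < 1e-12 for a, b in zip(Av, u))
    assert all(abs(a - sigma * b) < 1e-12 for a, b in zip(Atu, v))
```

Note that the triplets also illustrate the sign ambiguity mentioned above: negating both u and v in any triplet leaves both equations satisfied.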

The full singular value decomposition of an m-by-n matrix involves:

  • m-by-m matrix U

  • m-by-n matrix Σ

  • n-by-n matrix V

In other words, U and V are both square, and Σ is the same size as A. If A has many more rows than columns (m > n), then the resulting m-by-m matrix U is large. However, most of the columns in U are multiplied by zeros in Σ. In this situation, the economy-sized decomposition saves both time and storage by producing an m-by-n U, an n-by-n Σ, and the same V:

In the economy-sized decomposition, columns in U can be ignored if they multiply zeros in the diagonal matrix of singular values.

The eigenvalue decomposition is the appropriate tool for analyzing a matrix when it represents a mapping from a vector space into itself, as it does for an ordinary differential equation. However, the singular value decomposition is the appropriate tool for analyzing a mapping from one vector space into another vector space, possibly with a different dimension. Most systems of simultaneous linear equations fall into this second category.

IfAis square, symmetric, and positive definite, then its eigenvalue and singular value decompositions are the same. But, asAdeparts from symmetry and positive definiteness, the difference between the two decompositions increases. In particular, the singular value decomposition of a real matrix is always real, but the eigenvalue decomposition of a real, nonsymmetric matrix might be complex.
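The coincidence of the two decompositions for symmetric positive definite matrices can be checked by hand for a 2-by-2 case. This pure-Python sketch (an illustrative matrix, not from the text) compares the eigenvalues of a symmetric positive definite matrix with its singular values, computed as square roots of the eigenvalues of A^T A:

```python
import math

# Sketch: for a symmetric positive definite matrix, eigenvalues and
# singular values coincide. A = [[2, 1], [1, 2]] is an illustrative
# choice with eigenvalues 3 and 1.

def eig2_sym(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    mean = (a + d) / 2.0
    r = math.hypot((a - d) / 2.0, b)
    return mean + r, mean - r

eigvals = eig2_sym(2.0, 1.0, 2.0)

# Singular values are square roots of the eigenvalues of A^T A.
# Here A^T A = A^2 = [[5, 4], [4, 5]].
singvals = tuple(math.sqrt(x) for x in eig2_sym(5.0, 4.0, 5.0))

assert all(abs(e - s) < 1e-12 for e, s in zip(eigvals, singvals))
```

For a symmetric but indefinite matrix the match breaks down: a negative eigenvalue λ corresponds to the singular value |λ|.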

For the example matrix

A = [9 4
     6 8
     2 7]

the full singular value decomposition is

[U,S,V] = svd(A)

U =
    0.6105   -0.7174    0.3355
    0.6646    0.2336   -0.7098
    0.4308    0.6563    0.6194

S =
   14.9359         0
         0    5.1883
         0         0

V =
    0.6925   -0.7214
    0.7214    0.6925

You can verify that U*S*V' is equal to A to within round-off error. For this small problem, the economy-size decomposition is only slightly smaller.

[U,S,V] = svd(A,0)

U =
    0.6105   -0.7174
    0.6646    0.2336
    0.4308    0.6563

S =
   14.9359         0
         0    5.1883

V =
    0.6925   -0.7214
    0.7214    0.6925

Again, U*S*V' is equal to A to within round-off error.
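The round-off check can be reproduced outside MATLAB. This pure-Python sketch multiplies out U*S*V' using the economy-size factor values printed above; because those values are rounded to four decimals, the product matches A only to about 1e-3:

```python
# Sketch: verify U*S*V' ~ A using the rounded economy-size factors
# printed in the text for A = [9 4; 6 8; 2 7].

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

U = [[0.6105, -0.7174],
     [0.6646,  0.2336],
     [0.4308,  0.6563]]
S = [[14.9359, 0.0],
     [0.0, 5.1883]]
Vt = [[0.6925, 0.7214],
      [-0.7214, 0.6925]]  # V transposed

A = [[9, 4], [6, 8], [2, 7]]
USVt = matmul(matmul(U, S), Vt)

# The rounded factors reproduce A to roughly three decimal places.
assert all(abs(USVt[i][j] - A[i][j]) < 1e-2
           for i in range(3) for j in range(2))
```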

Batch Computation

If you need to decompose a large collection of matrices that have the same size, it is inefficient to perform all of the decompositions in a loop with svd. Instead, you can concatenate all of the matrices into a multidimensional array and use pagesvd to perform singular value decompositions on all of the array pages with a single function call.

Function    Usage
pagesvd     Use pagesvd to perform singular value decompositions on the pages of a multidimensional array. This is an efficient way to perform SVD on a large collection of matrices that all have the same size.

For example, consider a collection of three 2-by-2 matrices. Concatenate the matrices into a 2-by-2-by-3 array with the cat function.

A = [0 -1; 1 0];
B = [-1 0; 0 -1];
C = [0 1; -1 0];
X = cat(3,A,B,C);

Now, use pagesvd to simultaneously perform the three decompositions.

[U,S,V] = pagesvd(X);

For each page of X, there are corresponding pages in the outputs U, S, and V. For example, the matrix A is on the first page of X, and its decomposition is given by U(:,:,1)*S(:,:,1)*V(:,:,1)'.
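The per-page semantics can be mimicked in plain Python. This sketch loops over the same three 2-by-2 "pages" and computes each page's singular values as square roots of the eigenvalues of A^T A (a per-page SVD in miniature; it illustrates what is batched, not the pagesvd algorithm itself):

```python
import math

# Sketch: loop-based stand-in for a batched per-page SVD over the
# three 2x2 pages from the text. Each page's singular values come
# from the eigenvalues of A^T A.

def singular_values_2x2(M):
    (a, b), (c, d) = M
    # Entries of the symmetric matrix A^T A: [[p, q], [q, r]].
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    mean = (p + r) / 2.0
    rad = math.hypot((p - r) / 2.0, q)
    return math.sqrt(mean + rad), math.sqrt(max(mean - rad, 0.0))

A = [[0, -1], [1, 0]]
B = [[-1, 0], [0, -1]]
C = [[0, 1], [-1, 0]]
X = [A, B, C]  # stand-in for cat(3,A,B,C)

S = [singular_values_2x2(page) for page in X]

# All three pages are orthogonal matrices, so every singular value is 1.
assert all(abs(s - 1.0) < 1e-12 for pair in S for s in pair)
```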

Low-Rank SVD Approximations

For large sparse matrices, using svd to calculate all of the singular values and singular vectors is not always practical. For example, if you need to know just a few of the largest singular values, then calculating all of the singular values of a 5000-by-5000 sparse matrix is extra work.

In cases where only a subset of the singular values and singular vectors are required, the svds and svdsketch functions are preferred over svd.

Function    Usage
svds        Use svds to calculate a rank-k approximation of the SVD. You can specify whether the subset of singular values should be the largest, the smallest, or the closest to a specific number. svds generally calculates the best possible rank-k approximation.
svdsketch   Use svdsketch to calculate a partial SVD of the input matrix satisfying a specified tolerance. While svds requires that you specify the rank, svdsketch adaptively determines the rank of the matrix sketch based on the specified tolerance. The rank-k approximation that svdsketch ultimately uses satisfies the tolerance, but unlike svds, it is not guaranteed to be the best one possible.
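The reason partial methods are cheaper is that they rely on iterative matrix-vector products rather than a full factorization. This toy power iteration on A^T A (a pure-Python stand-in, not the actual svds algorithm) estimates just the largest singular value of the 3-by-2 example matrix from earlier in the text:

```python
import math
import random

# Sketch: estimate only the largest singular value via power
# iteration on A^T A, touching A only through matrix-vector
# products. Illustrates the flavor of partial SVD methods; not
# the algorithm svds actually uses.

def largest_singular_value(A, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(A[0])
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        # One step of power iteration on A^T A: w = A v, z = A^T w.
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        z = [sum(A[i][j] * w[i] for i in range(len(A))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in z))
        v = [x / norm for x in z]
    # v has converged to the leading right singular vector; ||A v|| = sigma1.
    w = [sum(a * x for a, x in zip(row, v)) for row in A]
    return math.sqrt(sum(x * x for x in w))

# The 3x2 example matrix from earlier; its largest singular value
# is 14.9359 per the svd output shown above.
A = [[9, 4], [6, 8], [2, 7]]
sigma1 = largest_singular_value(A)
assert abs(sigma1 - 14.9359) < 1e-3
```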

For example, consider a 1000-by-1000 random sparse matrix with a density of about 30%.

n = 1000;
A = sprand(n,n,0.3);

The six largest singular values are

S = svds(A)

S =
  130.2184
   16.4358
   16.4119
   16.3688
   16.3242
   16.2838

Also, the six smallest singular values are

S = svds(A,6,'smallest')

S =
    0.0740
    0.0574
    0.0388
    0.0282
    0.0131
    0.0066

For smaller matrices that can fit in memory as a full matrix, full(A), using svd(full(A)) might still be quicker than svds or svdsketch. However, for truly large and sparse matrices, using svds or svdsketch becomes necessary.
