

20.2 Linear Algebra on Sparse Matrices

Octave includes a polymorphic solver for sparse matrices, where the exact solver used to factorize the matrix depends on the properties of the sparse matrix itself. Generally, the cost of determining the matrix type is small relative to the cost of factorizing the matrix itself. In any case the matrix type is cached once it is calculated, so that it is not re-determined each time the matrix is used in a linear equation.

The selection tree for how the linear equation is solved is

  1. If the matrix is diagonal, solve directly and goto 8
  2. If the matrix is a permuted diagonal, solve directly taking into account the permutations. Goto 8
  3. If the matrix is square, banded and if the band density is less than that given by spparms ("bandden") continue, else goto 4.
    1. If the matrix is tridiagonal and the right-hand side is not sparse continue, else goto 3b.
      1. If the matrix is hermitian, with a positive real diagonal, attempt Cholesky factorization using Lapack xPTSV.
      2. If the above failed or the matrix is not hermitian with a positive real diagonal use Gaussian elimination with pivoting using Lapack xGTSV, and goto 8.
    2. If the matrix is hermitian with a positive real diagonal, attempt Cholesky factorization using Lapack xPBTRF.
    3. if the above failed or the matrix is not hermitian with a positive real diagonal use Gaussian elimination with pivoting using Lapack xGBTRF, and goto 8.
  4. If the matrix is upper or lower triangular perform a sparse forward or backward substitution, and goto 8
  5. If the matrix is a upper triangular matrix with column permutations or lower triangular matrix with row permutations, perform a sparse forward or backward substitution, and goto 8
  6. If the matrix is square, hermitian with a real positive diagonal, attempt sparse Cholesky factorization using CHOLMOD.
  7. If the sparse Cholesky factorization failed or the matrix is not hermitian with a real positive diagonal, and the matrix is square, factorize using UMFPACK.
  8. If the matrix is not square, or any of the previous solvers flags a singular or near singular matrix, find a minimum norm solution using CXSPARSE [1].
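
The tree above is walked automatically by the left-division operator; no solver is selected by the user. A minimal sketch (the matrix here is tridiagonal and hermitian with a positive real diagonal, so the tridiagonal Cholesky path of step 3a applies):

```octave
% Build a tridiagonal symmetric positive definite matrix and solve.
% "\" determines the matrix type and picks the solver from the tree.
n = 100;
e = ones (n, 1);
a = spdiags ([-e, 2*e, -e], [-1, 0, 1], n, n);
b = e;
x = a \ b;    % no explicit solver choice is needed
```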

The band density is defined as the number of non-zero values in the band divided by the total number of values in the full band. The banded matrix solvers can be entirely disabled by using spparms to set bandden to 1 (i.e., spparms ("bandden", 1)).
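
For example, the current threshold can be queried, the banded solvers switched off, and the old setting restored (a sketch using the spparms interface described later in this section):

```octave
old = spparms ("bandden");   % query the current band density threshold
spparms ("bandden", 1);      % disable the banded solvers entirely
% ... banded matrices now go through the general sparse path ...
spparms ("bandden", old);    % restore the previous threshold
```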

The QR solver factorizes the problem with a Dulmage-Mendelsohn decomposition, to separate the problem into blocks that can be treated as over-determined, multiple well-determined blocks, and a final under-determined block. For matrices with blocks of strongly connected nodes this is a big win, as LU decomposition can be used for many blocks. It also significantly improves the chance of finding a solution to over-determined problems rather than just returning a vector of NaN's.

All of the solvers above can calculate an estimate of the condition number. This can be used to detect numerical stability problems in the solution and force a minimum norm solution to be used. However, for narrow banded, triangular, or diagonal matrices, the cost of calculating the condition number is significant, and can in fact exceed the cost of factoring the matrix. Therefore the condition number is not calculated in these cases, and Octave relies on simpler techniques to detect singular matrices, or on the underlying LAPACK code in the case of banded matrices.

The user can force the type of the matrix with the matrix_type function. This avoids the cost of discovering the type of the matrix. However, it should be noted that incorrectly identifying the type of the matrix will lead to unpredictable results, and so matrix_type should be used with care.
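
A short sketch of querying and forcing the matrix type:

```octave
a = speye (5);                % a sparse diagonal matrix
t1 = matrix_type (a);         % the type Octave discovers, e.g. "Diagonal"
a = matrix_type (a, "full");  % force the general solver to be used
t2 = matrix_type (a);         % now reports the forced type
```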

— Function File: [n, c] = normest (a, tol)

Estimate the 2-norm of the matrix a using a power series analysis. This is typically used for large matrices, where the cost of calculating norm (a) is prohibitive and an approximation to the 2-norm is acceptable.

tol is the tolerance to which the 2-norm is calculated. By default tol is 1e-6. c returns the number of iterations needed for normest to converge.
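
A minimal sketch, using a diagonal matrix whose 2-norm is known exactly:

```octave
a = sparse (diag ([4, 2, 1]));   % the 2-norm of a is exactly 4
[n2, c] = normest (a, 1e-6);     % n2 approximates the 2-norm,
                                 % c is the iteration count
```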

— Function File: [est, v] = condest (a, t)
— Function File: [est, v] = condest (a, solve, solve_t, t)
— Function File: [est, v] = condest (apply, apply_t, solve, solve_t, n, t)

Estimate the 1-norm condition number of a matrix A using t test vectors and a randomized 1-norm estimator. If t exceeds 5, then only 5 test vectors are used.

If the matrix is not explicit, e.g., when estimating the condition number of a matrix given an LU factorization, condest uses the following functions:

apply
A*x for a matrix x of size n by t.
apply_t
A'*x for a matrix x of size n by t.
solve
A \ b for a matrix b of size n by t.
solve_t
A' \ b for a matrix b of size n by t.

The implicit version requires an explicit dimension n.

condest uses a randomized algorithm to approximate the 1-norms.

condest returns the 1-norm condition estimate est and a vector v satisfying norm (A*v, 1) == norm (A, 1) * norm (v, 1) / est. When est is large, v is an approximate null vector.
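
A sketch of the explicit form, on a diagonal matrix whose 1-norm condition number is known:

```octave
a = sparse (diag ([1e4, 1, 1e-4]));  % 1-norm condition number is 1e8
est = condest (a);
% est approximates cond (full (a), 1) without forming the inverse
```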

See also: norm, cond, onenormest.

— Loadable Function: r = spchol (a)
— Loadable Function: [r, p] = spchol (a)
— Loadable Function: [r, p, q] = spchol (a)

Compute the Cholesky factor, r, of the symmetric positive definite sparse matrix a, where

          r' * r = a.

If called with 2 or more outputs, p is 0 when a is positive definite and p is a positive integer otherwise.

If called with 3 outputs then a sparsity preserving row/column permutation is applied to a prior to the factorization. That is r is the factorization of a(q,q) such that

          r' * r = q * a * q'.

Note that the splchol factorization is faster and uses less memory than spchol.
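
A minimal sketch on a small symmetric positive definite matrix:

```octave
a = sparse ([2, -1, 0; -1, 2, -1; 0, -1, 2]);  % SPD tridiagonal
r = spchol (a);
% r is upper triangular, and r' * r reproduces a up to rounding
err = norm (full (r' * r - a));
```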

     
     
See also: spcholinv, spchol2inv, splchol.

— Loadable Function: spcholinv (a)

Use the Cholesky factorization to compute the inverse of the sparse symmetric positive definite matrix a.

     
     
See also: spchol, spchol2inv.

— Loadable Function: spchol2inv (u)

Invert a sparse symmetric, positive definite square matrix from its Cholesky decomposition, u. Note that u should be an upper-triangular matrix with positive diagonal elements. spchol2inv (u) provides inv (u'*u) but it is much faster than using inv.

     
     
See also: spchol, spcholinv.

— Loadable Function: [d, rcond] = spdet (a)

Compute the determinant of sparse matrix a using UMFPACK. Return an estimate of the reciprocal condition number if requested.

— Loadable Function: [x, rcond] = spinv (a, Q)

Compute the inverse of the sparse square matrix a. Return an estimate of the reciprocal condition number if requested, otherwise warn of an ill-conditioned matrix if the reciprocal condition number is small. This function takes advantage of the sparsity of the matrix to accelerate the calculation of the inverse.

In general x will be a full matrix, and so if possible forming the inverse of a sparse matrix should be avoided. It is significantly more accurate and faster to do y = a \ b, rather than y = spinv (a) * b.
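
A sketch contrasting the two approaches (both give the same answer here, but the backslash form avoids forming a full inverse):

```octave
a = sparse ([4, 1, 0; 1, 4, 1; 0, 1, 4]);
b = [1; 2; 3];
x1 = a \ b;           % preferred: sparse solve, no inverse formed
x2 = spinv (a) * b;   % forms the (generally full) inverse first
```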

— Loadable Function: spkron (a, b)

Form the Kronecker product of two sparse matrices. This is defined block by block as

          x = [a(i, j) * b]

For example,

          kron(speye(3),spdiag([1,2,3]))
          
          Compressed Column Sparse (rows = 9, cols = 9, nnz = 9)
          
            (1, 1) ->  1
            (2, 2) ->  2
            (3, 3) ->  3
            (4, 4) ->  1
            (5, 5) ->  2
            (6, 6) ->  3
            (7, 7) ->  1
            (8, 8) ->  2
            (9, 9) ->  3

— Loadable Function: l = splchol (a)
— Loadable Function: [l, p] = splchol (a)
— Loadable Function: [l, p, q] = splchol (a)

Compute the Cholesky factor, l, of the symmetric positive definite sparse matrix a, where

          l * l' = a.

If called with 2 or more outputs, p is 0 when a is positive definite and p is a positive integer otherwise.

If called with 3 outputs then a sparsity preserving row/column permutation is applied to a prior to the factorization. That is, l is the factorization of a(q,q) such that

          l * l' = a (q, q).

Note that the splchol factorization is faster and uses less memory than spchol. splchol(a) is equivalent to spchol(a)'.
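
A minimal sketch on the same kind of symmetric positive definite matrix:

```octave
a = sparse ([2, -1, 0; -1, 2, -1; 0, -1, 2]);  % SPD tridiagonal
l = splchol (a);
% l is lower triangular, and l * l' reproduces a up to rounding
err = norm (full (l * l' - a));
```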

     
     
See also: spchol, spcholinv, spchol2inv.

— Loadable Function: [l, u] = splu (a)
— Loadable Function: [l, u, P] = splu (a)
— Loadable Function: [l, u, P, Q] = splu (a)
— Loadable Function: [l, u, P, Q] = splu (..., thres)
— Loadable Function: [l, u, P] = splu (..., Q)

Compute the LU decomposition of the sparse matrix a, using subroutines from UMFPACK. The result is returned in a permuted form, according to the optional return values P and Q.

Called with two or three output arguments and a single input argument, splu is a replacement for lu, and therefore the sparsity preserving column permutations Q are not performed. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P * a * Q = l * u.

An additional input argument thres, which defines the pivoting threshold, can be given. Alternatively, the desired sparsity preserving column permutations Q can be passed. Note that Q is assumed to be fixed if there are fewer than four output arguments. Otherwise, the updated column permutations are returned as the fourth argument.

With two output arguments, returns the permuted forms of the upper and lower triangular matrices, such that a = l * u. With two or three output arguments, if a user-defined Q is given, then u * Q' is returned. The matrix is not required to be square.
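
A sketch of the four-output form, verifying the permutation identity stated above:

```octave
a = sparse ([1, 2, 0; 0, 3, 4; 5, 0, 6]);  % a small nonsingular matrix
[l, u, p, q] = splu (a);
% with four outputs the permutations satisfy p * a * q = l * u
err = norm (full (p * a * q - l * u));
```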

     
     
See also: sparse, spinv, colamd, symamd.

— Loadable Function: spparms ()
— Loadable Function: vals = spparms ()
— Loadable Function: [keys, vals] = spparms ()
— Loadable Function: val = spparms (key)
— Loadable Function: spparms (vals)
— Loadable Function: spparms ('defaults')
— Loadable Function: spparms ('tight')
— Loadable Function: spparms (key, val)

Sets or displays the parameters used by the sparse solvers and factorization functions. The first four calls above get information about the current settings, while the others change the current settings. The parameters are stored as pairs of keys and values, where the values are all floats and the keys are strings.

The value of individual keys can be set with spparms (key, val). The default values can be restored with the special keyword 'defaults'. The special keyword 'tight' can be used to set the mmd solvers to attempt a sparser solution at the potential cost of longer running time.
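
A sketch of the query and reset forms:

```octave
[keys, vals] = spparms ();   % all current key/value pairs
bd = spparms ("bandden");    % a single value, here the band density
spparms ("defaults");        % restore the default settings
```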

— Loadable Function: r = spqr (a)
— Loadable Function: r = spqr (a,0)
— Loadable Function: [c, r] = spqr (a,b)
— Loadable Function: [c, r] = spqr (a,b,0)

Compute the sparse QR factorization of a, using CSparse. As the matrix Q is in general a full matrix, this function returns the Q-less factorization r of a, such that r = chol (a' * a).

If the final argument is the scalar 0 and the number of rows is larger than the number of columns, then an economy factorization is returned. That is r will have only size (a,2) rows.

If an additional matrix b is supplied, then spqr returns c, where c = q' * b. This allows the least squares approximation of a \ b to be calculated as

          [c,r] = spqr (a,b)
          x = r \ c
     
     
See also: spchol, qr.

— Loadable Function: p = sprank (s)

Calculates the structural rank of a sparse matrix s. Note that only the structure of the matrix is used in this calculation, based on a Dulmage-Mendelsohn permutation to block triangular form. As such the numerical rank of the matrix s is bounded by sprank (s) >= rank (s). Ignoring floating point errors, sprank (s) == rank (s).
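
A sketch showing that the bound can be strict: the structure below admits a full matching, but the values are numerically rank deficient.

```octave
s = sparse ([1, 1; 1, 1]);
sr = sprank (s);        % structural rank: 2, from the pattern alone
nr = rank (full (s));   % numerical rank: 1, the rows are equal
```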

     
     
See also: dmperm.

— Loadable Function: [count, h, parent, post, r] = symbfact (s, typ, mode)

Performs a symbolic factorization analysis on the sparse matrix s, where

s
s is a complex or real sparse matrix.
typ
The type of the factorization, which can be one of
sym
Factorize s. This is the default.
col
Factorize s' * s.
row
Factorize s * s'.
lo
Factorize s'

mode
The default is to return the Cholesky factorization for r, and if mode is 'L', the conjugate transpose of the Cholesky factorization is returned. The conjugate transpose version is faster and uses less memory, but returns the same values for count, h, parent and post outputs.

The output variables are

count
The row counts of the Cholesky factorization as determined by typ.
h
The height of the elimination tree.
parent
The elimination tree itself.
post
The postorder of the elimination tree.
r
A sparse boolean matrix whose structure is that of the Cholesky factorization as determined by typ.
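
A minimal sketch on a small symmetric positive definite matrix:

```octave
a = sparse ([2, -1, 0; -1, 2, -1; 0, -1, 2]);  % SPD tridiagonal
[count, h, parent, post, r] = symbfact (a);
% count holds the row counts of the Cholesky factor of a,
% h the height of the elimination tree, and parent the tree itself
```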


Footnotes

[1] The CHOLMOD, UMFPACK and CXSPARSE packages were written by Tim Davis and are available at http://www.cise.ufl.edu/research/sparse/