# Randomized Linear Algebra

RandomizedLinAlg.jl is a Julia package that provides some randomized algorithms for numerical linear algebra as advocated in [Halko2011].

### RandomizedLinAlg.reigen

    reigen(A, l)

Compute the spectral (Eigen) decomposition of A using a randomized algorithm.

Arguments

• A: input matrix.
• l::Int: number of eigenpairs to find.

Output

• ::LinearAlgebra.Eigen: eigen decomposition.

Implementation note

This is a wrapper around eigen_onepass(), which uses the randomized samples found using srft(l).
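The idea behind such randomized eigensolvers can be illustrated with the standard library alone: sample a basis for the range of A, then solve the small projected (Rayleigh–Ritz) eigenproblem. This is a simplified sketch, not the package's one-pass algorithm, and `sketch_reigen` is a made-up name:

```julia
using LinearAlgebra, Random

# Simplified sketch of randomized eigenpair extraction for symmetric A
# (NOT the package's one-pass algorithm): sample an l-dimensional basis
# for the range of A, then solve the small projected eigenproblem.
function sketch_reigen(A::AbstractMatrix, l::Int)
    Ω = randn(size(A, 2), l)          # Gaussian test matrix
    Q = Matrix(qr(A * Ω).Q)           # orthonormal basis for the sampled range
    F = eigen(Symmetric(Q' * A * Q))  # small l×l Rayleigh–Ritz eigenproblem
    Eigen(F.values, Q * F.vectors)    # lift eigenvectors back to the full space
end

# Symmetric test matrix with two dominant eigenvalues, 10 and 5
Random.seed!(0)
V = Matrix(qr(randn(100, 100)).Q)
A = V * Diagonal(vcat([10.0, 5.0], fill(1e-8, 98))) * V'
F = sketch_reigen(A, 10)
```

With l = 10 samples the two dominant eigenvalues are recovered to high accuracy, because the dominant eigenvectors lie almost exactly in the sampled range.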

### RandomizedLinAlg.rsvd

    rsvd(A, n, p=0)

Compute partial singular value decomposition of A using a randomized algorithm.

Arguments

• A: input matrix.
• n::Int: number of singular value/vector pairs to find.
• p::Int=0: number of extra vectors to include in the computation.

Output

• ::SVD: singular value decomposition.

Accuracy

This variant of the randomized singular value decomposition is the most commonly found implementation, but it is not recommended for accurate computations: it often has trouble finding the n largest singular pairs, instead finding n large singular pairs that are not necessarily the largest.

Implementation note

This function calls rrange, which uses naive randomized rangefinding to compute a basis for a subspace of dimension n (Algorithm 4.1 of [Halko2011]), followed by svd_restricted(), which computes the exact SVD factorization on the restriction of A to this randomly selected subspace (Algorithm 5.1 of [Halko2011]).

Alternatively, you can mix and match your own randomized algorithm: use any of the randomized rangefinding algorithms to find a suitable subspace, then feed the result to one of the routines that computes the SVD restricted to that subspace.
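The two-stage scheme just described can be written out in a few lines of plain LinearAlgebra. This is an illustrative sketch under Gaussian sampling, not the package's rrange/svd_restricted code, and `sketch_rsvd` is a made-up name:

```julia
using LinearAlgebra, Random

# Illustrative two-stage randomized SVD: naive randomized rangefinding
# (Halko2011, Alg. 4.1), then an exact SVD of the restriction of A to
# the sampled subspace (Alg. 5.1). Not the package implementation.
function sketch_rsvd(A, n, p=0)
    Ω = randn(size(A, 2), n + p)   # Gaussian sampling matrix, p extra columns
    Q = Matrix(qr(A * Ω).Q)        # orthonormal basis for the sampled range
    B = Q' * A                     # restriction of A to that subspace
    F = svd(B)                     # exact SVD of the small matrix
    SVD((Q * F.U)[:, 1:n], F.S[1:n], F.Vt[1:n, :])  # truncate to n pairs
end

# Rank-5 test matrix: a rank-5 randomized SVD recovers it essentially exactly
Random.seed!(0)
A = randn(100, 5) * randn(5, 60)
F = sketch_rsvd(A, 5, 5)
err = norm(F.U * Diagonal(F.S) * F.Vt - A) / norm(A)
```

The oversampling parameter p improves the chance that the sampled subspace captures all n dominant directions, which is exactly the accuracy issue noted above.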

### RandomizedLinAlg.rsvd_fnkz

    rsvd_fnkz(A, k)

Compute the randomized SVD by iterative refinement from randomly selected columns/rows [Friedland2006].

Arguments

• A: matrix whose SVD is desired;
• k::Int: desired rank of approximation (k ≤ min(m, n)).

Keywords

• l::Int = k: number of columns/rows to sample at each iteration (1 ≤ l ≤ k);
• N::Int = minimum(size(A)): maximum number of iterations;
• ϵ::Real = prod(size(A))*eps(): relative threshold for convergence, as measured by growth of the spectral norm;
• method::Symbol = :eig: problem to solve.
1. :eig: eigenproblem.
2. :svd: singular value problem.
• verbose::Bool = false: print convergence information at each iteration.

Return value

SVD object of rank ≤ k.


## Condition number estimate

### RandomizedLinAlg.rcond

    rcond(A, iters=1)

Estimate matrix condition number randomly.

Arguments

• A: matrix whose condition number to estimate. Must be square and support premultiplication (A*⋅) and solve (A\⋅).

• iters::Int = 1: number of power iterations to run.

Keywords

• p::Real = 0.05: probability that estimate fails to hold as an upper bound.

Output

Interval (x, y) which contains κ(A) with probability 1 - p.

Implementation note

[Dixon1983] originally describes this as a computation that can be done by computing the necessary number of power iterations given p and the desired accuracy parameter θ = y/x. However, these bounds were derived only under the assumption of exact arithmetic. Empirically, iters ≥ 4 has been observed to produce incorrect results, in that the computed interval does not contain the true condition number. This implementation therefore makes iters an explicit, user-controllable parameter from which to infer the accuracy parameter and hence the interval containing κ(A).
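The role of the power iterations can be sketched with stdlib LinearAlgebra: iterate with A'A (via A*⋅) to estimate σ_max², iterate with (A'A)⁻¹ (via A\⋅) to estimate 1/σ_min², and combine the Rayleigh quotients. This is an illustrative point estimate, not Dixon's probabilistic interval or the package code, and `sketch_rcond` is a made-up name:

```julia
using LinearAlgebra, Random

# Illustrative condition-number estimate using only A*⋅ and A\⋅:
# power iterations on A'A estimate σ_max²; power iterations on
# (A'A)⁻¹ estimate 1/σ_min²; combining them gives κ(A).
function sketch_rcond(A, iters=10)
    x = normalize!(randn(size(A, 1)))
    y = copy(x)
    for _ in 1:iters
        x = normalize!(A' * (A * x))   # power step on A'A
        y = normalize!(A \ (A' \ y))   # power step on (A'A)⁻¹
    end
    σmax² = x' * (A' * (A * x))        # Rayleigh quotient ≈ σ_max²
    invσmin² = y' * (A \ (A' \ y))     # Rayleigh quotient ≈ 1/σ_min²
    sqrt(σmax² * invσmin²)             # ≈ κ(A) = σ_max/σ_min
end

Random.seed!(0)
A = Matrix(Diagonal([10.0, 1.0, 0.1]))  # κ(A) = 100 by construction
est = sketch_rcond(A)
```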


## Extremal eigenvalue estimates

### RandomizedLinAlg.reigmin

    reigmin(A, iters=1)

Estimate minimal eigenvalue randomly.

Arguments

• A: matrix whose minimal eigenvalue to estimate. Must be square and support premultiplication (A*⋅).

• iters::Int=1: Number of power iterations to run. (Recommended: iters ≤ 3)

Keywords

• p::Real=0.05: Probability that estimate fails to hold as an upper bound.

Output

Interval (x, y) which contains the minimal eigenvalue of A with probability 1 - p.

References

Corollary of Theorem 1 in [Dixon1983]

### RandomizedLinAlg.reigmax

    reigmax(A, iters=1)

Estimate maximal eigenvalue randomly.

Arguments

• A: matrix whose maximal eigenvalue to estimate. Must be square and support premultiplication (A*⋅).

• iters::Int=1: Number of power iterations to run. (Recommended: iters ≤ 3)

Keywords

• p::Real=0.05: Probability that estimate fails to hold as an upper bound.

Output

Interval (x, y) which contains the maximal eigenvalue of A with probability 1 - p.

References

Corollary of Theorem 1 in [Dixon1983]
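Both estimators are built on the power method. The core quantity can be sketched with stdlib LinearAlgebra; this computes Rayleigh-quotient point estimates rather than Dixon's probabilistic intervals, and the `sketch_*` names are made up:

```julia
using LinearAlgebra, Random

# Power-method point estimates of the extremal eigenvalues of a
# symmetric positive-definite A (not Dixon's interval bounds).
function sketch_eigmax(A, iters=3)
    x = normalize!(randn(size(A, 1)))
    for _ in 1:iters
        x = normalize!(A * x)   # power iteration amplifies the λ_max direction
    end
    x' * (A * x)                # Rayleigh quotient ≈ λ_max
end

# The minimal eigenvalue of A is 1 / (maximal eigenvalue of A⁻¹)
sketch_eigmin(A, iters=3) = 1 / sketch_eigmax(inv(A), iters)

Random.seed!(0)
A = Matrix(Diagonal([4.0, 2.0, 1.0]))
λmax = sketch_eigmax(A, 60)   # many iterations for a tight estimate
λmin = sketch_eigmin(A, 60)
```

Dixon's theorem turns such power-method quantities into an interval that holds with probability 1 - p, which is what reigmin and reigmax return.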


## Norm estimate

### RandomizedLinAlg.rnorm

    rnorm(A, mvps)

Compute a probabilistic upper bound on the norm of a matrix A:

‖A‖ ≤ α √(2/π) maxᵢ ‖Aωᵢ‖

with probability p = α^(-mvps).

Arguments

• A: matrix whose norm to estimate.
• mvps::Int: number of matrix-vector products to compute.

Keywords

• p::Real=0.05: probability of upper bound failing.

Output

Estimate of ‖A‖.

See also rnorms for a different estimator that uses premultiplication by both A and A'.

References

Lemma 4.1 of [Halko2011].
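The bound translates directly into code: draw mvps Gaussian test vectors, take the largest ‖Aωᵢ‖, and scale by α√(2/π) with α chosen so that α^(-mvps) = p. A sketch under those definitions (`sketch_rnorm` is a made-up name, not the package routine):

```julia
using LinearAlgebra, Random

# Probabilistic norm bound ‖A‖ ≤ α √(2/π) maxᵢ ‖Aωᵢ‖, which fails with
# probability at most p = α^(-mvps). Illustrative sketch only.
function sketch_rnorm(A, mvps; p = 0.05)
    α = p^(-1 / mvps)   # solve p = α^(-mvps) for α
    α * sqrt(2 / π) * maximum(norm(A * randn(size(A, 2))) for _ in 1:mvps)
end

Random.seed!(1)
est = sketch_rnorm(Matrix(1.0I, 5, 5), 100)  # true norm is 1; est is an upper bound
```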

### RandomizedLinAlg.rnorms

    rnorms(A, iters=1)

Estimate matrix norm randomly using A'A.

Compute a probabilistic upper bound on the norm of a matrix A.

ρ = √(‖(A'A)ʲω‖ / ‖(A'A)ʲ⁻¹ω‖)

which is an estimate of the spectral norm of A produced by iters steps of the power method starting with a normalized ω, is a lower bound on the true norm by a factor

ρ ≤ α ‖A‖

with probability greater than 1 - p, where p = 4 √(n/(iters-1)) α^(-2 iters).

Arguments

• A: matrix whose norm to estimate.
• iters::Int = 1: number of power iterations to perform.

Keywords

• p::Real = 0.05: probability of upper bound failing.
• At = A': Transpose of A.

Output

Estimate of ‖A‖.

See also rnorm for a different estimator that does not require premultiplication by A'.

References

Appendix of [Liberty2007].
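The quantity ρ above can be computed literally with stdlib LinearAlgebra (an illustrative sketch; `sketch_rnorms` is a made-up name):

```julia
using LinearAlgebra, Random

# ρ = √(‖(A'A)ʲω‖ / ‖(A'A)ʲ⁻¹ω‖): iters steps of the power method on
# A'A from a normalized random start. ρ never exceeds the spectral norm.
function sketch_rnorms(A, iters = 1; At = A')
    ω = normalize!(randn(size(A, 2)))
    prev = ω
    for _ in 1:iters
        prev, ω = ω, At * (A * ω)   # one application of A'A, never formed explicitly
    end
    sqrt(norm(ω) / norm(prev))
end

Random.seed!(2)
A = Matrix(Diagonal([3.0, 1.0]))   # spectral norm is 3
ρ = sketch_rnorms(A, 5)
```

The keyword At mirrors the docstring's keyword: it lets callers supply a precomputed or lazy transpose instead of materializing A'.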


## Interpolative Decomposition

### RandomizedLinAlg.id

    id(A, k, l)

Compute and return the interpolative decomposition of A: A ≈ B * P

Where:

• B's columns are a subset of the columns of A
• some subset of P's columns make up the k × k identity, no entry of P exceeds magnitude 2, and
• ||B * P - A|| ≲ σ(A, k+1), the (k+1)st singular value of A.

Arguments

• A: matrix to factorize.
• k::Int: number of columns of A to return in B.
• l::Int: length of the random vectors to project onto.

Output

(::Interpolative): interpolative decomposition.

Implementation note

This is a hacky version of the algorithms described in [Liberty2007] and [Cheng2005]. The former refers to the factorization (3.1) of the latter. However, it is not actually necessary to compute this factorization in its entirety to compute an interpolative decomposition.

Instead, it suffices to find some permutation of the first k columns of Y = R * A, extract the subset of A into B, then compute the P matrix as B\A which will automatically compute P using a suitable least-squares algorithm.

The approximation we use here is to compute the column pivots of Y, rather than use the true column pivots as would be computed by a column-pivoted QR process.
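That simplified procedure can be written out with stdlib LinearAlgebra (illustrative only; `sketch_id` is a made-up name, and its pivoting choices need not match the package's):

```julia
using LinearAlgebra, Random

# Sketch of the simplified interpolative decomposition described above:
# project A with a random l×m matrix, take the first k column pivots of
# the sketch Y, gather those columns of A into B, and obtain P from a
# least-squares solve.
function sketch_id(A, k, l)
    R = randn(l, size(A, 1))            # random projection
    Y = R * A                           # sketch of A
    piv = qr(Y, ColumnNorm()).p[1:k]    # first k column pivots of Y
    B = A[:, piv]                       # subset of A's columns
    P = B \ A                           # least-squares interpolation matrix
    B, P
end

Random.seed!(0)
A = randn(20, 3) * randn(3, 10)         # rank-3 matrix
B, P = sketch_id(A, 3, 6)
err = norm(B * P - A) / norm(A)         # exact up to roundoff for rank-3 A
```

Note that P[:, piv] is the k × k identity by construction, since B \ A reproduces B's own columns exactly.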

References

Algorithm I of [Liberty2007].

• [Cheng2005] Cheng, H., Gimbutas, Z., Martinsson, P. G., and Rokhlin, V. "On the Compression of Low Rank Matrices." SIAM Journal on Scientific Computing 26.4 (2005): 1389–1404.
• [Friedland2006] Friedland, Shmuel, et al. "Fast Monte-Carlo low rank approximations for matrices." System of Systems Engineering, 2006 IEEE/SMC International Conference on. IEEE, 2006.
• [Halko2011] Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions." SIAM Review 53.2 (2011): 217–288.
• [Dixon1983] Dixon, John D. "Estimating extremal eigenvalues and condition numbers of matrices." SIAM Journal on Numerical Analysis 20.4 (1983): 812–814.
• [Liberty2007] Liberty, Edo, et al. "Randomized algorithms for the low-rank approximation of matrices." Proceedings of the National Academy of Sciences 104.51 (2007): 20167–20172.