Low-rank approximation: we show that providing a $(1+\epsilon)$-approximate rank-1 approximation to $A$ (in spectral norm error) requires $\Omega(\log(n)/\sqrt{\epsilon})$ samples, which is optimal. For this question, no super-constant lower bound (either in terms of $n$ or in $\epsilon$) was known.
Can one recover the entries of a matrix from only matrix-vector products? If so, how many are needed? I will present randomized algorithms to recover various structured matrices, as well as theoretical results that bound the query complexity of these structured families. Moreover, a continuous generalization of query complexity describes how many pairs of forcing terms and solutions are needed to uniquely identify a Green's function corresponding to the solution operator of an elliptic PDE. I will present a recent main result: a theoretical guarantee, holding with high probability, on the number of input-output pairs required in elliptic PDE learning problems. The proof of this result is constructive and leverages insights from numerical linear algebra and PDE theory.
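As a concrete instance of this kind of recovery question, here is a minimal sketch (not an algorithm from the talk) that recovers a hidden tridiagonal matrix from just three matrix-vector products using probing vectors; the function name `recover_tridiagonal` and the residue-class coloring scheme are illustrative choices.

```python
import numpy as np

def recover_tridiagonal(matvec, n):
    """Recover a hidden tridiagonal n-by-n matrix from 3 matrix-vector products.

    Columns j, j+3, j+6, ... of a tridiagonal matrix have disjoint row
    supports, so one probe vector per residue class mod 3 lets us read
    every entry directly off the returned products.
    """
    A = np.zeros((n, n))
    for r in range(3):
        probe = np.zeros(n)
        probe[r::3] = 1.0            # indicator of columns j = r (mod 3)
        y = matvec(probe)            # y is the sum of those columns
        for j in range(r, n, 3):
            rows = np.arange(max(0, j - 1), min(n, j + 2))
            A[rows, j] = y[rows]     # disjoint supports: entries read off directly
    return A

# Usage: the matrix is only touched through the matvec closure.
rng = np.random.default_rng(0)
n = 10
T = np.diag(rng.standard_normal(n))
T += np.diag(rng.standard_normal(n - 1), 1)
T += np.diag(rng.standard_normal(n - 1), -1)
assert np.allclose(recover_tridiagonal(lambda x: T @ x, n), T)
```

The same coloring idea extends to general banded and hierarchically structured matrices, which is the flavor of structure the abstract has in mind.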
Variants of the power method are useful for approximating the top eigenvalues of a matrix, but these methods rely heavily on adaptivity and typically only learn the top portion of the spectrum. How well can one do if the goal is to approximate the entire spectrum via non-adaptive matrix-vector queries? For a symmetric matrix $A$, this talk will show how to recover all eigenvalues of $A$ to within additive error $\epsilon\|A\|_F$ using $\widetilde{O}(1/\epsilon)$ non-adaptive matrix-vector queries. I will also discuss a lower bound showing that $\Omega(1/\epsilon)$ matrix-vector queries are necessary for obtaining our guarantee, even if we allow adaptivity.
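To make the non-adaptive model concrete, the sketch below implements one classical non-adaptive estimator, the eigenvalues of a scaled Gaussian bilinear sketch $G^\top A G / k$, not necessarily the talk's algorithm; the name `sketch_eigenvalues` and the test spectrum are illustrative.

```python
import numpy as np

def sketch_eigenvalues(matvec, n, k, rng):
    """Estimate outlying eigenvalues of symmetric A from k non-adaptive queries.

    All query vectors (the columns of G) are fixed before any answer is seen.
    Eigenvalues of the bilinear sketch G^T A G / k track eigenvalues of A that
    stand out above an additive noise floor, which shrinks as k grows.
    """
    G = rng.standard_normal((n, k))
    AG = np.column_stack([matvec(G[:, i]) for i in range(k)])
    S = (G.T @ AG) / k               # symmetric because A is symmetric
    return np.linalg.eigvalsh(S)     # sorted ascending

# Usage: a matrix with three outlying eigenvalues plus a small bulk.
rng = np.random.default_rng(1)
n, k = 400, 120
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
true_eigs = np.concatenate([[10.0, -8.0, 5.0], 0.05 * rng.standard_normal(n - 3)])
A = (U * true_eigs) @ U.T
est = sketch_eigenvalues(lambda x: A @ x, n, k, rng)
print(est[0], est[-2], est[-1])  # roughly -8, 5, 10; coarse, tightens with k
```

Note that the estimates are only accurate up to a noise floor on the order of $\|A\|_F/\sqrt{k}$, which is why the additive-error guarantee in the abstract is stated relative to $\|A\|_F$.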
Matrix-vector query models have gained increased attention, and in the case that the matrix of interest is a matrix function (e.g. the matrix inverse or matrix exponential), it is common to use Krylov subspace methods as a black box for computing matrix-vector products. In this talk, we discuss how looking inside the black box allows further efficiencies to be obtained.
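The black box in question is typically a Lanczos-type iteration. Below is a minimal sketch of the standard Lanczos approximation $f(A)b \approx \|b\|\, V f(T) e_1$ for symmetric $A$, without reorthogonalization; the name `lanczos_fun` and the test setup are illustrative, not anything from the talk.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal, expm

def lanczos_fun(matvec, b, m, f):
    """Approximate f(A) @ b for symmetric A with m Lanczos steps.

    Builds an orthonormal Krylov basis V and tridiagonal T = V^T A V,
    then returns ||b|| * V f(T) e_1. No reorthogonalization is done,
    which is fine for a short demonstration run.
    """
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = matvec(V[:, j])
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    theta, S = eigh_tridiagonal(alpha, beta)   # eigendecomposition of small T
    f_T_e1 = S @ (f(theta) * S[0, :])          # f(T) e_1 without forming f(T)
    return np.linalg.norm(b) * (V @ f_T_e1)

# Usage: matrix exponential times a vector, checked against dense expm.
rng = np.random.default_rng(2)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * rng.uniform(-1.0, 1.0, n)) @ Q.T      # symmetric, spectrum in [-1, 1]
b = rng.standard_normal(n)
approx = lanczos_fun(lambda x: A @ x, b, m=30, f=np.exp)
print(np.linalg.norm(approx - expm(A) @ b))    # small: Lanczos converges fast here
```

The intermediate quantities here (the basis V and tridiagonal T) are exactly what an outer algorithm discards when it treats this routine as a black box, which is the kind of reuse opportunity the talk points at.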
We study the problem of estimating the trace of a matrix $A$ that can only be accessed via Kronecker-matrix-vector products. That is, we can compute $Ax$ for any vector $x = x_1 \otimes \cdots \otimes x_k$ that has Kronecker structure. In particular, we study how Hutchinson's Estimator performs in this setting, proving tight rates for the number of matrix-vector products this estimator needs to find a relative-error approximation to the trace of $A$. We find an exact expression for the variance of this estimator, show that this variance is unavoidably exponential in the number of Kronecker factors $k$, and conclude with evidence that a much faster non-Hutchinson algorithm may exist.
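For concreteness, here is a minimal sketch of Hutchinson's estimator restricted to Kronecker-structured probes, assuming i.i.d. ±1 factor vectors; the helper name `kron_hutchinson` and the test matrix are illustrative.

```python
import numpy as np

def kron_hutchinson(matvec, dims, num_queries, rng):
    """Hutchinson's estimator using only Kronecker-structured query vectors.

    Each probe is x = x_1 (x) ... (x) x_k with i.i.d. ±1 factors, so
    E[x x^T] = I and every x^T (A x) is an unbiased estimate of tr(A).
    """
    samples = []
    for _ in range(num_queries):
        x = np.ones(1)
        for d in dims:
            x = np.kron(x, rng.choice([-1.0, 1.0], size=d))
        samples.append(x @ matvec(x))    # one Kronecker-matrix-vector product
    return np.mean(samples)

# Usage: a PSD matrix on a 4-fold tensor product space (3^4 = 81).
rng = np.random.default_rng(3)
dims = [3, 3, 3, 3]
B = rng.standard_normal((81, 81))
A = B @ B.T
est = kron_hutchinson(lambda x: A @ x, dims, num_queries=2000, rng=rng)
print(est, np.trace(A))   # unbiased, but converges slowly: the variance
                          # grows exponentially with the number of factors
```

The estimator stays unbiased because the Kronecker product of ±1 vectors still has identity covariance; what breaks down, and what the tight variance expression quantifies, is the concentration as $k$ grows.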