Linear algebra methods form the basis for the majority of numerical computations. Because they perform such an elementary task, they must satisfy strict requirements on computational efficiency and numerical robustness.
Our work has added to a growing understanding that many widely used linear solvers can be interpreted as performing probabilistic inference on the elements of a matrix or a vector, given observations of linear projections of this latent object. In particular, this holds for such foundational algorithms as the method of conjugate gradients, and for other iterative algorithms in the conjugate-directions and projection-method classes.
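As a rough illustration of this viewpoint (not the specific model of the publications below), the following sketch conditions a Gaussian prior over the solution vector on noise-free observations of the projections $S^\top K x = S^\top b$; the prior mean, prior covariance, and search directions are all placeholder choices. For particular priors the posterior mean reproduces classical projection-method estimates, e.g. the conjugate-gradient iterate when the prior covariance is $K^{-1}$ and the directions span the Krylov subspace.

```python
import numpy as np

def solve_by_conditioning(K, b, S, x0=None, Sigma0=None):
    """Condition a Gaussian prior x ~ N(x0, Sigma0) on the noise-free
    observations S.T @ K @ x = S.T @ b and return the posterior mean
    and covariance. Illustrative sketch; x0, Sigma0 and S are assumptions."""
    n = K.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    Sigma0 = np.eye(n) if Sigma0 is None else Sigma0

    A = S.T @ K                              # observation operator
    y = S.T @ b                              # observed projections
    G = A @ Sigma0 @ A.T                     # covariance of the observations
    mean = x0 + Sigma0 @ A.T @ np.linalg.solve(G, y - A @ x0)
    cov = Sigma0 - Sigma0 @ A.T @ np.linalg.solve(G, A @ Sigma0)
    return mean, cov

rng = np.random.default_rng(0)
n, m = 50, 10
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)                  # symmetric positive definite test matrix
b = rng.standard_normal(n)
S = rng.standard_normal((n, m))              # m arbitrary search directions
x_mean, x_cov = solve_by_conditioning(K, b, S)
```

The posterior covariance indicates which directions of the solution remain unconstrained after the $m$ projections; this kind of uncertainty quantification is what the probabilistic interpretation adds on top of the classical point estimate.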
Our ongoing research effort focuses on using these insights for the large-scale linear algebra problems encountered in machine learning. There, the most frequent linear algebra task is the least-squares problem of solving
$$ Kx = b $$
where $K$ is a very large symmetric positive definite kernel Gram matrix, i.e. a matrix arising from pairwise kernel evaluations over a dataset. An intriguing question is how available knowledge about the structure or generative process underlying the dataset can be translated into helpful prior information for the linear solver.
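As a concrete instance of this setting (with an illustrative choice of kernel, lengthscale, and jitter), the sketch below assembles a squared-exponential Gram matrix over a synthetic dataset and solves $Kx = b$ with plain conjugate gradients; how dataset-specific knowledge could shape the solver's prior is exactly the open question raised above and is not addressed here.

```python
import numpy as np

def rbf_gram(X, lengthscale=1.0):
    """Squared-exponential kernel Gram matrix K_ij = k(x_i, x_j)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradients(K, b, maxiter=200, tol=1e-8):
    """Plain CG for the symmetric positive definite system K x = b."""
    x = np.zeros_like(b)
    r = b - K @ x
    d = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Kd = K @ d
        alpha = rs / (d @ Kd)
        x += alpha * d
        r -= alpha * Kd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))                       # synthetic dataset, 500 points in R^3
K = rbf_gram(X, lengthscale=2.0) + 1e-6 * np.eye(500)   # small jitter for conditioning
b = rng.standard_normal(500)
x = conjugate_gradients(K, b)
print(np.linalg.norm(K @ x - b))                        # residual norm of the computed solution
```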
Bartels, S., Hennig, P.
Probabilistic Approximate Least-Squares
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), JMLR Workshop and Conference Proceedings, Vol. 51, pages 676-684, (Editors: Gretton, A. and Robert, C. C.), 2016 (conference)
Hennig, P.
Probabilistic Interpretation of Linear Solvers
SIAM Journal on Optimization, 25(1):234-260, 2015 (article)