I am interested in algorithms that estimate and express uncertainty about the result of imprecise computations. Such imprecision can arise because the computational task is not analytically tractable, because a limited computational budget only allows a partial solution, or because the description of the task is itself imprecise to begin with. Probability measures provide the formal language for the description of such uncertainty. My group and I develop computer algorithms that take in and return probability measures; we call these probabilistic numerical methods.
can be found here. Don't trust it to always be up to date. If you need a bio blurb for your event web page or a talk introduction, here is a suggestion (apologies if this sounds like grandstanding; I have repeatedly been asked for such a text):
Philipp Hennig heads the Max Planck Research Group on Probabilistic Numerics at the MPI for Intelligent Systems in Tübingen, Germany. He studied physics in Heidelberg and London before moving to Cambridge, UK, where he did his PhD in David MacKay's inference group. Since then, he has been interested in the information content of computations and in mathematical notions of uncertainty for deterministic computation. Together with two colleagues from Oxford and Columbia University, he organized the inaugural workshop on Probabilistic Numerics in 2012, (re-)starting a community effort to provide a rigorous formulation of computation as the collection of information by autonomous, self-consistent agents. His work has been published in the leading venues of machine learning as well as in journals of the applied mathematics community. Together with his group, he has provided novel interpretations of classic numerical algorithms as maximum a-posteriori estimators, and has used these results to create new algorithmic tools for machine learning and artificial intelligence.
In: Ninth International Conference on Machine Learning and Applications (ICMLA), pages 746–751 (Editors: S. Draghici, T.M. Khoshgoftaar, V. Palade, W. Pedrycz, M.A. Wani, X. Zhu), IEEE, Piscataway, NJ, USA, December 2010 (inproceedings)
We present a method for fully automated selection of treatment beam ensembles for external radiation therapy. We reformulate the beam angle selection problem as a clustering problem of locally ideal beam orientations distributed on the unit sphere. For this purpose we construct an infinite mixture of von Mises-Fisher distributions, which is suited in general for density estimation from data on the D-dimensional sphere. Using a nonparametric Dirichlet process prior, our model infers probability distributions over both the number of clusters and their parameter values. We describe an efficient Markov chain Monte Carlo algorithm for posterior inference from experimental data in this model. The performance of the suggested beam angle selection framework is illustrated for one intra-cranial, one pancreas, and one prostate case. The infinite von Mises-Fisher mixture model (iMFMM) creates between 18 and 32 clusters, depending on the patient anatomy. This suggests using the iMFMM directly for beam ensemble selection in robotic radiosurgery, or using it to generate low-dimensional input both for subsequent optimization of trajectories for arc therapy and for beam ensemble selection in conventional radiation therapy.
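As a hedged illustration of the model's building block (this is not code from the paper; the function name and the specialization to the 2-sphere in R³ are my own), the von Mises-Fisher density has the closed-form normalizer κ/(4π sinh κ) when D = 3:

```python
import math

def vmf_logpdf_s2(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the unit sphere S^2 in R^3.

    x and mu are unit vectors given as 3-tuples; kappa > 0 is the concentration.
    For D = 3 the normalizer is kappa / (4*pi*sinh(kappa)); note that sinh(kappa)
    overflows for very large kappa, which this sketch does not guard against.
    """
    dot = sum(xi * mi for xi, mi in zip(x, mu))
    log_norm = math.log(kappa) - math.log(4.0 * math.pi) - math.log(math.sinh(kappa))
    return log_norm + kappa * dot
```

In general dimension the normalizer involves a modified Bessel function, κ^(D/2−1) / ((2π)^(D/2) I_(D/2−1)(κ)), which is why the D = 3 case is convenient for a self-contained sketch.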
Cavendish Laboratory, University of Cambridge, July 2009 (techreport)
Many inference problems involving questions of optimality ask for the maximum or the minimum of a finite set of unknown quantities. This technical report derives the first two posterior moments of the maximum of two correlated Gaussian variables and the first two posterior moments of the two generating variables (corresponding to Gaussian approximations minimizing relative entropy). It is shown how this can be used to build a heuristic approximation to the maximum relationship over a finite set of Gaussian variables, allowing approximate inference by Expectation Propagation on such quantities.
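For concreteness, here is a minimal sketch of the standard closed-form expressions for the first two moments of the maximum of two correlated Gaussians (these formulas go back to Clark, 1961; the function names are my own and this is not the report's code):

```python
import math

def std_normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def max_gauss_moments(m1, s1, m2, s2, rho):
    """First two moments of max(X1, X2) for X1 ~ N(m1, s1^2), X2 ~ N(m2, s2^2)
    with correlation rho (Clark, 1961). Assumes the variables are not
    perfectly correlated with equal variance, so that theta > 0."""
    theta = math.sqrt(s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2)
    a = (m1 - m2) / theta
    e1 = m1 * std_normal_cdf(a) + m2 * std_normal_cdf(-a) + theta * std_normal_pdf(a)
    e2 = ((m1 ** 2 + s1 ** 2) * std_normal_cdf(a)
          + (m2 ** 2 + s2 ** 2) * std_normal_cdf(-a)
          + (m1 + m2) * theta * std_normal_pdf(a))
    return e1, e2
```

As a sanity check, for two independent standard normals the first moment reduces to 1/√π and the second moment to 1.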
Journal of Applied Physics, 102(12):1–8, December 2007 (article)
Knowledge of the imaging system's properties is central to the correct interpretation of any image. In a scanning electron microscope, regions of different composition generally interact in a highly nonlinear way during signal generation. Using Monte Carlo simulations, we found that in resin-embedded, heavy-metal-stained biological specimens the staining is sufficiently dilute to allow an approximately linear treatment. We then mapped point-spread functions for backscattered-electron contrast, for primary energies of 3 and 7 keV and for different detector specifications. The point-spread functions are surprisingly well confined (both laterally and in depth), even compared to the distribution of only those scattered electrons that leave the sample again.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.