ProbAI: A Hub for the Mathematical and Computational Foundations of Probabilistic AI


Probabilistic AI involves the embedding of probability models, probabilistic reasoning and measures of uncertainty within AI methods. The ProbAI hub will develop a world-leading, diverse and UK-wide research programme in probabilistic AI, delivering the next generation of mathematically rigorous, scalable and uncertainty-aware AI algorithms. It will have far-reaching impact across many aspects of AI, including:

(1) The sudden and rapid growth of AI systems has given businesses, governments and creators of AI tools a new impetus to understand and convey the inherent uncertainties in their systems. A probabilistic approach to AI provides a framework to represent and manipulate uncertainty about models and predictions, and already plays a central role in scientific data analysis, robotics and cognitive science. The consequential impact of such developments has the potential to be wide-ranging and substantial: utilising a probabilistic approach for effective resource allocation (healthcare), prioritisation of actions (infrastructure planning), pattern recognition (cyber security) and the development of robust strategies to mitigate risks (finance).

(2) It is possible to gain important theoretical insights into AI models and algorithms by studying their limiting behaviour, which is often probabilistic, in different asymptotic scenarios. Such results can help with understanding why AI methods work and how best to choose appropriate architectures, with the potential to substantially reduce the computational cost and carbon footprint of AI.
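A classical instance of such probabilistic limiting behaviour is that a wide, randomly initialised neural network behaves like a Gaussian process: at any fixed input, its output over random initialisations is approximately Gaussian. A minimal sketch of this phenomenon (the one-hidden-layer tanh network with 1/sqrt(width) output scaling is an illustrative choice, not a method from the hub):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_wide_net_output(x, width, rng):
    # One-hidden-layer network with 1/sqrt(width) output scaling:
    # f(x) = (1/sqrt(width)) * sum_j v_j * tanh(w_j . x),
    # with all weights drawn i.i.d. standard normal.
    W = rng.standard_normal((width, x.shape[0]))
    v = rng.standard_normal(width)
    return v @ np.tanh(W @ x) / np.sqrt(width)

x = np.array([1.0, -0.5])  # a fixed input point
samples = np.array([random_wide_net_output(x, 2048, rng) for _ in range(2000)])

# By the central limit theorem over the hidden units, f(x) is
# approximately Gaussian with mean 0 as the width grows.
print(np.mean(samples), np.var(samples))
```

Results of this kind explain why certain architectures and initialisation scales behave predictably at scale, which is the sort of insight that can inform cheaper architecture choices.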

(3) Recent breakthroughs in generative models are based on simulating stochastic processes. There is huge potential both to use these ideas to help develop efficient and scalable probabilistic AI methods more generally, and to improve and extend current generative models. The latter may lead to more computationally efficient and robust methods, to generative models that use different stochastic processes and are suitable for different types of data, or to novel approaches that can attach a level of certainty to the output of a generative model.
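The simulation idea behind these generative models can be sketched in a toy setting. Below, the data are one-dimensional Gaussian, so the score of the noised distribution is available in closed form and the reverse-time stochastic differential equation can be simulated directly; this is purely an illustrative set-up, not a method proposed by the hub:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data distribution: N(mu, s^2). In a real diffusion model the score
# below would be learned by a neural network rather than known exactly.
mu, s = 2.0, 0.5

def score(x, t):
    # Forward noising is an OU process: x_t | x_0 ~ N(x_0 e^{-t}, 1 - e^{-2t}).
    # With Gaussian data the noised marginal stays Gaussian, so its score
    # (gradient of the log density) is linear in x.
    m_t = mu * np.exp(-t)
    v_t = s**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return (m_t - x) / v_t

T, h = 4.0, 0.01
y = rng.standard_normal(4000)  # start from the (approximate) N(0,1) prior

# Reverse-time SDE, simulated with Euler-Maruyama:
# dy = [y + 2 * score(y, t)] ds + sqrt(2) dW, with t = T - s.
for k in range(int(T / h)):
    t = T - k * h
    y += (y + 2 * score(y, t)) * h + np.sqrt(2 * h) * rng.standard_normal(y.shape)

print(np.mean(y), np.std(y))  # should be close to mu = 2 and s = 0.5
```

Running the reverse process transports prior samples back to (approximate) samples from the data distribution; the mathematical questions above concern what happens when the exact score is replaced by a learned one, and how to quantify the resulting error.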

(4) Models from AI are increasingly being used as emulators. For example, fitting a deep neural network to realisations of a complex computer model for the weather can lead to more efficient approaches to weather forecasting. However, in most applications, using such emulators reliably requires that they report a measure of uncertainty, so the user knows when the output can be trusted. Furthermore, building on recent generalisations of Bayesian updating gives new approaches to incorporating known physical constraints and other structure into these neural network emulators, leading to more robust methods that generalise better outside the training sample, have fewer parameters and are easier to fit.
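One simple way an emulator can report uncertainty is to fit an ensemble and use the spread of its predictions. The sketch below stands in for the idea only: the "simulator" is a hypothetical cheap function, and the emulator is an ensemble of random-feature regressions fitted to bootstrap resamples, chosen for brevity rather than as the hub's method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical expensive simulator we want to emulate (stand-in example).
def simulator(x):
    return np.sin(3 * x) + 0.5 * x

# A limited design of simulator runs on [-1, 1].
X = rng.uniform(-1, 1, 40)
y = simulator(X)

def fit_member(X, y, rng, n_feat=50):
    # One ensemble member: random Fourier features + ridge regression,
    # fitted to a bootstrap resample of the simulator runs.
    idx = rng.integers(0, len(X), len(X))
    W = rng.normal(0, 3.0, n_feat)
    b = rng.uniform(0, 2 * np.pi, n_feat)
    phi = lambda x: np.cos(np.outer(x, W) + b)
    A = phi(X[idx])
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_feat), A.T @ y[idx])
    return lambda x: phi(x) @ w

ensemble = [fit_member(X, y, rng) for _ in range(20)]

def predict(x):
    # Ensemble mean is the emulator's prediction; the spread across
    # members is its reported uncertainty.
    preds = np.stack([f(x) for f in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

m_in, u_in = predict(np.array([0.2]))   # inside the training region
m_out, u_out = predict(np.array([3.0])) # far outside it
```

Inside the training region the members agree and the reported uncertainty is small; far outside it they disagree and the uncertainty grows, which is exactly the signal a user needs to know when the emulator's output can be trusted.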

Developing these new, practical, general-purpose probabilistic AI methods requires overcoming substantial challenges, many of which are mathematical at heart. The hub will unify a fragmented community with interests in probabilistic AI, bringing together UK researchers from across the breadth of Applied Mathematics, Computer Science, Probability and Statistics. The hub will promote probabilistic AI widely, encouraging and facilitating cross-disciplinary mathematics research in AI, and has substantial flexibility to fund the involvement of researchers from across the UK during its lifetime.

ProbAI will draw on and benefit from well-established, world-leading strength in areas relevant to probabilistic AI across Mathematics and Computer Science, with the aim of making the UK the world leader in probabilistic AI.