Jeunes Chercheuses et Jeunes Chercheurs (JCJC) grant

ECO-ML - Rethinking Modern Machine Learning Tools for a New Generation of Low-Power Large-Scale Modeling Systems

Dates: Feb 2019 – Feb 2022

Information and Communication Technologies (ICT) are constantly producing advancements that translate into a variety of societal changes, including improvements to the economy, better living conditions, and access to education, well-being, and entertainment. The widespread use and growth of ICT, however, are posing a huge threat to the sustainability of this development, given that the energy consumption of current computing devices is growing at an uncontrolled pace.

Within ICT, machine learning is currently one of the fastest-growing fields, given its pervasive use and adoption in smart cities, recommendation systems, finance, social media analysis, communication systems, and transportation. Apart from isolated application-specific attempts, the only general solution to tackle the sustainability of computations in machine learning is Google’s Tensor Processing Unit (TPU), which was opened to general use through a cloud service in mid-February 2018. This is an interesting and effective way to push transistor-based technology towards addressing some of the issues above pertaining to the sustainability of computing for machine learning, and it is inspiring other companies and start-ups to follow this trend.

ECO-ML’s ambition is to radically change this and to propose a novel angle of attack on the sustainability of computations in machine learning. The starting point of ECO-ML is the realization that current approaches for inference and prediction with Gaussian Processes (GPs) and Deep Gaussian Processes (DGPs) are competitive with popular Deep Neural Networks (DNNs), while offering attractive flexibility and quantification of uncertainty. Over the last year, we have come across the work of the French company LightOn on the development of novel Optical Processing Units (OPUs). OPUs perform a specific matrix operation in hardware by exploiting the scattering of light, so that in practice the computation happens at the speed of light. Not only that, the power consumption of OPUs is much lower than that of current computing devices, and they can operate with Gaussian random matrices orders of magnitude larger than those current devices can handle. GP and DGP models are perfect candidates to benefit from the principles behind OPUs, but advances in the design of these models and in their inference are needed for this to become a reality.
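
Concretely, the operation an OPU performs can be thought of as a very large random projection applied optically. The following NumPy sketch simulates such a projection and uses it as a random-feature map; it assumes the device computes features of the form |Rx|² with R an i.i.d. complex Gaussian matrix, which is a hedged assumption about the hardware's operation rather than a detail stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_opu_feature_map(input_dim, n_features=10_000, rng=rng):
    """Return a feature map simulating the OPU-style operation phi(x) = |R x|^2,
    where R is an i.i.d. complex Gaussian random matrix (applied optically by
    the physical device, here simulated in software)."""
    R = (rng.standard_normal((input_dim, n_features))
         + 1j * rng.standard_normal((input_dim, n_features))) / np.sqrt(2 * n_features)
    return lambda X: np.abs(X @ R) ** 2  # non-negative random features

# The same fixed random matrix must be reused for every input, so that the
# features induce a well-defined approximate kernel k(x, x') ≈ phi(x) · phi(x').
phi = make_opu_feature_map(input_dim=20, n_features=2_000)
X = rng.standard_normal((5, 20))
K_approx = phi(X) @ phi(X).T  # Monte Carlo approximation of the induced kernel
```

The appeal for GPs and DGPs is that such random-feature expansions turn expensive kernel computations into linear algebra on features that the OPU can produce at very large scale and low power.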

We expect to produce and release the first implementation of GPs and DGPs using OPUs, and to demonstrate that this leads to considerable acceleration of model training and prediction while reducing power consumption with respect to the state-of-the-art. We expect to advance the state-of-the-art in GP and DGP modeling and inference by developing novel model approximations and inference schemes tailored to OPU computing, which will also trigger advances in the theory of approximation of GPs and DGPs. Finally, we expect to showcase a variety of modeling applications in the environmental and life sciences, demonstrating that our approach achieves performance competitive with the state-of-the-art, together with sound quantification of uncertainty and fast model training and prediction, in a sustainable way. Just as Graphics Processing Units (GPUs) enabled the deep learning revolution, we envisage that OPUs will be a key element in making GPs the preferred choice for future large-scale modeling tasks requiring accurate quantification of uncertainty.
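
To make the envisioned GP/OPU pipeline concrete, here is a minimal sketch of approximate GP regression built on random features: a standard normal prior is placed on the weights of the feature expansion (here the simulated OPU features from the sketch above), yielding closed-form predictive means and variances. The function names, noise level, and prior scale are illustrative assumptions, not project code.

```python
import numpy as np

def rf_gp_regression(Phi_train, y_train, Phi_test, noise_var=0.1):
    """Predictive mean and variance of an approximate GP obtained by placing a
    standard normal prior on the weights of a random-feature expansion."""
    n_feat = Phi_train.shape[1]
    # Posterior over the weights given the data is Gaussian with precision A.
    A = Phi_train.T @ Phi_train / noise_var + np.eye(n_feat)
    L = np.linalg.cholesky(A)
    mu = np.linalg.solve(L.T, np.linalg.solve(L, Phi_train.T @ y_train)) / noise_var
    mean = Phi_test @ mu
    V = np.linalg.solve(L, Phi_test.T)           # L^{-1} Phi_test^T
    var = noise_var + np.sum(V ** 2, axis=0)     # diagonal of the predictive covariance
    return mean, var

# Illustrative usage with the simulated OPU feature map from the previous sketch.
rng = np.random.default_rng(1)
X_tr = rng.uniform(-3, 3, size=(50, 1))
y_tr = np.sin(X_tr[:, 0]) + 0.1 * rng.standard_normal(50)
X_te = np.linspace(-3, 3, 100)[:, None]
phi = make_opu_feature_map(input_dim=1, n_features=500)  # defined in the previous sketch
mean, var = rf_gp_regression(phi(X_tr), y_tr, phi(X_te))
```

In this scheme the only operation that scales with the feature dimension quadratically is the solve on the feature space, while the feature computation itself is exactly the kind of large random matrix operation an OPU is designed to perform.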

