Joeri Hermans


My name is Joeri, and I'm a PhD student at ULg working on likelihood-free inference, machine learning, and high-performance computing, with the goal of applying these effectively to domain sciences such as physics and astronomy. Previously, I worked as a Technical Student at CERN, where my work focused mainly on the development of a distributed profiler and on researching and developing distributed machine-learning solutions. My scientific interests lie in machine learning, distributed systems, and physics. Although I'm not a physicist, I have been passionate about astronomy from a very young age, and I'm working towards combining these fields in the future.


Latest Work

Gradient Energy Matching

Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability as the number of workers increases. In this work, we study the dynamics of distributed asynchronous SGD through the lens of Lagrangian mechanics. Using this description, we introduce the concept of energy to describe the optimization process and derive a sufficient condition ensuring its stability: the collective energy induced by the active workers must remain below the energy of a target synchronous process. Making use of this criterion, we derive a stable distributed asynchronous optimization procedure, GEM, that estimates and maintains the energy of the asynchronous system at or below the energy of sequential SGD with momentum. Experimental results highlight the stability and speedup of GEM compared to existing schemes, even when scaling to one hundred asynchronous workers. Results also indicate better generalization compared to the targeted SGD with momentum.
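The core idea above can be illustrated with a short sketch. This is not the exact GEM algorithm from the paper, but a simplified, hypothetical variant of the energy-matching principle: each asynchronous gradient is rescaled so that the magnitude ("energy") of the applied update matches the update that a sequential momentum-SGD proxy would have taken. The function name `gem_scale` and the momentum coefficient `rho` are illustrative assumptions, not names from the paper.

```python
import numpy as np

def gem_scale(grad, proxy_momentum, rho=0.9, eps=1e-12):
    """Rescale an asynchronous gradient so its norm matches the update
    of a sequential momentum-SGD proxy process (simplified sketch).

    Returns the rescaled update and the new proxy momentum."""
    # Update that sequential SGD with momentum would have applied.
    target = rho * proxy_momentum + grad
    # Scaling factor that matches the "energy" (norm) of the target update.
    scale = np.linalg.norm(target) / (np.linalg.norm(grad) + eps)
    return scale * grad, target

# Toy usage: gradients arriving (possibly stale) from workers are rescaled
# before being applied to the central parameters.
rng = np.random.default_rng(0)
theta = np.zeros(4)   # central model parameters
m = np.zeros(4)       # proxy momentum of the target sequential process
lr = 0.1
for _ in range(5):
    g = rng.normal(size=4)        # gradient pushed by some worker
    update, m = gem_scale(g, m)
    theta -= lr * update
```

Because the scaling only adjusts the magnitude of each worker's contribution, the direction of the gradient is preserved while the collective energy is kept at the level of the sequential target process.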

Accumulated Gradient Normalization

This work addresses the instability in asynchronous data-parallel optimization. It does so by introducing a novel distributed optimizer which is able to efficiently optimize a centralized model under communication constraints. The optimizer achieves this by pushing a normalized sequence of first-order gradients to a parameter server. This implies that the magnitude of a worker delta is smaller compared to an accumulated gradient, and provides a better direction towards a minimum than individual first-order gradients. In turn, this forces possible implicit momentum fluctuations to be more aligned, under the assumption that all workers contribute towards a single minimum. As a result, our approach mitigates the parameter-staleness problem more effectively, since staleness in asynchrony induces (implicit) momentum, and achieves a better convergence rate than other optimizers such as asynchronous EASGD and DynSGD, as we show empirically.
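The mechanism described above can be sketched in a few lines. This is a simplified, assumed reading of the scheme: a worker performs several local first-order steps, accumulates the gradients, and normalizes the accumulated gradient by the number of local steps before communicating the delta to the parameter server. The function name `accumulated_gradient_normalization` and the step count `num_local_steps` are illustrative, not names from the paper.

```python
import numpy as np

def accumulated_gradient_normalization(grads):
    """Normalize an accumulated sequence of first-order gradients by the
    number of local steps before pushing the delta to the parameter server."""
    return sum(grads) / len(grads)

# Toy usage: a worker computes several local gradients, then pushes one
# normalized delta instead of every raw first-order gradient.
rng = np.random.default_rng(1)
num_local_steps = 4
local_grads = [rng.normal(size=3) for _ in range(num_local_steps)]
delta = accumulated_gradient_normalization(local_grads)
```

Normalizing by the number of accumulated steps keeps the magnitude of the communicated delta comparable to a single gradient while its direction reflects several local steps, which is what keeps the implicit momentum induced by staleness in check.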


Easily tag locations in deep space (directories) and warp (like a space drive) to them with a few keystrokes.

Recent Pictures