From stochastic gradient descents to online Newton algorithms
Antoine Godichon-Baggioni 1
1 : Université Pierre et Marie Curie (UPMC) - Paris VI

Most machine learning methods can be cast as the minimization of a risk function that is not directly available. To optimize it from samples arriving in a streaming fashion, stochastic gradient descent is the standard tool, but it can be very sensitive to ill-conditioned problems. To overcome this, we focus on stochastic Newton methods that avoid inverting a Hessian estimate at each iteration, maintaining a recursive estimate of the inverse Hessian instead. Under mild assumptions, which are satisfied for instance by linear, logistic, and softmax regressions, the resulting estimates are asymptotically efficient. Numerical experiments on simulated data give empirical evidence of the relevance of the proposed methods, which outperform popular competitors.
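To make the idea concrete, here is a minimal Python sketch of a stochastic Newton recursion of the kind evoked above, applied to logistic regression: the inverse of the cumulated Hessian estimate is maintained directly through the Sherman-Morrison (Riccati) identity, so no matrix is ever inverted. The update rule, all names, and the synthetic stream are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sherman_morrison(S_inv, u, a):
    """Inverse of S + a * u u^T, given S_inv, with no matrix inversion."""
    Su = S_inv @ u
    return S_inv - (a / (1.0 + a * (u @ Su))) * np.outer(Su, Su)

def stochastic_newton_logistic(stream, dim, floor=1e-8):
    """One-pass stochastic Newton recursion for logistic regression (sketch)."""
    theta = np.zeros(dim)
    # Inverse of S_n = I + sum_i a_i x_i x_i^T, the cumulated Hessian estimate.
    S_inv = np.eye(dim)
    for x, y in stream:
        p = 1.0 / (1.0 + np.exp(-(x @ theta)))
        grad = (p - y) * x              # per-sample gradient of the logistic loss
        a = max(p * (1.0 - p), floor)   # per-sample Hessian weight, floored away from 0
        S_inv = sherman_morrison(S_inv, x, a)
        # Newton-type step: since S_n grows linearly with n, S_inv already
        # carries the 1/(n+1) step size of the averaged-Hessian formulation.
        theta = theta - S_inv @ grad
    return theta

# Illustrative usage on simulated data (assumed setup, not the paper's experiments).
rng = np.random.default_rng(0)
d, n = 5, 50_000
theta_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
Y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ theta_star)))).astype(float)
theta_hat = stochastic_newton_logistic(zip(X, Y), d)
print(np.linalg.norm(theta_hat - theta_star))
```

Each iteration costs O(d²) for the rank-one inverse update, versus O(d³) if the Hessian estimate were inverted at every step, which is what makes this kind of recursion practical in the streaming setting.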

