*Yuchen Wu (Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences & University of Chinese Academy of Sciences)
Optimization is a fundamental tool in machine learning. In this paper, we consider large-scale finite-sum problems in machine learning, where the problem size $n$ is large, the problem dimension $d$ is moderate, and the residual is very small. Stochastic and incremental methods are generally considered effective for problems with large $n$, since their per-iteration cost is independent of $n$. For ill-conditioned problems, a second-order method should be applied; in particular, quasi-Newton methods are a good choice, and the Incremental Quasi-Newton (IQN) method successfully applies quasi-Newton updates within an incremental framework. The small-residual assumption encourages us to apply the Gauss-Newton method, since it is simple and converges fast for zero-residual problems. The Incremental Gauss-Newton (IGN) method proposed here bridges the gap between the deterministic Gauss-Newton method and incremental methods. Our method is appealing because it has a smaller per-iteration computational cost than the standard Gauss-Newton method and achieves a linear convergence rate proportional to the residual under standard regularity assumptions. Beyond this theoretical result, we show empirically on a number of well-known datasets that the proposed IGN method is effective: it is comparable to IQN in terms of effective passes, while requiring less time per iteration and an $O(nd)$ storage cost instead of $O(nd^2)$. Therefore, our method is well suited to large-scale finite-sum problems with small residuals, which often arise in machine learning.
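To make the incremental Gauss-Newton idea concrete, consider the finite-sum least-squares problem $\min_x \frac{1}{2n}\sum_{i=1}^{n} r_i(x)^2$ with scalar residuals $r_i$. The following is only an illustrative sketch, not necessarily the exact update analyzed in the paper; the per-component memory $(z_i, g_i, r_i(z_i))$ and the cyclic choice of the refreshed component are assumptions made here for exposition. Each component keeps a linearization point $z_i$ with gradient $g_i = \nabla r_i(z_i)$, and every iteration solves the aggregated linearized normal equations
\[
x^{t+1} \;=\; \Big(\sum_{i=1}^{n} g_i g_i^{\top}\Big)^{-1} \sum_{i=1}^{n} g_i\big(g_i^{\top} z_i - r_i(z_i)\big),
\]
after which only one component $i_t$ is refreshed, $z_{i_t} \leftarrow x^{t+1}$, $g_{i_t} \leftarrow \nabla r_{i_t}(x^{t+1})$, and the two aggregates are corrected by rank-one updates. In such a scheme, storing $(z_i, g_i, r_i(z_i))$ for every component costs $O(nd)$, and maintaining the inverse of the $d \times d$ aggregate via Sherman–Morrison updates keeps the per-iteration arithmetic independent of $n$, which is consistent with the storage and per-iteration claims above.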