A distributed stochastic optimization algorithm with gradient-tracking and distributed heavy-ball acceleration
Bihao SUN, Jinhui HU, Dawen XIA, Huaqing LI
Distributed optimization has advanced rapidly in recent years owing to its wide applications in machine learning and signal processing. In this paper, we study distributed optimization for minimizing a global objective given as the sum of smooth and strongly convex local cost functions, distributed over an undirected network of n nodes. In contrast to existing works, we incorporate a distributed heavy-ball term to improve the convergence performance of the proposed algorithm: a momentum term is combined with a gradient-tracking technique to accelerate the convergence of existing distributed stochastic first-order gradient methods. We show that the proposed algorithm accelerates convergence beyond GT-SAGA without increasing the per-iteration complexity. Extensive experiments on real-world datasets verify the effectiveness and correctness of the proposed algorithm.
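For intuition, the following is a minimal sketch of the kind of update the abstract describes: each node combines a consensus averaging step, a descent step along a gradient-tracking estimate of the network-average gradient, and a heavy-ball momentum term. The toy least-squares local costs, the ring topology and mixing weights W, the step size alpha, the momentum weight beta, and the use of full local gradients in place of the paper's SAGA-type stochastic estimator are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3

# Hypothetical local least-squares costs f_i(x) = 0.5 * ||A_i x - b_i||^2;
# the global objective is their sum over the n nodes.
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def grad(i, x):
    """Full local gradient of f_i at x (stand-in for a stochastic estimator)."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for an undirected ring network (assumed).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha, beta = 0.01, 0.3      # step size and heavy-ball weight (illustrative tuning)
x = np.zeros((n, d))         # row i holds node i's local iterate
x_prev = x.copy()
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                 # gradient trackers, initialized to local gradients

for _ in range(500):
    # Consensus + descent along the tracked gradient + heavy-ball momentum.
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    # Gradient tracking: y_i asymptotically follows the average gradient.
    y = W @ y + g_new - g
    x_prev, x, g = x, x_new, g_new

# Compare the consensus average against the centralized least-squares solution.
x_star, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Dropping the momentum term (beta = 0) recovers plain gradient tracking, which makes the acceleration effect of the heavy-ball term easy to observe in this sketch.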
Distributed optimization / High-performance algorithm / Multi-agent system / Machine-learning problem / Stochastic gradient