Frontiers of Information Technology & Electronic Engineering
ISSN 2095-9184 (print), ISSN 2095-9230 (online)
2021 Vol.22 No.11 P.1463-1476
A distributed stochastic optimization algorithm with gradient-tracking and distributed heavy-ball acceleration
Abstract: Distributed optimization has developed considerably in recent years owing to its wide applications in machine learning and signal processing. In this paper, we investigate distributed optimization for minimizing a global objective that is the sum of smooth and strongly convex local cost functions distributed over an undirected network of n nodes. In contrast to existing works, we apply a distributed heavy-ball term to improve the convergence performance of the proposed algorithm: to accelerate existing distributed stochastic first-order gradient methods, a momentum term is combined with a gradient-tracking technique. We show that the proposed algorithm achieves better acceleration than GT-SAGA without increasing the complexity. Extensive experiments on real-world datasets verify the effectiveness and correctness of the proposed algorithm.
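As background for readers, the following is a generic sketch (not necessarily the paper's exact recursion, which is given in the full text) of how a gradient-tracking update with a distributed heavy-ball momentum term and a stochastic gradient estimator is typically written. Here $f_i$ is the local cost at node $i$, $g_i^k \approx \nabla f_i(x_i^k)$ is a stochastic (e.g., SAGA-type variance-reduced) gradient estimate, $W=[w_{ij}]$ is a doubly stochastic weight matrix compatible with the undirected network, $\alpha>0$ is the step size, and $\beta\in[0,1)$ is the momentum parameter:
\[
x_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, x_j^{k} - \alpha\, y_i^{k} + \beta\bigl(x_i^{k} - x_i^{k-1}\bigr),
\qquad
y_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, y_j^{k} + g_i^{k+1} - g_i^{k},
\]
with $y_i^{0}=g_i^{0}$, so that $y_i^k$ tracks the network-wide average of the local stochastic gradients while the $\beta$ term supplies the heavy-ball acceleration.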
Key words: Distributed optimization, High-performance algorithm, Multi-agent system, Machine-learning problem, Stochastic gradient
1. Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
2. School of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, China
Abstract: Owing to its wide applications in machine learning and signal processing, distributed optimization has developed considerably in recent years. This paper studies distributed optimization for finding the global minimum of an objective function. The objective is the sum of smooth and strongly convex local cost functions distributed over an undirected network of n nodes. Unlike existing works, we use a distributed heavy-ball term to improve the convergence performance of the algorithm. To accelerate the convergence of existing distributed stochastic first-order gradient methods, a momentum term is combined with a gradient-tracking technique. Simulation results show that, without increasing the complexity, the proposed algorithm attains a higher convergence rate than GT-SAGA. Numerical experiments on real-world datasets verify the effectiveness and correctness of the algorithm.
DOI: 10.1631/FITEE.2000615
CLC number: TP14
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2021-04-01