A Sharp Uniform-in-Time Error Estimate for Stochastic Gradient Langevin Dynamics

Lei Li, Yuliang Wang

CSIAM Trans. Appl. Math., 2025, Vol. 6, Issue 4: 711-759. DOI: 10.4208/csiam-am.SO-2024-0039


Abstract

We establish a sharp uniform-in-time error estimate for stochastic gradient Langevin dynamics (SGLD), a widely used sampling algorithm. Under mild assumptions, we obtain a uniform-in-time $\mathcal{O}(\eta^2)$ bound on the Kullback-Leibler divergence between the SGLD iteration and the Langevin diffusion, where $\eta$ is the step size (or learning rate). Our analysis also covers varying step sizes. As a consequence, we derive an $\mathcal{O}(\eta)$ bound on the distance between the invariant measures of the SGLD iteration and the Langevin diffusion, in both Wasserstein and total variation distances. Our result significantly improves on existing analyses of SGLD in the literature.
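For context, the sketch below illustrates a single SGLD iteration, i.e. an Euler-Maruyama step of the Langevin diffusion $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$ driven by a stochastic (mini-batch) gradient with step size $\eta$. The Gaussian target and the noise-perturbed gradient standing in for subsampling are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def sgld_step(x, grad_U_batch, eta, rng):
    """One SGLD update: x <- x - eta * g(x) + sqrt(2 * eta) * N(0, I),
    where g is a stochastic (mini-batch) estimate of the gradient of the
    potential U. This is a noisy Euler-Maruyama step of the Langevin SDE."""
    noise = rng.standard_normal(x.shape)
    return x - eta * grad_U_batch(x) + np.sqrt(2.0 * eta) * noise

# Illustrative example (hypothetical): standard Gaussian target, U(x) = |x|^2 / 2,
# with a perturbed gradient mimicking the extra variance from random batching.
rng = np.random.default_rng(0)
grad_U_batch = lambda x: x + 0.1 * rng.standard_normal(x.shape)

x = np.zeros(2)
eta = 1e-2  # step size (learning rate)
for _ in range(10_000):
    x = sgld_step(x, grad_U_batch, eta, rng)
```

The uniform-in-time estimate concerns how the law of such iterates tracks the law of the continuous-time Langevin diffusion as the number of steps grows, with the discrepancy controlled in terms of $\eta$.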

Keywords

Random batch / Euler-Maruyama scheme / Fokker-Planck equation / log-Sobolev inequality

Cite this article

Lei Li, Yuliang Wang. A Sharp Uniform-in-Time Error Estimate for Stochastic Gradient Langevin Dynamics. CSIAM Trans. Appl. Math., 2025, 6(4): 711-759. DOI: 10.4208/csiam-am.SO-2024-0039
