Adformer: an adaptive unified framework for multivariate time series forecasting

Xiaohou SHI , Xuanming ZHANG , Yuan CHANG , Yan SUN , Nadra GUIZANI

Front. Comput. Sci. ›› 2027, Vol. 21 ›› Issue (2): 2102316 DOI: 10.1007/s11704-025-51487-6

Artificial Intelligence
RESEARCH ARTICLE

Abstract

In multivariate time series forecasting, most existing Transformer models follow a fixed modeling paradigm: they either focus on capturing temporal patterns within each variable or on learning the interactions between variables. However, such a single approach often fails to adapt to real-world time series, which exhibit complex and diverse characteristics. To this end, we propose Adformer, an adaptive and unified forecasting framework. The framework integrates a hybrid architecture capable of capturing both intra-variable and inter-variable dependencies. We recognize, however, that this hybrid design faces a key challenge: when inter-variable correlations in the data are weak, forcing the model to learn inter-variable interactions may introduce statistical noise and thus degrade forecasting performance. To enable the model to circumvent this issue, the hybrid architecture is dynamically guided by a data-driven strategy selection module. This module analyzes the input's intrinsic correlation structure using unsupervised clustering and, based on this analysis, automatically selects the optimal modeling path for the architecture: whether to focus on intra-variable patterns, inter-variable interactions, or a hybrid of both. Additionally, we introduce a frequency-aware loss function, which helps the model focus on meaningful low-frequency components and improves robustness under noisy conditions. Extensive experiments on several public benchmarks demonstrate that our adaptive framework consistently outperforms state-of-the-art methods across various forecasting tasks, showing strong generalization and robustness, and highlighting its potential as a foundation for future time series models.
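The strategy-selection idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it summarizes an input window's pairwise Pearson correlation structure with a single coupling score and thresholds it, standing in for the paper's unsupervised clustering step. The function name and threshold values are hypothetical.

```python
import numpy as np

def select_strategy(x, strong=0.6, weak=0.2):
    """Pick a modeling path from an input window's correlation structure.

    x: array of shape (seq_len, n_vars) -- one multivariate input window.
    Returns one of "intra", "inter", or "hybrid".
    """
    # Pairwise Pearson correlations between variables (columns of x).
    corr = np.corrcoef(x, rowvar=False)
    # Mean absolute off-diagonal correlation summarizes coupling strength.
    n = corr.shape[0]
    strength = np.abs(corr[~np.eye(n, dtype=bool)]).mean()
    if strength >= strong:
        return "inter"   # strong coupling: model cross-variable interactions
    if strength <= weak:
        return "intra"   # weak coupling: per-variable modeling avoids noise
    return "hybrid"      # mixed regime: use both paths
```

A window of near-independent series would route to the intra-variable path, while a window of strongly co-moving series would route to the inter-variable path, matching the noise-avoidance argument in the abstract.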
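Likewise, one plausible form of a frequency-aware loss is a spectral-domain error with weights that decay at higher frequencies, so low-frequency structure dominates the objective. The sketch below assumes an exponential decay over rFFT bins; it is not the paper's exact formulation, and the `decay` parameter is hypothetical.

```python
import numpy as np

def frequency_aware_loss(pred, target, decay=0.5):
    """Squared spectral error with low-frequency emphasis.

    pred, target: arrays of shape (horizon,) or (horizon, n_vars).
    decay: how fast higher-frequency bins are down-weighted.
    """
    # Real FFT along the time axis gives one spectrum per variable.
    pf = np.fft.rfft(pred, axis=0)
    tf = np.fft.rfft(target, axis=0)
    # Exponentially decaying weights emphasize low-frequency components.
    w = np.exp(-decay * np.arange(pf.shape[0]))
    err = np.abs(pf - tf) ** 2
    if err.ndim > 1:
        w = w[:, None]   # broadcast weights across variables
    return float(np.sum(w * err) / err.size)
```

Under such a weighting, an error confined to high-frequency bins contributes much less to the loss than an equally large error at low frequencies, which is one way to make the model robust to high-frequency noise.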

Keywords

multivariate time series forecasting / transformer / frequency-aware loss / adaptive modeling

Cite this article

Xiaohou SHI, Xuanming ZHANG, Yuan CHANG, Yan SUN, Nadra GUIZANI. Adformer: an adaptive unified framework for multivariate time series forecasting. Front. Comput. Sci., 2027, 21(2): 2102316 DOI:10.1007/s11704-025-51487-6



RIGHTS & PERMISSIONS

Higher Education Press
