A Count Column Sketch Method for Tensor Auto-regression Parameter Estimation
Qian-Nan Lian, Ju-Li Zhang
Communications on Applied Mathematics and Computation, 2025, Vol. 7, Issue 5: 2137–2155.
For the tensor auto-regression (TAR) model, we combine the model with multilinear systems in this work to extend previous approaches to parameter estimation and prediction. Inspired by the SubCount Sketch (SCS) method, we propose a Count Column Sketch (CCS) method for parameter estimation. Compared with the SCS method, the proposed CCS method handles parameter estimation for underdetermined problems more efficiently, and its convergence properties are also established. In the numerical experiments, the CCS method is applied to the analysis and prediction of the L-transform tensor auto-regressive (L-TAR) model. For underdetermined problems, the numerical results validate our approach and show that it requires less training time than other methods.
Keywords: Randomized methods / Count Column Sketch (CCS) method / Parameter estimation / Tensor auto-regression (TAR) / L-transform. MSC: 65M10, 75D07
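The CCS algorithm itself is not reproduced on this page. As a rough, hedged illustration of the general idea the abstract builds on (compressing the columns of an underdetermined system with a count-sketch-style random hashing before solving a smaller least-squares problem), the following minimal Python sketch may help. The functions `count_sketch_columns` and `sketch_and_solve`, and the sketch-and-solve formulation, are illustrative assumptions only; they are not the authors' CCS method, the SCS method, or the L-TAR estimation procedure.

```python
# Illustrative count-sketch-style column compression for an underdetermined
# least-squares problem A x ~ b with m < n. NOT the authors' CCS method;
# it only demonstrates hashing columns into signed buckets before solving.
import numpy as np

def count_sketch_columns(A, sketch_size, rng):
    """Compress the n columns of A into `sketch_size` signed buckets."""
    m, n = A.shape
    buckets = rng.integers(0, sketch_size, size=n)   # hash each column to a bucket
    signs = rng.choice([-1.0, 1.0], size=n)          # random +/- 1 signs
    S = np.zeros((n, sketch_size))
    S[np.arange(n), buckets] = signs                 # sparse sketching matrix S
    return A @ S, S                                  # (m x s) sketched matrix and S

def sketch_and_solve(A, b, sketch_size, seed=0):
    """Solve the smaller sketched problem, then map the solution back to R^n."""
    rng = np.random.default_rng(seed)
    AS, S = count_sketch_columns(A, sketch_size, rng)
    y, *_ = np.linalg.lstsq(AS, b, rcond=None)       # small least-squares solve
    return S @ y                                     # approximate solution x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 50, 400                                   # underdetermined: m < n
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true
    x_hat = sketch_and_solve(A, b, sketch_size=120)
    print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```

In this toy setting the sketched system remains consistent with high probability, so the recovered solution reproduces b almost exactly at a lower solve cost. The paper's CCS method additionally provides convergence guarantees and exploits the tensor (L-transform) structure of the TAR model, neither of which this sketch attempts to capture.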