DOI: https://doi.org/10.26089/NumMet.v25r323

Alternating minimization method for low-rank entrywise approximation of tensors in canonical polyadic format

Authors

  • Stanislav V. Morozov

Keywords:

Chebyshev norm, canonical polyadic decomposition, alternating minimization

Abstract

The approximation of tensors in a low-parametric format is an important component of many mathematical modelling and data analysis tasks. One of the most popular low-parametric representations of tensors is the canonical polyadic (CP) decomposition. Most existing algorithms for CP approximation construct the approximation in the Frobenius norm; however, some applications require entrywise approximation, i.e., approximation in the Chebyshev (maximum) norm. In this paper, we propose an alternating minimization method for obtaining a low-rank approximation of tensors in the canonical polyadic format in the Chebyshev norm. An extensive evaluation demonstrates the effectiveness of the proposed algorithm.
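The abstract only outlines the approach, and the paper's actual subproblem solver is not reproduced here. The sketch below illustrates the general shape of such an alternating scheme for a third-order tensor, under the assumption that each step fixes all but one CP factor and solves the resulting Chebyshev-norm linear fit exactly as a linear program; the names chebyshev_ls and als_chebyshev_cp, and all parameter choices, are illustrative and not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def chebyshev_ls(M, y):
    """Solve min_x ||M x - y||_inf via the standard LP reformulation:
    minimize t subject to -t <= (M x - y)_i <= t for all i."""
    n, r = M.shape
    c = np.zeros(r + 1)
    c[-1] = 1.0                      # objective: the bound t
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([M, -ones]),
                      np.hstack([-M, -ones])])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r + [(0, None)],
                  method="highs")
    return res.x[:r]

def als_chebyshev_cp(T, rank, n_iter=20, seed=0):
    """Alternating minimization for a rank-`rank` CP approximation of a
    3D tensor T in the Chebyshev norm (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            # Unfold T along `mode`; columns enumerate the two fixed modes.
            rest = [m for m in range(3) if m != mode]
            Tm = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            B, C = factors[rest[0]], factors[rest[1]]
            # Khatri-Rao product of the fixed factors, row order matching Tm.
            M = np.einsum('jr,kr->jkr', B, C).reshape(-1, rank)
            # Each row of the active factor is an independent Chebyshev fit.
            factors[mode] = np.vstack(
                [chebyshev_ls(M, Tm[i]) for i in range(Tm.shape[0])])
    return factors

For example, factors = als_chebyshev_cp(T, rank=3) followed by np.max(np.abs(T - np.einsum('ir,jr,kr->ijk', *factors))) reports the achieved Chebyshev error. Solving one LP per factor row is expensive and is meant only to make the alternating structure concrete, not to reflect the efficiency of the method proposed in the paper.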


Published

2024-09-09

Section

Methods and algorithms of computational mathematics and their applications
