DOI: https://doi.org/10.26089/NumMet.v19r437

Scalability evaluation of iterative algorithms for supercomputer simulation of physical processes

Authors

  • N.A. Ezhova
  • L.B. Sokolinsky

Keywords

iterative algorithm
BSF parallel computation model
scalability estimation
speedup
parallel efficiency
Jacobi method
cluster computing systems

Abstract

This paper is devoted to the development of a methodology for evaluating the scalability of compute-intensive iterative algorithms used for simulating complex physical processes on supercomputer systems. The proposed methodology is based on the BSF (Bulk Synchronous Farm) parallel computation model, which makes it possible to predict the upper scalability bound of an iterative algorithm at the early stages of its design. The BSF model represents the algorithm in the form of operations on lists using higher-order functions. Two classes of representations are considered: BSF-M (Map BSF) and BSF-MR (Map-Reduce BSF). The proposed methodology is illustrated by the example of solving a system of linear equations by the Jacobi method. For the Jacobi method, two iterative algorithms are constructed: Jacobi-M, based on the BSF-M representation, and Jacobi-MR, based on the BSF-MR representation. Analytical estimates of the speedup, parallel efficiency, and upper scalability bound are obtained for these algorithms using the BSF cost metrics on multiprocessor computing systems with distributed memory. These algorithms are implemented in C++ using the BSF program skeleton and the MPI parallel programming library. The results of large-scale computational experiments performed on a cluster computing system are discussed. Based on the experimental results, the adequacy of the estimates obtained analytically using the BSF cost metric is analyzed.


Published

2018-12-24

Section

Section 1. Numerical methods and applications



References

  1. K. Borodulin et al., “Towards Digital Twins Cloud Platform,” in Proc. 10th Int. Conf. on Utility and Cloud Computing (ACM Press, New York, 2017), pp. 209-210.
  2. G. Bilardi and A. Pietracaprina, “Models of Computation, Theoretical,” in Encyclopedia of Parallel Computing (Springer, Boston, 2011), pp. 1150-1158.
  3. J. F. JaJa, “PRAM (Parallel Random Access Machines),” in Encyclopedia of Parallel Computing (Springer, Boston, 2011), pp. 1608-1615.
  4. L. G. Valiant, “A Bridging Model for Parallel Computation,” Commun. ACM 33 (8), 103-111 (1990).
  5. D. Culler, R. Karp, D. Patterson, et al., “LogP: Towards a Realistic Model of Parallel Computation,” in Proc. 4th ACM SIGPLAN Symp. on Principles and Practice of Parallel Programming, San Diego, USA, May 19-22, 1993 (ACM Press, New York, 1993), doi: 10.1145/155332.155333.
  6. M. Forsell and V. Leppänen, “An Extended PRAM-NUMA Model of Computation for TCF Programming,” in Proc. 2012 IEEE 26th International Parallel and Distributed Processing Symp. Workshops, Shanghai, China, May 21-25, 2012 (IEEE Press, Washington, DC, 2012), pp. 786-793.
  7. A. V. Gerbessiotis, “Extending the BSP Model for Multi-Core and Out-of-Core Computing: MBSP,” Parallel Comput. 41, 90-102 (2015).
  8. F. Lu, J. Song, and Y. Pang, “HLognGP: A Parallel Computation Model for GPU Clusters,” Concurrency Computat. Pract. Exper. 27 (17), 4880-4896 (2015).
  9. N. A. Ezhova and L. B. Sokolinsky, “Parallel Computational Model for Multiprocessor Systems with Distributed Memory,” Vestn. South Ural State Univ. Ser. Vychisl. Mat. Inf. 7 (2), 32-49 (2018).
  10. L. B. Sokolinsky, “Analytical Estimation of the Scalability of Iterative Numerical Algorithms on Distributed Memory Multiprocessors,” Lobachevskii J. Math. 39 (4), 571-575 (2018).
  11. L. M. Silva and R. Buyya, “Parallel Programming Models and Paradigms,” in High Performance Cluster Computing: Architectures and Systems (Prentice Hall, Upper Saddle River, 1999), Vol. 2, pp. 4-27.
  12. F. Darema, “SPMD Computational Model,” in Encyclopedia of Parallel Computing (Springer, Boston, 2011), pp. 1933-1943.
  13. S. Sahni and G. Vairaktarakis, “The Master-Slave Paradigm in Parallel Computer and Industrial Settings,” J. Global Optim. 9 (3-4), 357-377 (1996).
  14. I. M. Sokolinskaya and L. B. Sokolinsky, “Scalability Evaluation of the NSLP Algorithm for Solving Non-Stationary Linear Programming Problems on Cluster Computing Systems,” in Proc. Int. Conf. on Russian Supercomputing Days, Moscow, Russia, September 25-26, 2017 (Mosk. Gos. Univ., Moscow, 2017), pp. 319-332.
  15. M. I. Cole, “Parallel Programming with List Homomorphisms,” Parallel Proc. Lett. 5 (2), 191-203 (1995).
  16. H. Rutishauser, “The Jacobi Method for Real Symmetric Matrices,” in Handbook for Automatic Computation. Vol 2. Linear Algebra (Springer, Heidelberg, 1971), pp. 202-211.
  17. C. G. J. Jacobi, “Ueber eine neue Auflösungsart der bei der Methode der kleinsten Quadrate vorkommenden lineären Gleichungen,” Astronomische Nachrichten 22 (20), 297-306 (1845).
  18. H. H. Goldstine, F. J. Murray, and J. von Neumann, “The Jacobi Method for Real Symmetric Matrices,” J. ACM 6 (1), 59-96 (1959).
  19. X. I. A. Yang and R. Mittal, “Acceleration of the Jacobi Iterative Method by Factors Exceeding 100 Using Scheduled Relaxation,” J. Comput. Phys. 274, 695-708 (2014).
  20. J. E. Adsuara et al., “Scheduled Relaxation Jacobi Method: Improvements and Applications,” J. Comput. Phys. 321, 369-413 (2016).
  21. P. S. Kostenetskiy and A. Y. Safonov, “SUSU Supercomputer Resources,” in Proc. 10th Annual Int. Scientific Conf. on Parallel Computing Technologies (PCT 2016), Arkhangelsk, Russia, March 29-31, 2016, CEUR Workshop Proc. 1576, 561-573 (2016).