DOI: https://doi.org/10.26089/NumMet.v16r451

A multilevel approach to algorithm and software design for exaflops supercomputers

Authors

  • B.M. Glinskiy
  • I.M. Kulikov
  • A.V. Snytnikov
  • I.G. Chernykh
  • D.V. Weins

Keywords:

exascale computing
co-design
energy efficiency
agent simulation

Abstract

A strategy is proposed for developing algorithms and software for exaflops supercomputers. The strategy consists of three stages. The first stage is co-design, understood as taking the supercomputer architecture into account at every step of code development. The second stage is the forward-looking development of algorithms and software for the most promising exaflops architectures; this development is based on simulating the behavior of the algorithm within a given supercomputer architecture. The third stage is estimating the energy efficiency of the algorithm for different implementations on a particular architecture, or across different supercomputer architectures. The proposed approach is illustrated by the solution of two problems from astrophysics and plasma physics.
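The third stage, comparing implementations by energy-to-solution, can be illustrated with a minimal sketch. This is not the authors' simulator (the paper uses agent-based simulation of algorithm behavior); it is a toy Amdahl-style runtime model combined with a simple energy estimate, and all numbers (node power, serial fractions, runtimes) are illustrative assumptions, not data from the paper.

```python
# Toy sketch of energy-efficiency comparison: two hypothetical
# implementations of one algorithm, modeled on an assumed machine.

def runtime(serial_fraction: float, t_single: float, nodes: int) -> float:
    """Amdahl-style runtime: serial part plus perfectly parallel part."""
    return t_single * (serial_fraction + (1.0 - serial_fraction) / nodes)

def energy(power_per_node_w: float, nodes: int, t_seconds: float) -> float:
    """Energy-to-solution in joules: active nodes x power x runtime."""
    return power_per_node_w * nodes * t_seconds

if __name__ == "__main__":
    t_single = 3600.0   # single-node runtime in seconds (assumed)
    power = 300.0       # watts per node (assumed)
    for nodes in (1_000, 10_000, 100_000):
        for label, serial in (("impl A", 0.01), ("impl B", 0.001)):
            t = runtime(serial, t_single, nodes)
            e = energy(power, nodes, t)
            print(f"{label} on {nodes:>6} nodes: "
                  f"t = {t:8.2f} s, E = {e / 3.6e6:8.1f} kWh")
```

Even this crude model shows the effect the abstract refers to: at large node counts the less scalable implementation burns far more energy for the same answer, so energy efficiency must be compared per implementation and per architecture rather than assumed from peak performance.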


Published

2015-09-21

Section

Section 1. Numerical methods and applications

