MPI-IO application: an implementation of a mine ventilation model

Authors

  • B.L. Petushkeev

Keywords

high-performance computing
hydrodynamics
filtration
parallel I/O
MPI-IO
scalability
data management

Abstract

The high performance of modern supercomputers not only allows today's problems to be solved faster but also makes it possible to formulate problems that were previously intractable. This, however, requires revising the programming model, because the amount of resulting data grows accordingly. MPI, the de facto standard for distributed-memory computing systems, has offered a rich set of options for programming parallel I/O operations since the publication of the MPI-2 standard. Nevertheless, these capabilities are often neglected in the design of applications that simulate large problems. In this paper we describe practical experience with MPI-IO in the context of simulating gas filtration through a porous mine medium. We also discuss the challenges posed by application scalability and file-subsystem performance, and describe both single-process and parallel code optimizations. All tests were run on the SKIF-Siberia computing system, a 566-CPU cluster located at Tomsk State University. This work was supported by the Russian Foundation for Basic Research (project N 08-08-12029-ofi).
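
As an illustration of the MPI-2 parallel I/O interface referred to above, the sketch below shows a collective write of a block-distributed array to a single shared file. It is a minimal example of the API only; the file name, block size, and data layout are hypothetical and are not taken from the application described in the paper.

    /* Minimal sketch of a collective MPI-IO write (MPI-2 interface).
       Illustrative only: file name, block size, and data are hypothetical. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process owns one contiguous block of the global array. */
        const int local_n = 1024;
        double *block = malloc(local_n * sizeof(double));
        for (int i = 0; i < local_n; i++)
            block[i] = rank + i * 1e-6;            /* placeholder data */

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "snapshot.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Point each process at its own offset in the shared file and
           write collectively, so the MPI-IO layer (e.g. ROMIO) can
           merge the per-process requests into large contiguous accesses. */
        MPI_Offset disp = (MPI_Offset)rank * local_n * sizeof(double);
        MPI_File_set_view(fh, disp, MPI_DOUBLE, MPI_DOUBLE,
                          "native", MPI_INFO_NULL);
        MPI_File_write_all(fh, block, local_n, MPI_DOUBLE,
                           MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(block);
        MPI_Finalize();
        return 0;
    }

Collective calls such as MPI_File_write_all are the usual starting point for the kind of I/O optimization the abstract refers to, since they let the implementation aggregate many small per-process requests before touching the file system.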


Published

2009-04-06

Section

Section 2. Programming

