
CLC number: TP311

On-line Access: 2022-04-22

Received: 2018-07-07

Revision Accepted: 2018-09-14

Crosschecked: 2018-10-15




Frontiers of Information Technology & Electronic Engineering  2018 Vol.19 No.10 P.1251-1260


Extreme-scale parallel computing: bottlenecks and strategies

Author(s):  Ze-yao Mo

Affiliation(s):  CAEP Software Center for High Performance Numerical Simulation, Beijing 100088, China

Corresponding email(s):   zeyao_mo@iapcm.ac.cn

Key Words:  Extreme scale, Numerical simulation, Parallel computing, Supercomputers

Ze-yao Mo. Extreme-scale parallel computing: bottlenecks and strategies[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(10): 1251-1260. https://doi.org/10.1631/FITEE.1800421


Extreme-scale numerical simulations place extreme demands on parallel computing capability. To address the challenges of advancing this capability toward exascale, we systematically analyze the major bottlenecks in parallel computing research from three perspectives: computational scale, computing efficiency, and programming productivity. For each bottleneck, we identify the urgent key issues and propose coping strategies. This study should help keep the development of numerical computing capability in step with the growth of supercomputer peak performance.
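The computing-efficiency bottleneck named in the abstract can be illustrated with the classical Amdahl's-law bound (a textbook illustration, not a result from this paper): at extreme scale, even a tiny serial fraction of the workload caps achievable speedup and drives parallel efficiency toward zero.

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Upper bound on speedup when a fixed fraction of the work is serial (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def parallel_efficiency(serial_fraction, n_procs):
    """Speedup divided by processor count."""
    return amdahl_speedup(serial_fraction, n_procs) / n_procs

# With only a 0.1% serial fraction, speedup saturates near 1000x
# no matter how many cores an exascale machine provides:
for p in (1_000, 100_000, 10_000_000):
    s = amdahl_speedup(0.001, p)
    print(f"{p:>10} cores: speedup {s:,.1f}, efficiency {parallel_efficiency(0.001, p):.2%}")
```

The hypothetical 0.1% serial fraction is chosen only to make the saturation visible; real extreme-scale codes also face communication, load-imbalance, and memory-bandwidth losses that this simple bound ignores.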



[1]Amarasinghe S, Hall M, Lethin R, et al., 2011. Exascale programming challenges. Technical Report of the Workshop on Exascale Programming Challenges.

[2]Ashby S, Beckman P, Chen J, et al., 2011. The opportunities and challenges of exascale computing. Summary Report of the Advanced Scientific Computing Advisory Committee Subcommittee.

[3]Balay S, Gropp WD, McInnes LC, et al., 1997. Efficient management of parallelism in object-oriented numerical software libraries. In: Arge E, Bruaset AM, Langtangen HP (Eds.), Modern Software Tools for Scientific Computing. Birkhauser Boston Inc., Cambridge, USA.

[4]Campos C, Roman JE, 2012. Strategies for spectrum slicing based on restarted Lanczos methods. Numer Algor, 60(2):279-295.

[5]Cao X, Mo Z, Liu X, et al., 2011. Parallel implementation of fast multipole method based on JASMIN. Sci China Inform Sci, 54(4):757-766 (in Chinese).

[6]Chung IH, Lee CR, Zhou J, et al., 2011. Hierarchical mapping for HPC applications. IEEE Int Symp on Parallel and Distributed Processing Workshops and PhD Forum, p.1815-1823.

[7]Cooley JW, Tukey JW, 1965. An algorithm for the machine calculation of complex Fourier series. Math Comput, 19(90):297-301.

[8]Darve E, 2000. The fast multipole method: numerical implementation. J Comput Phys, 160(1):195-240.

[9]Dolean V, Jolivet P, Nataf F, 2015. An Introduction to Domain Decomposition Methods: Algorithms, Theory, and Parallel Implementation. Society for Industrial and Applied Mathematics, Philadelphia, USA.

[10]Dongarra J, Foster I, Fox G, et al., 2003. The Sourcebook of Parallel Computing. Morgan Kaufmann Publishers Inc., San Francisco, USA.

[11]Dubey A, Almgren A, Bell J, et al., 2014. A survey of high level frameworks in block-structured adaptive mesh refinement packages. J Parall Distr Comput, 74(12):3217-3227.

[12]Engheta N, Murphy WD, Rokhlin V, et al., 1992. The fast multipole method (FMM) for electromagnetic scattering problems. IEEE Trans Antenn Propag, 40(6):634-641.

[13]Falgout RD, Yang UM, 2002. Hypre: a library of high performance pre-conditioners. Int Conf on Computational Science, p.632-641.

[14]Fu H, He C, Chen B, et al., 2017. 18.9-Pflops nonlinear earthquake simulation on Sunway TaihuLight: enabling depiction of 18-Hz and 8-meter scenarios. Int Conf for High Performance Computing, Networking, Storage, and Analysis, p.1-12.

[15]Hennessy JL, Patterson DA, 2003. Computer Architecture: a Quantitative Approach. Morgan Kaufmann Publishers Inc., San Francisco, USA.

[16]Hernandez V, Roman JE, Vidal V, 2005. SLEPc: a scalable and flexible toolkit for the solution of eigenvalue problems. ACM Trans Math Softw, 31(3):351-362.

[17]Heroux MA, Bartlett RA, Howle VE, et al., 2005. An overview of the Trilinos project. ACM Trans Math Softw, 31(3):397-423.

[18]Johansen H, McInnes LC, Bernholdt DE, et al., 2014. Software productivity for extreme-scale science. DOE Workshop Report.

[19]Keyes DE, Mcinnes LC, Woodward CS, et al., 2013. Multiphysics simulations: challenges and opportunities. Int J High Perform Comput Appl, 27(1):4-83.

[20]Knoll DA, Keyes DE, 2004. Jacobian-free Newton-Krylov methods: a survey of approaches and applications. J Comput Phys, 193(2):357-397.

[21]Li J, Zhang X, Tan G, et al., 2013. SMAT: an input adaptive sparse matrix-vector multiplication auto-tuner. ACM SIGPLAN Not, 48(6):117-126.

[22]Liu X, Yang Z, Yang Y, 2018. A nested partitioning load balancing algorithm for Tianhe-2. J Comput Res Devel, 55(2):418-425.

[23]Lucas R, Ang J, Bergman K, et al., 2014. DOE Advanced Scientific Computing Advisory Subcommittee report: top 10 exascale research challenges.

[24]Mo Z, 2014. Domain-specific programming model for high performance scientific and engineering computation. Commun CCF, 10(1):8-12 (in Chinese).

[25]Mo Z, 2015. Progress on high performance programming framework for numerical simulation. E-Sci Technol Appl, 6(4):11-19 (in Chinese).

[26]Mo Z, 2016. High performance programming frameworks for numerical simulation. Nat Sci Rev, 3(1):28-29.

[27]Mo Z, Zhang A, Cao X, et al., 2010. JASMIN: a parallel software infrastructure for scientific computing. Front Comput Sci China, 4(4):480-488.

[28]Mo Z, Zhang A, Liu Q, et al., 2015. Research on the components and practices for domain-specific parallel programming models for numerical simulation. Sci Sin Inform, 45(3):385-397 (in Chinese).

[29]Mo Z, Zhang A, Liu Q, et al., 2016. Parallel algorithm and parallel programming: from specialty to generality as well as software reuse. Sci Sin Inform, 46(10):1392-1410 (in Chinese).

[30]Pei W, Zhu S, 2009. Scientific computing for laser fusion. Physics, 38(8):559-568 (in Chinese).

[31]Reed DA, Bajcsy R, Fernandez MA, et al., 2005. Computational science: ensuring America's competitiveness. Research Report No. ADA462840. President's Information Technology Advisory Committee. http://www.dtic.mil/dtic/tr/fulltext/u2/a462840.pdf

[32]Rossinelli D, Hejazialhosseini B, Hadjidoukas P, et al., 2013. 11 Pflop/s simulations of cloud cavitation collapse. Int Conf on High Performance Computing, Networking, Storage, and Analysis, p.1-13.

[33]Rudi J, Malossi ACI, Isaac T, et al., 2015. An extreme-scale implicit solver for complex PDEs: highly heterogeneous flow in Earth’s mantle. Int Conf for High Performance Computing, Networking, Storage, and Analysis, p.1-12.

[34]Saad T, Darwish M, 2009. A high scalability parallel algebraic multigrid solver. In: Deconinck H, Dick E (Eds.), Computational Fluid Dynamics. Springer Berlin Heidelberg, p.231-236.

[35]Saad Y, 2003. Iterative Methods for Sparse Linear Systems (2nd Ed.). Society for Industrial and Applied Mathematics, Philadelphia, USA.

[36]Sarkar V, Budimlic Z, Kulkani M, 2016. 2014 runtime systems Summit. Runtime Systems Report.

[37]Shaw DE, Grossman JP, Bank JA, et al., 2014. Anton 2: raising the bar for performance and programmability in a special-purpose molecular dynamics supercomputer. Int Conf for High Performance Computing, Networking, Storage, and Analysis, p.41-53.

[38]Tian R, Zhou M, Wang J, et al., 2018. A challenging dam structural analysis: large-scale implicit thermo-mechanical coupled contact simulation on Tianhe-2. Comput Mech, p.1-21.

[39]Vuduc R, Demmel JW, Yelick KA, 2005. OSKI: a library of automatically tuned sparse matrix kernels. J Phys Conf Ser, 16:521-530.

[40]Wissink AM, Hornung RD, Kohn SR, et al., 2001. Large scale parallel structured AMR calculations using the SAMRAI framework. ACM/IEEE Conf on Supercomputing, p.6.

[41]Xu X, Mo Z, 2017. Algebraic interface-based coarsening AMG pre-conditioner for multi-scale sparse matrices with applications to radiation hydrodynamics computation. Numer Linear Algebra Appl, 24(2):e2078.

[42]Yang C, Xue W, Fu H, et al., 2016. 10M-core scalable fully-implicit solver for non-hydrostatic atmospheric dynamics. Int Conf for High Performance Computing, Networking, Storage, and Analysis, p.1-12.

[43]Yang X, 2012. Sixty years of parallel computing. Comput Eng Sci, 34(8):1-10 (in Chinese).

[44]Zhao Z, Zhou H, Ma H, et al., 2014. Numerical simulation and verification of electromagnetic pulse effect of PIN diode limiter. High Power Laser Particle Beams, 26(6):81-85 (in Chinese).
