CLC number: TP311
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2018-10-15
Dhabaleswar Panda, Xiao-yi Lu, Hari Subramoni. Networking and communication challenges for post-exascale systems[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(10): 1230-1235.
@article{panda2018networking,
title="Networking and communication challenges for post-exascale systems",
author="Dhabaleswar Panda and Xiao-yi Lu and Hari Subramoni",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="19",
number="10",
pages="1230-1235",
year="2018",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800631"
}
%0 Journal Article
%T Networking and communication challenges for post-exascale systems
%A Dhabaleswar Panda
%A Xiao-yi Lu
%A Hari Subramoni
%J Frontiers of Information Technology & Electronic Engineering
%V 19
%N 10
%P 1230-1235
%@ 2095-9184
%D 2018
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.1800631
TY - JOUR
T1 - Networking and communication challenges for post-exascale systems
A1 - Dhabaleswar Panda
A1 - Xiao-yi Lu
A1 - Hari Subramoni
JO - Frontiers of Information Technology & Electronic Engineering
VL - 19
IS - 10
SP - 1230
EP - 1235
SN - 2095-9184
Y1 - 2018
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800631
ER -
Abstract: With the significant advancements in emerging processor, memory, and networking technologies, exascale systems will become available in the next few years (2020–2022). As exascale systems begin to be deployed and used, there will be a continuous demand to run next-generation applications with finer granularity, finer time-steps, and increased data sizes. Based on historical trends, next-generation applications will require post-exascale systems during 2025–2035. In this study, we focus on the networking and communication challenges for post-exascale systems. First, we present an envisioned architecture for post-exascale systems. Second, we summarize the challenges from different perspectives: heterogeneous networking technologies, high-performance communication and synchronization protocols, integrated support for accelerators and field-programmable gate arrays, fault tolerance and quality-of-service support, energy-aware communication schemes and protocols, software-defined networking, and scalable communication protocols with heterogeneous memory and storage. Third, we present the challenges in designing efficient programming model support for high-performance computing, big data, and deep learning on these systems. Finally, we emphasize the critical need to co-design the runtime with the upper layers on these systems to achieve maximum performance and scalability.
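To make the communication/computation overlap targeted by such communication protocols and runtime co-designs more concrete, the following minimal MPI sketch (illustrative only, not taken from the paper; the busy-work loop and values are placeholders) starts a non-blocking allreduce, performs independent computation while the runtime and network progress the reduction, and synchronizes only when the result is needed:

/* Minimal sketch (illustrative, not from the paper): overlapping independent
 * computation with a non-blocking MPI_Iallreduce, the kind of
 * communication/computation overlap that high-performance communication and
 * synchronization protocols aim to enable.
 * Requires an MPI-3 compliant library, e.g.: mpicc overlap.c -o overlap */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* per-rank partial result (placeholder) */
    double global = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking the caller. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Independent work proceeds while the runtime and network make
     * progress on the allreduce in the background. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; ++i)
        busy += 1e-6 * i;

    /* Synchronize only when the reduced value is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum of ranks = %.0f (busy = %.2f)\n", global, busy);

    MPI_Finalize();
    return 0;
}

On post-exascale systems, the same pattern would extend to accelerator-resident buffers and in-network offloads, which is where the co-design challenges outlined in the abstract arise.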