
CLC number: TP311

On-line Access: 2024-08-27


Crosschecked: 2018-10-15


 ORCID:

Xiao-yi Lu

http://orcid.org/0000-0001-7581-8905


Frontiers of Information Technology & Electronic Engineering  2018 Vol.19 No.10 P.1230-1235

http://doi.org/10.1631/FITEE.1800631


Networking and communication challenges for post-exascale systems


Author(s):  Dhabaleswar Panda, Xiao-yi Lu, Hari Subramoni

Affiliation(s):  Department of Computer Science and Engineering, The Ohio State University, Ohio 43210, USA

Corresponding email(s):   panda@cse.ohio-state.edu, luxi@cse.ohio-state.edu, subramon@cse.ohio-state.edu

Key Words:  Networking, Communication, Synchronization, Post-exascale, Programming model, Big data, High-performance computing (HPC), Deep learning, Quality of service (QoS), Accelerator


Dhabaleswar Panda, Xiao-yi Lu, Hari Subramoni. Networking and communication challenges for post-exascale systems[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(10): 1230-1235.

@article{panda2018networking,
title="Networking and communication challenges for post-exascale systems",
author="Dhabaleswar Panda, Xiao-yi Lu, Hari Subramoni",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="19",
number="10",
pages="1230-1235",
year="2018",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800631"
}

%0 Journal Article
%T Networking and communication challenges for post-exascale systems
%A Dhabaleswar Panda
%A Xiao-yi Lu
%A Hari Subramoni
%J Frontiers of Information Technology & Electronic Engineering
%V 19
%N 10
%P 1230-1235
%@ 2095-9184
%D 2018
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1800631

TY - JOUR
T1 - Networking and communication challenges for post-exascale systems
A1 - Dhabaleswar Panda
A1 - Xiao-yi Lu
A1 - Hari Subramoni
JO - Frontiers of Information Technology & Electronic Engineering
VL - 19
IS - 10
SP - 1230
EP - 1235
SN - 2095-9184
Y1 - 2018
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800631
ER -


Abstract: 
With the significant advancement in emerging processor, memory, and networking technologies, exascale systems will become available in the next few years (2020–2022). As the exascale systems begin to be deployed and used, there will be a continuous demand to run next-generation applications with finer granularity, finer time-steps, and increased data sizes. Based on historical trends, next-generation applications will require post-exascale systems during 2025–2035. In this study, we focus on the networking and communication challenges for post-exascale systems. Firstly, we present an envisioned architecture for post-exascale systems. Secondly, the challenges are summarized from different perspectives: heterogeneous networking technologies, high-performance communication and synchronization protocols, integrated support with accelerators and field-programmable gate arrays, fault-tolerance and quality-of-service support, energy-aware communication schemes and protocols, software-defined networking, and scalable communication protocols with heterogeneous memory and storage. Thirdly, we present the challenges in designing efficient programming model support for high-performance computing, big data, and deep learning on these systems. Finally, we emphasize the critical need for co-designing runtime with upper layers on these systems to achieve the maximum performance and scalability.
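The abstract highlights high-performance communication and synchronization protocols as a core post-exascale challenge, and the reference list points at hardware-offloaded collectives (e.g., SHArP) and RDMA-accelerated runtimes. As a hypothetical illustration not taken from the paper, the sketch below simulates the ring allreduce pattern in pure Python; the function name, chunking scheme, and simulated-rank representation are all assumptions for exposition. Real runtimes (MPI libraries, NCCL) execute the same reduce-scatter/allgather schedule with messages over RDMA-capable fabrics.

```python
# Hypothetical sketch (not from the paper): a ring allreduce over P simulated
# ranks. After P-1 reduce-scatter steps and P-1 allgather steps, every rank
# holds the elementwise sum of all ranks' vectors. Each rank sends/receives
# one chunk per step, so per-rank traffic stays O(n) regardless of P.

def ring_allreduce(vectors):
    """Elementwise-sum allreduce over a list of equal-length per-rank vectors."""
    p = len(vectors)                          # number of simulated ranks
    n = len(vectors[0])
    # Split each rank's vector into p contiguous chunks.
    bounds = [i * n // p for i in range(p + 1)]
    chunks = [[v[bounds[i]:bounds[i + 1]] for i in range(p)] for v in vectors]

    # Reduce-scatter: in step s, rank r sends its accumulated chunk (r - s)
    # to rank r+1, which adds it into its own copy of that chunk.
    for s in range(p - 1):
        for r in range(p):
            src, dst = r, (r + 1) % p
            c = (r - s) % p
            chunks[dst][c] = [a + b for a, b in zip(chunks[dst][c], chunks[src][c])]

    # After reduce-scatter, rank r holds the fully reduced chunk (r + 1) % p.
    # Allgather: circulate each reduced chunk around the ring, overwriting.
    for s in range(p - 1):
        for r in range(p):
            src, dst = r, (r + 1) % p
            c = (r + 1 - s) % p
            chunks[dst][c] = chunks[src][c]

    # Reassemble each rank's full vector.
    return [[x for chunk in ch for x in chunk] for ch in chunks]
```

Because each step moves a different chunk between each neighbor pair, the pattern is bandwidth-optimal; this is also why in-network reduction hardware of the kind cited in reference [5] targets exactly these collective schedules.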



Reference

[1]ASCAC Subcommittee on Exascale Computing, 2010. The Opportunities and Challenges of Exascale Computing. https://science.energy.gov/media/ascr/ascac/pdf/reports/Exascale_subcommittee_report.pdf

[2]Biswas R, Lu XY, Panda DK, 2018. Accelerating TensorFlow with adaptive RDMA-based gRPC. 25th IEEE Int Conf on High Performance Computing, Data, and Analytics.

[3]Cui YF, Moore R, Olsen K, et al., 2007. Enabling very-large scale earthquake simulations on parallel machines. In: Shi Y, van Albada GD, Dongarra J, et al. (Eds.), Computational Science. Springer Berlin Heidelberg, p.46-53.

[4]US Department of Energy, 2011. Workshop on Terabits Networks for Extreme Scale Science. https://science.energy.gov/~/media/ascr/pdf/program-documents/docs/Terabit_networks_workshop_report.pdf

[5]Graham RL, Bureddy D, Lui P, et al., 2016. Scalable Hierarchical Aggregation Protocol (SHArP): a hardware architecture for efficient data reduction. Proc 1st Workshop on Optimization of Communication in HPC, p.1-10.

[6]Intel, 2016. Intel Omni-Path Architecture Driving Exascale Computing and HPC. https://www.intel.com/content/www/us/en/high-performance-computing-fabrics/omni-path-driving-exascale-computing.html

[7]Li RZ, DeTar C, Gottlieb S, et al., 2017. MILC code performance on high end CPU and GPU supercomputer clusters. http://cn.arxiv.org/abs/1712.00143

[8]Lu XY, Shankar D, Gugnani S, et al., 2016. High-performance design of Apache Spark with RDMA and its benefits on various workloads. Proc IEEE Int Conference on Big Data, p.253-262.

[9]Mellanox BlueField, 2017. Multicore System on Chip. http://www.mellanox.com/related-docs/npu-multicore-processors/PB_Bluefield_SoC.pdf

[10]NVMe Express, 2016. NVMe over Fabrics. http://www.nvmexpress.org/wp-content/uploads/NVMe_Over_Fabrics.pdf

[11]ORNL, 2018. Summit: America's Newest and Smartest Supercomputer. https://www.olcf.ornl.gov/summit/

[12]Rahman MWU, Lu XY, Islam NS, et al., 2014. HOMR: a hybrid approach to exploit maximum overlapping in MapReduce over high performance interconnects. Proc 28th ACM Int Conf on Supercomputing, p.33-42.

[13]Rajachandrasekar R, Jaswani J, Subramoni H, et al., 2012. Minimizing network contention in InfiniBand clusters with a QoS-aware data-staging framework. IEEE Int Conf on Cluster Computing, p.329-336.

[14]Sarvestani AMK, Bailey C, Austin J, 2018. Performance analysis of a 3D wireless massively parallel computer. J Sens Actuat Netw, 7(2):18.

[15]Shankar D, Lu XY, Islam NS, et al., 2016. High-performance hybrid key-value store on modern clusters with RDMA interconnects and SSDs: non-blocking extensions, designs, and benefits. IEEE Int Parallel and Distributed Processing Symp, p.393-402.

[16]Subramoni H, Lai P, Sur S, et al., 2010. Improving application performance and predictability using multiple virtual lanes in modern multi-core InfiniBand clusters. Int Conf on Parallel Processing.


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE