Dec 2018, Volume 19 Issue 10
    

  • Editorial
    Zuo-ning CHEN, Jack DONGARRA, Zhi-wei XU
    2018, 19(10): 1203-1208. https://doi.org/10.1631/FITEE.1830000
  • Perspective
    Xiaobo Sharon HU, Michael NIEMIER
    2018, 19(10): 1209-1223. https://doi.org/10.1631/FITEE.1800466

    As Moore’s-law-based device scaling and the accompanying performance scaling trends slow down, there is increasing interest in new technologies and computational models for faster and more energy-efficient information processing. Meanwhile, there is growing evidence that, with respect to traditional Boolean circuits and von Neumann processors, it will be challenging for beyond-CMOS devices to compete with CMOS technology. Exploiting the unique characteristics of emerging devices, especially in the context of alternative circuit and architectural paradigms, has the potential to offer orders-of-magnitude improvements in power, performance, and capability. To take full advantage of beyond-CMOS devices, cross-layer efforts spanning devices, circuits, architectures, and algorithms are indispensable. This study examines energy-efficient neural network accelerators for embedded applications in this context. Several deep neural network accelerator designs based on cross-layer efforts, spanning alternative device technologies, circuit styles, and architectures, are highlighted, and application-level benchmarking studies are presented. The discussion demonstrates that cross-layer efforts can indeed lead to orders-of-magnitude gains toward extreme-scale energy-efficient processing.

  • Perspective
    Xiang-hui XIE, Xun JIA
    2018, 19(10): 1224-1229. https://doi.org/10.1631/FITEE.1800424

    The ever-increasing need for high performance in scientific computation and engineering applications will push high-performance computing beyond the exascale. As an integral part of a supercomputing system, high-performance processors and their architecture designs are crucial to improving system performance. In this paper, three architecture design goals for high-performance processors beyond the exascale are introduced: effective performance scaling, efficient resource utilization, and adaptation to diverse applications. A high-performance many-core processor architecture with scalar processing and application-specific acceleration (Massa) is then proposed, which aims to achieve these three goals by employing distributed computational resources and application-customized hardware. Finally, some future research directions for the Massa architecture are discussed.

  • Perspective
    Dhabaleswar PANDA, Xiao-yi LU, Hari SUBRAMONI
    2018, 19(10): 1230-1235. https://doi.org/10.1631/FITEE.1800631

    With the significant advancement of emerging processor, memory, and networking technologies, exascale systems will become available in the next few years (2020–2022). As exascale systems begin to be deployed and used, there will be a continuous demand to run next-generation applications with finer granularity, finer time-steps, and increased data sizes. Based on historical trends, next-generation applications will require post-exascale systems during 2025–2035. In this study, we focus on the networking and communication challenges for post-exascale systems. First, we present an envisioned architecture for such systems. Second, we summarize the challenges from different perspectives: heterogeneous networking technologies, high-performance communication and synchronization protocols, integrated support for accelerators and field-programmable gate arrays, fault-tolerance and quality-of-service support, energy-aware communication schemes and protocols, software-defined networking, and scalable communication protocols with heterogeneous memory and storage. Third, we present the challenges in designing efficient programming model support for high-performance computing, big data, and deep learning on these systems. Finally, we emphasize the critical need to co-design the runtime with the upper layers on these systems to achieve maximum performance and scalability.

  • Perspective
    Xiang-ke LIAO, Kai LU, Can-qun YANG, Jin-wen LI, Yuan YUAN, Ming-che LAI, Li-bo HUANG, Ping-jing LU, Jian-bin FANG, Jing REN, Jie SHEN
    2018, 19(10): 1236-1244. https://doi.org/10.1631/FITEE.1800494

    High-performance computing (HPC) is essential for both traditional and emerging scientific fields, enabling scientific activities to make progress. With the development of high-performance computing, it is foreseeable that exascale computing will be put into practice around 2020. As Moore’s law approaches its limit, high-performance computing will face severe challenges when moving from exascale to zettascale, making the next 10 years after 2020 a vital period to develop key HPC techniques. In this study, we discuss the challenges of enabling zettascale computing with respect to both hardware and software. We then present a perspective of future HPC technology evolution and revolution, leading to our main recommendations in support of zettascale computing in the coming future.

  • Perspective
    Ning-hui SUN, Yun-gang BAO, Dong-rui FAN
    2018, 19(10): 1245-1250. https://doi.org/10.1631/FITEE.1800501

    In recent years, the advent of emerging computing applications, such as cloud computing, artificial intelligence, and the Internet of Things, has led to three common requirements in computer system design: high utilization, high throughput, and low latency. Herein, these are referred to as the requirements of ‘high-throughput computing (HTC)’. We further propose a new indicator called ‘sysentropy’ for measuring the degree of chaos and uncertainty within a computer system. We argue that unlike traditional computing systems, which pursue high performance and low power consumption, HTC should aim at achieving low sysentropy. However, from the perspective of computer architecture, HTC faces two major challenges: (1) fully exploiting an application’s data parallelism and execution concurrency to achieve high throughput, and (2) achieving low latency even when severe contention occurs in highly utilized data paths. To overcome these two challenges, we introduce two techniques: on-chip data flow architecture and labeled von Neumann architecture. We build two prototypes that achieve high throughput and low latency, thereby significantly reducing sysentropy.
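
    The abstract does not define sysentropy formally. As a purely illustrative sketch of the idea of measuring chaos and uncertainty in a system, one could compute an entropy-style indicator over a per-request latency histogram; the function name, bucketing scheme, and Shannon-entropy formulation below are assumptions, not the authors’ definition.

        import math
        from collections import Counter

        def sysentropy_sketch(latencies_us, bucket_us=100):
            # Hypothetical entropy-style indicator: Shannon entropy of a
            # per-request latency histogram. This is NOT the paper's
            # definition of sysentropy, only an illustration of the idea.
            buckets = Counter(lat // bucket_us for lat in latencies_us)
            n = sum(buckets.values())
            return -sum((c / n) * math.log2(c / n) for c in buckets.values())

        # A tightly clustered latency profile gives a low value (an orderly
        # system); widely scattered latencies give a higher value (chaos).
        print(sysentropy_sketch([100, 104, 110, 98]))      # ~0.81
        print(sysentropy_sketch([10, 520, 2300, 9000]))    # 2.0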

  • Perspective
    Ze-yao MO
    2018, 19(10): 1251-1260. https://doi.org/10.1631/FITEE.1800421

    Extreme-scale numerical simulations place severe demands on parallel computing capability. To address the challenges this capability faces on the path to exascale, we systematically analyze the major bottlenecks of parallel computing research from three perspectives: computational scale, computing efficiency, and programming productivity. For each bottleneck, we identify the urgent key issues and propose coping strategies. This study should help keep the development of numerical computing capability in step with the growth of supercomputer peak performance.

  • Perspective
    Ji-dong ZHAI, Wen-guang CHEN
    2018, 19(10): 1261-1266. https://doi.org/10.1631/FITEE.1800442

    Exascale systems have been under development for quite some time and will be available for use in a few years. It is therefore time to think about future post-exascale systems, which face many challenges, including processor architecture, programming, storage, and interconnect. In this study, we discuss three significant programming challenges for future post-exascale systems: heterogeneity, parallelism, and fault tolerance. Based on our experience of programming current large-scale systems, we propose several potential solutions to these challenges. Nevertheless, more research effort is needed to solve these problems.

  • Perspective
    Guang-wen YANG, Hao-huan FU
    2018, 19(10): 1267-1272. https://doi.org/10.1631/FITEE.1800459

    With various exascale systems planned in different countries over the next three to five years, developing application software for such unprecedented computing capability and parallel scaling becomes a major challenge. In this study, we start our discussion with the current 125-Pflops Sunway TaihuLight system in China and its related application challenges and solutions. Based on our experience with Sunway TaihuLight, we provide a projection into the next decade and discuss the potential challenges and trends we are likely to observe in future high-performance computing software.

  • Research
    Wei HU, Guang-ming LIU, Yan-huang JIANG
    2018, 19(10): 1273-1290. https://doi.org/10.1631/FITEE.1601450

    As the scale of supercomputers grows rapidly, the reliability problem comes to dominate system availability. Existing fault tolerance mechanisms, such as periodic checkpointing and process redundancy, cannot effectively address this problem. To do so, we present a new fault tolerance framework using process replication and prefetching (FTRP), combining the benefits of proactive and reactive mechanisms. FTRP incorporates a novel cost model and a new proactive fault tolerance mechanism to improve application execution efficiency. The novel cost model, called the ‘work-most’ (WM) model, makes runtime decisions to adaptively choose an action from a set of fault tolerance mechanisms based on failure prediction results and application status. Analogous to program locality, we observe the phenomenon of failure locality in supercomputers for the first time. In the new proactive fault tolerance mechanism, process replication with process prefetching is proposed based on this failure locality, largely avoiding the losses caused by failures regardless of whether they have been predicted. Simulations with real failure traces demonstrate that FTRP outperforms existing fault tolerance mechanisms, with up to 10% improvement in application efficiency under common failure prediction accuracies, and is effective for petascale systems and beyond.
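
    The abstract describes the ‘work-most’ model only at a high level. The sketch below shows, under assumed cost terms, what a runtime decision among fault tolerance actions driven by a failure prediction might look like; the action set and the cost expressions are illustrative assumptions, not the paper’s actual model.

        def choose_action(p_fail, t_ckpt, t_restart, t_repl, t_lost):
            # Pick the action with the lowest expected time overhead for the
            # coming interval, given a predicted failure probability p_fail.
            expected = {
                "skip":       p_fail * (t_restart + t_lost),  # reactive: restart, redo lost work
                "checkpoint": t_ckpt + p_fail * t_restart,    # pay the checkpoint cost up front
                "replicate":  t_repl,                         # proactive replica absorbs the failure
            }
            return min(expected, key=expected.get)

        # With a confident failure prediction, proactive replication wins:
        print(choose_action(p_fail=0.3, t_ckpt=60, t_restart=30,
                            t_repl=45, t_lost=600))  # -> replicate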

  • Research
    Wen-bing HAN, Xiao-gang CHEN, Shun-fen LI, Ge-zi LI, Zhi-tang SONG, Da-gang LI, Shi-yan CHEN
    2018, 19(10): 1291-1302. https://doi.org/10.1631/FITEE.1700061

    Emerging memory technologies, such as phase-change memory (PCM), provide opportunities for high-performance storage in I/O-intensive applications. However, the traditional software stack and hardware architecture need to be optimized to enhance I/O efficiency. In addition, narrowing the distance between computation and storage reduces the number of I/O requests and has become a popular research direction. This paper presents a novel PCM-based storage system consisting of an in-storage-processing-enabled file system (ISPFS) and a configurable parallel computation fabric in storage, called the in-storage processing (ISP) engine. On one hand, ISPFS takes full advantage of the characteristics of non-volatile memory (NVM), reducing software overhead and data copies to provide low-latency, high-performance random access. On the other hand, ISPFS passes ISP instructions through a command file and invokes the ISP engine to handle I/O-intensive tasks. Extensive experiments on the prototype system indicate that ISPFS achieves 2 to 10 times the throughput of EXT4. Our ISP solution also reduces the number of I/O requests by 97% and is 19 times more efficient than a software implementation for I/O-intensive applications.
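
    A minimal sketch of the command-file idiom the abstract describes: the host offloads an I/O-intensive task by writing an ISP instruction to a special file, and the in-storage engine then processes the data without moving it to the host. The mount point, command-file name, and command syntax here are hypothetical; the abstract does not specify them.

        ISP_CMD_FILE = "/mnt/ispfs/.isp_cmd"   # hypothetical command file

        def offload_search(target_file, pattern):
            # Ask the in-storage engine to scan target_file for pattern,
            # so the raw data never crosses the I/O bus to the host.
            with open(ISP_CMD_FILE, "w") as cmd:
                cmd.write(f"SEARCH {target_file} {pattern}\n")

        offload_search("/mnt/ispfs/logs.bin", "ERROR")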