
Featured articles

  • RESEARCH ARTICLE
    Mizhipeng ZHANG, Chentao WU, Jie LI, Minyi GUO
    Frontiers of Computer Science, 2025, 19(1): 191101. https://doi.org/10.1007/s11704-023-3209-3

    Blockchain, as a decentralized storage technology, is widely used in many fields. Because there are many potentially malicious nodes, it has extremely strict reliability requirements. Generally, a blockchain is a chain storage structure formed by interconnecting blocks; each block consists of a block header, which stores metadata, and a block body, which stores the data.

    Blocks are stored by full replication, where each node keeps a replica of all blocks and data consistency is maintained by the consensus protocol. To decrease the storage overhead, previous approaches such as BFT-Store and Partition Chain store blocks via erasure codes. However, existing erasure-coding-based methods use a static encoding scheme to tolerate f malicious nodes, whereas in typical cases the number of malicious nodes is much smaller than f, as reported in previous literature. Using redundant parities to tolerate excessive malicious nodes introduces unnecessary storage overhead.

    To solve the above problem, we propose Dynamic-EC, a Dynamic Erasure Coding method for permissioned blockchain systems. The key idea of Dynamic-EC is to reduce the storage overhead by dynamically adjusting the total number of parities according to the risk level of the whole system, which is determined by the number of perceived malicious nodes, while ensuring system reliability. To demonstrate the effectiveness of Dynamic-EC, we conduct several experiments on the open-source blockchain software Tendermint. The results show that, compared to state-of-the-art erasure coding methods, Dynamic-EC reduces the storage overhead by up to 42% and decreases the average write latency of blocks by up to 25%.
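    The parity-sizing idea can be illustrated with a small sketch (a hypothetical Python illustration, not the authors' implementation; `parities_needed`, the safety margin, and all numbers below are assumptions): with a (k + m) erasure code, m parity chunks tolerate m corrupted or missing chunks, so the parity count can track the currently perceived number of malicious nodes instead of always provisioning for the static worst case f.

```python
# Hypothetical sketch of dynamic parity sizing (not Dynamic-EC's actual code):
# m parity chunks in a (k + m) erasure code tolerate m bad chunks, so m can
# follow the perceived risk level instead of the static worst case f.

def parities_needed(perceived_malicious: int, worst_case_f: int,
                    safety_margin: int = 1) -> int:
    """Choose a parity count from the current risk level, capped at f."""
    return min(worst_case_f, perceived_malicious + safety_margin)

def storage_overhead(k: int, m: int) -> float:
    """Storage blow-up factor of a (k + m) erasure code versus raw data."""
    return (k + m) / k

k = 8                                  # data chunks per block group
f = 6                                  # worst case the system must tolerate
static = storage_overhead(k, f)        # static scheme always provisions f
dynamic = storage_overhead(k, parities_needed(2, f))  # only 2 nodes look bad
print(f"static overhead  = {static:.3f}x")   # 1.750x
print(f"dynamic overhead = {dynamic:.3f}x")  # 1.375x
```

    When the perceived risk rises, `parities_needed` grows back toward f, so reliability is never traded away; only the slack between the typical case and the worst case is reclaimed.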

  • ANNOUNCEMENT
    Frontiers of Computer Science, 2023, 17(6): 176001. https://doi.org/10.1007/s11704-023-3998-4
  • RESEARCH ARTICLE
    Zhao-Hui LI, Xin-Yu FENG
    Frontiers of Computer Science, 2024, 18(6): 186208. https://doi.org/10.1007/s11704-023-2774-9

    Though obstruction-free progress property is weaker than other non-blocking properties including lock-freedom and wait-freedom, it has advantages that have led to the use of obstruction-free implementations for software transactional memory (STM) and in anonymous and fault-tolerant distributed computing. However, existing work can only verify obstruction-freedom of specific data structures (e.g., STM and list-based algorithms).

    In this paper, to fill this gap, we propose a program logic that can formally verify obstruction-freedom of practical implementations, and at the same time verify linearizability, a safety property. We also propose informal principles for extending a logic that verifies linearizability into one that verifies obstruction-freedom. With this approach, an existing linearizability proof can be reused directly to construct a proof of both linearizability and obstruction-freedom. Finally, we have successfully applied our logic to verify a practical obstruction-free double-ended queue implementation from the classic paper that first proposed the definition of obstruction-freedom.

  • REVIEW ARTICLE
    Jinyang GUO, Lu ZHANG, José ROMERO HUNG, Chao LI, Jieru ZHAO, Minyi GUO
    Frontiers of Computer Science, 2023, 17(5): 175106. https://doi.org/10.1007/s11704-022-2127-0

    Cloud vendors are actively adopting FPGAs (field programmable gate arrays) into their infrastructures to enhance performance and efficiency. As cloud services continue to evolve, FPGA systems will play an even more important role in the future. In this context, FPGA sharing in multi-tenancy scenarios is crucial for the wide adoption of FPGAs in the cloud. Recently, much work has been done towards effective FPGA sharing at different layers of the cloud computing stack.

    In this work, we provide a comprehensive survey of recent works on FPGA sharing. We examine prior art from different aspects and encapsulate relevant proposals on a few key topics. On the one hand, we discuss representative papers on FPGA resource sharing schemes; on the other hand, we also summarize important SW/HW techniques that support effective sharing. Importantly, we further analyze the system design cost behind FPGA sharing. Finally, based on our survey, we identify key opportunities and challenges of FPGA sharing in future cloud scenarios.

  • REVIEW ARTICLE
    Rong ZENG, Xiaofeng HOU, Lu ZHANG, Chao LI, Wenli ZHENG, Minyi GUO
    Frontiers of Computer Science, 2022, 16(6): 166106. https://doi.org/10.1007/s11704-020-0072-3

    With the demand for agile development and management, cloud applications today are moving towards a more fine-grained microservice paradigm, where smaller and simpler functioning parts are combined to provide end-to-end services. In recent years, we have witnessed many research efforts that strive to optimize the performance of cloud computing systems in this new era. This paper provides an overview of existing work on recent system performance optimization techniques and classifies them based on their design focuses. We also identify open issues and challenges in this important research direction.

  • RESEARCH ARTICLE
    Yao SONG, Limin XIAO, Liang WANG, Guangjun QIN, Bing WEI, Baicheng YAN, Chenhao ZHANG
    Frontiers of Computer Science, 2022, 16(5): 165105. https://doi.org/10.1007/s11704-021-0353-5

    Wide-area high-performance computing is widely used for large-scale parallel computing applications owing to its abundant computing and storage resources. However, the geographical distribution of these resources makes efficient task distribution and data placement more challenging. To achieve higher system performance, this study proposes a two-level global collaborative scheduling strategy for wide-area high-performance computing environments. The strategy integrates lightweight solution selection, redundant data placement, and task-stealing mechanisms, optimizing task distribution and data placement for efficient computing in wide-area environments. The experimental results indicate that, compared with the state-of-the-art collaborative scheduling algorithm HPS+, the proposed strategy reduces the makespan by 23.24%, improves computing and storage resource utilization by 8.28% and 21.73% respectively, and achieves similar global data migration costs.
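    The task-stealing mechanism mentioned above can be sketched generically (a textbook work-stealing illustration in Python, not the paper's scheduler; site layout and task names are invented): each site drains its own queue from the front, and an idle site steals from the back of the most loaded peer, evening out load across sites.

```python
# Generic work-stealing sketch: busy sites pop their own front task,
# idle sites steal the back task of the currently most loaded peer.

from collections import deque

def run(queues):
    """Drain per-site task queues with back-of-queue stealing.
    Returns the (site, task) execution order."""
    order = []
    while any(queues):
        for site, q in enumerate(queues):
            if q:                         # busy site: take own front task
                order.append((site, q.popleft()))
            else:                         # idle site: steal from busiest peer
                victim = max(queues, key=len)
                if victim:
                    order.append((site, victim.pop()))
    return order

queues = [deque(["a1", "a2", "a3", "a4"]), deque(["b1"]), deque()]
print(run(queues))
```

    In a wide-area setting, a real scheduler would additionally weigh data locality and migration cost before stealing, which is exactly where the paper's redundant data placement comes in.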

  • RESEARCH ARTICLE
    Ruchika MALHOTRA, Kusum LATA
    Frontiers of Computer Science, 2022, 16(4): 164205. https://doi.org/10.1007/s11704-021-0127-0

    As the complexity of software systems increases, software maintenance is becoming a challenge for software practitioners. Predicting the classes that require high maintainability effort is of utmost necessity for developing cost-effective, high-quality software. In software engineering predictive modeling research, various software maintainability prediction (SMP) models have been developed to forecast maintainability. When developing a maintainability prediction model, software practitioners may come across situations in which the classes or modules requiring high maintainability effort are far fewer than those requiring low maintainability effort. This condition gives rise to a class imbalance problem (CIP). In this situation, predicting the minority classes, i.e., the classes demanding high maintainability effort, is a challenge. Therefore, this study investigates three techniques for handling the CIP on ten open-source software systems to predict software maintainability. This empirical investigation supports the use of the resampling with replacement (RR) technique for treating CIP and developing useful SMP models.
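    The resampling-with-replacement idea can be shown in a minimal sketch (plain random oversampling of the minority class in Python; the study's exact RR procedure, datasets, and class labels are not reproduced, so all names below are illustrative):

```python
# Minimal random-oversampling sketch for a class-imbalance problem:
# duplicate randomly chosen minority samples (with replacement) until
# the minority class matches the majority class in size.

import random

def oversample_minority(samples, labels, minority_label, seed=0):
    """Return (samples, labels) with the minority class oversampled
    to the same size as the rest of the data."""
    rng = random.Random(seed)
    minority = [s for s, y in zip(samples, labels) if y == minority_label]
    majority_count = sum(1 for y in labels if y != minority_label)
    extra = [rng.choice(minority)
             for _ in range(majority_count - len(minority))]
    new_samples = samples + extra
    new_labels = labels + [minority_label] * len(extra)
    return new_samples, new_labels

# 8 low-effort classes vs. 2 high-effort classes -> balanced 8 vs. 8
X = [[i] for i in range(10)]
y = ["low"] * 8 + ["high"] * 2
Xb, yb = oversample_minority(X, y, minority_label="high")
print(yb.count("low"), yb.count("high"))  # 8 8
```

    A prediction model trained on the balanced set no longer trivially favors the low-effort majority, which is why RR helps the minority (high-effort) classes.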

  • LETTER
    Hongyu KUANG, Jian WANG, Ruilin LI, Chao FENG, YunFei SU, Xing ZHANG
    Frontiers of Computer Science, 2022, 16(2): 162201. https://doi.org/10.1007/s11704-020-0312-6
  • RESEARCH ARTICLE
    Changpeng ZHU, Bo HAN, Yinliang ZHAO
    Frontiers of Computer Science, 2022, 16(3): 163101. https://doi.org/10.1007/s11704-021-0118-1

    Container-based virtualization techniques are becoming an alternative to traditional virtual machines, owing to less overhead and better scaling. As one of the most widely used open-source container orchestration systems, Kubernetes provides a built-in mechanism, the horizontal pod autoscaler (HPA), for dynamic resource provisioning. By default, HPA scales pods based only on CPU utilization, a single performance metric, and may therefore create more pods than actually needed. Through extensive measurements of a containerized n-tier application benchmark, RUBBoS, we find that excessive pods consume more CPU and memory and, due to interference, even deteriorate the response times of applications. Furthermore, a Kubernetes service does not balance incoming requests between old pods and new pods created by HPA, due to stateful HTTP. In this paper, we propose a bi-metric approach to scaling pods that takes into account both CPU utilization and the utilization of a thread pool, an important kind of soft resource in Httpd and Tomcat. Our approach collects the CPU and memory utilization of pods. Meanwhile, it uses ELBA, a milli-bottleneck detector, to calculate the queue lengths of Httpd and Tomcat pods and then evaluate the utilization of their thread pools. Based on the utilization of both CPU and thread pools, our approach creates fewer replicas of Httpd and Tomcat pods, contributing to a reduction in hardware resource utilization. At the same time, it leverages the preStop hook along with liveness and readiness probes to relieve load imbalance between old Tomcat pods and new ones. On the containerized RUBBoS, our experimental results show that the proposed approach not only reduces the usage of CPU and memory by as much as 14% and 24% respectively compared with HPA, but also relieves the load imbalance, reducing the average response time of requests by as much as 80%. Our results also show that it is better to scale pods by multiple metrics rather than a single one.
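    The bi-metric decision rule can be sketched as follows (a hypothetical Python illustration of the idea, not the paper's controller; the function name and both thresholds are assumptions): scale out only when both CPU and thread-pool utilization indicate saturation, instead of reacting to a CPU spike alone as the default HPA does.

```python
# Hypothetical bi-metric scaling rule: a pod is added only when BOTH the
# hardware metric (CPU) and the soft-resource metric (thread pool) are
# saturated; a CPU spike alone does not trigger scale-out.

def desired_replicas(current, cpu_util, pool_util,
                     cpu_target=0.7, pool_target=0.8):
    """Return the replica count suggested by the bi-metric rule."""
    if cpu_util > cpu_target and pool_util > pool_target:
        return current + 1                # genuine saturation: scale out
    if cpu_util < cpu_target / 2 and pool_util < pool_target / 2:
        return max(1, current - 1)        # clearly over-provisioned
    return current                        # mixed signals: hold steady

print(desired_replicas(3, cpu_util=0.9, pool_util=0.9))  # → 4
print(desired_replicas(3, cpu_util=0.9, pool_util=0.3))  # → 3 (no overreaction)
```

    The second call is the case the abstract highlights: high CPU but an idle thread pool means the extra pods the single-metric HPA would create are not actually needed.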

  • RESEARCH ARTICLE
    Guozhen ZHANG, Yi LIU, Hailong YANG, Jun XU, Depei QIAN
    Frontiers of Computer Science, 2021, 15(6): 156107. https://doi.org/10.1007/s11704-020-0190-y

    As the mean time between failures (MTBF) continues to decline with the increasing number of components in large-scale high-performance computing (HPC) systems, program failures may occur during execution with high probability. Ensuring the successful execution of HPC programs has thus become an issue that unprivileged users should be concerned about. From the user's perspective, if a program failure is not detected and handled in time, it wastes resources and delays the progress of program execution. Unfortunately, unprivileged users are unable to perform program state checking, owing to execution control by the job management system as well as their limited privileges. Currently, automated tools that support user-level failure detection and auto-recovery of parallel programs in HPC systems are missing. This paper proposes an innovative method for unprivileged users to achieve failure detection of job execution and automatic resubmission of failed jobs. The state checker in our method is encapsulated as an independent job to reduce interference with the user's jobs. In addition, we propose a dual-checker mechanism to improve the robustness of our approach. We implement the proposed method as a tool named automatic re-launcher (ARL) and evaluate it on the Tianhe-2 system. Experimental results show that ARL can detect execution failures effectively on Tianhe-2, and the communication and performance overhead caused by ARL is negligible. The good scalability of ARL makes it applicable to large-scale HPC systems.
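    The detect-and-resubmit loop at the heart of such a tool can be sketched generically (hypothetical Python; ARL itself, its dual-checker mechanism, and the Tianhe-2 job manager are not modeled, and the helper names are invented): a checker polls the job's state through the job manager's query interface and resubmits the job when it is seen to have failed.

```python
# Generic failure-detection-and-resubmission loop: poll the job state and
# resubmit on failure, up to a retry budget.

def monitor(job, query_state, resubmit, max_retries=3):
    """Poll a job until it completes; resubmit failed jobs up to
    max_retries times. query_state(job) returns one of
    'running', 'done', or 'failed'."""
    retries = 0
    while True:
        state = query_state(job)
        if state == "done":
            return "done", retries
        if state == "failed":
            if retries >= max_retries:
                return "gave_up", retries
            retries += 1
            job = resubmit(job)           # new job handle after resubmission

# Simulated job manager: the job fails twice, then succeeds.
states = iter(["running", "failed", "running", "failed", "done"])
print(monitor("job-1", lambda j: next(states), lambda j: j))  # → ('done', 2)
```

    A production checker would additionally sleep between polls and, as the paper does, run as an independent job so that it neither needs extra privileges nor interferes with the monitored job.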