
Featured articles

  • RESEARCH ARTICLE
    Dongjie CHEN, Yanyan JIANG, Chang XU, Xiaoxing MA
    Frontiers of Computer Science, 2021, 15(4): 154206. https://doi.org/10.1007/s11704-020-9501-6

    Exploring the interleaving space of a multithreaded program to efficiently detect concurrency bugs is important but difficult because of the astronomical number of possible thread schedules. This paper presents a novel framework that decomposes a thread schedule generator exploring the interleaving space into the composition of a basic generator and its extension under the “small interleaving hypothesis”. Under this framework, we analyze research work on interleaving space exploration in depth, illustrate how to design an effective schedule generator, and shed light on future research opportunities.
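
    As a rough illustration of the “small interleaving hypothesis” underlying such schedule generators, the sketch below (with invented names and bounds, not taken from the paper's framework) enumerates only those interleavings of two threads that contain at most a few context switches.

```python
def bounded_switch_schedules(ops_a, ops_b, max_switches=2):
    """Enumerate interleavings of two threads' operation lists that contain at
    most `max_switches` context switches -- the intuition being that a few
    ordering changes already expose most concurrency bugs."""
    def interleave(a, b, current, switches, last):
        if switches > max_switches:
            return
        if not a and not b:
            yield list(current)
            return
        if a:  # schedule the next operation of thread A
            yield from interleave(a[1:], b, current + [a[0]],
                                  switches + (last == 'B'), 'A')
        if b:  # schedule the next operation of thread B
            yield from interleave(a, b[1:], current + [b[0]],
                                  switches + (last == 'A'), 'B')
    yield from interleave(list(ops_a), list(ops_b), [], 0, None)

# Two threads with two shared-memory operations each: only 2 of the 6 possible
# interleavings survive the single-context-switch bound.
print(list(bounded_switch_schedules(['a1', 'a2'], ['b1', 'b2'], max_switches=1)))
```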

  • RESEARCH ARTICLE
    Bing WEI, Limin XIAO, Bingyu ZHOU, Guangjun QIN, Baicheng YAN, Zhisheng HUO
    Frontiers of Computer Science, 2021, 15(3): 153102. https://doi.org/10.1007/s11704-020-9344-1

    With the advent of new computing paradigms, parallel file systems serve not only traditional scientific computing applications but also non-scientific ones, such as financial computing, business, and public administration. Because parallel file systems provide storage services for multiple applications, they must satisfy a variety of requirements; however, they usually offer a unified storage solution that cannot meet specific application needs. In this paper, an extended file handle scheme is proposed to deal with this problem. The original file handle is extended to record I/O optimization information, which allows file systems to specify optimizations for a file or directory based on workload characteristics, enabling fine-grained management of I/O optimizations. On the basis of the extended file handle scheme, data prefetching and small-file optimization mechanisms are proposed for parallel file systems. The experimental results show that the proposed approach improves the aggregate throughput of the overall system by up to 189.75%.
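
    As a loose illustration of the extended file handle idea (the field and function names below are invented, not the paper's actual layout), a handle carrying per-file optimization hints lets the file system choose an I/O path per file or directory:

```python
from dataclasses import dataclass

@dataclass
class ExtendedFileHandle:
    """A file handle extended with per-file I/O optimization hints, so the file
    system can apply workload-specific policies at file or directory granularity.
    Field names are invented for illustration, not the paper's actual layout."""
    inode: int
    generation: int
    prefetch_window_kb: int = 0       # >0 enables sequential data prefetching
    pack_small_file: bool = False     # pack small files to reduce metadata I/O

def choose_io_policy(handle: ExtendedFileHandle) -> str:
    """Select an I/O path based on the hints recorded in the handle."""
    if handle.pack_small_file:
        return "small-file packed path"
    if handle.prefetch_window_kb > 0:
        return f"prefetch {handle.prefetch_window_kb} KiB ahead"
    return "default path"

log_handle = ExtendedFileHandle(inode=42, generation=1, prefetch_window_kb=1024)
print(choose_io_policy(log_handle))
```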

  • REVIEW ARTICLE
    Ibrahim ALSEADOON, Aakash AHMAD, Adel ALKHALIL, Khalid SULTAN
    Frontiers of Computer Science, 2021, 15(2): 152204. https://doi.org/10.1007/s11704-019-8166-5

    Mobile computing has fast emerged as a pervasive technology to replace the old computing paradigms with portable computation and context-aware communication. Existing software systems can be migrated (while preserving their data and logic) to mobile computing platforms that support portability, context-sensitivity, and enhanced usability. In recent years, some research and development efforts have focused on a systematic migration of existing software systems to mobile computing platforms.

    Our objective is to investigate the state of the art of research on the migration of existing software systems to mobile computing platforms. We aim to analyze the progression and impact of existing research and to highlight challenges and solutions that point to emerging and future research directions.

    We followed the evidence-based software engineering (EBSE) method to conduct a systematic mapping study (SMS) of the existing research, which has progressed over more than a decade (25 studies published from 1996 to 2017). We derived a taxonomical classification and a holistic mapping of the existing research to investigate its progress, impacts, and potential areas of future research and development.

    The SMS identified three types of migration of existing software systems to mobile computing platforms, namely Static, Dynamic, and State-based Migration. Migration to mobile computing platforms enables existing software systems to achieve portability, context-sensitivity, and high connectivity; however, the migrated systems may face challenges such as resource poverty, data security, and privacy. Emerging and future research aims to provide patterns and tool support to automate the migration process. The results of this SMS can benefit researchers and practitioners by highlighting challenges, solutions, and tools that help conceptualize the state-of-the-art and future trends in migrating existing software to mobile computing.

  • RESEARCH ARTICLE
    Han Yao HUANG, Kyung Tae KIM, Hee Yong YOUN
    Frontiers of Computer Science, 2021, 15(1): 151101. https://doi.org/10.1007/s11704-020-9153-6

    A wireless sensor network (WSN), which consists of a large number of sensor nodes with limited energy, is effective for monitoring a target environment. An efficient medium access control (MAC) protocol is thus imperative to maximize the energy efficiency and performance of a WSN. Most existing MAC protocols are based on scheduling the sleep and active periods of the nodes and do not consider the relationship between the load condition and performance. In this paper, a novel scheme is proposed to properly determine the duty cycle of the WSN nodes according to the load, employing the Q-learning technique and function approximation with linear regression. This allows low-latency, energy-efficient scheduling for a wide range of traffic conditions and effectively overcomes the limitation of Q-learning with continuous state-action spaces. NS3 simulation reveals that the proposed scheme significantly improves the throughput, latency, and energy efficiency compared to the existing fully active scheme and S-MAC.
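
    The sketch below gives a minimal picture of duty-cycle selection via Q-learning with linear function approximation; the actions, features, and reward are assumptions made for illustration and do not reproduce the paper's protocol.

```python
import random

import numpy as np

# Illustrative sketch of duty-cycle selection with Q-learning and linear function
# approximation; actions, features, and reward are assumptions, not the paper's.
ACTIONS = [0.1, 0.3, 0.5, 0.8]           # candidate duty cycles
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1       # learning rate, discount, exploration

def features(load):
    """Map the continuous traffic load in [0, 1] to a small feature vector."""
    return np.array([1.0, load, load ** 2])

weights = np.zeros((len(ACTIONS), 3))    # one linear weight vector per action

def q_value(load, a_idx):
    return weights[a_idx] @ features(load)

def select_action(load):
    if random.random() < EPS:            # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return int(np.argmax([q_value(load, i) for i in range(len(ACTIONS))]))

def update(load, a_idx, reward, next_load):
    """One TD(0) step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward + GAMMA * max(q_value(next_load, i) for i in range(len(ACTIONS)))
    td_error = target - q_value(load, a_idx)
    weights[a_idx] += ALPHA * td_error * features(load)

# Toy interaction loop: the (hypothetical) reward favors duty cycles close to the load.
for _ in range(1000):
    load = random.random()
    a = select_action(load)
    update(load, a, reward=-abs(load - ACTIONS[a]), next_load=random.random())
```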

  • RESEARCH ARTICLE
    Zhenghui HU, Wenjun WU, Jie LUO, Xin WANG, Boshu LI
    Frontiers of Computer Science, 2020, 14(6): 146207. https://doi.org/10.1007/s11704-019-8418-4

    Quality assessment is a critical component in crowdsourcing-based software engineering (CBSE), as software products are developed by a crowd with unknown or varied skills and motivations. In this paper, we propose a novel metric, the project score, to measure the performance of projects and the quality of products in competition-based software crowdsourcing development (CBSCD) activities. To the best of our knowledge, this is the first work to deal with the quality issue of CBSE from the perspective of projects instead of contests. In particular, we develop a hierarchical quality evaluation framework for CBSCD projects and propose two metric aggregation models for project scores. The first is a modified Squale model that can locate software modules of poor quality, and the second is a clustering-based aggregation model that takes the different impacts of phases into account. To test the effectiveness of the proposed metrics, we conduct an empirical study on TopCoder, a well-known CBSCD platform. Results show that the proposed project score is a strong indicator of the performance and product quality of CBSCD projects. We also find that the clustering-based aggregation model outperforms the Squale model, improving the performance evaluation criterion of aggregation models by an additional 29%. Our approach to quality assessment for CBSCD projects could help software managers assess the overall quality of a crowdsourced project consisting of programming contests.
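
    For intuition, the snippet below shows a Squale-style “harsh” aggregation in which weak modules dominate the project score, which is what makes poor-quality modules easy to locate; the hardness constant and score scale are illustrative assumptions, and the paper's modified Squale and clustering-based models differ in detail.

```python
import math

def squale_style_score(module_scores, hardness=9.0):
    """Squale-style 'harsh' aggregation: module scores are transformed so that
    low-quality modules dominate the project score, making them easy to locate.
    The hardness constant and the [0, 1] score scale are illustrative assumptions."""
    transformed = [hardness ** (-s) for s in module_scores]
    return -math.log(sum(transformed) / len(transformed), hardness)

modules = [0.9, 0.8, 0.85, 0.2]                  # one weak module
print(squale_style_score(modules))               # pulled toward the weak module
print(sum(modules) / len(modules))               # plain mean, for comparison
```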

  • RESEARCH ARTICLE
    Shiqing ZHANG, Zheng QIN, Yaohua YANG, Li SHEN, Zhiying WANG
    Frontiers of Computer Science, 2020, 14(3): 143101. https://doi.org/10.1007/s11704-018-7386-4

    Despite increasing investment in integrated GPUs and next-generation interconnects, discrete GPUs connected via PCIe still dominate the market, and the management of data communication between the CPU and GPU continues to evolve. Initially, programmers explicitly controlled data transfers between the CPU and GPU. To simplify programming and enable system-wide atomic memory operations, GPU vendors have developed a programming model that provides a single, virtual address space for accessing all CPU and GPU memories in the system. The page migration engine in this model automatically migrates pages between the CPU and GPU on demand. To meet the needs of high-performance workloads, page sizes tend to grow. Because the interconnect has lower bandwidth and higher latency than GDDR, migrating a larger page takes longer, which may reduce the overlap of computation and transfer, waste time migrating unrequested data, block subsequent requests, and cause a serious performance decline. In this paper, we propose partial page migration, which migrates only the requested part of a page, to reduce the migration unit, shorten migration latency, and avoid the performance degradation of full page migration as pages become larger. We show that partial page migration can largely hide the performance overheads of full page migration. Compared with programmer-controlled data transfer, when the page size is 2 MB and the PCIe bandwidth is 16 GB/s, full page migration is 72.72× slower, while our partial page migration achieves a 1.29× speedup. When the PCIe bandwidth increases to 96 GB/s, full page migration is 18.85× slower, while our partial page migration provides a 1.37× speedup. Additionally, we examine the impact of PCIe bandwidth and migration unit size on execution time, enabling designers to make informed decisions.
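
    The toy model below conveys the partial-migration idea: on a GPU-side fault, only the sub-block containing the faulting address is transferred, and residency is tracked per sub-block; the granularity and data structures are assumptions for illustration, not the authors' design.

```python
# Toy model of partial page migration: on a GPU-side fault, migrate only the
# sub-block containing the faulting address rather than the whole large page.
# The 64 KiB granularity and the class layout are illustrative assumptions.
PAGE_SIZE = 2 * 1024 * 1024      # 2 MB page
SUBBLOCK_SIZE = 64 * 1024        # migration unit

class Page:
    def __init__(self):
        self.resident_subblocks = set()      # sub-blocks already on the GPU

    def handle_gpu_fault(self, offset_in_page):
        """Return the number of bytes migrated over PCIe for this fault."""
        block = offset_in_page // SUBBLOCK_SIZE
        if block in self.resident_subblocks:
            return 0                         # already resident, no transfer
        self.resident_subblocks.add(block)
        return SUBBLOCK_SIZE

page = Page()
migrated = sum(page.handle_gpu_fault(off) for off in (0, 4096, 70_000, 1_000_000))
print(f"migrated {migrated} bytes instead of the full {PAGE_SIZE}-byte page")
```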

  • REVIEW ARTICLE
    Xibin DONG, Zhiwen YU, Wenming CAO, Yifan SHI, Qianli MA
    Frontiers of Computer Science, 2020, 14(2): 241-258. https://doi.org/10.1007/s11704-019-8208-z

    Despite significant successes in knowledge discovery, traditional machine learning methods may fail to obtain satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data, because it is difficult for these methods to capture multiple characteristics and the underlying structure of the data. In this context, how to effectively construct an efficient knowledge discovery and mining model has become an important topic in the data mining field. Ensemble learning, one research hot spot, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features with a variety of transformations. Based on these learned features, multiple learning algorithms are used to produce weak predictive results. Finally, ensemble learning adaptively fuses the informative knowledge from these results via voting schemes to achieve knowledge discovery and better predictive performance. In this paper, we review the research progress of the mainstream approaches to ensemble learning and classify them based on different characteristics. In addition, we present challenges and possible research directions for each mainstream approach, and we also introduce combinations of ensemble learning with other machine learning hot spots such as deep learning and reinforcement learning.
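
    As a concrete example of the fusion stage described above, the sketch below implements plain hard (majority) voting over the predictions of several weak learners; real ensemble methods add weighting, adaptivity, and learned combiners on top of this.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Hard-voting fusion: each base learner predicts a label for every sample,
    and the ensemble returns the most common label per sample."""
    n_samples = len(predictions_per_model[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(preds[i] for preds in predictions_per_model)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three weak learners disagree on individual samples; voting recovers the majority view.
predictions = [
    [1, 0, 1, 1],   # learner A
    [1, 1, 0, 1],   # learner B
    [0, 0, 1, 1],   # learner C
]
print(majority_vote(predictions))   # -> [1, 0, 1, 1]
```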

  • RESEARCH ARTICLE
    Yixuan TANG, Zhilei REN, Weiqiang KONG, He JIANG
    Frontiers of Computer Science, 2020, 14(1): 1-20. https://doi.org/10.1007/s11704-019-8231-0

    Compilers are widely used infrastructure for accelerating software development and are expected to be trustworthy. In the literature, various testing technologies have been proposed to guarantee the quality of compilers. However, comprehensively characterizing and understanding compiler testing remains difficult. To overcome this obstacle, we propose a literature analysis framework to gain insights into the compiler testing area. First, we perform an extensive search to construct a dataset of compiler testing papers. Then, we conduct a bibliometric analysis of the productive authors, the influential papers, and the frequently tested compilers based on our dataset. Finally, we utilize association rules and collaboration networks to mine authorships and the communities of interest among researchers and keywords. Some valuable results are reported. We find that the USA is the leading country, home to the most influential researchers and institutions. The most active keyword is “random testing”. We also find that most researchers in the compiler testing area have broad interests but collaborate in small groups.
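
    The keyword analysis can be pictured with a toy association-rule computation like the one below, which counts keyword pairs co-occurring across papers and reports rule confidence; the sample data and thresholds are invented, not drawn from the paper's dataset.

```python
from collections import Counter
from itertools import combinations

def keyword_associations(papers, min_support=2):
    """Count keyword pairs that co-occur in at least `min_support` papers and
    report the confidence of the rule A -> B for each frequent pair."""
    single = Counter(k for kws in papers for k in set(kws))
    pairs = Counter(tuple(sorted(p)) for kws in papers
                    for p in combinations(set(kws), 2))
    rules = []
    for (a, b), support in pairs.items():
        if support >= min_support:
            rules.append((a, b, support / single[a]))   # confidence of a -> b
            rules.append((b, a, support / single[b]))   # confidence of b -> a
    return sorted(rules, key=lambda r: -r[2])

# Invented sample data, not the paper's dataset.
papers = [
    {"random testing", "compiler testing"},
    {"random testing", "fuzzing"},
    {"compiler testing", "random testing"},
]
for a, b, conf in keyword_associations(papers):
    print(f"{a} -> {b}  (confidence {conf:.2f})")
```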

  • RESEARCH ARTICLE
    Libo FENG, Hui ZHANG, Wei-Tek TSAI, Simeng SUN
    Frontiers of Computer Science, 2019, 13(6): 1151-1165. https://doi.org/10.1007/s11704-018-6345-4

    Blockchain (BC), an emerging distributed database technology with advanced security and reliability, has attracted much attention from experts devoted to e-finance, intellectual property protection, the Internet of Things (IoT), and so forth. However, the low transaction processing speed that hinders BC's widespread adoption has not yet been well addressed. In this paper, we propose a novel architecture, the Dual-Channel Parallel Broadcast model (DCPB), which addresses this problem through three techniques: dual communication channels, parallel pipeline processing, and a block broadcast strategy. In the dual-channel model, one channel processes transactions and the other executes BFT consensus. Parallel pipeline processing allows the system to operate asynchronously, and the block broadcast strategy improves the efficiency and speed of processing. Extensive experiments on BeihangChain, a simplified prototype BC system, show that its transaction processing speed can be improved to 16K transactions per second, which can well support real-world scenarios such as a BC-based energy trading system and a micro-film copyright trading system in CCTV.
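
    As a loose analogy (not BeihangChain's implementation), the sketch below separates transaction handling and consensus messaging into two independent channels served by parallel workers, so that one does not stall the other.

```python
import queue
import threading

# Loose analogy for the dual-channel idea (not BeihangChain's implementation):
# transactions and BFT/consensus messages travel through separate channels served
# by independent workers, so a burst on one channel does not stall the other.
tx_channel: "queue.Queue[str]" = queue.Queue()
bft_channel: "queue.Queue[str]" = queue.Queue()

def tx_worker():
    # Transaction channel: validate and batch transactions into pending blocks.
    while (tx := tx_channel.get()) != "STOP":
        pass   # placeholder for validation/batching of `tx`

def bft_worker():
    # Consensus channel: run BFT rounds over already-batched blocks.
    while (msg := bft_channel.get()) != "STOP":
        pass   # placeholder for handling the consensus message `msg`

workers = [threading.Thread(target=tx_worker), threading.Thread(target=bft_worker)]
for w in workers:
    w.start()
for i in range(1000):
    tx_channel.put(f"tx-{i}")            # transactions keep flowing...
bft_channel.put("PREPARE block-1")        # ...while consensus proceeds in parallel
for ch in (tx_channel, bft_channel):
    ch.put("STOP")
for w in workers:
    w.join()
print("both channels drained independently")
```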

  • PERSPECTIVE
    Lihong LI
    Frontiers of Computer Science, 2019, 13(5): 911-912. https://doi.org/10.1007/s11704-019-9901-7