Jun 2021, Volume 15 Issue 3
    

  • RESEARCH ARTICLE
    Ahmer Khan JADOON, Jing LI, Licheng WANG

    Automotive cyber-physical systems (CPSs) increasingly rely on wireless technology for V2X communication as a potential solution to challenges such as collision detection, wiring complexity, and collision avoidance. However, security is constrained by the energy and performance limitations of modern wireless systems. Accordingly, the need for efficient secret key generation and management mechanisms for secure communication among computationally weak wireless devices has motivated the introduction of new authentication protocols. Recently, there has been great interest in physical layer based secret key generation schemes that exploit channel reciprocity. However, the sequences generated by the two communicating parties contain mismatched bits, which need to be reconciled by exchanging information over a public channel. This can be an immense security threat, as it may let an adversary obtain and recover segments of the key under known channel conditions. We propose a Hopper-Blum based physical layer (HB-PL) authentication scheme in which an enhanced physical layer key generation method is integrated with the Hopper-Blum (HB) authentication protocol. The information collected from the shared channel is used as the secret key for the HB protocol, and the mismatched bits serve as the induced noise for the learning parity with noise (LPN) problem. The proposed scheme aims to perform bit reconciliation without leaking information over a public channel. Moreover, the HB protocol is computationally efficient and simple, which helps reduce the number of messages exchanged during authentication. We have performed several experiments which show that our design can generate secret keys with improved security strength and high performance in comparison to current authentication techniques. Our scheme requires fewer than 55 exchanged messages to achieve more than 95% correct authentication.
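
    The HB protocol underlying this scheme is easy to state: the prover answers random challenges with noisy inner products of a shared secret, and the verifier accepts if the mismatch rate is closer to the noise rate than to the 1/2 rate a secret-less guesser would achieve. Below is a minimal sketch of plain HB, not the paper's HB-PL construction; the function names and parameters are illustrative.

```python
import random

def dot_mod2(a, x):
    """Binary inner product <a, x> over GF(2)."""
    return bin(a & x).count("1") & 1

def hb_round(prover_secret, verifier_secret, n_bits, eta):
    """One HB round: the verifier sends a random challenge a; the prover
    replies z = <a, x> XOR nu, where nu = 1 with probability eta.
    Returns True if the verifier's recomputation matches the reply."""
    a = random.getrandbits(n_bits)  # use a CSPRNG (e.g., secrets) in practice
    nu = 1 if random.random() < eta else 0
    z = dot_mod2(a, prover_secret) ^ nu
    return z == dot_mod2(a, verifier_secret)

def hb_authenticate(prover_secret, verifier_secret, n_bits=128,
                    rounds=128, eta=0.125):
    """Accept if the mismatch count stays below a threshold sitting
    between the noise rate eta and the 1/2 guessing rate."""
    threshold = rounds * (eta + 0.5) / 2
    mismatches = sum(not hb_round(prover_secret, verifier_secret, n_bits, eta)
                     for _ in range(rounds))
    return mismatches < threshold
```

    A legitimate prover (holding the same secret) mismatches only at the noise rate and is accepted; an impostor with a different secret mismatches about half the time and is rejected.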

  • REVIEW ARTICLE
    Wei ZHENG, Ying WU, Xiaoxue WU, Chen FENG, Yulei SUI, Xiapu LUO, Yajin ZHOU

    This paper presents a comprehensive survey of the development of Intel SGX (software guard extensions) processors and their applications. Since the advent of SGX in 2013, and throughout its subsequent development, the corresponding body of research has grown rapidly. To produce a comprehensive literature review of SGX, we systematically analyzed the related papers in this area. We first searched five large-scale paper retrieval libraries by keyword (ACM Digital Library, IEEE/IET Electronic Library, SpringerLink, Web of Science, and Elsevier Science Direct). We read and analyzed a total of 128 SGX-related papers. A first round of extensive study was conducted to classify them, and a second round of intensive study completed a comprehensive analysis of the papers from various aspects. We start with the working environment of SGX and give a conclusive summary of the trusted execution environment (TEE). We then focus on the applications of SGX. We also review and study multifarious attack methods against the SGX framework and some recent security improvements made to SGX. Finally, we summarize the advantages and disadvantages of SGX along with some future research opportunities. We hope this review can help existing and future research on SGX and its applications, for both developers and users.

  • RESEARCH ARTICLE
    Abhishek MAJUMDAR, Arpita BISWAS, Atanu MAJUMDER, Sandeep Kumar SOOD, Krishna Lal BAISHNAB

    Over the last few years, the need for a cloud environment that can detect illegal behaviors and provide secure data storage has increased significantly. This study presents such a secure cloud storage framework, built around a deoxyribonucleic acid (DNA) based encryption key generated to make the framework unbreakable, thus ensuring a better secured distributed cloud storage environment. Furthermore, this work proposes a novel DNA-based encryption technique inspired by the biological characteristics of DNA and the protein synthesis mechanism. The introduced DNA-based model has the additional advantage of being able to select suitable storage servers from an existing pool of storage servers on which the data must be stored. A fuzzy-based technique for order of preference by similarity to ideal solution (TOPSIS) multi-criteria decision-making (MCDM) model is employed to achieve this goal. It identifies the set of suitable storage servers and also reduces execution time while raising the level of security. This study also investigates and analyzes the strength of the proposed S-Box and encryption technique against standard criteria and benchmarks, such as the avalanche effect, correlation coefficient, information entropy, linear probability, and differential probability. In the avalanche effect analysis, the average change in cipher-text is found to be 51.85%. Moreover, thorough security, sensitivity and functionality analyses show that the proposed scheme guarantees high security with robustness.
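
    Fuzzy TOPSIS builds on the classical TOPSIS procedure: normalize and weight the decision matrix, locate the ideal best and worst alternatives, and rank candidates by relative closeness to the ideal. Below is a minimal crisp (non-fuzzy) sketch of that core procedure; the server criteria in the usage note are illustrative, not the paper's.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) by closeness to
    the ideal solution. benefit[j] is True if criterion j is
    better-when-larger (e.g., free space), False if better-when-smaller
    (e.g., latency)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # Ideal best and ideal worst value per criterion.
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    # Closeness coefficient: d_worst / (d_best + d_worst); higher is better.
    scores = []
    for row in v:
        d_best = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, best)))
        d_worst = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

    For example, three storage servers scored on free space (GB, benefit), latency (ms, cost) and load (cost) would be ranked by `topsis([[500, 20, 0.3], [200, 5, 0.6], [800, 50, 0.2]], [0.5, 0.3, 0.2], [True, False, False])`.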

  • REVIEW ARTICLE
    Bin GUO, Yasan DING, Yueheng SUN, Shuai MA, Ke LI, Zhiwen YU

    The widespread fake news in social networks is posing threats to social stability, economic development, and political democracy. Numerous studies have explored effective detection approaches for online fake news, while few works study its intrinsic propagation and cognition mechanisms. Since the development of cognitive science paves a promising way for the prevention of fake news, we present a new research area called Cognition Security (CogSec), which studies the potential impacts of fake news on human cognition, ranging from misperception, untrusted knowledge acquisition, and targeted opinion/attitude formation to biased decision making, and investigates effective ways of debunking fake news. CogSec is a multidisciplinary research field that leverages knowledge from social science, psychology, cognitive science, neuroscience, AI and computer science. We first propose related definitions to characterize CogSec and review the literature history. We further investigate the key research challenges and techniques of CogSec, including human-content cognition mechanisms, social influence and opinion diffusion, fake news detection, and malicious bot detection. Finally, we summarize open issues and future research directions, such as the cognition mechanism of fake news, influence maximization of fact-checking information, early detection of fake news, and fast refutation of fake news.

  • REVIEW ARTICLE
    Ning LIU, Zhongpai GAO, Jia WANG, Guangtao ZHAI

    Industry and academia have been making great efforts to improve the refresh rates and resolutions of display devices to meet consumers' ever-increasing demand for better visual quality. As a result, many modern displays have spatial and temporal resolutions far beyond the discernment capability of the human visual system, opening the possibility of using these display-eye redundancies for innovative purposes. Temporal/spatial psycho-visual modulation (TPVM/SPVM) was proposed to exploit these redundancies to generate multiple visual percepts for different viewers or to transmit non-visual data to computing devices without affecting normal viewing. This paper reviews TPVM/SPVM technology from both conceptual and algorithmic perspectives, with exemplary applications in multiview display, display with visible light communication, etc. Some possible future research directions are also identified.

  • RESEARCH ARTICLE
    Hongbo ZHANG, Xin GAO, Jixiang DU, Qing LEI, Lijie YANG

    Photomosaic images are composite images composed of many small images called tiles. In its overall visual effect, a photomosaic image is similar to the target image, and photomosaics are also called "montage art". Noisy blocks and the loss of local information are the major obstacles for most methods or programs that create photomosaic images. To solve these problems, we propose a tile selection method based on error minimization. A photomosaic image is generated by partitioning the target image in a rectangular pattern, selecting appropriate tile images, and then superimposing them with a weight coefficient. Based on the principles of montage art, the quality of the generated photomosaic image can be evaluated by both global and local error. Under the proposed framework, an error function analysis shows that selecting a tile image by global minimum distance minimizes the global error and the local error simultaneously. Moreover, the weight coefficient of the image superposition can be used to adjust the ratio of the global and local errors. Finally, to verify the proposed method, we built a new photomosaic creation dataset during this study. The experimental results show that the proposed method achieves a low mean absolute error and that the generated photomosaic images have a more artistic effect than those of existing approaches.
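
    The global-minimum-distance selection and weighted superposition described above can be sketched in a few lines. This is a toy version on grayscale blocks, not the paper's exact formulation; the function names and the alpha parameter are illustrative.

```python
def block_error(block, tile):
    """Sum of squared pixel differences between a target block and a tile."""
    return sum((b - t) ** 2 for brow, trow in zip(block, tile)
               for b, t in zip(brow, trow))

def best_tile(block, tiles):
    """Global-minimum-distance selection: the tile whose pixel-wise
    distance to the target block is smallest."""
    return min(tiles, key=lambda tile: block_error(block, tile))

def compose_block(block, tile, alpha=0.7):
    """Superimpose the chosen tile on the target block; alpha trades
    tile detail against fidelity to the target image."""
    return [[round(alpha * t + (1 - alpha) * b) for b, t in zip(brow, trow)]
            for brow, trow in zip(block, tile)]
```

    Applying `best_tile` to every block of the partitioned target and blending with `compose_block` yields the mosaic; sweeping alpha adjusts the ratio of global to local error.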

  • REVIEW ARTICLE
    Edje E. ABEL, Muhammad Shafie Abd LATIFF

    Cloud internet of things (IoT) is an emerging technology that is already impelling the daily activities of our lives. However, the enormous resources (data and physical features of things) generated by Cloud-enabled IoT sensing devices lack suitable managerial approaches. Existing research surveys on Cloud IoT have mainly focused on its fundamentals, definitions and layered architecture, as well as security challenges. Going by the current literature, no existing survey provides a detailed analysis of the approaches deployed to manage the heterogeneous and dynamic resource data generated by sensor devices in the Cloud-enabled IoT paradigm. Hence, to bridge this gap, the existing algorithms designed to manage resource data in various Cloud IoT application domains are investigated and analyzed. The emergence of Cloud IoT is presented first, followed by the previous related survey articles in this field that motivated the current study. Furthermore, the simulation environments used are examined, highlighting the programming languages and briefly describing the simulation packages adopted to design and evaluate the performance of the algorithms. The diverse network communication protocols and gateways that aid resource dissemination in the Cloud-enabled IoT network infrastructure are also discussed. Future work discussed in previous research, which paves the way for future research directions in this field, is presented as well, and the survey ends with concluding remarks.

  • LETTER
    Wei LI, Yuefei SUI, Yuhui WANG
  • RESEARCH ARTICLE
    Cungen CAO, Lanxi HU, Yuefei SUI

    A sequent is a pair (Γ, Δ), which is true under an assignment if either some formula in Γ is false or some formula in Δ is true. In L3-valued propositional logic, a multisequent is a triple Δ|Θ|Γ, which is true under an assignment if either some formula in Δ has truth-value t, some formula in Θ has truth-value m, or some formula in Γ has truth-value f. Correspondingly, there is a sound and complete Gentzen deduction system G for multisequents, which is monotonic. Dually, a co-multisequent is a triple Δ : Θ : Γ, which is valid if there is an assignment v in which each formula in Δ has truth-value ≠ t, each formula in Θ has truth-value ≠ m, and each formula in Γ has truth-value ≠ f. Correspondingly, there is a sound and complete Gentzen deduction system G for co-multisequents, which is non-monotonic.
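
    The truth and validity conditions above are directly executable. A small sketch over atomic formulas only (the paper of course treats arbitrary formulas), with an assignment represented as a map from atoms to the three truth-values 't', 'm', 'f':

```python
def multisequent_true(delta, theta, gamma, v):
    """Delta|Theta|Gamma holds under assignment v iff some member of
    Delta takes value 't', some member of Theta takes 'm', or some
    member of Gamma takes 'f'."""
    return (any(v[p] == 't' for p in delta)
            or any(v[p] == 'm' for p in theta)
            or any(v[p] == 'f' for p in gamma))

def comultisequent_valid(delta, theta, gamma, assignments):
    """Delta:Theta:Gamma is valid iff SOME assignment gives every member
    of Delta a value != 't', every member of Theta a value != 'm', and
    every member of Gamma a value != 'f'."""
    return any(all(v[p] != 't' for p in delta)
               and all(v[p] != 'm' for p in theta)
               and all(v[p] != 'f' for p in gamma)
               for v in assignments)
```

    Note the duality: a multisequent is an existential disjunction checked per assignment, while a co-multisequent quantifies existentially over assignments of a universal conjunction.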

  • RESEARCH ARTICLE
    Yixuan CAO, Dian CHEN, Zhengqi XU, Hongwei LI, Ping LUO

    Most existing research on relation extraction focuses on binary flat relations, such as the BornIn relation between a Person and a Location. However, a large portion of the objective facts described in natural language are complex, especially in professional documents in fields such as finance and biomedicine that require precise expressions. For example, "the GDP of the United States in 2018 grew 2.9% compared with 2017" describes a growth-rate relation between two other relations about an economic index, which is beyond the expressive power of binary flat relations. Thus, we propose the nested relation extraction problem and formulate it as a directed acyclic graph (DAG) structure extraction problem. We then propose a solution using an iterative neural network that extracts relations layer by layer. The proposed solution achieves 78.98 and 97.89 F1 scores on two nested relation extraction tasks, namely semantic cause-and-effect relation extraction and formula extraction. Furthermore, we observe that nested relations are usually expressed in long sentences where entities are mentioned repeatedly, which makes annotation difficult and error-prone. Hence, we extend our model with a mention-insensitive mode that only requires annotations of relations on entity concepts (instead of exact mentions) while preserving most of its performance. Our mention-insensitive model performs better than the mention-sensitive model when the random level in mention selection is higher than 0.3.
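
    The GDP example can be pictured as a two-layer DAG: two index relations over entity mentions, and a growth-rate relation over those relations. A minimal sketch of such a structure and of the layer at which each node becomes extractable (the labels are illustrative, not the paper's schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A DAG node: an entity mention (no args) or a relation over nodes."""
    label: str
    args: tuple = ()

def layer(node):
    """The extraction layer of a node: entities are layer 0, and a
    relation sits one layer above its deepest argument, matching a
    layer-by-layer extraction order."""
    if not node.args:
        return 0
    return 1 + max(layer(a) for a in node.args)

# "the GDP of the United States in 2018 grew 2.9% compared with 2017"
gdp_2018 = Node("Index", (Node("GDP"), Node("United States"), Node("2018")))
gdp_2017 = Node("Index", (Node("GDP"), Node("United States"), Node("2017")))
growth = Node("GrowthRate", (gdp_2018, gdp_2017, Node("2.9%")))
```

    Because the GrowthRate node takes the two Index relations themselves as arguments, the structure is nested rather than flat, and shared sub-nodes make it a DAG rather than a tree.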

  • RESEARCH ARTICLE
    Houda AKREMI, Sami ZGHAL

    Although recent studies on the Semantic Web have focused on crisp ontologies and knowledge representation, they have paid less attention to imprecise knowledge. The results of these studies constitute a Semantic Web that can answer requests almost perfectly with respect to precision; nevertheless, they ensure low recall. We therefore propose in this work a new generic fuzzification approach that allows a semantic representation of both crisp and fuzzy data in a domain ontology. In the framework of our real case study, the obtained results illustrate that our approach is substantially better than the crisp one in terms of completeness, comprehensiveness, generality, comprehension and shareability.

  • RESEARCH ARTICLE
    Yu HU, Tiezheng NIE, Derong SHEN, Yue KOU, Ge YU

    Biomedical entity alignment, composed of two subtasks, entity identification and entity-concept mapping, is of great research value in biomedical text mining, as these techniques are widely used for named entity standardization, information retrieval, knowledge acquisition and ontology construction.

    Previous works put much effort into feature engineering to employ feature-based models for entity identification and alignment. However, models that depend on subjective feature selection may suffer from error propagation and cannot exploit hidden information. With the rapid development of health-related research, researchers need an effective method to explore the large amount of available biomedical literature.

    Therefore, we propose a two-stage entity alignment process, the biomedical entity exploring model, to identify biomedical entities and align them to the knowledge base interactively. The model aims to automatically obtain semantic information for extracting biomedical entities and mining semantic relations through the standard biomedical knowledge base. The experiments show that the proposed method achieves better performance on entity alignment, dramatically improving the F1 scores of the task by about 4.5% in entity identification and 2.5% in entity-concept mapping.

  • LETTER
    Jing LI, Xuejun LIU, Daoqiang ZHANG
  • RESEARCH ARTICLE
    Huiying ZHANG, Yu ZHANG, Xin GENG

    Age estimation plays an important role in human-computer interaction systems. The lack of a large number of facial images with definite age labels makes age estimation algorithms inefficient. Deep label distribution learning (DLDL), which employs convolutional neural networks (CNNs) and label distribution learning to learn ambiguity from the ground-truth age and adjacent ages, has been proven to outperform current state-of-the-art frameworks. However, DLDL assumes a rough label distribution that covers all ages for any given age label. In this paper, a more practical label distribution paradigm is proposed: we limit the age label distribution to cover only a reasonable number of neighboring ages. In addition, we explore different label distributions to improve the performance of the proposed learning model. We employ a CNN and the improved label distribution learning to estimate age. Experimental results show that, compared to DLDL, our method is more effective for facial age recognition.
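
    The proposed restriction, assigning probability mass only to a reasonable number of neighboring ages rather than the whole age range, can be illustrated with a truncated discrete Gaussian. The sigma and radius values below are illustrative assumptions, not the paper's settings:

```python
import math

def age_label_distribution(true_age, ages, sigma=2.0, radius=5):
    """Build a label distribution that puts mass only on ages within
    `radius` of the ground-truth age (a truncated discrete Gaussian),
    then renormalize so the masses sum to 1."""
    raw = {a: math.exp(-((a - true_age) ** 2) / (2 * sigma ** 2))
           for a in ages if abs(a - true_age) <= radius}
    z = sum(raw.values())
    return {a: w / z for a, w in raw.items()}
```

    Compared with a full-range distribution, distant ages receive exactly zero mass instead of a small spurious probability, which is the practical difference the paper exploits.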

  • RESEARCH ARTICLE
    Weibei FAN, Jianxi FAN, Zhijie HAN, Peng LI, Yujie ZHANG, Ruchuan WANG

    Computer interconnection networks are the foundation of the information society, and communication algorithms are the key to information exchange. Finding interconnection networks with simple routing algorithms and high fault tolerance is the premise for realizing various communication algorithms and protocols. Nowadays, complex interconnection networks can be built using very large scale integration (VLSI) technology. The locally exchanged twisted cube, denoted by the (s + t + 1)-dimensional LeTQs,t, combines the merits of the exchanged hypercube and the locally twisted cube. The LeTQs,t has been shown to have many excellent properties for interconnection networks, such as fewer edges, lower overhead and a smaller diameter. Embeddability is an important indicator of the performance of interconnection networks. We mainly study the fault-tolerant Hamiltonian properties of a faulty locally exchanged twisted cube, LeTQs,t − (fv + fe), with fv faulty vertices and fe faulty edges. Firstly, we prove that an LeTQs,t can tolerate up to s − 1 faulty vertices and edges when embedding a Hamiltonian cycle, for s ≥ 2, t ≥ 3, and s ≤ t. Furthermore, we prove that there is a Hamiltonian path between any two distinct fault-free vertices in a faulty LeTQs,t with up to s − 2 faulty vertices and edges. That is, we show that LeTQs,t is (s − 1)-Hamiltonian and (s − 2)-Hamiltonian-connected. These results are proved to be optimal, with at most (s − 1)-fault-tolerant Hamiltonicity and (s − 2)-fault-tolerant Hamiltonian connectivity of LeTQs,t.

  • RESEARCH ARTICLE
    Tingting CHEN, Haikun LIU, Xiaofei LIAO, Hai JIN

    Emerging byte-addressable non-volatile memory (NVM) technologies offer higher density and lower cost than DRAM, at the expense of lower performance and limited write endurance. There have been many studies of hybrid NVM/DRAM memory management in a single physical server, but how to manage hybrid memories efficiently in a distributed environment remains an open problem. This paper proposes Alloy, a memory resource abstraction and data placement strategy for an RDMA-enabled distributed hybrid memory pool (DHMP). Alloy provides simple APIs for applications to utilize DRAM or NVM resources in the DHMP without being aware of the hardware details of the DHMP. We propose a hotness-aware data placement scheme, which combines hot data migration, data replication and write merging to improve application performance and reduce the cost of DRAM. We evaluate Alloy with several micro-benchmark workloads and public benchmark workloads. Experimental results show that Alloy can significantly reduce DRAM usage in the DHMP by up to 95%, while reducing the total memory access time by up to 57% compared with state-of-the-art approaches.
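
    At its simplest, hotness-aware placement keeps the most frequently accessed pages in scarce DRAM and the rest in cheaper NVM. A toy sketch of that ranking step only; Alloy additionally performs migration, replication and write merging, none of which is modeled here:

```python
def place(pages, dram_capacity):
    """Hotness-aware placement sketch: given a map from page id to
    access count, keep the dram_capacity hottest pages in DRAM and
    place everything else in NVM."""
    ranked = sorted(pages, key=lambda p: pages[p], reverse=True)
    hot = set(ranked[:dram_capacity])
    return {p: ("DRAM" if p in hot else "NVM") for p in pages}
```

    In a real system the access counts would decay over time and placement would be re-evaluated periodically, triggering migrations when a page's tier changes.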

  • RESEARCH ARTICLE
    Bing WEI, Limin XIAO, Bingyu ZHOU, Guangjun QIN, Baicheng YAN, Zhisheng HUO

    With the advent of new computing paradigms, parallel file systems serve not only traditional scientific computing applications but also non-scientific computing applications, such as financial computing, business, and public administration. Parallel file systems provide storage services for multiple applications. As a result, various requirements need to be met. However, parallel file systems usually provide a unified storage solution, which cannot meet specific application needs. In this paper, an extended file handle scheme is proposed to deal with this problem. The original file handle is extended to record I/O optimization information, which allows file systems to specify optimizations for a file or directory based on workload characteristics. Therefore, fine-grained management of I/O optimizations can be achieved. On the basis of the extended file handle scheme, data prefetching and small file optimization mechanisms are proposed for parallel file systems. The experimental results show that the proposed approach improves the aggregate throughput of the overall system by up to 189.75%.
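
    The idea of the extended file handle is to attach per-file I/O optimization information to the handle itself, so the file system can choose a policy per file or directory instead of one global policy. A minimal sketch with hypothetical field names and heuristics, not the paper's on-disk layout:

```python
from dataclasses import dataclass

@dataclass
class ExtendedFileHandle:
    """A plain handle plus per-file I/O optimization hints, enabling
    fine-grained management of prefetching and small-file handling."""
    inode: int
    prefetch_window: int = 0       # bytes to read ahead; 0 disables prefetch
    pack_small_file: bool = False  # pack data with metadata for small files

def choose_policy(handle, file_size, small_threshold=64 * 1024):
    """Derive the hints from a simple size heuristic (a sketch; real
    systems would also use observed workload characteristics)."""
    handle.pack_small_file = file_size <= small_threshold
    handle.prefetch_window = 1 << 20 if file_size > small_threshold else 0
    return handle
```

    With the hints stored in the handle, servers can apply small-file packing to one directory and aggressive prefetching to another without any global reconfiguration.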