Feb 2022, Volume 16 Issue 1
    

  • Architecture
  • RESEARCH ARTICLE
    Xiaolong LIU, Jinchao LIANG, De-Yu LIU, Riqing CHEN, Shyan-Ming YUAN

    It is of great significance for headquarters in warfare to address the weapon-target assignment (WTA) problem with distributed computing nodes so that targets can be attacked simultaneously from different weapon units. However, computing nodes on the battlefield are vulnerable to attack, and the communication environment is usually unreliable. To solve WTA problems in unreliable environments, this paper proposes a scheme based on a decentralized peer-to-peer architecture and an adapted artificial bee colony (ABC) optimization algorithm. In the decentralized architecture, a peer computing node is deployed at each weapon unit, and the packet loss rate is used to simulate the unreliable communication environment. The decisions made at each peer node are merged into a decision set, from which the adapted ABC algorithm selects the optimal decision for the decentralized system. The experimental results demonstrate that the decentralized peer-to-peer architecture performs remarkably well in unreliable communication environments: the proposed scheme achieves outstanding enemy residual value (ERV) results with packet loss rates ranging from 0 to 0.9.
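    The adapted ABC algorithm is the paper's contribution and is not detailed in the abstract; as background, a minimal sketch of the standard artificial bee colony loop (employed, onlooker and scout phases) minimizing a generic cost function — a stand-in objective, not the paper's WTA decision cost — could look like:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=300, seed=0):
    """Standard artificial bee colony: employed, onlooker and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        # Perturb one dimension of source i toward/away from another source.
        k = rng.randrange(n_food)
        while k == i:
            k = rng.randrange(n_food)
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    def greedy(i, cand):
        # Keep the candidate only if it improves source i; otherwise
        # count a failed trial (scouts abandon exhausted sources).
        c = f(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed bees
            greedy(i, neighbor(i))
        fits = [1.0 / (1.0 + c) for c in costs]
        total = sum(fits)
        for _ in range(n_food):                 # onlookers: roulette on fitness
            r = rng.uniform(0, total)
            i, acc = 0, fits[0]
            while acc < r and i < n_food - 1:
                i += 1
                acc += fits[i]
            greedy(i, neighbor(i))
        for i in range(n_food):                 # scouts replace stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: costs[i])
    return foods[best], costs[best]
```

    For instance, `abc_minimize(lambda v: sum(t * t for t in v), dim=3, bounds=(-5.0, 5.0))` drives the sphere cost close to zero within a few hundred iterations.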

  • Artificial Intelligence
  • RESEARCH ARTICLE
    Xiaoqin ZHANG, Huimin MA, Xiong LUO, Jian YUAN

    In actor-critic reinforcement learning (RL) algorithms, function estimation errors are known to cause ineffective random exploration at the beginning of training and to lead to overestimated value estimates and suboptimal policies. In this paper, we address the problem by performing advantage rectification with imperfect demonstrations, thus reducing the function estimation errors. Pretraining with expert demonstrations has been widely adopted to accelerate deep reinforcement learning when simulations are expensive to obtain. However, existing methods, such as behavior cloning, often assume the demonstrations carry additional information or performance labels, such as an optimality assumption, which is usually incorrect and impractical in the real world. In this paper, we explicitly handle imperfect demonstrations within the actor-critic RL framework and propose a new method called learning from imperfect demonstrations with advantage rectification (LIDAR). LIDAR uses a rectified loss function to learn only from selected demonstrations, derived from the minimal assumption that the demonstrating policies perform better than our current policy. LIDAR learns from contradictions caused by estimation errors and in turn reduces those errors. We apply LIDAR to three popular actor-critic algorithms, DDPG, TD3 and SAC, and experiments show that our method observably reduces function estimation errors, effectively leverages demonstrations far from optimal, and consistently outperforms state-of-the-art baselines in all scenarios.
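    LIDAR's rectified loss is derived in the paper itself; purely as an illustration of the selection idea — keep a demonstration only when the estimated advantage suggests the demonstrator beats the current policy — a hard-filter sketch with hypothetical `q_value` and `v_value` estimators might be:

```python
def select_demonstrations(demos, q_value, v_value):
    """Keep only demo transitions whose estimated advantage
    A(s, a) = Q(s, a) - V(s) is positive, i.e., the demonstrator
    looks better than the current policy at that state.
    (Illustrative hard filter only; LIDAR's actual rectified loss
    softly folds this comparison into the actor-critic objective.)"""
    kept = []
    for state, action in demos:
        advantage = q_value(state, action) - v_value(state)
        if advantage > 0:
            kept.append((state, action, advantage))
    return kept
```

    With toy estimators `q_value = lambda s, a: s + a` and `v_value = lambda s: s`, only the transitions where the action adds value survive the filter.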

  • RESEARCH ARTICLE
    Xinyu TONG, Ziao YU, Xiaohua TIAN, Houdong GE, Xinbing WANG

    Electronic devices require a printed circuit board (PCB) to support the whole structure, but PCB assembly suffers from welding problems with electronic components such as surface-mounted device (SMD) resistors. The automated optical inspection (AOI) machine, widely used in industrial production, takes images of PCBs and examines welding issues. However, the AOI machine can commit false-negative errors, so dedicated technicians have to be employed to pick out the misjudged PCBs. This paper proposes a machine learning based method to improve the accuracy of AOI. In particular, we propose an adjacent-pixel RGB value based method to pre-process the images from the AOI machine and build a customized deep learning model to classify them. We present a practical scheme including two machine learning procedures to mitigate AOI errors. We conduct experiments on a real dataset collected from a production line over three months; the results show that our method reduces the misjudgment rate from 0.3%–0.5% to 0.02%–0.03%, which is meaningful in practice for thousands of PCBs, each containing thousands of electronic components.
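    The exact pre-processing pipeline is described in the paper; one simple reading of an adjacent-pixel RGB value method — summing absolute RGB differences with neighboring pixels to highlight sharp transitions such as solder-joint edges before classification — could be sketched as:

```python
def adjacent_pixel_diff(image):
    """For each pixel, sum the absolute RGB differences with its
    right and bottom neighbors; large values mark sharp transitions.
    `image` is an H x W grid of (r, g, b) tuples (a hypothetical
    input format, not the paper's actual AOI image layout)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = 0
            for dy, dx in ((0, 1), (1, 0)):   # right and bottom neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w:
                    d += sum(abs(a - b)
                             for a, b in zip(image[y][x], image[ny][nx]))
            out[y][x] = d
    return out
```

    On a tiny two-column image with a red edge between the columns, the left column lights up while the uniform right column stays at zero.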

  • RESEARCH ARTICLE
    Yi REN, Ning XU, Miaogen LING, Xin GENG

    Multimodal machine learning (MML) aims to understand the world from multiple related modalities. It has attracted much attention as multimodal data has become increasingly available in real-world applications. MML can outperform single-modal machine learning, since multiple modalities contain more information and can complement each other. However, fusing the modalities is a key challenge in MML. Unlike previous work, we further consider side-information, which reflects the situation and influences the fusion of the modalities. We recover the multimodal label distribution (MLD) by leveraging the side-information, representing the degree to which each modality contributes to describing the instance. Accordingly, a novel framework named multimodal label distribution learning (MLDL) is proposed to recover the MLD and fuse the modalities under its guidance to learn an in-depth joint feature representation. Moreover, two versions of MLDL are proposed to deal with sequential data. Experiments on multimodal sentiment analysis and disease prediction show that the proposed approaches perform favorably against state-of-the-art methods.

  • RESEARCH ARTICLE
    Xiangsheng LI, Yiqun LIU, Jiaxin MAO

    Relevance estimation is one of the core concerns of information retrieval (IR) studies. Although existing retrieval models have had much success both in deepening our understanding of information seeking behavior and in building effective retrieval systems, we have to admit that these models work in a rather different manner from how humans make relevance judgments. Users' information seeking behaviors involve complex cognitive processes, yet the majority of these behavior patterns are not considered in existing retrieval models. To bridge the gap between practical user behavior and retrieval models, it is essential to systematically investigate user cognitive behavior during relevance judgment and incorporate these heuristics into retrieval models. In this paper, we formally define a set of basic user reading heuristics applied during relevance judgment and investigate their corresponding modeling strategies in retrieval models. Further experiments are conducted to evaluate the effectiveness of different reading heuristics for improving ranking performance. Based on a large-scale Web search dataset, we find that most reading heuristics can improve the performance of retrieval models, and we establish guidelines for improving the design of retrieval models with human-inspired heuristics. Our study sheds light on building retrieval models from the perspective of cognitive behavior.

  • RESEARCH ARTICLE
    Huiqun WANG, Di HUANG, Yunhong WANG

    In this paper, we propose a novel and effective approach, namely GridNet, to hierarchically learn deep representations of 3D point clouds. It incorporates the ability of regular holistic description and fast data processing in a single framework, which can abstract powerful features progressively in an efficient way. Moreover, to capture more accurate internal geometry attributes, anchors are inferred within local neighborhoods, in contrast to the fixed or sampled ones used in existing methods, and the learned features are thus more representative of and discriminative to the local point distribution. GridNet delivers very competitive results compared with state-of-the-art methods in both object classification and segmentation tasks.

  • Theoretical Computer Science
  • RESEARCH ARTICLE
    Peng ZHANG

    Many optimization problems share the following property: given a total task consisting of many subtasks, the problem asks for a solution that completes only part of these subtasks. Examples include the k-Forest problem and the k-Multicut problem. These problems are called partial optimization problems and are often NP-hard. In this paper, we systematically study the LP-rounding plus greedy approach, a method for designing approximation algorithms for partial optimization problems. The approach is simple, powerful and versatile. We show how to use it to design approximation algorithms for the k-Forest problem, the k-Multicut problem, the k-Generalized connectivity problem, and others.
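    The greedy half of the approach is easy to illustrate on partial set cover (cover at least k elements at minimum cost), a canonical partial optimization problem; this sketch shows only the greedy step, not the LP-rounding analysis that yields the approximation guarantees:

```python
def greedy_partial_cover(universe, sets, costs, k):
    """Greedily pick the set with the best cost per newly covered
    element until at least k elements of `universe` are covered.
    (Greedy step only; combining it with LP rounding is what the
    paper's approximation analysis is about.)"""
    covered = set()
    chosen, total = [], 0
    while len(covered) < k:
        best, best_ratio = None, None
        for i, s in enumerate(sets):
            gain = len((s - covered) & universe)
            if gain == 0:
                continue
            # Only coverage we still need counts toward the ratio.
            useful = min(gain, k - len(covered))
            ratio = costs[i] / useful
            if best_ratio is None or ratio < best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            raise ValueError("cannot cover k elements")
        covered |= sets[best] & universe
        chosen.append(best)
        total += costs[best]
    return chosen, total
```

    For example, with universe {1,…,5}, sets {1,2,3}, {4}, {4,5} at costs 3, 1, 2 and k = 4, the greedy step picks the first two sets for a total cost of 4.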

  • Networks and Communication
  • RESEARCH ARTICLE
    Shuodi ZU, Xiangyang LUO, Fan ZHANG

    Location based services (LBS) are widely utilized, and determining the location of users' IP addresses is the foundation of LBS. Constrained by unstable delays and insufficient landmarks, existing geolocation algorithms suffer from low accuracy and uncertain error, making it difficult to meet the accuracy and reliability requirements of LBS. This paper proposes a new IP geolocation algorithm based on router error training to improve the accuracy of geolocation results and obtain the current geolocation error range. Firstly, bootstrapping is used to divide the landmark data into a training set and a verification set, and the /24 subnet distribution is used to extend the training set. Secondly, path detection is performed on the nodes in the three data sets to extract the metropolitan area network (MAN) of the target city, and the geolocation result and error of each router in the MAN are obtained by training on the detection results. Finally, the MAN is used to locate the target. In experiments geolocating 24,254 IP addresses in China, the proposed algorithm achieves higher geolocation accuracy and lower median error than the existing typical geolocation algorithms LBG, SLG, NNG and RNBG, and in most cases the difference between the estimated and actual error is less than 10 km.

  • RESEARCH ARTICLE
    Yuanrun FANG, Fu XIAO, Biyun SHENG, Letian SHA, Lijuan SUN

    With the development of the Internet of Things (IoT) and the popularization of commercial WiFi, researchers have begun to use commercial WiFi for human activity recognition over the past decade. However, cross-scene activity recognition remains difficult due to the different distributions of samples in different scenes. To solve this problem, we build a cross-scene activity recognition system based on commercial WiFi. Firstly, we use commercial WiFi devices to collect channel state information (CSI) data and use a bi-directional long short-term memory (BiLSTM) network to train the activity recognition model. Then, we use a transfer learning mechanism to adapt the model to another scene. Finally, we conduct experiments to evaluate the performance of our system, and the results verify its accuracy and robustness. For the source scene, the model trained from scratch achieves over 90% accuracy. After transfer learning, the accuracy of cross-scene activity recognition in the target scene still reaches 90%.

  • RESEARCH ARTICLE
    Arpita BISWAS, Abhishek MAJUMDAR, Soumyabrata DAS, Krishna Lal BAISHNAB

    With the advent of modern technologies, IoT has become an alluring field of research. Since IoT connects everything to the network and transmits big data frequently, it can suffer a large amount of energy loss. This paper therefore focuses on reducing energy loss and designing an energy-efficient data transfer scenario between IoT devices and clouds. A layered architectural framework for IoT-cloud transmission is proposed that improves energy efficiency, network lifetime and latency. Furthermore, an opposition-based competitive swarm optimizer oriented clustering approach named OCSO-CA is proposed to obtain the optimal set of clusters in the IoT device network. The proposed strategy manages intra-cluster and inter-cluster data communications in an energy-efficient way. A comparative analysis of the proposed approach with state-of-the-art clustering optimization algorithms is also performed.

  • Information Systems
  • RESEARCH ARTICLE
    Binbin ZHOU, Longbiao CHEN, Fangxun ZHOU, Shijian LI, Sha ZHAO, Gang PAN

    Crime risk prediction is helpful for urban safety and citizens' quality of life. However, existing crime studies have focused on coarse-grained prediction and usually fail to capture the dynamics of urban crime. The key challenge is data sparsity, since 1) not all crimes are recorded, and 2) crimes usually occur with low frequency. In this paper, we propose an effective framework to predict fine-grained and dynamic crime risks on each road using heterogeneous urban data. First, we propose a cross-aggregation soft-impute (CASI) method to deal with possibly unreported crimes. Then, we use a novel crime risk measurement to capture crime dynamics from the perspective of influence propagation, taking into consideration both time-varying and location-varying risk propagation. Based on the dynamically calculated crime risks, we design contextual features (i.e., POI distributions, taxi mobility, demographic features) from various urban data sources and propose a zero-inflated negative binomial regression (ZINBR) model to predict future crime risks on roads. Experiments using real-world data from New York City show that our framework accurately predicts road crime risks and outperforms other baseline methods.
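    A zero-inflated negative binomial model mixes a point mass at zero (roads where no crime is observed) with a negative binomial count distribution; under one common parameterization (an assumption here, since the abstract does not give the paper's exact setup), its probability mass function can be computed directly:

```python
from math import lgamma, exp, log

def zinb_pmf(y, pi, r, p):
    """Zero-inflated negative binomial probability mass:
      P(0)     = pi + (1 - pi) * NB(0; r, p)
      P(y > 0) = (1 - pi) * NB(y; r, p)
    with NB(y; r, p) = Gamma(y + r) / (y! * Gamma(r)) * p^r * (1 - p)^y,
    where pi is the zero-inflation weight, r the dispersion and
    p the success probability. lgamma keeps the computation stable."""
    nb = exp(lgamma(y + r) - lgamma(r) - lgamma(y + 1)
             + r * log(p) + y * log(1 - p))
    return pi + (1 - pi) * nb if y == 0 else (1 - pi) * nb
```

    The regression step then ties `pi` and the negative binomial mean to the per-road contextual features; the extra `pi` mass is exactly what lets the model absorb the excess zeros that a plain count model cannot.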

  • LETTER
    Hao WANG, Zhengquan XU, Xiaoshan ZHANG, Xiao PENG, Kaiju LI
  • LETTER
    Peng FANG, Fang WANG, Zhan SHI, Dan FENG, Qianxu YI, Xianghao XU, Yongxuan ZHANG
  • Information Security
  • LETTER
    Ziyuan LI, Huimei WANG, Jian LIU, Ming XIAN
  • RESEARCH ARTICLE
    Lei WU, Fuyou MIAO, Keju MENG, Xu WANG

    Secret sharing (SS) is one of the essential techniques in cryptography but still faces many challenges in efficiency and security. Currently, SS schemes based on the Chinese Remainder Theorem (CRT) are either low in information rate or complicated in construction. To solve these problems, 1) a simple construction of an ideal (t, n)-SS scheme is proposed based on the CRT for a polynomial ring. Compared with Ning's scheme, it is much more efficient at generating n pairwise coprime modular polynomials during the scheme construction phase; moreover, Shamir's scheme is a special case of ours. To further improve security, 2) a common-factor-based (t, n)-SS scheme is proposed in which all shareholders share a common polynomial factor. It enables both the verification of received shares and the establishment of a secure channel among shareholders during the reconstruction phase. As a result, the scheme is resistant to eavesdropping and modification attacks by outside adversaries.
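    The paper's constructions work over a polynomial ring; the integer analogue of CRT-based sharing (Mignotte-style) conveys the core idea, with shares being residues modulo pairwise coprime moduli. Note this sketch omits the threshold-gap conditions on the moduli that real schemes require for security:

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese Remainder Theorem: solve x = r_i (mod m_i) for
    pairwise coprime moduli, returning x mod prod(m_i)."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def mignotte_share(secret, moduli):
    """Share i is simply secret mod m_i. Any t shares whose moduli
    multiply to more than the secret recover it via CRT; real schemes
    additionally constrain the moduli so t-1 shares reveal nothing."""
    return [secret % m for m in moduli]
```

    With moduli 11, 13, 17, 19 and secret 1000, any three shares suffice, since the product of the three smallest moduli already exceeds the secret.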

  • RESEARCH ARTICLE
    Zhusen LIU, Zhenfu CAO, Xiaolei DONG, Xiaopeng ZHAO, Haiyong BAO, Jiachen SHEN

    Incorporating fog computing, with its low latency, preprocessing (e.g., data aggregation) and location awareness, can facilitate fine-grained collection of smart metering data in the smart grid and promote the grid's sustainability and efficiency. Recently, much attention has been paid to smart grid research, especially to privacy protection and data aggregation. However, most previous works do not address privacy-preserving data aggregation and function computation queries on enormous data simultaneously in a fog-based smart grid. In this paper, we construct a novel verifiable privacy-preserving data collection scheme supporting multi-party computation (MPC), named VPDC-MPC, to achieve both functions simultaneously in a smart grid based on fog computing. VPDC-MPC realizes verifiable secret sharing of users' data and data aggregation without revealing individual reports, via a practical cryptosystem and a verifiable secret sharing scheme. Besides, we propose an efficient algorithm for batch verification of share consistency and detection of erroneous reports if external adversaries modify the smart meters' (SMs') reports. Furthermore, VPDC-MPC allows both the control center and users with limited resources to obtain arbitrary arithmetic analyses (not only data aggregation) via secure multi-party computation between cloud servers in the smart grid. VPDC-MPC also tolerates cloud server faults and resists collusion. We present a security analysis and performance evaluation of our scheme, which indicate that, despite a tradeoff in computation and communication overhead, VPDC-MPC is practical with the above features.

  • RESEARCH ARTICLE
    Bingbing JIANG

    López-Alt et al. (STOC 2012) put forward a primitive called multi-key fully homomorphic encryption (MKFHE), in which each involved party encrypts their own data using keys that are chosen independently and randomly, whereby arbitrary computations can be performed on the encrypted data by a final collector. Subsequently, several superior schemes based on the standard learning with errors (LWE) assumption were proposed. Most of these schemes were constructed by expanding a fresh GSW- or BGV-ciphertext under a single key to a new same-type ciphertext of the same message under a combination of the associated parties' keys. Therefore, the new ciphertext's size grew more or less linearly with the number of parties. In this paper, we propose a novel and simple MKFHE scheme based on LWE that does not increase the ciphertext size, in the two non-colluding server model. In other words, each party first independently shares their own data between two servers, and each server needs only one round of communication with the other to construct a ciphertext of the same plaintext under the sum of the associated parties' keys. Our new ciphertext under multiple keys has the same size as the original one, with only one round of communication between the two servers. The communication complexity is O(km log q) bits, where k is the number of input ciphertexts involved, m is the size of a GSW-ciphertext and q is a modulus. Finally, we prove that our scheme is CPA-secure against semi-honest adversaries.
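    The two-server model rests on each party additively sharing its data between the servers so that the servers can combine shares locally; a toy sketch of just that sharing step (the homomorphic-encryption machinery is beyond a snippet, and the modulus here is an arbitrary placeholder) might be:

```python
import random

MOD = 2**32  # toy modulus standing in for the plaintext space

def share(value, rng):
    """Split `value` into two additive shares mod MOD; each share
    alone is uniformly random and reveals nothing about `value`."""
    r = rng.randrange(MOD)
    return r, (value - r) % MOD

def reconstruct(a, b):
    """Recombine the two servers' shares."""
    return (a + b) % MOD

rng = random.Random(42)
a1, b1 = share(1000, rng)   # party 1's shares for servers A and B
a2, b2 = share(234, rng)    # party 2's shares
# Each server adds its own shares locally; the local sums
# together reconstruct the sum of the parties' inputs.
assert reconstruct((a1 + a2) % MOD, (b1 + b2) % MOD) == 1234
```

    Additive homomorphism is exactly why local server-side sums work; the paper's contribution is layering this with LWE-based encryption so that arbitrary computation, not just addition, stays size-invariant.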

  • RESEARCH ARTICLE
    Zhixin ZENG, Xiaodi WANG, Yining LIU, Liang CHANG

    Data aggregation has been widely researched to address the privacy concerns that arise when data is published; however, data aggregation obtains only the sum or average over an area. In reality, more fine-grained data brings more value to data consumers, enabling, for example, more accurate management and dynamic price adjusting in the grid system. In this paper, a multi-subset data aggregation (MSDA) scheme for the smart grid is proposed without a trusted third party, in which the control center collects the number of users in different subsets and obtains the sum of electricity consumption in each subset, while individual users' data privacy is still preserved. In addition, a dynamic and flexible user management mechanism is guaranteed by the secret key negotiation process among users. The analysis shows that MSDA not only protects users' privacy against various attacks but also achieves additional functionality such as multi-subset aggregation, no reliance on any trusted third party, and dynamic user management. The performance evaluation demonstrates that MSDA is efficient and practical in terms of communication and computation overhead.

  • RESEARCH ARTICLE
    Lein HARN, Chingfang HSU, Zhe XIA

    A (t, n) threshold secret sharing scheme is a fundamental tool in many security applications such as cloud computing and multiparty computation. In conventional threshold secret sharing schemes, like Shamir's scheme based on a univariate polynomial, an additional key establishment scheme is needed for shareholders to protect the secrecy of their shares if secret reconstruction is performed over a network. Threshold changeable secret sharing (TCSS) allows the threshold to be a dynamic value during secret reconstruction, so that if some shares have been compromised at a given time, more shares are needed to reconstruct the secret. Recently, a secret sharing scheme based on a bivariate polynomial was proposed in which the shares generated initially by a dealer can be used not only to reconstruct the secret but also to protect the secrecy of shares when reconstruction is performed over a network. In this paper, we further extend this scheme into a TCSS without any modification. Our proposed TCSS is dealer-free and non-interactive. Shares generated by a dealer in our scheme serve three purposes: (a) to reconstruct the secret; (b) to protect the secrecy of shares if reconstruction is performed over a network; and (c) to enable the threshold changeable property.
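    For reference, the conventional univariate Shamir scheme that the bivariate construction generalizes takes only a few lines: shares are evaluations of a random degree-(t-1) polynomial, and any t of them recover the constant term by Lagrange interpolation (the field prime below is an arbitrary choice for illustration):

```python
import random

PRIME = 2**61 - 1  # field modulus; any prime larger than the secret works

def shamir_share(secret, t, n, seed=None):
    """Split `secret` into n shares with threshold t: evaluate a
    random degree-(t-1) polynomial with constant term `secret`
    at x = 1..n over GF(PRIME)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME); pass any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

    Any t-subset of the n shares reconstructs the secret, and fewer than t reveal nothing — which is exactly the property the bivariate extension must preserve while also securing share transmission over a network.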

  • Interdisciplinary
  • RESEARCH ARTICLE
    Suyu MEI

    Rapidly identifying protein complexes is significant for elucidating the mechanisms of macromolecular interactions and for further investigating the overlapping clinical manifestations of diseases. To date, existing computational methods mainly focus on developing unsupervised graph clustering algorithms, sometimes in combination with prior biological insights, to detect protein complexes from protein-protein interaction (PPI) networks. However, the outputs of these methods are potentially structural or functional modules within PPI networks. These modules do not necessarily correspond to actual protein complexes, which are formed via the spatiotemporal aggregation of subunits. In this study, we propose a computational framework that combines supervised learning and dense subgraph discovery to predict protein complexes. The proposed framework consists of two steps. The first step reconstructs genome-scale protein co-complex networks by training a supervised l2-regularized logistic regression model on experimentally derived co-complexed protein pairs; the second step infers hierarchical and balanced clusters as complexes from the co-complex networks via an effective but computationally intensive k-clique graph clustering method or an efficient maximum modularity clustering (MMC) algorithm. Empirical studies of cross-validation and independent tests show that both steps achieve encouraging performance. The proposed framework is fundamentally novel and excels over existing methods in that the complexes inferred from protein co-complex networks are more biologically relevant than those inferred from PPI networks, providing a new avenue for identifying novel protein complexes.