There is a wide range of interdisciplinary intersections between cyber security and artificial intelligence (AI). On one hand, AI technologies, such as deep learning, can be introduced into cyber security to construct smart models for malware classification, intrusion detection, and threat intelligence sensing. On the other hand, AI models face various cyber threats, which can disturb their samples, learning processes, and decisions. Thus, AI models need specific cyber security defense and protection technologies to combat adversarial machine learning, preserve privacy in machine learning, secure federated learning, etc. Based on these two aspects, we review the intersection of AI and cyber security. First, we summarize existing research efforts in combating cyber attacks using AI, including the adoption of traditional machine learning methods and existing deep learning solutions. Then, we analyze the counterattacks from which AI itself may suffer, dissect their characteristics, and classify the corresponding defense methods. Finally, from the aspects of constructing encrypted neural networks and realizing secure federated deep learning, we elaborate on the existing research on how to build a secure AI system.
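To make the adversarial machine learning threat mentioned above concrete, the sketch below crafts a fast-gradient-sign (FGSM-style) perturbation against a toy logistic classifier. The weights, input, and perturbation size are hypothetical; real attacks target deep models, but the mechanism — pushing the input along the sign of the loss gradient — is the same.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained logistic model: p(y=1|x) = sigmoid(w.x + b)
w = [3.0, -2.0]
b = 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if sigmoid(z) >= 0.5 else 0

def fgsm(x, y, eps):
    """Perturb x in the direction that increases the loss for true label y.

    For logistic loss, d(loss)/dx = (p - y) * w; FGSM adds
    eps * sign(gradient) to each feature.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [0.5, 0.2]           # correctly classified as class 1
x_adv = fgsm(x, 1, 0.5)  # a small perturbation flips the decision
```

A perturbation of only 0.5 per feature is enough to flip this toy model's decision, which is exactly the fragility that adversarial-robustness defenses try to remove.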
The Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR), supported by the National Institute of Standards and Technology (NIST), is an ongoing project calling for submissions of authenticated encryption (AE) schemes. The competition aims at enhancing both the design of AE schemes and the related analysis. The design goal is to pursue new AE schemes that are more secure than the advanced encryption standard with Galois/counter mode (AES-GCM) and can simultaneously achieve three design aspects: security, applicability, and robustness. The competition comprises three rounds, with the final round concluding in 2018. In this survey paper, we first introduce the requirements of the proposed designs and the progress of candidate screening in the CAESAR competition. Second, the candidate AE schemes in the final round are classified according to their design structures and encryption modes. Third, comprehensive performance and security evaluations are conducted on these candidates. Finally, research trends in the design and analysis of AE schemes are discussed.
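To illustrate the interface an AE scheme such as AES-GCM provides, here is a toy encrypt-then-MAC construction built only from Python's standard library: a SHA-256-based keystream for confidentiality and HMAC-SHA256 for the authentication tag. This is a pedagogical sketch of the encrypt/verify contract, not a secure cipher and not any CAESAR candidate.

```python
import hashlib, hmac

def _keystream(key, nonce, length):
    """Toy keystream: SHA-256 over key||nonce||counter (NOT a real cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def ae_encrypt(enc_key, mac_key, nonce, plaintext, aad=b""):
    """Encrypt-then-MAC: the tag covers nonce, associated data, and ciphertext."""
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    return ct, tag

def ae_decrypt(enc_key, mac_key, nonce, ct, tag, aad=b""):
    expect = hmac.new(mac_key, nonce + aad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):   # verify before decrypting
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Flipping a single ciphertext or associated-data bit makes decryption fail, which is the "authenticated" half of authenticated encryption.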
Nowadays, cyberspace has become a vital part of social infrastructure. With the rapid growth in the scale of networks, applications and services have become richer, and the load-bearing function of the underlying network devices (such as switches and routers) has also been extended. To promote a dynamic architecture, high-level security, and high quality of service in the network, separation of the control plane from the forwarding plane is a development trend in networking technology. Currently, software-defined networking (SDN) is one of the most popular and promising such technologies. In SDN, high-level control strategies are deployed on dedicated controllers, which guide the data forwarding of the network equipment. This reduces many complicated functions of the network equipment and improves the flexibility and operability of implementing and deploying new network technologies and protocols. However, this novel networking technology faces new challenges in terms of architecture and security. The aim of this study is to offer a comprehensive review of the state-of-the-art research on novel advances in programmable SDN, and to highlight what has been investigated and what remains to be addressed, particularly in terms of architecture and security.
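The control/forwarding separation described above can be sketched as a controller installing match-action rules into a switch's flow table, with table misses punted back to the controller. The rule format and field names below are simplified assumptions, not any real OpenFlow encoding.

```python
class Switch:
    """Toy SDN switch: forwards only according to controller-installed rules."""
    def __init__(self):
        self.flow_table = []   # entries: (priority, match_fields, action)

    def install_rule(self, priority, match, action):
        """Called by the controller to push down a high-level policy."""
        self.flow_table.append((priority, match, action))
        self.flow_table.sort(key=lambda r: -r[0])   # highest priority first

    def handle(self, packet):
        for _, match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"   # table miss: ask the controller

sw = Switch()
sw.install_rule(10, {"dst": "10.0.0.2"}, "forward:port2")
sw.install_rule(100, {"dst": "10.0.0.2", "tcp_port": 23}, "drop")  # security policy
```

The switch itself contains no routing logic; policy changes (such as the higher-priority drop rule) are purely a matter of what the controller installs, which is the flexibility the abstract describes.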
With more large-scale scientific computing tasks being delivered to cloud computing platforms, cloud workflow systems are designed for managing and arranging these complicated tasks. However, the multi-tenant coexistence service mode of cloud computing brings serious security risks that threaten the normal execution of cloud workflows. To strengthen the security of cloud workflows, a mimic cloud computing task execution system for scientific workflows is proposed. The idea of mimic defense comprises three main aspects: heterogeneity, redundancy, and dynamics. For heterogeneity, the diversity of physical servers, hypervisors, and operating systems is exploited to build a robust system framework. For redundancy, each sub-task of the workflow is executed simultaneously by multiple executors. To balance efficiency and security, a delayed decision mechanism is proposed to check the results of task execution. For dynamics, a dynamic task scheduling mechanism is devised for switching the workflow execution environment and shortening the life cycle of executors, which can confuse adversaries and purify task executors. Experimental results show that the proposed system can effectively strengthen the security of cloud workflow execution.
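The redundancy and delayed-decision ideas can be sketched as follows: each sub-task runs on several heterogeneous executors, and the checker accepts a result as soon as enough of the outputs collected so far agree, flagging dissenting executors as possibly compromised. The executor names and quorum threshold below are illustrative, not the paper's parameters.

```python
from collections import Counter

def delayed_decision(results, quorum):
    """Accept the first value reported by at least `quorum` executors.

    `results` is an iterable of (executor_id, value) pairs in arrival
    order; returns (value, suspects), where suspects disagreed with
    the accepted value.
    """
    counts = Counter()
    seen = []
    for executor, value in results:
        seen.append((executor, value))
        counts[value] += 1
        if counts[value] >= quorum:
            suspects = [e for e, v in seen if v != value]
            return value, suspects
    raise RuntimeError("no quorum reached; rerun on fresh executors")

# Three heterogeneous executors run the same sub-task; one is compromised.
value, suspects = delayed_decision(
    [("kvm_linux", 42), ("xen_bsd", 41), ("vmware_windows", 42)], quorum=2)
```

Because the decision is delayed only until a quorum forms, the checker does not have to wait for the slowest executor, which is the efficiency/security trade-off the abstract mentions; flagged suspects can then be recycled by the dynamic scheduling mechanism.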
The rapid growth of mobile Internet technologies has induced a dramatic increase in mobile payments, as well as concomitant mobile transaction fraud. As the first step of mobile transactions, bankcard enrollment on mobile devices has become the primary target of fraud attempts. Although no immediate financial loss is incurred by a fraud attempt, subsequent fraudulent transactions can be executed quickly and can easily deceive fraud detection systems if the attempt succeeds at the bankcard enrollment step. In recent years, financial institutions and service providers have implemented rule-based expert systems and adopted short message service (SMS) user authentication to address this problem. However, these solutions are inadequate in the face of data loss and social engineering. In this study, we introduce several traditional machine learning algorithms and finally choose XGBoost, an optimized software library implementing the gradient boosting decision tree (GBDT) algorithm, for use in a real system. We further expand multiple features based on an analysis of enrollment behavior and plan to add historical transactions in future studies. Subsequently, we use a real card enrollment dataset covering the year 2017, provided by a worldwide payment processor. The results and framework have been adopted and absorbed into a new design for a mobile payment fraud detection system within the Chinese payment processor.
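As a rough illustration of the GBDT idea behind XGBoost (which adds regularization, second-order gradients, and heavy engineering on top), the sketch below boosts one-feature regression stumps under logistic loss on toy data. It is not the paper's model, features, or dataset — just the core "fit stumps to residuals" loop.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_stump(xs, residuals):
    """Fit a one-feature regression stump minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x < t]
        right = [r for x, r in zip(xs, residuals) if x >= t]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda x, t=t, lv=lv, rv=rv: lv if x < t else rv

def train_gbdt(xs, ys, rounds=20, lr=0.5):
    """Gradient boosting: each stump fits the current logistic-loss residuals."""
    f = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - sigmoid(fi) for y, fi in zip(ys, f)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        f = [fi + lr * stump(x) for fi, x in zip(f, xs)]
    def predict(x):
        score = sum(lr * s(x) for s in stumps)
        return 1 if sigmoid(score) >= 0.5 else 0
    return predict

# Toy "enrollment" data: a single risk feature separates fraud (1) from benign (0).
predict = train_gbdt(list(range(10)), [0] * 5 + [1] * 5)
```

In the real system, each stump would split on engineered enrollment-behavior features rather than a single toy variable, and XGBoost handles the ensemble far more efficiently.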
The current boom in the Internet of Things (IoT) is changing daily life in many ways, from wearable devices to connected vehicles and smart cities. Fog computing used to be regarded as a mere extension of cloud computing, but it is now becoming an ideal solution for transmitting and processing large-scale geo-distributed big data. We propose a Byzantine fault-tolerant networking method and two resource allocation strategies for IoT fog computing. We aim to build a secure fog network, called “SIoTFog,” that tolerates Byzantine faults and improves the efficiency of transmitting and processing IoT big data. We consider two cases, one with a single Byzantine fault and one with multiple faults, to compare performance under different degrees of risk. We choose latency, the number of forwarding hops during transmission, and device utilization rates as the metrics. The simulation results show that our methods help achieve an efficient and reliable fog network.
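A standard counting result in Byzantine fault tolerance is that tolerating f faulty nodes requires at least 3f + 1 replicas, with each decision backed by a quorum of 2f + 1 matching replies. The helpers below encode only this counting rule as background for the single-fault and multi-fault cases above; the SIoTFog routing and resource allocation strategies themselves are not reproduced here.

```python
from collections import Counter

def min_replicas(f):
    """Minimum replica count to tolerate f Byzantine faults (n >= 3f + 1)."""
    return 3 * f + 1

def decide(replies, f):
    """Return the reply value backed by a 2f+1 quorum, or None if no quorum.

    With at most f faulty repliers, a value reported 2f+1 times must have
    been reported by at least f+1 correct nodes, so it is safe to act on.
    """
    quorum = 2 * f + 1
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= quorum else None
```

For the single-fault case (f = 1) this gives the familiar 4-replica, 3-reply configuration; the multi-fault case simply scales both numbers up, which is one reason metrics like device utilization matter.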
Private set intersection (PSI) allows two parties to compute the intersection of their private sets while revealing nothing except the intersection. With the development of fog computing, the need has arisen to delegate PSI on outsourced datasets to the fog. However, the existing PSI schemes are based on either fully homomorphic encryption (FHE) or pairing computation, both of which consume huge amounts of computational resources; it is therefore untenable for resource-limited clients to carry out these operations. Furthermore, these PSI schemes cannot be applied to fog computing because of inherent problems such as unacceptable latency and lack of mobility support. To resolve these problems, we first propose a novel primitive called “faster fog-aided private set intersection with integrity preserving,” in which the fog conducts delegated intersection operations over encrypted data without the capacity to decrypt. One of our technical highlights is to greatly reduce the computation cost by eliminating the FHE and pairing computations. Then we present a concrete construction and prove its security under certain cryptographic assumptions. Finally, we provide a detailed theoretical analysis and simulation, and compare the results with those of state-of-the-art schemes in two respects: communication overhead and computation overhead. The theoretical analysis and simulation show that our scheme is more efficient and practical.
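To show the delegated-PSI setting at a glance, here is a much-simplified sketch: both clients tag their elements with a shared-key PRF (HMAC here) and upload only the opaque tags, so an honest-but-curious fog can compute the intersection without ever learning the elements. This omits the integrity-preserving machinery and the security analysis of the actual scheme; the shared-key setup and all names are assumptions for illustration.

```python
import hmac, hashlib

def tag_set(shared_key, elements):
    """Map each element to a keyed-hash tag; the fog only ever sees tags."""
    return {hmac.new(shared_key, e.encode(), hashlib.sha256).hexdigest(): e
            for e in elements}

def fog_intersect(tags_a, tags_b):
    """Delegated intersection over opaque tags (no decryption capacity)."""
    return set(tags_a) & set(tags_b)

key = b"shared-prf-key"   # assumed pre-agreed between the two clients
client_a = tag_set(key, {"ann", "bob", "eve"})
client_b = tag_set(key, {"bob", "eve", "zoe"})
# Either client maps the returned tags back to its own elements.
common = {client_a[t] for t in fog_intersect(client_a, client_b)}
```

The heavy lifting (set intersection) happens at the fog over tags alone, while the clients perform only cheap symmetric-key operations — mirroring, in spirit, the paper's goal of eliminating FHE and pairing computations on resource-limited clients.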