The Internet of Things (IoT) has led to rapid growth in smart cities. However, IoT botnet-based attacks against smart city systems are becoming more prevalent. Detection methods for IoT botnet-based attacks have been the subject of extensive research, but identifying early-stage botnet behaviour before an attack is launched, which could prevent the attack altogether, remains largely unexplored. The few studies that address early-stage IoT botnet detection rely on monolithic deep learning algorithms, which can require more time for training and detection. We instead propose an edge-based deep learning system for detecting the early stages of IoT botnets in smart cities. The proposed system, which we call EDIT (Edge-based Detection of early-stage IoT Botnet), detects abnormalities in network communication traffic caused by early-stage IoT botnets using the modular neural network (MNN) method at multi-access edge computing (MEC) servers. The MNN can improve detection accuracy and efficiency by leveraging parallel computing on MEC. Our findings show that EDIT has a lower false-negative rate than a monolithic approach and other studies, and that it detects an IoT botnet in as little as 16 ms at the MEC server.
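To make the modular idea concrete, the following is a minimal sketch of an MNN-style detector in PyTorch: several expert sub-networks each score a different slice of the flow features, and a small combiner fuses their outputs, so the experts can run in parallel on a MEC server. The feature split, layer sizes, and two-class output are illustrative assumptions, not EDIT's actual architecture.

```python
# Minimal sketch of a modular neural network (MNN) for flow-feature
# classification, in the spirit of EDIT. Architecture details, feature
# split, and sizes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class ExpertModule(nn.Module):
    """One expert scoring a subset of traffic features; experts can
    run in parallel on a MEC server."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2)          # benign vs. early-stage botnet
        )
    def forward(self, x):
        return self.net(x)

class ModularDetector(nn.Module):
    def __init__(self, feature_splits):
        super().__init__()
        self.splits = feature_splits      # list of (start, end) feature ranges
        self.experts = nn.ModuleList(
            ExpertModule(end - start) for start, end in feature_splits)
        self.combiner = nn.Linear(2 * len(feature_splits), 2)
    def forward(self, x):
        logits = [m(x[:, s:e]) for m, (s, e) in zip(self.experts, self.splits)]
        return self.combiner(torch.cat(logits, dim=1))

# Hypothetical usage: 24 flow features split across three experts.
model = ModularDetector([(0, 8), (8, 16), (16, 24)])
scores = model(torch.randn(4, 24))        # batch of 4 flows
print(scores.argmax(dim=1))
```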
A Link Flooding Attack (LFA) is a special type of Denial-of-Service (DoS) attack in which the attacker sends a huge number of requests to exhaust the capacity of a link on the path over which traffic reaches a server. As a result, user traffic cannot reach the server, causing DoS and degradation of Quality-of-Service (QoS). Because the attack traffic never reaches the victim itself, the victim finds it hard to protect its legitimate traffic on its own. The victim can, however, protect its legitimate traffic by using a special type of router called a filter router (FR). An FR can receive filters from the victim and apply them to block a link incident to it. An FR also probabilistically appends its own IP address to packets it forwards, and the victim uses that information to discover the traffic topology. By analyzing traffic rates and paths, the victim identifies links that may be congested. The victim then needs to select some of these possible congested links (PCLs) and send a filter to the corresponding FR so that legitimate traffic avoids congested paths. In this paper, we formulate two optimization problems for blocking the least number of PCLs so that legitimate traffic goes through a non-congested path. The first problem considers the scenario where every user has at least one non-congested shortest path; the second extends it to a scenario where some users' shortest paths are all congested. We transform the original problem into the vertex separation problem to find the links to block, and we conduct extensive simulations on a custom-built Java multi-threaded simulator to support our solutions.
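As a rough illustration of the first problem, the sketch below uses a greedy set-cover-style heuristic of our own: it repeatedly blocks the PCL that lies on the most remaining congested shortest paths until every user's congested paths are cut, so routing falls back to a non-congested shortest path. This is not the paper's vertex-separation reduction, only a feel for the selection problem.

```python
# Greedy illustration of the first problem: block few PCLs so every
# user falls back to a non-congested shortest path. A set-cover-style
# heuristic for illustration, not the paper's vertex-separation method.

def select_pcls_to_block(congested_paths_per_user):
    """congested_paths_per_user: {user: [frozenset_of_links, ...]}
    Every listed congested path must lose at least one link so that
    routing falls back to a non-congested shortest path."""
    uncovered = {(u, p) for u, paths in congested_paths_per_user.items()
                 for p in paths}
    blocked = set()
    while uncovered:
        # Pick the PCL appearing on the most still-unblocked paths.
        counts = {}
        for _, path in uncovered:
            for link in path:
                counts[link] = counts.get(link, 0) + 1
        best = max(counts, key=counts.get)
        blocked.add(best)
        uncovered = {(u, p) for u, p in uncovered if best not in p}
    return blocked

# Hypothetical topology: two users with congested shortest paths.
demo = {
    "u1": [frozenset({"l1", "l2"}), frozenset({"l2", "l3"})],
    "u2": [frozenset({"l2", "l4"})],
}
print(select_pcls_to_block(demo))   # {'l2'} cuts all congested paths
```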
With increasing global mobile data traffic and daily user engagement, technologies such as mobile crowdsensing benefit hugely from the constant data flows from smartphone and IoT owners. However, device users, as data owners, urgently require a secure and fair marketplace in which to negotiate with data consumers. In this paper, we introduce a novel federated data acquisition market that consists of a group of local data aggregators (LDAs), a number of data owners, and one data union that coordinates the data trade with data consumers. Data consumers offer each data owner an individual price to stimulate participation. The mobile data owners naturally cooperate by gossiping about individual prices with each other, which also leads to price fluctuation. Analysing the interactions among data owners and data consumers with traditional game theory is challenging due to the complex price dynamics in a large-scale heterogeneous data acquisition scenario. Hence, we propose a data pricing strategy based on mean-field game (MFG) theory to model the data owners' cost under these price dynamics. We then investigate the interactions among the LDAs by using the distribution of prices, namely the mean-field term, and solve the proposed pricing strategy with a numerical method. The evaluations demonstrate that the proposed pricing strategy efficiently allows data owners from multiple LDAs to reach an equilibrium on the quantity of data to sell under the current individual price scheme. The results further demonstrate that the influential LDAs determine the final price distribution, and that cooperation among mobile data owners leads to optimal social welfare even with the additional cost of information exchange.
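The toy sketch below conveys the mean-field flavour of the pricing problem under strong simplifying assumptions (a quadratic cost and a linear gossip update, both our own, not the paper's model): each owner's price relaxes toward the population mean, which plays the role of the mean-field term, and each owner best-responds with a profit-maximizing quantity.

```python
# Toy numerical sketch: gossip pulls each owner's price toward the
# population mean (the mean-field term), and each owner best-responds.
# The quadratic cost and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
prices = rng.uniform(0.5, 1.5, size=1000)   # individual offered prices
c = 0.8                                     # marginal cost coefficient

for step in range(200):
    mean_price = prices.mean()              # mean-field term
    prices += 0.05 * (mean_price - prices)  # gossip: relax toward the mean

# Best response of owner i: maximize p_i * q - c * q**2  =>  q* = p_i / (2c)
quantities = prices / (2 * c)
print(f"price spread after gossip: {prices.std():.4f}")
print(f"equilibrium mean quantity: {quantities.mean():.4f}")
```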
Attention deficit disorder is a frequently observed symptom in individuals with autism spectrum disorder (ASD). This condition can present significant obstacles for those affected, manifesting as challenges in sustaining focus, completing tasks, and managing distractions. These issues can impede learning, social interactions, and daily functioning, underscoring the need for tailored approaches in both educational and therapeutic settings to support individuals with ASD effectively. In this study, we expanded upon our initial virtual reality (VR) prototype, originally created for attention therapy, to conduct a detailed statistical analysis. Our objective was to identify and measure significant differences in attention-related outcomes between sessions and groups. We found that heart rate (HR) and electrodermal activity (EDA) were more responsive to attention shifts than temperature. The ‘Noise’ and ‘Score’ strategies significantly affected eye openness, with the ASD group showing more responsiveness. The control group had smaller pupil sizes, and the ASD group’s pupil size increased notably when switching strategies in Session 1. Distraction log data showed that both the ‘Noise’ and ‘Object Opacity’ strategies influenced attention patterns, while the ‘Red Vignette’ strategy showed a significant effect only in the ASD group. The responsiveness of HR and EDA to attention shifts and the changes in pupil size could serve as valuable physiological markers to monitor and guide these interventions. These findings add to the evidence that VR can help those with ASD, allowing for more tailored, personalized interventions with meaningful impact.
Federated Learning (FL) is currently a widely used collaborative learning framework whose distinguishing feature is that the clients involved in training do not need to share raw data; they only transfer model parameters to share knowledge, ultimately producing a global model with improved performance. However, recent studies have found that sharing model parameters may still lead to privacy leakage: local training data can be reconstructed from the shared parameters, threatening individual privacy and security. We observe that most current attacks aim at client-specific data reconstruction, while limited attention is paid to information leakage from the global model. In our work, we propose a novel FL attack based on shared model parameters that deduces the data distribution behind the global model. Unlike other FL attacks that aim to infer individual clients’ raw data, the data distribution inference attack proposed in this work shows that attackers can deduce the data distribution information behind the global model. We argue that such information is valuable, since the training data behind a well-trained global model represents the common knowledge of a specific task, such as in social network and e-commerce applications. To implement the attack, our key idea is to adopt a deep reinforcement learning approach to guide the attack process, where the RL agent adjusts the pseudo-data distribution automatically until it is similar to the ground-truth data distribution. With a carefully designed Markov decision process (MDP), our implementation achieves stable attack performance, and experimental results verify the effectiveness of the proposed inference attack.
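To illustrate the search loop, the sketch below replaces the deep RL agent with a simple greedy perturbation policy and stubs the reward with a distance to a hidden ground-truth distribution; in the actual attack the reward would be derived by comparing behaviour induced by pseudo-data against the shared global model. All numerical details here are illustrative assumptions.

```python
# Toy sketch of the guided search: an agent perturbs a pseudo class
# distribution and keeps changes that increase a similarity reward.
# A stub oracle stands in for the model-comparison reward signal.
import numpy as np

rng = np.random.default_rng(1)
true_dist = np.array([0.5, 0.3, 0.2])        # hidden ground truth (stub)

def reward(pseudo):
    # Stand-in for model-behaviour similarity: negative total variation.
    return -0.5 * np.abs(pseudo - true_dist).sum()

pseudo = np.full(3, 1 / 3)                   # start from uniform
best_r = reward(pseudo)
for step in range(2000):
    i, j = rng.choice(3, size=2, replace=False)
    candidate = pseudo.copy()
    delta = min(0.02, candidate[i])          # move a little mass i -> j
    candidate[i] -= delta
    candidate[j] += delta
    r = reward(candidate)
    if r > best_r:                           # greedy accept, standing in
        pseudo, best_r = candidate, r        # for the RL agent's policy

print(np.round(pseudo, 3))                   # approaches [0.5, 0.3, 0.2]
```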
Fine-tuning is a popular approach to the few-shot object detection problem. In this paper, we introduce a new perspective on it: we formulate few-shot novel tasks as distributions shifted from their ground-truth distribution. We introduce the concept of imaginary placeholder masks to show that this distribution shift is essentially a composite of in-distribution (ID) and out-of-distribution (OOD) shifts. Our empirical results show that it is crucial to balance the trade-off between adapting to the available few-shot distribution and keeping the distribution-shift robustness of the pre-trained model. We explore improvements to few-shot fine-tuning transfer in few-shot object detection (FSOD) settings from three aspects. First, we explore the LinearProbe-Finetuning (LP-FT) technique to balance this trade-off and mitigate the feature distortion problem. Second, we explore a protective freezing strategy for query-based object detectors to keep their OOD robustness. Third, we utilize ensembling methods to circumvent feature distortion. All these techniques are integrated into a single method called BIOT (Balanced ID-OOD Transfer). Evaluation results show that our method is simple yet effective and general in tapping the FSOD potential of query-based object detectors; it outperforms the current SOTA method in many FSOD settings and has promising scaling capability.
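A minimal LP-FT sketch in PyTorch, assuming a toy stand-in backbone and random data: the head is first trained on frozen features (linear probing), then everything is unfrozen and fine-tuned at a smaller learning rate so the probed head anchors the features and limits feature distortion.

```python
# Minimal LP-FT sketch: linear probe on frozen features, then full
# fine-tuning at a smaller learning rate. The backbone and data are
# placeholders, not the paper's query-based detector.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # stand-in encoder
head = nn.Linear(64, 5)                                    # novel-class head

def train(params, lr, steps):
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        x, y = torch.randn(16, 128), torch.randint(0, 5, (16,))
        loss = nn.functional.cross_entropy(head(backbone(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 1: linear probe -- freeze the pre-trained backbone.
for p in backbone.parameters():
    p.requires_grad = False
train(head.parameters(), lr=1e-2, steps=100)

# Stage 2: fine-tune -- unfreeze everything; the smaller learning rate
# and the already-probed head help limit feature distortion.
for p in backbone.parameters():
    p.requires_grad = True
train(list(backbone.parameters()) + list(head.parameters()),
      lr=1e-3, steps=100)
```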
The potential of cloud computing, an emerging paradigm for minimizing the costs associated with computing, has recently drawn the interest of many researchers. Rapid advances in cloud computing techniques have led to a proliferation of cloud services, but data security remains a challenging issue. The main concerns in cloud computing are security and effective distribution of cloud resources over the network. Strengthening data privacy with encryption methods is the leading approach and has progressed substantially in recent years; in this context, sanitization is another process for preserving data confidentiality. The goal of this work is to present a deep learning-assisted data sanitization procedure for data security. The proposed process involves the following steps: data preprocessing, optimal key generation, deep learning-assisted key fine-tuning, and the Kronecker product. Data preprocessing considers the original data as well as extracted statistical features. Key generation is the subsequent step, for which a self-adaptive Namib beetle optimization (SANBO) algorithm is developed in this research. Among the generated keys, appropriate keys are fine-tuned by an improved Deep Maxout classifier, and the Kronecker product is then applied in the sanitization step. Reversing the sanitization procedure yields the original data during the restoration phase. Our analysis shows that the proposed sanitization technique secures cloud data against malicious attacks; we also evaluate the proposed work in terms of restoration effectiveness and key sensitivity.
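The sketch below illustrates only the Kronecker-product step with NumPy, under our own simplifying assumptions: the sanitization key is the Kronecker product of two small invertible key matrices, data is masked by multiplication, and restoration applies the inverse. The paper's SANBO key generation and deep learning fine-tuning stages are omitted.

```python
# Illustrative Kronecker-product sanitization: mask a data block with
# the Kronecker product of two small key matrices and restore it with
# the inverse. Key shapes and the masking rule are assumptions.
import numpy as np

rng = np.random.default_rng(7)
data = rng.random((4, 4))                 # data block to sanitize

k1 = rng.random((2, 2)) + 2 * np.eye(2)   # small invertible key matrices
k2 = rng.random((2, 2)) + 2 * np.eye(2)
K = np.kron(k1, k2)                       # 4x4 sanitization key

sanitized = K @ data                      # sanitization
restored = np.linalg.inv(K) @ sanitized   # restoration reverses it

print(np.allclose(restored, data))        # True: restoration is lossless
```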
In the Internet of Things (IoT), a large number of devices are connected through a variety of communication technologies to ensure that they can communicate both physically and over the network. However, such systems face the challenge of a single point of failure, and a malicious user may forge a device identity to gain access and jeopardize system security. In addition, devices collect and transmit sensitive data that can be accessed or stolen by unauthorized users, leading to privacy breaches that pose a significant risk to both the confidentiality of user information and the integrity of devices. To solve these problems and realize secure data transmission, this paper proposes EBIAS, a secure and efficient blockchain-based identity authentication scheme designed for IoT devices. First, EBIAS combines the Elliptic Curve Cryptography (ECC) algorithm and the SHA-256 algorithm to achieve encrypted communication of sensitive data. Second, EBIAS integrates blockchain to tackle the single point of failure and ensure the integrity of sensitive data. Finally, we perform a security analysis and conduct extensive experiments. The analysis and experimental results demonstrate that EBIAS improves security and performance compared with previous schemes, further demonstrating its feasibility and effectiveness.
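One plausible realization of the ECC-plus-SHA-256 ingredient, sketched with Python's `cryptography` package: an ECDH exchange yields a shared secret, HKDF-SHA256 derives a session key, and AES-GCM protects the payload. The curve, KDF, and cipher choices are our assumptions; EBIAS's exact construction may differ, and the blockchain integrity step is omitted.

```python
# Sketch of ECDH key agreement with an HKDF-SHA256 derived key and
# AES-GCM payload protection. Primitive choices are assumptions made
# for illustration, not EBIAS's exact construction.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each party holds an ECC key pair (curve choice is an assumption).
device_key = ec.generate_private_key(ec.SECP256R1())
gateway_key = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides derive the same secret from the peer's public key.
shared = device_key.exchange(ec.ECDH(), gateway_key.public_key())
aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"ebias-session").derive(shared)

# Encrypt a sensor reading; a SHA-256 digest of the record could then
# be anchored on-chain for integrity (blockchain step omitted here).
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"temp=21.5C", None)
plaintext = AESGCM(aes_key).decrypt(nonce, ciphertext, None)
print(plaintext)
```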
The combination of blockchain and Internet of Things technology has made significant progress in smart agriculture, providing substantial support for data sharing and data privacy protection. Nevertheless, achieving efficient interactivity and privacy protection for agricultural data remains a crucial issue. To address these problems, we propose a blockchain-assisted federated learning-driven support vector machine (BAFL-SVM) framework to realize efficient data sharing and privacy protection. BAFL-SVM is composed of the FedSVM-RiceCare module and the FedPrivChain module. Specifically, in FedSVM-RiceCare, we utilize federated learning and SVMs to train the model, improving recognition accuracy. Then, in FedPrivChain, we adopt homomorphic encryption and a secret-sharing scheme to encrypt the local model parameters before uploading them. Finally, we conduct extensive experiments on a real-world dataset of rice pests and diseases; the results show that our framework not only guarantees secure data sharing but also achieves higher recognition accuracy than other schemes.
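The snippet below sketches just the secret-sharing ingredient with additive shares: each client splits its local SVM weight vector into random shares that sum to the original, so an aggregator can recover only the sum. The homomorphic encryption layer and blockchain interaction of FedPrivChain are omitted, and all values are toy assumptions.

```python
# Minimal additive secret-sharing sketch for model parameters: shares
# sum to the original vector, so only the aggregate is ever revealed.
import numpy as np

rng = np.random.default_rng(3)

def make_shares(vec, n_shares):
    shares = [rng.normal(size=vec.shape) for _ in range(n_shares - 1)]
    shares.append(vec - sum(shares))      # shares sum exactly to vec
    return shares

# Two clients' local SVM weight vectors (toy values).
w1, w2 = np.array([0.2, -0.5, 1.0]), np.array([0.4, 0.1, -0.6])
shares1, shares2 = make_shares(w1, 3), make_shares(w2, 3)

# Aggregator sums the shares it receives; individual vectors stay hidden.
aggregate = sum(shares1) + sum(shares2)
print(np.allclose(aggregate, w1 + w2))    # True
```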
In this paper, we present a consensus mechanism designed specifically for supply chain blockchains, with a core focus on establishing trust among participating stakeholders through a novel reputation-based approach. Prevailing consensus mechanisms, initially crafted for cryptocurrency applications, are unsuitable for the unique dynamics of supply chain systems. Unlike the broad inclusivity of cryptocurrency networks, our proposed mechanism requires stakeholder participation rooted in process-specific quality criteria, making the delineation of roles for supply chain participants within the consensus process paramount. While reputation serves as a well-established quality parameter in various domains, its impact on non-cryptocurrency consensus mechanisms remains largely unexplored. Moreover, recognizing the primary role of efficient block verification in blockchain-enabled supply chains, our work introduces a comprehensive reputation model that selects a leader node to orchestrate the entire block mining process within the consensus. Additionally, we introduce a Schnorr multisignature-based block verification mechanism integrated into the proposed consensus model. Rigorous experiments evaluate the performance and feasibility of the proposed consensus mechanism, contributing valuable insights to the evolving landscape of blockchain technology in supply chain applications.
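A toy sketch of reputation-gated leader selection: stakeholders below a process-specific quality threshold are excluded, the leader that orchestrates block mining is drawn with probability proportional to reputation, and reputation is nudged after verification. The threshold and update rule are illustrative assumptions, not the paper's reputation model.

```python
# Toy reputation-based leader selection for a supply chain consensus.
# Threshold, weights, and the update rule are illustrative assumptions.
import random

random.seed(42)
nodes = {"supplier": 0.82, "manufacturer": 0.91,
         "distributor": 0.55, "retailer": 0.74}
THRESHOLD = 0.6                    # process-specific quality criterion

# Only stakeholders meeting the quality criterion may lead.
eligible = {n: r for n, r in nodes.items() if r >= THRESHOLD}
leader = random.choices(list(eligible),
                        weights=list(eligible.values()))[0]
print("leader:", leader)

# After successful block verification, nudge the leader's reputation
# upward (a simple multiplicative update assumed for illustration).
nodes[leader] = min(1.0, nodes[leader] * 1.05)
```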
With the rapid advancement of cloud technologies, cloud services have contributed enormously to the application development life-cycle. In this context, Kubernetes has played a pivotal role as a cloud computing tool, enabling developers to adopt efficient and automated deployment strategies. Using Kubernetes as an orchestration tool and a cloud computing system as the infrastructure manager, developers can accelerate the development and deployment process. With cloud providers such as GCP, AWS, Azure, and Oracle offering Kubernetes services, both x86 and ARM platforms are readily available. However, while x86 currently dominates the market, ARM-based solutions have seen limited adoption, with only a few practitioners actively working on ARM deployments. This study explores the efficiency and cost-effectiveness of running Kubernetes on different CPU platforms. By comparing the performance of x86 and ARM platforms, this research seeks to ascertain whether transitioning to ARM is a more advantageous option for Kubernetes deployments. Through a comprehensive evaluation of scalability, cost, and overall performance, this study sheds light on the viability of ARM-based Kubernetes deployments.
With the rapid advancement of virtual reality, dynamic gesture recognition has become an indispensable technique for human-computer interaction in virtual environments. Recognizing dynamic gestures is challenging due to the hand's high degrees of freedom, individual differences among users, and variations in gesture space. To address the low recognition accuracy of existing networks, we propose an improved dynamic gesture recognition algorithm based on the ResNeXt architecture. The algorithm employs three-dimensional convolution to capture the spatiotemporal features intrinsic to dynamic gestures. Additionally, a lightweight convolutional attention mechanism is introduced to sharpen the model's focus and improve its accuracy in identifying dynamic gestures; this mechanism not only augments the model's precision but also speeds up convergence during training. To further optimize performance, a deep attention submodule is added to the convolutional attention module to strengthen the network's temporal feature extraction. Empirical evaluations on the EgoGesture and NvGesture datasets show that the proposed model reaches dynamic gesture recognition accuracies of 95.03% and 86.21%, respectively; in RGB mode, the accuracies are 93.49% and 80.22%. These results underscore the effectiveness of the proposed algorithm in recognizing dynamic gestures with high accuracy, showcasing its potential for advanced human-computer interaction systems.
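The PyTorch sketch below shows the general shape of a lightweight channel-attention block on 3D (spatiotemporal) features: global pooling squeezes each channel to a scalar, a small bottleneck produces per-channel weights, and the feature map is reweighted. The exact module layout and the deep attention submodule in the paper may differ; sizes here are assumptions.

```python
# Sketch of lightweight channel attention on 3D features, in the
# spirit of the convolutional attention the abstract describes.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)       # squeeze T, H, W
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):                          # x: (N, C, T, H, W)
        w = self.fc(self.pool(x).flatten(1))       # per-channel weights
        return x * w.view(x.size(0), -1, 1, 1, 1)  # reweight channels

# Hypothetical usage on a 16-frame 32x32 clip with 64 channels.
block = nn.Sequential(nn.Conv3d(3, 64, 3, padding=1),
                      ChannelAttention3D(64))
out = block(torch.randn(2, 3, 16, 32, 32))
print(out.shape)                  # torch.Size([2, 64, 16, 32, 32])
```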
As network and information systems become widely adopted across industries, cybersecurity concerns have grown more prominent. Among these concerns, insider threats are considered particularly covert and destructive. Insider threats refer to malicious insiders exploiting privileged access to networks, systems, and data to intentionally compromise organizational security. Detecting these threats is challenging due to the complexity and variability of user behavior data, combined with the subtle and covert nature of insider actions. Traditional detection methods often fail to capture both the long-term dependencies and short-term fluctuations in time-series data that are crucial for identifying anomalous behaviors. To address these issues, this paper introduces the Test-Time Training (TTT) model for the first time in the field of insider threat detection and proposes a detection method based on the TTT-ECA-ResNet model. First, the dataset is preprocessed. TTT is then applied to extract long-term dependencies in the features, effectively capturing dynamic sequence changes. A Residual Network incorporating the Efficient Channel Attention mechanism extracts local feature patterns, capturing relationships between different positions in the time-series data. Finally, a linear layer is employed for more precise detection of insider threats. The proposed approach was evaluated on the CMU CERT Insider Threat Dataset, achieving an AUC of 98.75% and an F1-score of 96.81%. The experimental results demonstrate its effectiveness, outperforming other state-of-the-art approaches.
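For reference, here is a minimal PyTorch sketch of the Efficient Channel Attention (ECA) idea as commonly described: a global average pool followed by a cheap 1D convolution across channels, with no dimensionality reduction. The kernel size and feature shapes are assumptions, and the TTT and ResNet components are omitted.

```python
# Minimal ECA sketch for time-series features: global average pooling
# plus a 1D convolution across channels yields per-channel weights.
import torch
import torch.nn as nn

class ECA1D(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()
    def forward(self, x):                    # x: (N, C, L) feature sequences
        y = x.mean(dim=2, keepdim=True)      # squeeze time: (N, C, 1)
        y = self.conv(y.transpose(1, 2))     # conv across channels: (N, 1, C)
        w = self.sigmoid(y.transpose(1, 2))  # back to (N, C, 1)
        return x * w                         # channel-wise reweighting

# Hypothetical usage on 64-channel behavioural feature sequences.
feats = torch.randn(8, 64, 30)               # batch, channels, time steps
print(ECA1D()(feats).shape)                  # torch.Size([8, 64, 30])
```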