Knowledge extraction from sensitive data often requires collaborative work. Statistical databases are generated from such data and shared among various stakeholders, so the ownership protection of shared data becomes important. Watermarking has emerged as an effective tool for imposing ownership rights on various digital data formats. However, watermarking such datasets may introduce distortions, and the extracted knowledge may consequently be inaccurate. These distortions are controlled by usability constraints, which in turn limit the bandwidth available for watermarking. A large bandwidth ensures robustness, but may degrade the quality of the data. This trade-off can be resolved by optimizing the available bandwidth subject to the usability constraints. Optimization techniques, particularly bioinspired techniques, have become a preferred choice for solving such problems in recent years. In this paper, we investigate the suitability of various optimization schemes for identifying the maximum available bandwidth, with two objectives: (1) preserving the knowledge stored in the data; (2) maximizing the available bandwidth subject to the usability constraints to achieve maximum robustness. The first objective is achieved with a usability constraint model, which ensures that the knowledge is not compromised by watermark embedding. The second objective is achieved by finding the maximum bandwidth subject to the usability constraints specified in the first. The performance of the optimization schemes is evaluated using different metrics.
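As an illustration of the second objective, the following sketch finds the largest number of embeddable bits that still satisfies a usability constraint. The mean-preservation constraint, the perturbation `delta`, and the randomized search are hypothetical stand-ins, not the paper's actual watermarking or optimization scheme:

```python
import random

def usable(data, marked, max_mean_shift=0.01):
    # hypothetical usability constraint: the attribute mean must not
    # drift beyond a fixed tolerance after watermark embedding
    n = len(data)
    return abs(sum(marked) / n - sum(data) / n) <= max_mean_shift

def max_bandwidth(data, delta=0.05, trials=200, seed=0):
    # randomized search: try embedding k bits (each bit perturbs one value
    # by +/- delta) and keep the largest k that stays within the constraint
    rng = random.Random(seed)
    best = 0
    for _ in range(trials):
        k = rng.randint(1, len(data))
        marked = list(data)
        for i in rng.sample(range(len(data)), k):
            marked[i] += delta if rng.random() < 0.5 else -delta
        if usable(data, marked) and k > best:
            best = k
    return best
```

A bioinspired optimizer (e.g., a genetic algorithm) would replace the inner random trials with population-based search, but the objective and constraint play the same roles.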
The current massive use of digital communications demands secure links, which can be provided by an embedded system (ES) with data encryption at the protocol level. The serial peripheral interface (SPI) protocol is commonly used by manufacturers of ESs and integrated circuits for applications such as wired and wireless communications. We present the design and experimental implementation of a chaotic encryption and decryption algorithm applied to the SPI communication protocol. The encryption algorithm, along with its decryption counterpart, is based on the chaotic Hénon map and two blur-and-permute methods (in combination with DNA sequences). The SPI protocol is configured in 16-bit mode to synchronize a transmitter and a receiver sharing a symmetric key. The results are proved experimentally using two low-cost dsPIC microcontrollers as ESs. The SPI digital-to-analog converter is used to process, acquire, and reconstruct confidential messages, based on its digital signal processing properties. Finally, the security of the cryptogram is verified by a statistical test, and the digital processing capacity of the algorithm is validated on the dsPIC microcontrollers.
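A minimal sketch of the blur (keystream) step, assuming the classic Hénon map parameters a = 1.4, b = 0.3 and a simple quantization of the orbit into key bytes; the paper's full scheme additionally involves permutation and DNA sequences, which are omitted here:

```python
def henon_keystream(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    # iterate the Henon map and quantize the x orbit into key bytes;
    # (x0, y0) acts as the symmetric key shared over SPI
    x, y = x0, y0
    ks = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        ks.append(int(abs(x) * 1e6) % 256)
    return ks

def crypt(data: bytes, key=(0.1, 0.3)) -> bytes:
    # XOR-based blur step: symmetric, so the same call decrypts
    ks = henon_keystream(len(data), *key)
    return bytes(d ^ k for d, k in zip(data, ks))
```

Because XOR is its own inverse, transmitter and receiver run the identical routine with the same key, which matches the symmetric-key setup described above.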
We propose a novel approach, the robust fractional-order proportional-integral-derivative (FOPID) controller, to stabilize a perturbed nonlinear chaotic system at one of its unstable fixed points. The stability analysis of the nonlinear chaotic system is carried out based on the proportional-integral-derivative actions using the bifurcation diagram. We extract an initial set of controller parameters, which are subsequently optimized using a quadratic criterion; the fractional integral and derivative orders are also identified by this criterion. Through numerical simulations on two nonlinear systems, namely the multi-scroll Chen system and the Genesio-Tesi system, we show that the fractional PIλDμ controller provides the best closed-loop performance in stabilizing the unstable fixed points, even in the presence of random perturbations.
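For reference, the FOPID controller generalizes the classical PID by two extra tuning knobs, the fractional orders of integration and differentiation. A common parallel form (gains \(K_p, K_i, K_d\) assumed here) is:

```latex
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad 0 < \lambda,\ \mu < 2,
```

where setting \(\lambda = \mu = 1\) recovers the integer-order PID; the quadratic criterion mentioned above tunes \(\lambda\) and \(\mu\) alongside the three gains.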
Non-volatile random-access memory (NVRAM) technology is maturing rapidly, and its byte-persistence feature allows the design of new and efficient fault tolerance mechanisms. In this paper we propose the versionized process (VerP), a new process model based on NVRAM that is natively non-volatile and fault tolerant. We introduce an intermediate software layer that allows us to run a process directly on NVRAM and to put all the process states into NVRAM, and then propose a mechanism to versionize all the process data. Each piece of process data is assigned a version number, which is incremented whenever that piece of data is modified. The version number lets us trace the modification of any datum and recover it to a consistent state after a system crash. Compared with traditional checkpoint methods, our approach achieves fine-grained fault tolerance at very little cost.
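The versioning idea can be sketched as follows. This is an in-memory illustration with a hypothetical per-datum history list; in VerP the versioned state would reside in NVRAM and survive the crash:

```python
class VersionedCell:
    # each datum carries a version number that increases on every write;
    # after a crash, we roll back to the newest state within a consistent cut
    def __init__(self, value):
        self.version = 0
        self.value = value
        self.log = [(0, value)]          # stand-in for NVRAM-resident history

    def write(self, value):
        self.version += 1
        self.value = value
        self.log.append((self.version, value))

    def recover(self, up_to_version):
        # restore the newest state whose version does not exceed the cut
        for v, val in reversed(self.log):
            if v <= up_to_version:
                self.version, self.value = v, val
                return self.value
```

Unlike a whole-process checkpoint, recovery here touches only the cells whose versions exceed the consistent cut, which is the sense in which the fault tolerance is fine-grained.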
As we approach the exascale era in supercomputing, designing a balanced computer system with powerful computing ability and low power requirements has become increasingly important. The graphics processing unit (GPU) is an accelerator used widely in most recent supercomputers. It adopts a large number of threads to hide long latencies with high energy efficiency. In contrast to their powerful computing ability, GPUs have only a few megabytes of fast on-chip memory storage per streaming multiprocessor (SM). The GPU cache is inefficient due to a mismatch between the throughput-oriented execution model and the cache hierarchy design. At the same time, current GPUs fail to handle burst-mode long access latencies due to their poor warp scheduling methods. Thus, the benefits of the GPU's high computing ability are reduced dramatically by poor cache management and warp scheduling, which limit system performance and energy efficiency. In this paper, we put forward a coordinated warp scheduling and locality-protected (CWLP) cache allocation scheme to make full use of data locality and hide latency. We first present a locality-protected cache allocation method based on the instruction program counter (LPC) to promote cache performance. Specifically, we use a PC-based locality detector to collect the reuse information of each cache line and employ a prioritised cache allocation unit (PCAU) which coordinates the data reuse information with the time-stamp information to evict the lines with the least reuse possibility. Moreover, the locality information is used by the warp scheduler to create an intelligent warp reordering scheme to capture locality and hide latency. Simulation results show that CWLP provides a speedup of up to 19.8% and an average improvement of 8.8% over the baseline methods.
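The eviction policy can be illustrated with a toy victim selector. The field names `pc_reuse` (reuses detected for the allocating PC) and `last_use` (access timestamp) are assumptions for illustration, not the paper's actual hardware structures:

```python
def choose_victim(lines):
    # coordinate reuse with recency: evict the line with the least detected
    # PC-based reuse, breaking ties by the oldest access timestamp
    return min(range(len(lines)),
               key=lambda i: (lines[i]['pc_reuse'], lines[i]['last_use']))
```

Lines allocated by a PC with a history of reuse are thereby protected, while dead-on-arrival lines from streaming PCs are evicted first.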
Feature selection is an important approach to dimensionality reduction in the field of text classification. Because the selected features often contain redundant information, we propose a new, simple feature selection method which can effectively filter out redundant features. First, to calculate the relationship between two words, definitions of word-frequency-based relevance and correlative redundancy are introduced. Furthermore, an optimal feature selection (OFS) method is applied to obtain a feature subset FS1. Finally, to improve the execution speed, the redundant features in FS1 are filtered using a predetermined threshold, and the filtered features are stored in linked lists. Experiments are carried out on three datasets (WebKB, 20-Newsgroups, and Reuters-21578), wherein support vector machines and naïve Bayes are used as classifiers. The results show that the classification accuracy of the proposed method is generally higher than that of typical traditional methods (information gain, improved Gini index, and improved comprehensively measured feature selection) and the OFS methods. Moreover, the proposed method runs faster than typical mutual information-based methods (improved and normalized mutual information-based feature selections, and multilabel feature selection based on maximum dependency and minimum redundancy) while simultaneously ensuring classification accuracy. Statistical results validate the effectiveness of the proposed method in handling redundant information in text classification.
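A greedy redundancy filter in this spirit might look like the following sketch. Cosine similarity of word-frequency vectors is used here as a stand-in for the paper's correlative-redundancy measure, and the threshold value is illustrative:

```python
def cosine(u, v):
    # cosine similarity between two word-frequency vectors
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def filter_redundant(features, vectors, threshold=0.9):
    # greedy pass over features (assumed pre-ranked by relevance): keep a
    # feature only if it is not too similar to any already-kept feature
    kept = []
    for f in features:
        if all(cosine(vectors[f], vectors[k]) < threshold for k in kept):
            kept.append(f)
    return kept
```

Because each candidate is compared only against the features already kept, the pass is linear in the number of kept features per candidate, which reflects the speed advantage claimed over pairwise mutual-information methods.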
Actively pushing design knowledge to designers in the design process, what we call 'knowledge push', can help improve the efficiency and quality of intelligent product design. A knowledge push technology usually includes matching of related knowledge and proper pushing of matching results. Existing approaches to knowledge matching commonly lack intelligence, and the pushing of matching results is insufficiently personalized. In this paper, we propose a knowledge push technology based on applicable probability matching and multidimensional context driving. By building a training sample set, including knowledge description vectors, case feature vectors, and a mapping Boolean matrix, two probability values, application and non-application, are calculated via Bayes' theorem to describe the matching degree between knowledge and content. The push results are determined by comparing the two probability values. Hierarchical design content models are built to filter the knowledge in the push results. The rules of personalized knowledge push are sorted by multidimensional contexts, which include design knowledge, design context, design content, and the designer. A knowledge push system based on intellectualized design of CNC machine tools was used to confirm the feasibility of the proposed technology in engineering applications.
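The two-probability comparison can be sketched as a naive-Bayes-style decision. The per-term conditional probabilities are assumed to be estimated from the training sample set; the names below are illustrative, not the paper's API:

```python
import math

def should_push(knowledge_terms, content_terms, p_apply, p_not, prior=0.5):
    # accumulate log-likelihoods of "applicable" vs "non-applicable" over the
    # terms shared by the knowledge item and the design content, then push
    # the knowledge item only if the "applicable" hypothesis wins
    log_a, log_n = math.log(prior), math.log(1 - prior)
    for t in set(knowledge_terms) & set(content_terms):
        log_a += math.log(p_apply.get(t, 0.5))   # P(term | applicable)
        log_n += math.log(p_not.get(t, 0.5))     # P(term | not applicable)
    return log_a > log_n
```

Working in log space avoids underflow when many terms are shared, a standard choice for Bayesian text matching.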
In this study, hybrid computational frameworks are developed for active noise control (ANC) systems using an evolutionary computing technique based on genetic algorithms (GAs) and interior-point method (IPM), following an integrated approach, GA-IPM. Standard ANC systems are usually implemented with the filtered extended least mean square algorithm for optimization of coefficients for the linear finite-impulse response filter, but are likely to become trapped in local minima (LM). This issue is addressed with the proposed GA-IPM computing approach which is considerably less prone to the LM problem. Also, there is no requirement to identify a secondary path for the ANC system used in the scheme. The design method is evaluated using an ANC model of a headset with sinusoidal, random, and complex random noise interferences under several scenarios based on linear and nonlinear primary and secondary paths. The accuracy and convergence of the proposed scheme are validated based on the results of statistical analysis of a large number of independent runs of the algorithm.
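A hybrid of this kind can be sketched as follows: a small genetic algorithm for global exploration, followed by a coordinate-descent polish standing in for the interior-point refinement. The test objective and all hyperparameters are illustrative; the paper's actual FIR-coefficient optimization is more involved:

```python
import random

def ga_local(cost, dim, pop=30, gens=40, seed=1):
    # phase 1: GA-style global search, which is what makes the hybrid
    # less prone to the local-minima (LM) problem
    rng = random.Random(seed)
    popn = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[:pop // 2]                      # keep the fitter half
        popn = elite + [[g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
                        for _ in range(pop - len(elite))]
    best = min(popn, key=cost)
    # phase 2: deterministic local polish (stand-in for the interior-point step)
    step = 0.01
    for _ in range(200):
        for i in range(dim):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                if cost(trial) < cost(best):
                    best = trial
    return best
```

The division of labour mirrors the GA-IPM idea: the stochastic phase escapes local minima, and the deterministic phase sharpens the final coefficients.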
Automatic classification of sentiment data (e.g., reviews, blogs) has many applications in enterprise user management systems, and can help us understand people's attitudes about products or services. However, it is difficult to train an accurate sentiment classifier across different domains. One of the major reasons is that people often use different words to express the same sentiment in different domains, and we cannot easily find a direct mapping relationship between them to reduce the differences between domains. Thus, the accuracy of a sentiment classifier declines sharply when a classifier trained in one domain is applied to other domains. In this paper, we propose a novel approach called words alignment based on association rules (WAAR) for cross-domain sentiment classification, which can establish an indirect mapping relationship between domain-specific words in different domains by learning the strong association rules between domain-shared words and domain-specific words in the same domain. In this way, the differences between the source domain and target domain can be reduced to some extent, and a more accurate cross-domain classifier can be trained. Experimental results on Amazon datasets show the effectiveness of our approach in improving the performance of cross-domain sentiment classification.
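Mining such rules can be sketched via co-occurrence confidence, a simplification of full association-rule mining (no support threshold, single-antecedent rules only):

```python
from collections import Counter
from itertools import combinations

def strong_rules(docs, shared, min_conf=0.6):
    # mine rules "shared word -> domain-specific word" from co-occurrence:
    # conf(s -> w) = count(s and w together) / count(s)
    single, pair = Counter(), Counter()
    for doc in docs:
        words = set(doc)
        for w in words:
            single[w] += 1
        for a, b in combinations(sorted(words), 2):
            pair[(a, b)] += 1
    rules = {}
    for s in shared:
        for (a, b), c in pair.items():
            if s in (a, b):
                w = b if a == s else a
                if w not in shared and c / single[s] >= min_conf:
                    rules.setdefault(s, set()).add(w)
    return rules
```

Running this separately in the source and target domains links domain-specific words that share the same domain-shared antecedents, which is the indirect mapping WAAR exploits.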
Induction motor drive systems fed by cables are widely used in industrial applications. However, high-frequency switching of power devices will cause common-mode (CM) voltages during operation, leading to serious CM currents in the motor drive systems. CM currents through the cables and motors in the drive systems can cause electromagnetic interference (EMI) with the surrounding electronic equipment and shorten the life of induction motors. Therefore, it is necessary to analyze the CM currents in motor drive systems. In this paper, high-frequency models of unshielded and shielded power cables are formulated. The frequency-dependent effects and mutual inductances of the cables are taken into account. The power cable parameters are extracted by the finite element method and validated by measurements. High-frequency models of induction motors and inverters are introduced from existing works. The CM currents at the motor and inverter terminals are obtained, and the influence of the cable length and cable type on the CM currents is analyzed. There is a good agreement between the experimental results and the CM currents predicted by the proposed models.
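For intuition about the mechanism, a first-order estimate of the CM current injected by one switching edge through a lumped parasitic capacitance follows i = C·dv/dt. This back-of-envelope calculation is far simpler than the frequency-dependent, FEM-extracted models above, but it shows why fast edges drive large CM currents:

```python
def cm_current_peak(c_parasitic, v_dc, t_rise):
    # peak CM current for a linear (trapezoidal) switching edge:
    # i = C * dv/dt, with dv/dt approximated as V_dc / t_rise
    return c_parasitic * v_dc / t_rise
```

For example, 1 nF of stray capacitance with a 600 V bus and a 100 ns rise time already injects amperes of CM current, which is why cable length and type (and hence parasitic capacitance) matter in the analysis above.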
We present a simple implementation of a thermal energy harvesting circuit with the maximum power point tracking (MPPT) control for self-powered miniature-sized sensor nodes. Complex start-up circuitry and direct current to direct current (DC-DC) boost converters are not required, because the output voltage of targeted thermoelectric generator (TEG) devices is high enough to drive the load applications directly. The circuit operates in the active/asleep mode to overcome the power mismatch between TEG devices and load applications. The proposed circuit was implemented using a 0.35-μm complementary metal-oxide semiconductor (CMOS) process. Experimental results confirmed correct circuit operation and demonstrated the performance of the MPPT scheme. The circuit achieved a peak power efficiency of 95.5% and an MPPT accuracy of higher than 99%.
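MPPT of this kind is commonly realized with perturb-and-observe hill climbing. The sketch below is a generic software model, not the paper's analog CMOS implementation: it tracks the maximum power point of a load voltage sweep by reversing the perturbation whenever extracted power drops:

```python
def perturb_and_observe(power_at, v0, dv=0.01, steps=500):
    # hill-climbing MPPT sketch: perturb the operating voltage and keep
    # moving in the direction that increases the extracted power
    v, step = v0, dv
    p = power_at(v)
    for _ in range(steps):
        v_new = v + step
        p_new = power_at(v_new)
        if p_new < p:
            step = -step          # power dropped: reverse the perturbation
        v, p = v_new, p_new       # operating point moves every cycle
    return v
```

For a TEG modeled as an ideal source with internal resistance, maximum power transfer occurs at half the open-circuit voltage, so the tracker should settle there and oscillate within a few perturbation steps of it.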
To help power network designers select the type and value of the impedance of fault current limiters (FCLs), we introduce a new method to calculate the optimum value of FCL impedance depending on its position in the network. Because FCL impedance is complex-valued, the costs of both its real and imaginary parts are considered. The optimization of FCL impedance is based on a goal function that maximizes the reduction of the fault current while minimizing the costs. While the position of the FCL in the network affects the calculation of the optimum impedance value, the method for selecting the FCL location is not the focus of this study. The proposed method for optimizing FCL impedance can be used for any network subject to symmetrical and/or asymmetrical faults. We use the IEEE 14-bus network as an example to explain the process. The optimum FCL impedance for this network is calculated by considering a wide range of costs for both the real and imaginary parts of the FCL impedance.
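The goal function can be sketched as a brute-force search over candidate impedances. The single-bus Thévenin fault model and the linear cost terms are simplifying assumptions for illustration; the paper works with a full network model:

```python
def optimize_fcl(i_fault_no_fcl, v_th, z_th, cost_r, cost_x, grid=50, zmax=5.0):
    # score each candidate impedance R + jX by its fault-current reduction
    # minus the (separately priced) costs of the real and imaginary parts
    best, best_score = None, float('-inf')
    for i in range(grid + 1):
        for j in range(grid + 1):
            r, x = zmax * i / grid, zmax * j / grid
            z = complex(z_th.real + r, z_th.imag + x)
            i_fault = abs(v_th / z)                  # Thevenin fault current
            score = (i_fault_no_fcl - i_fault) - (cost_r * r + cost_x * x)
            if score > best_score:
                best, best_score = (r, x), score
    return best
```

With a resistive part priced much higher than the reactive part, the search correctly settles on a purely reactive FCL, mirroring how the relative costs of R and X steer the choice of limiter type.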