Jul 2019, Volume 20 Issue 7
    

  • Review
    Zu-yan XU, Shen-jin ZHANG, Xing-jiang ZHOU, Feng-feng ZHANG, Feng YANG, Zhi-min WANG, Nan ZONG, Guo-dong LIU, Lin ZHAO, Li YU, Chuang-tian CHEN, Xiao-yang WANG, Qin-jun PENG

    We briefly review recent results in photoemission spectroscopy based on the deep-ultraviolet and vacuum-ultraviolet diode-pumped solid-state lasers that we have developed. Cascaded second-harmonic generation in the nonlinear crystal KBe2BO3F2 (KBBF) is used to generate deep-ultraviolet and vacuum-ultraviolet laser radiation. These coherent sources complement traditional incoherent light sources such as gas-discharge lamps and synchrotron radiation, and have greatly improved the energy, momentum, and spin resolution of photoemission spectroscopy. Many new capabilities have been developed by exploiting their high photon energy, narrow linewidth, and high photon flux density. These advances have led to the observation of new phenomena and the accumulation of new data in fields such as high-temperature superconductivity, topological electronic materials, and Fermi semimetals. These laser systems have revitalized photoemission spectroscopy and provide a new platform for this frontier research field.
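
    As a short worked example of why cascaded second-harmonic generation reaches the vacuum ultraviolet: each stage halves the wavelength, so two stages take a visible fundamental below 200 nm. The 709.3 nm fundamental below is an illustrative assumption (configurations vary); 177.3 nm (about 7 eV) is the commonly cited KBBF output for laser-based photoemission.

```latex
% Each SHG stage doubles the frequency (halves the wavelength),
% so after n cascaded stages \lambda_out = \lambda_in / 2^n.
\lambda_{\text{out}} = \frac{709.3\ \text{nm}}{2^{2}} \approx 177.3\ \text{nm},
\qquad
E_{\text{photon}} = \frac{hc}{\lambda_{\text{out}}}
                  \approx \frac{1239.84\ \text{eV nm}}{177.3\ \text{nm}}
                  \approx 6.99\ \text{eV}.
```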

  • Review
    Xiao-lin ZHENG, Meng-ying ZHU, Qi-bing LI, Chao-chao CHEN, Yan-chao TAN

    Artificial intelligence (AI) is the core technology of technological revolution and industrial transformation. As one of the new intelligent needs in the AI 2.0 era, financial intelligence has attracted much attention from academia and industry. In today's dynamic capital market, financial intelligence demonstrates a fast and accurate machine learning capability to handle complex data and has gradually acquired the potential to become a "financial brain." In this paper, we survey existing studies on financial intelligence. First, we describe the concept of financial intelligence and elaborate on its position in the financial technology field. Second, we introduce the development of financial intelligence and review state-of-the-art techniques in wealth management, risk management, financial security, financial consulting, and blockchain. Finally, we propose a research framework called FinBrain and summarize four open issues: explainable financial agents and causality, perception and prediction under uncertainty, risk-sensitive and robust decision-making, and multi-agent games and mechanism design. We believe that these research directions can lay the foundation for the development of AI 2.0 in the finance field.

  • Review
    Zhen-yi XU, Yu KANG, Yang CAO, Yu-xiao YANG

    Identifying codes have been widely used in man-machine verification to maintain network security. The key challenge in man-machine verification is to correctly classify human and machine mouse-movement tracks. In this study, we propose a random forest (RF) model for man-machine verification based on a mouse-movement trajectory dataset. We compare the RF model with two baseline models (logistic regression and support vector machine) on performance metrics including precision, recall, false positive rate, false negative rate, F-measure, and weighted accuracy. The RF model outperforms the baseline models on these metrics.
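
    A minimal sketch of such a comparison (not the authors' code): a random forest against the two baseline classifiers on placeholder trajectory features, scored with a few of the listed metrics. The feature columns stand in for whatever the paper extracts from raw mouse tracks.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

# Placeholder data: rows are per-trajectory feature vectors (hypothetical
# speed/acceleration/curvature statistics); 1 = human, 0 = machine.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 2, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}
for name, clf in models.items():
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: P={precision_score(y_te, y_hat):.3f} "
          f"R={recall_score(y_te, y_hat):.3f} F1={f1_score(y_te, y_hat):.3f}")
```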

  • Original Article
    Ye YUAN, Kai-ge QU, Li-ji WU, Jia-wei MA, Xiang-min ZHANG

    Hash-based message authentication code (HMAC) is widely used for authentication and message integrity. As a Chinese hash algorithm, SM3 is gradually gaining adoption in the domestic Chinese market. The side-channel security of HMAC based on SM3 (HMAC-SM3) still needs to be evaluated, especially in hardware implementations, where only the intermediate values stored in registers show apparent Hamming-distance leakage. In addition, the algorithm structure of SM3 makes side-channel analysis of HMAC-SM3 difficult. In this paper, a bit-wise chosen-plaintext correlation power attack (CPA) procedure is proposed for HMAC-SM3 hardware implementations. Real attack experiments on a field programmable gate array (FPGA) board have been performed. Experimental results show that the key can be recovered from a hypothesis space of 2^256 using the proposed procedure.
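
    The core of any correlation power attack is ranking key hypotheses by the Pearson correlation between a leakage model and the measured traces. The sketch below shows only that generic kernel with a Hamming-distance model, not the paper's bit-wise chosen-plaintext procedure for HMAC-SM3; all data are synthetic.

```python
import numpy as np

def hd(a, b):
    """Hamming distance between two register words."""
    return bin(a ^ b).count("1")

def column_corr(x, traces):
    """Pearson correlation of a model vector x (n_traces,) with every
    sample column of traces (n_traces, n_samples)."""
    xc = x - x.mean()
    tc = traces - traces.mean(axis=0)
    return (xc @ tc) / np.sqrt((xc @ xc) * (tc * tc).sum(axis=0))

def cpa_best_guess(traces, model):
    """model[k] is the predicted Hamming-distance vector for key
    hypothesis k; return the hypothesis with the highest peak |rho|."""
    peaks = [np.max(np.abs(column_corr(np.asarray(m, float), traces)))
             for m in model]
    return int(np.argmax(peaks)), peaks

# Toy demo: 500 traces leak hd(v, k_true) plus noise at sample point 10.
rng = np.random.default_rng(1)
k_true, n = 0xA5, 500
v = rng.integers(0, 256, n)
traces = rng.normal(size=(n, 32))
traces[:, 10] += [hd(int(x), k_true) for x in v]
model = [[hd(int(x), k) for x in v] for k in range(256)]
guess, _ = cpa_best_guess(traces, model)
print(hex(guess))  # expected: 0xa5
```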

  • Original Article
    Muhammad IMRAN, Bruce A. HARVEY, Muhammad ATIF, Adnan Ali MEMON

    This paper presents a block-based secure and robust watermarking technique for color images, based on multi-resolution decomposition and de-correlation. The principal objective is to simultaneously meet the four requirements of a good watermarking scheme: robustness, security, imperceptibility, and capacity. These requirements are known to be mutually contradictory, and the contribution of this study is to combine different approaches so that all four are achieved. For imperceptibility, the three color channels (red, green, and blue) are de-correlated using principal component analysis, and the first principal component (the de-correlated red channel) is chosen for watermark embedding. For robustness, the de-correlated channel is decomposed using a discrete wavelet transform (DWT), and the approximation band (the other three bands are kept intact to preserve the edge information) is further divided into distinct blocks. Blocks are chosen at random based on a randomly generated key, and each selected block is decomposed into its singular values and vectors. Based on the mutual dependency between the singular value and singular vector matrices, the values are modified according to the watermark bits, and their locations are saved and used as a second key, required when the watermark is extracted. Consequently, two levels of authentication ensure security, and using both singular values and vectors increases the capacity of the presented scheme. Moreover, involving the left and right singular vectors along with the singular values in the embedding process strengthens its robustness. Finally, the presented scheme is compared with state-of-the-art schemes on the Gonzalez and Kodak datasets in terms of imperceptibility (peak signal-to-noise ratio and structural similarity index), security (against numerous fake keys), robustness (normalized correlation and bit error rate), and capacity. The comparison shows significant improvement of the proposed scheme over existing schemes.
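
    The embedding-side pre-processing chain (PCA de-correlation, one-level DWT, keyed block selection, block SVD) can be sketched as below. This is not the authors' implementation: the watermark-bit modification rule and the second key are omitted, and NumPy plus PyWavelets are assumed available.

```python
import numpy as np
import pywt

def first_principal_component(img):
    """De-correlate the RGB channels of an HxWx3 image via PCA and
    return the first principal component as an HxW plane."""
    h, w, _ = img.shape
    X = img.reshape(-1, 3).astype(float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[0]).reshape(h, w)

def keyed_block_svd(plane, bs=8, n_blocks=16, key_seed=42):
    """One-level DWT of the de-correlated plane, split the approximation
    band into bs x bs blocks, pick n_blocks with a keyed RNG, and return
    the SVD factors of each (the quantities the scheme modifies)."""
    cA, (cH, cV, cD) = pywt.dwt2(plane, "haar")   # detail bands left intact
    rows, cols = cA.shape[0] // bs, cA.shape[1] // bs
    idx = np.random.default_rng(key_seed).choice(rows * cols, n_blocks,
                                                 replace=False)
    factors = []
    for i in idx:
        r, c = divmod(int(i), cols)
        B = cA[r*bs:(r+1)*bs, c*bs:(c+1)*bs]
        U, S, Vt = np.linalg.svd(B)
        factors.append((int(i), U, S, Vt))        # block index acts as key part
    return factors

img = np.random.default_rng(0).integers(0, 256, (256, 256, 3))
factors = keyed_block_svd(first_principal_component(img))
```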

  • Original Article
    Le-kai ZHANG, Shou-qian SUN, Bai-xi XING, Rui-ming LUO, Ke-jun ZHANG

    Music can trigger human emotion; this is a psychophysiological process. Psychophysiological characteristics can therefore be used to understand an individual's emotional experience of music. In this study, we explore a new method for personal music emotion recognition based on human physiological characteristics. First, we build a database of emotion-related music features and a database of physiological signals recorded during music listening, including electrodermal activity (EDA), photoplethysmography (PPG), skin temperature (SKT), respiration (RSP), and pupil diameter (PD) variation information. Then, linear regression, ridge regression, support vector machines with three different kernels, decision trees, k-nearest neighbors, a multi-layer perceptron, and Nu support vector regression (NuSVR) are used to recognize music emotions from a combination of music features and human physiological features. NuSVR outperforms the other methods: the correlation coefficients are 0.7347 for arousal and 0.7902 for valence, and the mean squared errors are 0.02323 for arousal and 0.01485 for valence. Finally, we compare the different data sets and find that the set with all features (music features plus all physiological features) performs best in modeling, with correlation coefficients of 0.6499 for arousal and 0.7735 for valence and mean squared errors of 0.02932 for arousal and 0.01576 for valence. The proposed approach provides an effective way to recognize personal music emotional experience and can be applied to personalized music recommendation.
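
    A minimal sketch of the winning configuration, with placeholder data rather than the study's database: NuSVR fitted on concatenated music and physiological features and scored with the same two metrics the abstract reports. The feature dimensions and labels are illustrative only.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
music_feats = rng.normal(size=(300, 20))    # hypothetical tempo/timbre stats
physio_feats = rng.normal(size=(300, 10))   # hypothetical EDA/PPG/SKT/RSP/PD stats
X = np.hstack([music_feats, physio_feats])  # combined feature set
y = rng.uniform(-1, 1, 300)                 # arousal (or valence) label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pred = NuSVR(nu=0.5, C=1.0).fit(X_tr, y_tr).predict(X_te)
r = np.corrcoef(y_te, pred)[0, 1]           # correlation coefficient
print(f"r = {r:.4f}  MSE = {mean_squared_error(y_te, pred):.5f}")
```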

  • Original Article
    Quan-dong WANG, Liang-hao GUO, Wei-yu ZHANG, Sui-ling REN, Chao YAN

    A robust generalized sidelobe canceller (GSC) is proposed to combat direction of arrival (DOA) mismatches. To estimate the statistical characteristics of the interference plus noise (IPN), conventional signal-of-interest (SOI) extraction methods usually collect a large number of segments in which only the IPN signal is active. To avoid that collection procedure, we redesign the blocking matrix structure using an eigenanalysis method that reconstructs the IPN covariance matrix from the samples. Additionally, a modified eigenanalysis reconstruction method based on a rank-one matrix assumption is proposed to achieve higher reconstruction accuracy. The blocking matrix is obtained by incorporating the reconstruction into the maximum signal-to-interference-plus-noise ratio (MaxSINR) beamformer; this minimizes the influence of signal leakage and maximizes the captured IPN power for further noise and interference suppression. Numerical results show that the two proposed methods achieve considerable improvements in output waveform SINR and in correlation coefficients with the desired signal in the presence of a DOA mismatch and a limited number of snapshots. Compared with the first proposed method, the modified one reduces signal distortion even further.
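
    A generic flavor of eigenanalysis-based IPN covariance reconstruction can be sketched as follows. This is not the paper's blocking-matrix construction: here the eigen-component most aligned with the presumed SOI steering vector simply has its eigenvalue replaced by the noise floor, and a MaxSINR-style weight is then w proportional to R_ipn^{-1} a0. All names and the diagonal loading are illustrative assumptions.

```python
import numpy as np

def ipn_reconstruct(X, a0):
    """X: M sensors x N snapshots (complex); a0: presumed SOI steering
    vector. Rebuild the covariance with the SOI-aligned eigenvalue
    pushed down to the noise floor."""
    M, N = X.shape
    R = X @ X.conj().T / N                    # sample covariance
    lam, V = np.linalg.eigh(R)                # ascending real eigenvalues
    align = np.abs(V.conj().T @ a0)           # alignment with SOI direction
    soi = int(np.argmax(align))
    lam = lam.copy()
    lam[soi] = lam[0]                         # replace SOI power by noise floor
    return (V * lam) @ V.conj().T

def maxsinr_weight(X, a0):
    M = X.shape[0]
    R_ipn = ipn_reconstruct(X, a0)
    load = 1e-6 * np.real(np.trace(R_ipn)) / M * np.eye(M)  # small loading
    w = np.linalg.solve(R_ipn + load, a0)     # w ~ R_ipn^{-1} a0
    return w / (w.conj() @ a0)                # unit response toward a0
```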

  • Original Article
    Zhi-yong SONG, Xing-lin SHEN, Qiang FU
    2019, 20(7): 988-1001. https://doi.org/10.1631/FITEE.1800394

    Cross-eye jamming is an electronic attack technique that induces an angular error in a monopulse radar by artificially creating a false target and deceiving the radar into detecting and tracking it. At present, there is no effective anti-jamming method to counteract cross-eye jamming. In this study, through a detailed analysis of the jamming mechanism, a multi-target model of the cross-eye jamming scenario is established within a random finite set framework. A novel anti-jamming method based on multi-target tracking with probability hypothesis density (PHD) filters is then developed. The characteristic differences between the target and the jamming, together with the release process of the jamming, are used to optimize particle partitioning, and particle identity labels representing the target and jamming properties are introduced into the detection and tracking processes. The release of cross-eye jamming is detected by estimating the number of targets in the beam, and true targets are distinguished from false jamming through correlation and transmission between the labels and the estimated states. Thus, accurate tracking of the true targets is achieved under severe jamming conditions. Simulation results show that the proposed method detects the release of cross-eye jamming with minimum delay and accurately estimates the target state.
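
    Two pieces of PHD-filter bookkeeping that the method builds on can be sketched directly: in a particle PHD filter the intensity integrates to the expected number of targets, so the target count in the beam is estimated from the total particle weight, and identity labels attached to particles allow per-label state extraction. The paper's partitioning and label-transmission logic is omitted; the helpers below are a hypothetical simplification.

```python
import numpy as np

def expected_cardinality(weights):
    """The PHD intensity integrates to the expected target number, so
    N_hat = round(sum of particle weights); a jump in N_hat flags the
    release of a new (possibly jamming-induced) target."""
    return int(round(np.sum(weights)))

def extract_states(particles, weights, labels):
    """Weighted mean state per identity label (e.g. 'target'/'jammer'),
    so true targets can be tracked separately from the false one."""
    particles = np.asarray(particles, float)
    weights = np.asarray(weights, float)
    states = {}
    for lab in set(labels):
        mask = np.array([l == lab for l in labels])
        states[lab] = np.average(particles[mask], axis=0,
                                 weights=weights[mask])
    return states
```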

  • Original Article
    Hai-yan WANG, Fu ZHAO, Hui-min GAO, John W. SUTHERLAND
    2019, 20(7): 1002-1020. https://doi.org/10.1631/FITEE.1700457

    An important production planning problem is how best to schedule jobs (or lots) when each job consists of a large number of identical parts. This problem is often approached by breaking each job/lot into sublots (termed lot streaming). When the total number of transfer sublots is large, the computational effort needed to calculate job completion times can be significant, yet researchers have largely neglected this computation-time issue. To provide a practical production scheduling method for this situation, we propose a method for the n-job, m-machine lot-streaming flow-shop scheduling problem. We consider variable sublot sizes, setup times, and the possibility that transfer sublot sizes are bounded because of capacity-constrained transportation activities. The proposed method has three stages: initial lot splitting, job sequencing optimization with efficient calculation of the makespan/total flow time criterion, and transfer adjustment. Computational experiments confirm the effectiveness of the three-stage method. Relative to results reported for lot streaming problems on five standard datasets, the proposed method saves substantial computation time and provides better solutions, especially for large problems.
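
    The completion-time recursion at the heart of such scheduling can be illustrated with a heavily simplified makespan kernel: equal sublots, no setup times, and unbounded transfer, unlike the paper's variable, bounded sublots with setups. Each sublot may start on a machine once the machine is free and the sublot has left the previous machine.

```python
def makespan(sequence, p, lot_size, n_sub):
    """Makespan of a fixed sublot sequence in a flow shop.
    sequence: job indices in processing order
    p[j][m]: per-item processing time of job j on machine m
    lot_size: items per job; n_sub: equal sublots per job."""
    m_machines = len(p[0])
    sub = lot_size / n_sub                 # items per sublot
    C = [0.0] * m_machines                 # last completion time per machine
    for j in sequence:
        for _ in range(n_sub):             # each sublot flows through the shop
            prev = 0.0                     # completion on the previous machine
            for m in range(m_machines):
                start = max(C[m], prev)    # machine free AND sublot arrived
                C[m] = start + sub * p[j][m]
                prev = C[m]
    return C[-1]

# Two jobs on three machines, 10 items each, split into 5 equal sublots.
print(makespan([0, 1], [[2, 1, 3], [1, 2, 2]], 10, 5))
```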