Group decision making plays an important role in various fields of management decision making and economics. In this paper, we develop two methods for hesitant fuzzy multiple criteria group decision making with group consensus, in which all the experts use hesitant fuzzy decision matrices (HFDMs) to express their preferences. The aim of this paper is to present two novel consensus models for different group decision making situations, each composed of a consensus checking process, a consensus-reaching process, and a selection process. All the experts judge each alternative over multiple criteria using hesitant fuzzy sets, and the aggregation of the hesitant fuzzy sets under each criterion is then calculated with aggregation operators. Furthermore, we can calculate the distance between any two aggregated hesitant fuzzy sets, from which the deviation between any two experts is obtained. After introducing the consensus measure, we develop two kinds of consensus-reaching procedures and propose two step-by-step algorithms for hesitant fuzzy multiple criteria group decision making. A numerical example concerning the selection of selling methods for 'Trade-Ins' at Apple Inc. is provided to illustrate and verify the developed approaches. In this example, reaching a high consensus among all the experts before the selection process prevents some experts' preference values from being too high or too low. After the original preference information is modified using our consensus measures, the result of the selection process is much more reasonable.
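As an illustration of the distance step described above, the following sketch computes a normalized Hamming-style distance between two hesitant fuzzy elements and the resulting deviation between two experts. The padding rule (optimistic extension with the maximum value) and the function names are our assumptions, not necessarily the paper's exact definitions.

```python
# Illustrative sketch (not the paper's exact formulas): distance between two
# hesitant fuzzy elements, extending the shorter one with its maximum value.

def hfe_distance(h1, h2):
    """Normalized Hamming-style distance between two hesitant fuzzy elements."""
    a, b = sorted(h1), sorted(h2)
    n = max(len(a), len(b))
    a += [a[-1]] * (n - len(a))   # optimistic extension: repeat the max value
    b += [b[-1]] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / n

def expert_deviation(E1, E2):
    """Average distance between two experts' aggregated HFEs per alternative."""
    return sum(hfe_distance(h1, h2) for h1, h2 in zip(E1, E2)) / len(E1)

# Example: two experts' aggregated evaluations of three alternatives
e1 = [[0.3, 0.5], [0.6], [0.4, 0.7, 0.8]]
e2 = [[0.4, 0.6], [0.5, 0.7], [0.6, 0.8]]
print(expert_deviation(e1, e2))   # 0.1
```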
Network design problems (NDPs) have long been regarded as among the most challenging problems in transportation planning, owing to the intrinsic non-convexity of their bi-level programming form. Furthermore, the mixture of continuous and discrete decision variables makes the mixed network design problem (MNDP) even more complicated and difficult to solve. We adopt a surrogate-based optimization (SBO) framework to solve three featured categories of NDPs (continuous, discrete, and mixed-integer). We prove that the method is asymptotically completely convergent when solving continuous NDPs, guaranteeing a global optimum with probability one over an indefinitely long run. To demonstrate the practical performance of the proposed framework, numerical examples are provided to compare SBO with existing solution algorithms and other heuristics in the NDP literature. The results show that SBO is among the best algorithms in terms of both accuracy and efficiency, and that it is efficient for solving large-scale problems with more than 20 decision variables. The SBO approach presented in this paper is a general framework that can be applied to other optimization problems in the transportation field.
Due to its simplicity and ease of use, the standard grey wolf optimizer (GWO) has attracted much attention. However, its imperfect search structure and the risk of becoming trapped in local optima have limited its application. To improve the performance of the algorithm, an optimized GWO based on a mutation operator and an eliminating-reconstructing mechanism (MR-GWO) is proposed. An analysis of GWO shows that it conducts its search around only three leading wolves and balances exploration and exploitation by adjusting only the parameter a, which means the wolf population loses diversity to some extent. Therefore, a mutation operator is introduced to help the better-performing wolves search, and an eliminating-reconstructing mechanism is applied to the poorly performing wolves; together these not only effectively expand the stochastic search but also accelerate convergence, and the two operations complement each other well. To verify its validity, MR-GWO is applied to global optimization experiments on 13 standard continuous functions and to a radial basis function (RBF) network approximation experiment. Comparisons with other algorithms show that MR-GWO has a strong advantage.
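A minimal sketch of the MR-GWO idea follows; the specific operators (Gaussian mutation around the alpha wolf, replacing a fixed fraction of the worst wolves with random ones) are our assumptions for illustration, not the paper's exact design.

```python
import numpy as np

# Hedged sketch of MR-GWO: the standard GWO update driven by three leaders,
# plus (1) a mutation operator to diversify the search near the best wolf and
# (2) an eliminating-reconstructing step that re-initializes the worst wolves.

def mr_gwo(f, dim, n=30, iters=200, lb=-100.0, ub=100.0, elim_frac=0.1):
    X = np.random.uniform(lb, ub, (n, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        order = np.argsort(fit)                      # ascending: best first
        alpha, beta, delta = X[order[:3]]            # three leading wolves
        a = 2.0 * (1 - t / iters)                    # linearly decreasing a
        for i in range(n):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                X_new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(X_new / 3.0, lb, ub)
        # Mutation: perturb a random wolf around the alpha leader
        j = np.random.randint(n)
        X[j] = np.clip(alpha + np.random.normal(0, 0.1 * (ub - lb), dim), lb, ub)
        # Eliminating-reconstructing: replace this iteration's worst wolves
        worst = order[-max(1, int(elim_frac * n)):]
        X[worst] = np.random.uniform(lb, ub, (len(worst), dim))
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], fit.min()

# Example: minimize the sphere function in 10 dimensions
best_x, best_f = mr_gwo(lambda x: float(np.sum(x**2)), dim=10)
```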
With the rapidly increasing number of mobile devices serving as essential terminals or platforms for communication, security threats now target the whole telecommunication infrastructure and are becoming increasingly serious. Network probing tools, deployed as bypass devices at mobile core network gateways, can collect and analyze all the traffic for security detection. However, given the ever-increasing link speed, it is vitally important to offload the processing pressure from the detection system. In this paper, we design and evaluate a real-time pre-processing system comprising a hardware accelerator and a multi-core processor. The implemented prototype can quickly restore each encapsulated packet and effectively distribute traffic to multiple back-end detection systems. We demonstrate the prototype in a well-deployed network environment with large volumes of real data. Experimental results show that our system achieves at least 18 Gb/s with no packet loss across all kinds of communication protocols.
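The traffic-distribution step can be illustrated with a small sketch: hash each decapsulated packet's inner 5-tuple so every packet of a flow reaches the same back-end detector. The field names and the dispatch policy here are our assumptions, not the paper's implementation.

```python
import hashlib

# Minimal sketch of flow-consistent load distribution (our assumption of the
# dispatch policy): packets from the same inner 5-tuple always reach the same
# back-end detection system, so per-flow analysis state is never split.

N_BACKENDS = 8

def backend_for(pkt):
    """Map a decapsulated packet's 5-tuple to one of N_BACKENDS detectors."""
    key = f"{pkt['src']}|{pkt['dst']}|{pkt['sport']}|{pkt['dport']}|{pkt['proto']}"
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_BACKENDS

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5060, "dport": 80, "proto": 6}
print(backend_for(pkt))
```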
We present a new high-payload joint reversible data-hiding scheme for encrypted images. Instead of embedding data in the encrypted image directly, the content owner first uses an interpolation technique to estimate whether each location can be used for embedding and generates a location map before encryption. Next, the data hider embeds the additional data by flipping the most significant bits (MSBs) of the encrypted image according to the location map. At the receiver side, the image is decrypted first, before the additional data are extracted and the image is reconstructed. Experimental results demonstrate that the proposed method achieves real reversibility, i.e., data extraction and image recovery are free of error. Moreover, our scheme can embed a larger payload than most existing reversible data-hiding schemes for encrypted images.
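A toy sketch of the embedding step follows; the flipping convention (flip the MSB for a 1 bit, leave it for a 0 bit) is our assumption for illustration, not the paper's exact rule.

```python
# Toy sketch of MSB-flipping embedding (the flipping convention is assumed):
# at each location marked usable by the location map, encode one payload bit
# by flipping the MSB of the encrypted pixel for a 1, leaving it for a 0.

def embed(encrypted, location_map, bits):
    pixels = bytearray(encrypted)
    it = iter(bits)
    for i, usable in enumerate(location_map):
        if usable:
            b = next(it, None)
            if b is None:
                break
            if b:
                pixels[i] ^= 0x80   # flip the most significant bit
    return bytes(pixels)

stego = embed(b"\x10\x90\x20\x30", [1, 0, 1, 1], [1, 1, 0])
print(stego.hex())  # 9090a030
```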
Software defect prediction aims to find potential defects based on historical data and software features. Software features can reflect the characteristics of software modules. However, some of these features may be highly relevant to the class label (defective or non-defective), while others may be redundant or irrelevant. To fully measure the correlation between different features and the class, we present a feature selection approach based on a similarity measure (SM) for software defect prediction. First, the feature weights are updated according to the similarity of samples in different classes. Second, a feature ranking list is generated by sorting the feature weights in descending order, and feature subsets are selected from the ranking list in sequence. Finally, each feature subset is evaluated with a k-nearest neighbor (KNN) model and measured by the area under the curve (AUC) metric for classification performance. The experiments are conducted on 11 National Aeronautics and Space Administration (NASA) datasets, and the results show that our approach performs better than, or comparably to, the compared feature selection approaches in terms of classification performance.
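The three-step pipeline can be sketched in a Relief-style form; the exact weight-update rule in the paper may differ, and scikit-learn is used here for the KNN model and AUC scoring.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Relief-style sketch of the pipeline (the paper's weight-update rule may
# differ): weight features by how well they separate the classes, rank them,
# then score nested feature subsets with a KNN model under the AUC metric.

def feature_weights(X, y):
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        same, diff = X[y == y[i]], X[y != y[i]]
        # Features that differ more across classes than within get more weight.
        w += np.abs(X[i] - diff).mean(axis=0) - np.abs(X[i] - same).mean(axis=0)
    return w / len(X)

def select_best_subset(X, y):
    ranking = np.argsort(feature_weights(X, y))[::-1]   # descending weights
    best_auc, best_subset = 0.0, ranking[:1]
    for k in range(1, len(ranking) + 1):
        subset = ranking[:k]
        auc = cross_val_score(KNeighborsClassifier(), X[:, subset], y,
                              cv=5, scoring="roc_auc").mean()
        if auc > best_auc:
            best_auc, best_subset = auc, subset
    return best_subset, best_auc
```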
To provide timely results for big data analytics, it is crucial to satisfy deadline requirements for MapReduce jobs in today's production environments. Much effort has been devoted to the problem of meeting deadlines, and there are typically two kinds of solutions. The first is to allocate appropriate resources to complete the entire job before the specified time limit; here, deadlines can still be missed because of tight constraints or a lack of resources. The second is to run a pre-constructed sample based on the deadline constraints; this satisfies the time requirement but fails to maximize the volume of processed data. In this paper, we propose a deadline-oriented task scheduling approach, named 'Dart', to address this problem. Given a specified deadline and restricted resources, Dart uses an iterative estimation method based on both historical data and the job's running status to precisely estimate the real-time job completion time. Based on the estimated time, Dart uses an approach-revise algorithm to make dynamic scheduling decisions that meet the deadline while maximizing the amount of processed data and mitigating stragglers. Dart also efficiently handles task failures and data skew, so that its performance does not degrade. We have validated our approach using workloads from OpenCloud and Facebook on a cluster of 64 virtual machines. The results show that Dart can not only effectively meet the deadline but also process near-maximum volumes of data, even with tight deadlines and limited resources.
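A simplified version of the estimation idea is sketched below: blend historical per-task durations with the rate observed in the running job to project the completion time. The blending weight and the wave-based projection are our assumptions, not Dart's actual estimator.

```python
import time

# Simplified sketch of iterative completion-time estimation (the blending rule
# is our assumption): combine the historical average task duration with the
# durations observed so far, trusting the live signal more as it accumulates.

def estimate_finish_time(hist_task_secs, done_durations, tasks_left, slots):
    if done_durations:
        live = sum(done_durations) / len(done_durations)
        w = min(1.0, len(done_durations) / 20.0)   # trust live data as it grows
        per_task = w * live + (1 - w) * hist_task_secs
    else:
        per_task = hist_task_secs
    waves = -(-tasks_left // slots)                # ceiling division: task waves
    return time.time() + waves * per_task

eta = estimate_finish_time(hist_task_secs=42.0, done_durations=[40.1, 44.3],
                           tasks_left=130, slots=16)
```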
The Unified Modeling Language (UML) is an industry-standard language for modeling, analysis, and design. However, the semantics of UML is not precisely defined, so the correctness of refinement relations cannot be verified. In this study, we use the proof assistant Coq to formalize and mechanize the semantics of UML Statecharts and the refinement relations between models. Based on the mechanized semantics, the desired properties of both the semantics and the refinement relations can be stated and proven as predicates and lemmas. This approach provides a promising way to obtain certified, fault-free modeling and refinement.
We propose a reconfigurable control-bit generation algorithm for rotation and sub-word rotation operations. The algorithm uses a self-routing characteristic to configure an inverse butterfly network. In addition to being highly parallelized and inexpensive, the algorithm integrates rotation-shift, bi-directional rotation-shift, and sub-word rotation-shift operations. To the best of our knowledge, this is the first scheme to accommodate such a variety of rotation operations within the same architecture. We have developed a highly efficient reconfigurable rotation unit (HERRU) and synthesized it with the Semiconductor Manufacturing International Corporation (SMIC) 65-nm process. The results show that the overall efficiency (relative area × relative latency) of our HERRU is at least 23% higher than that of other designs with similar functions. When executing bi-directional rotation operations alone, HERRU occupies a significantly smaller area and has lower latency than previously proposed designs.
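The self-routing idea can be illustrated with a simpler, related structure, the barrel rotator: each stage conditionally rotates by a power of two, so the per-stage control bits are exactly the binary digits of the rotation amount. This is an analogy only, not the HERRU inverse-butterfly datapath itself.

```python
# Analogy sketch (not the HERRU datapath itself): in a log2(n)-stage barrel
# rotator, stage k rotates by 2^k exactly when bit k of the shift amount s is
# set, so the control bits for the whole network are the binary digits of s.

def control_bits(s, n):
    stages = n.bit_length() - 1          # log2(n) stages for an n-bit word
    return [(s >> k) & 1 for k in range(stages)]

def rotate_left(word, s, n=32):
    for k, bit in enumerate(control_bits(s % n, n)):
        if bit:                          # stage k: conditional rotate by 2^k
            d = 1 << k
            word = ((word << d) | (word >> (n - d))) & ((1 << n) - 1)
    return word

assert rotate_left(0x80000001, 1) == 0x00000003
print([control_bits(s, 8) for s in (1, 5)])   # [[1, 0, 0], [1, 0, 1]]
```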
Sparse representation is a mathematical model for data representation that has proved to be a powerful tool for solving problems in various fields such as pattern recognition, machine learning, and computer vision. As one of the building blocks of the sparse representation method, dictionary learning plays an important role in minimizing the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although using training samples directly as dictionary bases can achieve good performance, the main drawback of this method is that it may result in a very large and inefficient dictionary because of noisy training instances. To obtain a smaller and more representative dictionary, we propose an approach called Laplacian sparse dictionary (LSD) learning. Our method is based on manifold learning and double sparsity. We incorporate the Laplacian weighted graph into the sparse representation model and impose l1-norm sparsity on the dictionary. An LSD is a sparse overcomplete dictionary that preserves the intrinsic structure of the data and yields a smaller dictionary for each class. The learned LSD can be easily integrated into a classification framework based on sparse representation. We compare the proposed method with other methods on three controlled benchmark face image databases (Extended Yale B, ORL, and AR) and one uncontrolled person image dataset (i-LIDS-MA). The results show the advantages of the proposed LSD algorithm over state-of-the-art sparse representation-based classification methods.
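Based only on the ingredients named above (reconstruction error, a Laplacian graph term preserving local structure, and l1 sparsity on both codes and dictionary), a plausible form of the LSD objective, in our own notation, is

$$\min_{D,\,A}\ \|X - DA\|_F^2 \;+\; \alpha\,\mathrm{tr}\!\left(A L A^{\top}\right) \;+\; \beta\,\|A\|_1 \;+\; \gamma\,\|D\|_1,$$

where X holds the training samples, D is the overcomplete dictionary, A the sparse codes, and L the Laplacian of the weighted sample graph; the paper's actual formulation may differ in details.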
A hybrid optimization approach combining a particle swarm algorithm, a genetic algorithm, and a heuristic interleaving algorithm is proposed for scheduling tasks in a multifunction phased array radar. By optimizing parameters using chaos theory, designing a dynamic inertia weight for the particle swarm algorithm, and introducing the crossover and mutation operations of the genetic algorithm, both the efficiency and the exploration ability of the hybrid algorithm are improved. Within this intelligent-algorithm framework, the heuristic interleaving scheduling algorithm is presented to further exploit the time resource of the task waiting durations. A large-scale simulation demonstrates that the proposed algorithm is more robust and efficient than existing algorithms.
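A hedged sketch of the hybrid core follows: PSO with chaotic (logistic-map) initialization and a dynamic inertia weight, combined with GA-style crossover and mutation applied to the swarm each step. All parameter choices here are our assumptions.

```python
import numpy as np

# Hedged sketch of the PSO-GA hybrid (parameters are assumptions): chaotic
# initialization via the logistic map, a linearly decreasing inertia weight,
# and GA crossover/mutation injected into each PSO iteration.

def hybrid_pso_ga(f, dim, n=40, iters=100, lb=0.0, ub=1.0):
    x = np.random.rand(n, dim)
    for _ in range(50):                           # logistic map x <- 4x(1-x)
        x = 4.0 * x * (1.0 - x)
    X = lb + (ub - lb) * x
    V = np.zeros((n, dim))
    P = X.copy()                                  # personal bests
    pf = np.apply_along_axis(f, 1, P)
    g = P[np.argmin(pf)].copy()                   # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # dynamic inertia weight
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        V = w * V + 2.0 * r1 * (P - X) + 2.0 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        # GA operators: arithmetic crossover of a random pair, then mutation
        i, j = np.random.randint(n, size=2)
        alpha = np.random.rand()
        X[i] = alpha * X[i] + (1 - alpha) * X[j]
        k = np.random.randint(n)
        X[k] = np.clip(X[k] + np.random.normal(0, 0.1 * (ub - lb), dim), lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        better = fit < pf
        P[better], pf[better] = X[better], fit[better]
        g = P[np.argmin(pf)].copy()
    return g, pf.min()
```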
Implicit feedback, which indirectly reflects opinions through user behaviors, has gained increasing attention in the recommender system community due to its accessibility and richness in real-world applications. A major way of exploiting implicit feedback is to treat the data as an indication of positive and negative preferences associated with vastly varying confidence levels. Such algorithms assume that the numerical value of implicit feedback, such as watching time, indicates confidence rather than degree of preference, with a larger value indicating higher confidence; however, this works only when just one type of implicit feedback is available. In real-world applications, there are usually various types of implicit feedback, which can be referred to as heterogeneous implicit feedback, and existing methods cannot efficiently infer confidence levels from it. In this paper, we propose a novel confidence estimation approach to infer the confidence level of user preference based on heterogeneous implicit feedback. We then apply the inferred confidence to both point-wise and pair-wise matrix factorization models, and propose a more generic strategy to select effective training samples for pair-wise methods. Experiments on real-world e-commerce datasets from Tmall.com show that our methods outperform state-of-the-art approaches on several commonly used ranking-oriented evaluation criteria.
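The point-wise side of the idea can be shown in miniature: a weighted matrix-factorization update in which each observation carries its inferred confidence. The confidence inference itself is the paper's contribution and is abstracted into an input here; the SGD form and hyperparameters are our assumptions.

```python
import numpy as np

# Miniature of the point-wise model (confidence inference abstracted away):
# weighted matrix factorization where each (user, item, preference) triple
# carries an inferred confidence c that scales its squared-error term.

def weighted_mf(triples, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20):
    P = 0.1 * np.random.randn(n_users, k)
    Q = 0.1 * np.random.randn(n_items, k)
    for _ in range(epochs):
        for u, i, pref, c in triples:        # c = inferred confidence level
            pu = P[u].copy()
            err = pref - pu @ Q[i]
            P[u] += lr * (c * err * Q[i] - reg * pu)
            Q[i] += lr * (c * err * pu - reg * Q[i])
    return P, Q

# (user, item, binary preference, confidence) from heterogeneous feedback
data = [(0, 0, 1.0, 2.5), (0, 1, 0.0, 0.3), (1, 1, 1.0, 1.2)]
P, Q = weighted_mf(data, n_users=2, n_items=2)
```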
The most challenging problem in mesh denoising is distinguishing features from noise. Based on robust guided normal estimation and an alternate vertex updating strategy, we investigate a new feature-preserving mesh denoising method. To accurately capture local structures around features, we propose a corner-aware neighborhood (CAN) scheme. By combining the overall normal distribution of all faces in a CAN with the individual normal influence of the face of interest, we give a new consistency measuring method, which greatly improves the reliability of the estimated guided normals. As the noise level decreases, we take the previously filtered normals as guidance, which coincides with the emerging rolling-guidance idea. In the vertex updating process, we classify vertices according to the filtered normals at each iteration and reposition vertices of distinct types alternately, with individual regularization constraints. Experiments on a variety of synthetic and real data indicate that our method adapts to various kinds of noise, both Gaussian and impulsive, whether in the normal direction or in random directions, with few flipped triangles.
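For context, guided filtering of face normals commonly takes the joint bilateral form below; in our notation (the paper's specifics, including how CAN-based guidance enters, may differ),

$$\tilde{\mathbf{n}}_i = \Lambda\!\left(\sum_{j \in \mathcal{N}(i)} A_j\, \exp\!\left(-\frac{\|\mathbf{c}_i-\mathbf{c}_j\|^2}{2\sigma_s^2}\right) \exp\!\left(-\frac{\|\mathbf{g}_i-\mathbf{g}_j\|^2}{2\sigma_r^2}\right) \mathbf{n}_j\right),$$

where A_j and c_j are the area and centroid of face j, n_j its current normal, g_j the guided normal (here, estimated from the corner-aware neighborhood), and Λ normalizes the result to unit length.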
A novel 3D interactive painting method for Chinese calligraphy and painting, based on force feedback technology, is proposed. The relationship between the force exerted on the brush and the resulting brush deformation is analyzed, and a spring-mass model is used to build a model of the 3D Chinese brush. The 2D brush footprint between the brush and the plane of the paper or object is calculated from the deformation of the 3D brush when force is exerted on it. The 3D brush footprint is then obtained by projecting the 2D brush footprint onto the surface of the 3D object in real time, and a complete 3D brushstroke is obtained by superimposing 3D brush footprints along the painting direction. The proposed method has been successfully applied in a virtual 3D interactive drawing system based on force feedback technology. In this system, users can paint 3D brushstrokes in real time with a Phantom Desktop haptic device, which effectively serves as a virtual reality interface to the simulated painting environment.
Network function virtualization (NFV) is a newly proposed technique designed to construct and manage network functions dynamically and efficiently. Allocating physical resources to the virtual network function forwarding graph is a critical issue in NFV. We formulate the forwarding graph embedding (FGE) problem as a binary integer programming problem that aims to increase the revenue and decrease the cost of a service provider (SP) while considering limited network resources and the requirements of the virtual functions. We then design a novel regional resource clustering metric to quantify the embedding potential of each substrate node and propose a topology-aware FGE algorithm called 'regional resource clustering FGE' (RRC-FGE). We implemented our algorithms in C++, and simulation results show that RRC-FGE increases the total revenue by more than 50 units and the acceptance ratio by more than 15%, while decreasing the SP's cost by more than 60 units.
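A simplified binary-integer form of the embedding problem, in our own notation (link-mapping constraints omitted for brevity, and not necessarily the paper's exact formulation), is

$$\max_{x}\ \ \mathrm{Rev}(x) - \mathrm{Cost}(x) \quad \text{s.t.} \quad \sum_i x_i^u = 1 \ \ \forall u, \qquad \sum_u r_u\, x_i^u \le C_i \ \ \forall i, \qquad x_i^u \in \{0,1\},$$

where $x_i^u = 1$ if virtual function u is mapped to substrate node i, $r_u$ is the resource demand of function u, and $C_i$ is the capacity of node i.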
Knowledge embedding is a joint-optimization problem that simultaneously fulfills two different but correlated embedding tasks, namely entity embedding and relation embedding; we solve it with a joint embedding scheme. In this scheme, we design a joint compatibility scoring function to quantitatively evaluate relational facts with respect to entities and relations, and further incorporate the scoring function into a max-margin structure learning process that explicitly learns the embedding vectors of entities and relations using the context information of the knowledge base. By optimizing the joint problem, our design effectively captures the intrinsic topological structures in the learned embedding spaces. Experimental results demonstrate the effectiveness of our embedding scheme in characterizing the semantic correlations among different relation units and in relation prediction for knowledge inference.
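Max-margin structure learning of this kind typically minimizes a ranking hinge loss over observed triples and corrupted negatives; in our notation, with s(h, r, t) the joint compatibility score (the paper's exact loss may differ),

$$\mathcal{L} = \sum_{(h,r,t) \in S} \ \sum_{(h',r,t') \in S'_{(h,r,t)}} \max\!\big(0,\ \gamma - s(h,r,t) + s(h',r,t')\big),$$

where $S'_{(h,r,t)}$ contains negative triples obtained by corrupting the head or tail entity and $\gamma > 0$ is the margin.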
The RGB2GRAY conversion model is the most popular and classical tool for image decolorization. A recent study showed that adapting the three weighting parameters in this first-order linear model with a discrete searching solver can greatly improve its conversion ability. In this paper, we present a two-step strategy to efficiently extend the parameter searching solver to a second-order multivariate polynomial model, expressed as a sum of three subspaces. We show that the first subspace in the second-order model is the most important and that the second one can be seen as a refinement. In the first stage of our model, the gradient correlation similarity (Gcs) measure is applied to the first subspace to obtain an intermediate grayscale image. Gcs is then applied again to select the optimal result from the intermediate grayscale image plus the candidate images induced by the second subspace. Experimental results show the advantages of the proposed approach in terms of quantitative evaluation, qualitative evaluation, and algorithm complexity.
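The first-stage discrete search can be shown in miniature: enumerate linear weights on a 0.1-step simplex and keep the candidate whose gradients correlate best with those of the color input. The `gcs` function below is our stand-in for the paper's gradient correlation similarity measure, not its actual definition.

```python
import numpy as np

# Miniature of the discrete weight search (gcs is a stand-in, not the paper's
# measure): try all (wr, wg, wb) with wr+wg+wb=1 on a 0.1-step grid and keep
# the grayscale whose gradient field best matches the color image's.

def grad_mag(img):
    gx, gy = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gcs(gray, rgb):
    """Stand-in gradient-correlation score: higher = gradients better kept."""
    g = grad_mag(gray)
    c = np.maximum.reduce([grad_mag(rgb[..., k]) for k in range(3)])
    gf, cf = g.ravel() - g.mean(), c.ravel() - c.mean()
    return float(gf @ cf / (np.linalg.norm(gf) * np.linalg.norm(cf) + 1e-12))

def decolorize(rgb):
    best, best_score = None, -np.inf
    for wr in np.arange(0.0, 1.01, 0.1):
        for wg in np.arange(0.0, 1.01 - wr, 0.1):
            wb = max(0.0, 1.0 - wr - wg)
            gray = wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]
            score = gcs(gray, rgb)
            if score > best_score:
                best, best_score = gray, score
    return best
```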
We present the design and analysis of a novel modified H-shaped split ring resonator (SRR) metamaterial. It exhibits negative permeability and permittivity with multi-band resonance in the X, Ku, and Ka frequency bands. Different configurations of the patch antenna have been analyzed with different orientations and positions of the metamaterial. Optimized performance was achieved with the new shape of the metamaterial antenna, with an appreciable 9 dB gain, 77 GHz bandwidth, 100% radiation efficiency, and a 65% reduction in active area. The second-order fractal metamaterial antenna achieves high miniaturization, on the order of 1/21. This is a boon for the communications world, where a sharp beam with smaller physical dimensions is urgently required.
Orthogonal frequency division multiplexing (OFDM) is a very promising technique leading the evolution of wireless mobile communication by sidelining the bandwidth scarcity issue in spectrum allocation. However, it is severely affected by the undesirable effects of frequency offset errors, which generate inter-carrier interference (ICI) due to Doppler shift and local oscillator frequency synchronization errors. Many ICI cancellation techniques are available in the literature, such as self-cancellation (SC), maximum likelihood estimation (MLE), and windowing, but they present a tradeoff between bandwidth redundancy and system complexity. In this study, a new energy-efficient, bandwidth-effective technique is proposed to mitigate ICI through cyclic prefix (CP) reuse at the receiver end. Unlike SC and MLE, where the whole OFDM symbol data is transmitted in duplicate to create redundancy at the transmitter end, the proposed technique uses the CP data (only 20% of the total symbol bandwidth) to estimate the channel, and it produces similar results with a huge bandwidth saving. The simulation results show that the proposed technique significantly improves error performance, and a comparative analysis demonstrates a substantial improvement in energy efficiency with high bandwidth gain. It therefore outperforms the legacy ICI cancellation schemes under consideration.
We propose an optimal signal design strategy to achieve the Pareto boundary (the boundary of the achievable rate region) with improper Gaussian signaling (IGS) on the Z-interference channel (Z-IC), under the assumption that interference is treated as additive Gaussian noise. Specifically, we show that the Pareto boundary comprises two different schemes, determined by two paths that manifest the characteristics of the improperly transmitted signals. For each scheme, we derive several concise closed-form expressions to calculate each user's optimal transmit power and the covariance and pseudo-covariance of the improperly transmitted signals. The effectiveness of the proposed optimal signal design strategy is supported by simulations, and the results clearly show the superiority of IGS. The proposed strategy also provides a simple way to achieve a required rate region, for which we derive a closed-form solution to quickly find the circularity coefficient that maximizes the sum rate. Finally, we provide an in-depth discussion of the structure of the Pareto boundary, characterized by the channel coefficient and by the degree of impropriety, measured by the covariance and pseudo-covariance of the signals transmitted by the two users.
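For reference, the degree of impropriety of a zero-mean complex signal x is commonly quantified by the circularity coefficient, defined from the variance and pseudo-variance:

$$\sigma_x^2 = \mathbb{E}\big[|x|^2\big], \qquad \tilde{\sigma}_x = \mathbb{E}\big[x^2\big], \qquad \kappa = \frac{|\tilde{\sigma}_x|}{\sigma_x^2} \in [0,1],$$

where κ = 0 corresponds to proper (circularly symmetric) Gaussian signaling and κ = 1 to maximally improper signaling; this is the standard definition of the quantity whose sum-rate-maximizing value the closed-form solution above identifies.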