This paper presents an approach to documenting the design of a network of components and verifying that its structure is complete and consistent (i.e., that the components, functioning together, will satisfy the requirements of the complete product) before the components are implemented. Our approach differs from others in that both hardware and software components are viewed as hardware-like devices in which an output value can change instantaneously when input values change, and all components operate synchronously rather than in sequence. We define what we mean by completeness and consistency and illustrate how the documents can be used to verify a design before it is implemented.
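As a rough illustration of the synchronous, hardware-like model just described (hypothetical code; the abstract gives no implementation), each component can be treated as a pure function of the current inputs, with all outputs recomputed together in lockstep:

```python
# Hypothetical sketch of the synchronous component model: every component
# recomputes its output from the same snapshot of the state in lockstep,
# so an output "changes instantaneously" when its inputs change.

def step(components, state):
    """Evaluate all components against one shared snapshot of the state."""
    snapshot = dict(state)
    return {name: fn(snapshot) for name, fn in components.items()}

# Example network: a sensor value fans out to a limiter and an alarm.
components = {
    "limited": lambda s: min(s["sensor"], 100),
    "alarm":   lambda s: s["sensor"] > 100,
}

state = {"sensor": 150, "limited": None, "alarm": None}
state.update(step(components, state))
print(state)  # {'sensor': 150, 'limited': 100, 'alarm': True}
```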
Document-driven requirements analysis, as proposed by Prof. David Parnas, has had some success in practice; it focuses on creating concise and complete formal requirements documents to serve as references for formal verification, software design, implementation, testing, inspection, and so on. However, at present a large number of requirements documents are still written in natural language. Generating formal requirements specifications from informal textual requirements descriptions has therefore become a significant challenge. In this paper, a concern-based approach to generating formal requirements specifications from textual requirements documents is proposed, which applies
Quotients and factors are important notions in the design of various computational procedures for regular languages and in the analysis of their logical properties. We propose a new representation of regular languages, by linear systems of language equations, which is suitable for the following computations: language reversal, left quotients and factors, right quotients and factors, and factor matrices. We present algorithms for computing all of these notions and indicate an application of the factor matrix to computing solutions of a particular language reconstruction problem. The advantage of these algorithms is that they all operate only on linear systems of language equations, whereas designing the same algorithms for other representations often requires translation between representations.
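As a hedged sketch of the representation (the paper's exact notation is not given here), a regular language can be taken as a designated component of the least solution of a linear system of language equations; reversal then corresponds to transposing the coefficients, mirroring edge reversal in the underlying automaton:

```latex
% Assumed form of a linear system of language equations over variables
% X_1, ..., X_n, with letter coefficients a_{ij} and constant terms
% \delta_i \in \{\emptyset, \{\varepsilon\}\}; the language denoted is
% the component X_1 of the least solution.
\[
X_i \;=\; \bigcup_{j=1}^{n} a_{ij}\, X_j \;\cup\; \delta_i,
\qquad i = 1, \dots, n.
\]
% Reversal sketch: transpose the coefficients (a_{ij} -> a_{ji}) and swap
% the roles of the designated component and the constant terms, just as
% reversing an NFA reverses its edges and swaps initial/final states.
```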
In this study, we propose a support vector machine (SVM)-based ensemble learning system for customer relationship management (CRM) to help enterprise managers manage customer risks effectively from a risk-aversion perspective. This system differs from classical CRM, which retains and targets profitable customers; the main focus of the proposed SVM-based ensemble learning system is to identify high-risk customers in CRM so as to avoid possible losses. To build an effective system, the effects of ensemble member diversity, ensemble member selection, and different ensemble strategies on the performance of the proposed SVM-based ensemble learning system are each investigated in a practical CRM case. Through experimental analysis, we find that the Bayesian-based SVM ensemble learning system with diverse components and
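A minimal sketch of an SVM ensemble with diverse members follows (it uses scikit-learn and synthetic data, neither of which the abstract mentions; the paper's member-selection and Bayesian combination strategies are not shown). Here diversity comes simply from varying the regularization strength:

```python
# Hypothetical sketch of an SVM ensemble for high-risk customer detection.
# Diversity comes from members with different regularization strengths;
# soft voting averages their predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, imbalanced "risk" labels standing in for real CRM data.
X, y = make_classification(n_samples=600, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Diverse members: same kernel, different C values.
members = [(f"svm{i}", SVC(C=c, gamma="scale", probability=True))
           for i, c in enumerate([0.1, 1.0, 10.0])]
ensemble = VotingClassifier(members, voting="soft").fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```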
Actuarial theory in a stochastic interest rate environment is an active research area in the life insurance business, and life insurance reserves are among the key topics in actuarial theory. In this study, an interest force accumulation function model combining a Gaussian process and a Poisson process is proposed as the basis for the reserve model. With the proposed model, the net premium reserve model, based on the semi-continuous variable-payment life insurance policy, is approximated. Based on this reserve model, the future loss variance model is proposed, and the risk caused by drawing on the reserve is analyzed and evaluated. Subsequently, assuming a uniform distribution of deaths (UDD), the reserve and future loss variance models are also provided. Finally, a numerical example is presented for illustration and verification. Using the numerical calculation, the relationships between the reserve, the future loss variance, and the model parameters are analyzed. The conclusions fit real life insurance practice well.
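One common way to write such an accumulation function (an assumed form; the paper's exact specification may differ) perturbs a constant force of interest $\delta$ with a standard Wiener process $W(t)$ and a Poisson process $N(t)$:

```latex
% Assumed form of the interest force accumulation function: constant
% force delta perturbed by a Wiener process W(t) (weight sigma) and a
% Poisson process N(t) (weight gamma).
\[
J(t) \;=\; \delta t + \sigma W(t) + \gamma N(t),
\]
% so a payment of 1 due at time t has random present value
\[
v(t) \;=\; e^{-J(t)}
      \;=\; \exp\!\bigl\{-\bigl(\delta t + \sigma W(t) + \gamma N(t)\bigr)\bigr\}.
\]
```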
This paper enlarges the scope of the fuzzy-payoff game to
With the enforcement of the removal system for distressed firms and of the new Bankruptcy Law in China’s securities market in June 2007, the development of a bankruptcy process for firms in China is expected to have a major impact. Identifying potential corporate distress and offering early warnings to investors, analysts, and regulators has therefore become important. There are distinct differences in accounting procedures and in the quality of financial documents between firms in China and those in the Western world, so it may not be practical to directly apply models or methodologies developed elsewhere to identify such potential distress situations. Moreover, localized models are commonly superior to ones imported from other environments.
Based on the
To make better use of mutual fund information for decision-making, we propose a coned-context data envelopment analysis (DEA) model with expected shortfall (ES) modeled under an asymmetric Laplace distribution to measure risk when evaluating the performance of mutual funds. Unlike traditional models, this model not only measures the attractiveness of mutual funds relative to the performance of other funds, but also takes the decision makers’ preferences and expert knowledge and judgment fully into consideration. The model avoids the unsatisfactory and impractical outcomes that sometimes occur with traditional measures, and it provides more management information for decision-making. Determining input and output variables is very important in DEA evaluation. Using statistical tests and theoretical analysis, we demonstrate that ES under an asymmetric Laplace distribution is reliable, and we therefore propose it as the main risk measure for mutual funds. At the same time, we consider a fund’s performance over different time horizons (e.g., one, three, and five years) in order to assess the persistence of fund performance. Using the coned-context DEA model with the ES value under an asymmetric Laplace distribution, we also present the results of an empirical study of mutual funds in China, which provides significant insights into mutual fund management. This analysis suggests that the coned-context measure will help investors select the best funds and fund managers and identify the funds with the most potential.
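For reference, the generic definition of expected shortfall used as the risk measure is the tail average of value-at-risk (the closed form under the asymmetric Laplace distribution is derived in the paper and not reproduced here):

```latex
% Generic definition of expected shortfall at confidence level alpha,
% as the average of VaR over the worst tail of the loss L.
\[
\mathrm{ES}_\alpha(L) \;=\; \frac{1}{\alpha} \int_0^{\alpha} \mathrm{VaR}_u(L)\, du,
\qquad
\mathrm{VaR}_u(L) \;=\; \inf\{\, \ell \in \mathbb{R} : P(L > \ell) \le u \,\}.
\]
```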
This paper examines the relevance of various financial and economic indicators in forecasting business cycle turning points using neural network (NN) models. A three-layer feed-forward neural network is used to forecast turning points in the business cycle of China. The NN model uses 13 indicators of economic activity as inputs and produces the probability of a recession as its output. The indicators are ranked in terms of their effectiveness in predicting recessions in China. Out-of-sample results show that some financial and economic indicators, such as steel output, M2, pig iron yield, and the freight volume of the whole society, are useful for predicting recessions in China using neural networks. The asymmetry of the business cycle can also be verified using our NN method.
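The architecture just described can be sketched in a few lines (hypothetical code with synthetic data; the paper's 13 Chinese indicators and training setup are not used here): 13 indicator inputs feed one hidden layer, and the output is a recession probability.

```python
# Hypothetical sketch of a three-layer feed-forward recession model:
# 13 indicator inputs -> one hidden layer -> probability of recession.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))                   # 200 months x 13 indicators
y = (X[:, 0] + 0.5 * X[:, 1] < -1).astype(int)   # synthetic recession flag

net = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X, y)
p_recession = net.predict_proba(X[-1:])[0, 1]    # P(recession) next period
print(f"P(recession) = {p_recession:.2f}")
```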
Graphical models have become the basic framework for topic-based probabilistic modeling. In particular, models with latent variables have proved effective in capturing hidden structure in data. In this paper, we survey an important subclass, directed probabilistic topic models (DPTMs), with soft clustering abilities, and their applications to knowledge discovery in text corpora. From an unsupervised learning perspective, “topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling”. In topic modeling, a document consists of different hidden topics, and the topic probabilities provide an explicit, semantically smoothed representation of the document. Topic modeling has been an active area of research during the last decade, and many models have been proposed for text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery, and temporal trend analysis. We present basic concepts, advantages, and disadvantages in chronological order; classify existing models into categories; and describe their parameter estimation and inference algorithms together with performance evaluation measures. We also discuss applications, open challenges, and future directions in this dynamic area of research.
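The shared modeling assumption behind DPTMs such as latent Dirichlet allocation can be stated in one line (standard notation, assumed here rather than taken from the survey): the probability of a word in a document mixes over K hidden topics, and the topic proportions serve as the document's smoothed representation.

```latex
% Standard topic-mixture decomposition (notation assumed): word w in
% document d is explained by K hidden topics z; the vector of topic
% probabilities p(z | d) is the document's low-dimensional representation.
\[
p(w \mid d) \;=\; \sum_{z=1}^{K} p(w \mid z)\; p(z \mid d).
\]
```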
Authenticated group key agreement (GKA) is an important cryptographic mechanism underlying many collaborative and distributed applications. Recently, identity (ID)-based authenticated GKA has been increasingly researched because of the authentication and simplicity of the ID-based cryptosystem. However, this kind of mechanism has two disadvantages: 1) private key escrow is inherent, and 2) the Private Key Generator (PKG) must send clients’ private keys over secure channels, making private key distribution difficult. These two disadvantages, particularly the secure channels, may be unacceptable for secure group communication applications. Fortunately, both can be avoided. In this paper, using bilinear maps on elliptic curves, we present a new authenticated group key agreement protocol that does not require secure channels. The basic idea is the usual way of circumventing escrow: double keys and double encryption (verification). The secret key of a user is generated collaboratively by a key generation center (KGC) and the user; each of them holds “half” of the secret information about the user’s secret key, so there is no secret key distribution problem. In addition, the computation cost of the protocol is very low because the main computation is binary addition.
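The double-key idea can be sketched as follows (an assumed, simplified form in the style of certificateless cryptography; the paper's concrete protocol messages are not reproduced): the KGC contributes an identity-bound half of the key and the user a self-chosen half, so neither party alone knows the full secret and no secure channel is needed.

```latex
% Assumed sketch of the double-key construction. P generates an
% elliptic-curve group G_1 with bilinear map e : G_1 x G_1 -> G_2,
% H_1 hashes identities into G_1, s is the KGC master key, and x_U is
% chosen by the user alone.
\[
Q_U = H_1(\mathrm{ID}_U), \qquad
\mathrm{sk}_U = \bigl(\, s\,Q_U,\; x_U \,\bigr), \qquad
\mathrm{pk}_U = x_U P .
\]
% The KGC knows only s Q_U, and the user alone knows x_U: key escrow is
% avoided, and x_U never has to be transmitted at all.
```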