EDITORIAL

Machine learning and intelligence science: Sino-foreign interchange workshop IScIDE2010 (A)

  • Lei XU 1
  • Yanda LI 2
  • 1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
  • 2. Department of Automation, Tsinghua University, Beijing 100084, China

Received date: 19 Jan 2011

Published date: 05 Mar 2011

Copyright

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg

Cite this article

Lei XU, Yanda LI. Machine learning and intelligence science: Sino-foreign interchange workshop IScIDE2010 (A)[J]. Frontiers of Electrical and Electronic Engineering, 2011, 6(1): 1-5. DOI: 10.1007/s11460-011-0136-0

1
He X, Niyogi P. Locality preserving projections. In: Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press, 2003, 152-160

2
He X, Cai D, Niyogi P. Tensor subspace analysis. In: Advances in Neural Information Processing Systems 18. 2005, 499-506

3
Kohonen T. Self-organized formation of topologically correct feature maps. Biological Cybernetics, 1982, 43(1): 59-69

4
Kohonen T. Self-Organizing Maps. 2nd ed. Berlin: Springer, 1997

5
Yin H. Data visualisation and manifold mapping using the ViSOM. Neural Networks, 2002, 15(8-9): 1005-1016

6
Yin H. On multidimensional scaling and the embedding of self-organising maps. Neural Networks, 2008, 21(2-3): 160-169

7
Oja E. Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1989, 1(1): 61-68

8
Oja E, Ogawa H, Wangviwattana J. Learning in nonlinear constrained Hebbian networks. In: Proceedings of ICANN’91. 1991, 385-390

9
Xu L. Least MSE reconstruction for self-organization. In: Proceedings of IJCNN91-Singapore. 1991, 3: 2363-2373

10
Hastie T, Stuetzle W. Principal curves. Journal of the American Statistical Association, 1989, 84(406): 502-516

11
LeBlanc M, Tibshirani R J. Adaptive principal surfaces. Journal of the American Statistical Association, 1994, 89(425): 53-64

12
Scholkopf B, Smola A, Muller K R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 1998, 10(5): 1299-1319

13
Ham J, Lee D D, Mika S, Scholkopf B. A kernel view of the dimensionality reduction of manifolds. In: Proceedings of the 21st International Conference on Machine Learning. 2004, 369-376

14
Xu L. Independent subspaces. In: Dopico J R R, Dorado J, Pazos A, eds. Encyclopedia of Artificial Intelligence. Hershey, PA: IGI Global, 2008, 903-912

15
Yang J, Zhang D, Frangi A F, Yang J Y. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 131-137

16
Xu L, Krzyzak A, Suen C Y. Several methods for combining multiple classifiers and their applications in handwritten character recognition. IEEE Transactions on Systems, Man, and Cybernetics, 1992, 22(3): 418-435

17
Kittler J, Hatef M, Duin R P W, Matas J. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(3): 226-239

18
Hansen L K, Salamon P. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(10): 993-1001

19
Xu L, Krzyzak A, Suen C Y. Associative switch for combining multiple classifiers. In: Proceedings of IJCNN91, Seattle, WA. 1991, (I): 43-48

20
Wolpert D H. Stacked generalization. Neural Networks, 1992, 5(2): 241-259

21
Baxt W G. Improving the accuracy of an artificial neural network using multiple differently trained networks. Neural Computation, 1992, 4(5): 772-780

22
Breiman L. Stacked regressions. Technical Report TR-367, Department of Statistics, University of California, Berkeley, 1992

23
Jacobs R A, Jordan M I, Nowlan S J, Hinton G E. Adaptive mixtures of local experts. Neural Computation, 1991, 3(1): 79-87

24
Jordan M I, Jacobs R A. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 1994, 6(2): 181-214

25
Jordan M I, Xu L. Convergence results for the EM approach to mixtures of experts architectures. Neural Networks, 1995, 8(9): 1409-1431

26
Xu L, Jordan M I, Hinton G E. An alternative model for mixtures of experts. In: Cowan, Tesauro, Alspector, eds. Advances in Neural Information Processing Systems 7. Cambridge, MA: MIT Press, 1995, 633-640

27
Xu L, Jordan M I, Hinton G E. A modified gating network for the mixtures of experts architecture. In: Proceedings of WCNN’94, San Diego, CA. 1994, (2): 405-410

28
Xu L, Jordan M I. EM learning on a generalized finite mixture model for combining multiple classifiers. In: Proceedings of WCNN’93. 1993, (IV): 227-230

29
Dempster A P, Laird N M, Rubin D B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 1977, 39(1): 1-38

30
Redner R A, Walker H F. Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review, 1984, 26(2): 195-239

31
Xu L, Amari S. Combining classifiers and learning mixture-of-experts. In: Dopico J R R, Dorado J, Pazos A, eds. Encyclopedia of Artificial Intelligence. Hershey, PA: IGI Global, 2009, 318-326

32
Lu B L, Ito M. Task decomposition and module combination based on class relations: a modular neural network for pattern classification. IEEE Transactions on Neural Networks, 1999, 10(5): 1244-1256

33
Miller D J, Uyar H S. A mixture of experts classifier with learning based on both labelled and unlabelled data. In: Mozer M, Jordan M I, Petsche T, eds. Advances in Neural Information Processing Systems 9. Cambridge: MIT Press, 1997, 571-577

34
Xu L. Bayesian Ying Yang system and theory as a unified statistical learning approach: (I) Unsupervised and semi-unsupervised learning. In: Amari S, Kasabov N, eds. Brain-like Computing and Intelligent Information Systems. Springer-Verlag, 1997, 241-274

35
Zhou Z H, Wu J, Tang W. Ensembling neural networks: many could be better than all. Artificial Intelligence, 2002, 137(1-2): 239-263

36
Zhou Z H, Li M. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11): 1529-1541

37
Stockham T G, Cannon T M, Ingebretsen R B. Blind deconvolution through digital signal processing. Proceedings of the IEEE, 1975, 63(4): 678-692

38
Kundur D, Hatzinakos D. Blind image deconvolution revisited. IEEE Signal Processing Magazine, 1996, 13(6): 61-63

39
Xu L, Yan P F, Chang T. Semi-blind deconvolution of finite length sequence: (I) linear problem & (II) nonlinear problem. Scientia Sinica, Series A, 1987, (12): 1318-1344

40
Xu L. Bayesian Ying-Yang system, best harmony learning, and five action circling. Frontiers of Electrical and Electronic Engineering in China, 2010, 5(3): 281-328

41
Caianiello E R. Some remarks on organization and structure. Biological Cybernetics, 1977, 26(3): 151-158
