Tri-party deep network representation learning using inductive matrix completion

Zhong-lin Ye , Hai-xing Zhao , Ke Zhang , Yu Zhu , Yu-zhi Xiao

Journal of Central South University ›› 2019, Vol. 26 ›› Issue (10): 2746-2758. DOI: 10.1007/s11771-019-4210-8


Abstract

Most existing network representation learning algorithms focus on network structure. However, structure is only one view of a network and cannot fully capture all of its characteristics. In fact, network vertices usually carry rich text information, which can be exploited to learn text-enhanced network representations. Meanwhile, the Matrix-Forest Index (MFI) has shown high effectiveness and stability in link prediction tasks compared with other link prediction algorithms. However, neither MFI nor Inductive Matrix Completion (IMC) has been well integrated into the algorithmic frameworks of typical representation learning methods. We therefore propose a novel semi-supervised algorithm, tri-party deep network representation learning using inductive matrix completion (TDNR). Built on inductive matrix completion, TDNR incorporates text features, the link certainty degrees of existing edges, and the future link probabilities of non-existing edges into network representations. Experimental results demonstrate that TDNR outperforms other baselines on three real-world datasets, and visualizations show that the proposed algorithm is more discriminative than unsupervised approaches.
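The Matrix-Forest Index the abstract builds on is the standard forest-based similarity S = (I + L)^{-1}, where L = D - A is the graph Laplacian; entry S[i, j] scores the certainty of a link between vertices i and j. A minimal NumPy sketch of that computation (the function name and toy graph are illustrative, not taken from the paper):

```python
import numpy as np

def matrix_forest_index(adj):
    """Matrix-Forest Index similarity S = (I + L)^{-1},
    where L = D - A is the graph Laplacian of adjacency A."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A            # Laplacian: degree matrix minus adjacency
    return np.linalg.inv(np.eye(len(A)) + L)  # invert I + L

# Toy triangle graph: every pair of vertices is linked.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
S = matrix_forest_index(A)
# S is symmetric; self-similarity S[i, i] = 0.5 exceeds the
# pairwise similarity S[i, j] = 0.25 of each connected pair.
```

For large sparse networks one would solve the linear system (I + L) S = I column by column rather than forming the dense inverse, but the small dense form above matches the definition directly.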

Keywords

network representation / network embedding / representation learning / matrix-forest index / inductive matrix completion

Cite this article

Zhong-lin Ye, Hai-xing Zhao, Ke Zhang, Yu Zhu, Yu-zhi Xiao. Tri-party deep network representation learning using inductive matrix completion. Journal of Central South University, 2019, 26(10): 2746-2758 DOI:10.1007/s11771-019-4210-8


