Modeling the Correlations of Relations for Knowledge Graph Embedding

Bibliographic Details
Published in: Journal of Computer Science and Technology, Vol. 33, No. 2, pp. 323-334
Main Authors: Zhu, Ji-Zhao; Jia, Yan-Tao; Xu, Jun; Qiao, Jian-Zhong; Cheng, Xue-Qi
Format: Journal Article
Language: English
Published: New York: Springer, 01.03.2018
Affiliations: College of Computer Science and Engineering, Northeastern University, Shenyang 110169, China; Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
ISSN: 1000-9000; 1860-4749
DOI: 10.1007/s11390-018-1821-8

Summary: Knowledge graph embedding, which maps entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All of these methods map different relations into the vector space separately, ignoring the intrinsic correlations among relations. Such correlations clearly exist, because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBirth, California) and (Apple Inc., Location, California) share the same entity, California, as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR and find that the correlations of relations do exist: they manifest as a low-rank structure of the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it into a product of two low-dimensional matrices, thereby characterizing the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations within a translation-based framework. Experimental results on the benchmark datasets of WordNet and Freebase demonstrate that our method outperforms the typical baselines on link prediction and triple classification tasks.
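
The core idea described in the summary, factoring the relation-embedding matrix into a product of two low-dimensional matrices inside a translation-based scorer, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the sizes, the L1 norm, the initialization scale, and the helper names score and margin_loss are assumptions here, chosen as standard translation-based conventions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative sizes (not from the paper): d is the embedding dimension,
# k < min(n_rel, d) is the rank of the relation-matrix decomposition.
n_ent, n_rel, d, k = 1000, 50, 100, 10

E = rng.normal(scale=0.1, size=(n_ent, d))  # entity embeddings
A = rng.normal(scale=0.1, size=(n_rel, k))  # per-relation coefficients
B = rng.normal(scale=0.1, size=(k, d))      # shared low-rank basis

# The full relation matrix R = A @ B has rank at most k; this is how the
# correlations among relations are encoded: every relation vector is a
# linear combination of the same k basis rows of B.
R = A @ B

def score(h, r, t):
    """Translation-based plausibility score ||h + r - t||_1 (lower is
    better), TransE-style, with the relation vector taken from the
    low-rank product R = A @ B."""
    return np.linalg.norm(E[h] + R[r] - E[t], ord=1)

def margin_loss(pos, neg, margin=1.0):
    """Margin-based ranking loss over one (positive, corrupted) triple
    pair, the usual objective for translation-based models."""
    return max(0.0, margin + score(*pos) - score(*neg))
```

During training, A and B would be updated jointly with E by gradient descent, so the rank bound k constrains which relation vectors can be learned and thereby couples the relations to one another.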