
Journal of Ambient Intelligence and Humanized Computing (2023) 14:12609-12616
https://doi.org/10.1007/s12652-022-04319-5

ORIGINAL RESEARCH

Cross-lingual knowledge graph entity alignment by aggregating extensive structures and specific semantics

Beibei Zhu1 · Tie Bao1 · Jiayu Han2 · Ridong Han1 · Lu Liu3 · Tao Peng1

Received: 17 September 2021 / Accepted: 6 July 2022 / Published online: 18 July 2022
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022

Abstract
Entity alignment aims to link entities from different knowledge graphs (KGs) that refer to the same real-world identity. Recently, embedding-based approaches that primarily center on topological structures have received close attention in this field. Although they achieve promising performance, these approaches overlook the vital impact of entity-specific semantics on entity alignment tasks. In this paper, we propose a new framework, SSEA (Extensive Structures and Specific Semantics for Entity Alignment), which jointly employs extensive structures and specific semantics to boost the performance of entity alignment. Specifically, we employ graph convolution networks (GCNs) to learn representations of entity structures. Besides entity representations, we also explore relation semantics by approximating relation embeddings from head-entity and tail-entity representations. Moreover, attribute semantics are also learned by GCNs, independently of the joint entity and relation embeddings. The structure, relation, and attribute representations are concatenated for better entity alignment. Experimental results on three benchmark datasets from real-world KGs demonstrate that our approach achieves promising performance in most cases. Notably, SSEA achieves 91.78 and 97.20 for the metrics Hits@1 and Hits@10, respectively, on the DBP15K_FR-EN dataset.

Keywords Entity alignment · Knowledge graph · Embedding · Graph convolution networks

Tao Peng (corresponding author): tpeng@jlu.edu.cn
Beibei Zhu: zhubb20@mails.jlu.edu.cn · Tie Bao: baotie@jlu.edu.cn · Jiayu Han: jyhan126@uw.edu · Ridong Han: hanrd20@mails.jlu.edu.cn · Lu Liu: liulu@jlu.edu.cn

1 College of Computer Science and Technology, Jilin University, Qianjin Street, Changchun 130012, Jilin, China
2 Department of Linguistics, University of Washington, Seattle, WA 98195-3770, USA
3 College of Software, Jilin University, Qianjin Street, Changchun 130012, Jilin, China

1 Introduction

KGs such as DBpedia (Bizer et al. 2009) are often heterogeneous, and they have different representations of the same identity. To cope with this challenge, entity alignment is proposed; its task is to link entities that refer to the same real-world object across different KGs. Entity alignment improves the integrity of a knowledge graph by aligning entities and merging disparate knowledge graphs into one unified graph. It effectively alleviates the heterogeneity and redundancy problems caused by different data sources or design patterns, and it improves the performance of knowledge-driven applications such as machine reading, thus contributing to technological progress and social development.

Recently, embedding-based methods (Wu et al. 2019b; Xu et al. 2020; Wu et al. 2019a) have been widely applied to align entities: they embed KGs into a low-dimensional vector space and compute the distances between entity vectors for alignment. TransE (Bordes et al. 2013) is a representative embedding-based model for entity alignment; GCNs (Kipf and Welling 2017) are also widely used because of their ability to aggregate neighbour information.


Most of the recent methods put more emphasis on extensive structure embeddings captured by multi-layer GCNs. Even though the semantics of multi-hop neighbours can be propagated to the central entity through training, these extensive structure-based methods cannot capture relation semantics, which harms the robustness of entity alignment. In addition, when the central entities in two knowledge graphs have no aligned neighbours, embedding-based entity alignment methods that exploit only entity or relation embeddings have little evidence supporting equivalence between the two entities, even though the two central entities may have similar attribute semantics, which can optimize the representation of entities and facilitate entity alignment.

Motivated by the above observations, we propose a supervised framework, SSEA, for cross-lingual entity alignment. Topological structure and entity-specific semantic information are combined in our framework to align entities from a global perspective. Many studies have shown that using the same framework to learn entity structures and semantics can improve the performance of a model; therefore, we uniformly employ GCNs (Wang et al. 2018) to learn all information embeddings in this paper. Specifically, the key contributions of our approach are as follows:

1. Extensive topological structures and specific relation and attribute semantics are jointly used to obtain more expressive entity representations.
2. Our approach requires only pre-aligned entity pairs for training, rather than pre-aligned relation and attribute pairs, which reduces training overhead.
3. Our model outperforms existing models on three cross-lingual datasets. Furthermore, the results show that the various modules of our approach contribute positively to the model performance.

2 Related work

2.1 Graph convolution networks

GCNs were proposed to solve the problem of spatial feature extraction from graph-structured data and are based on spectral-domain theory (Kearnes et al. 2016). GCNs define a convolution on the graph, use the Laplacian matrix to extract graph features, and realize vector encodings of the graph network. GCNs adopt end-to-end learning to improve efficiency, and they are a widely used deep learning method for many natural language processing tasks.

2.2 KG embedding

Current KG embedding approaches fall into three main categories: translation models, which view relations as translations from head entities to tail entities, e.g. TransE (Bordes et al. 2013); semantic matching models, which are based on similarity scoring functions and measure the plausibility of facts by matching the underlying semantics of entities and relations in the vector space representation, e.g. ComplEx (Trouillon et al. 2016); and neural network models, which apply deep learning techniques to learn the representations, e.g. ConvE (Dettmers et al. 2018).

2.3 Translation-based entity alignment

Initial embedding-based methods such as JE (Hao et al. 2016), MTransE (Chen et al. 2017) and JAPE (Sun et al. 2017) rely on the Trans-family (e.g. TransE) to embed the KG structure in a vector space and then calculate the distances between vectors. Subsequent approaches, such as BootEA (Sun et al. 2018), expand the size of the seed set by iteration, which improves performance but considerably increases time complexity and training resources.

2.4 GNNs-based entity alignment

Given that the similarity of neighbour structures has an impact on entity alignment, GNNs-based methods have also been proposed to align entities. GCN-EA (Wang et al. 2018) is among the first attempts in this direction. Since then, more and more researchers have used graph neural networks to align entities, e.g. HGCN (Wu et al. 2019b), GM (Xu et al. 2020) and RDGCN (Wu et al. 2019a). Compared with translation-based embedding methods, GNNs-based embedding methods can collect the neighbour information of an entity and have a more robust feature-capture capability. Therefore, in this paper we leverage GCNs to encode the information of KGs.

3 Our approach

3.1 Overview

Extensive structures refer to the topological links of KGs, and specific semantics include relation semantics and attribute semantics in this paper. Figure 1 presents the composition of the SSEA framework.

Fig. 1 The framework of SSEA, which contains three parts: (1) Entity and Relation Embedding (ERE) module, (2) Attribute Embedding (AE) module, and (3) Entity Alignment (EA) module

In the first part, given two cross-lingual KGs and the seed set, the ERE module uses GCNs to embed the entities of the two heterogeneous KGs into a unified low-dimensional vector space to obtain the initial representations of the entities' extensive structures. It uses entity representations to approximate relation embeddings for relation semantics, and then computes the joint representations of the entities based on the entity embeddings and relation embeddings. In the second part, the AE module learns attribute semantic information with GCNs. We then combine entity embeddings, relation embeddings, and attribute embeddings to generate the embedding-based similarity matrix. In the third part, for an entity h12 to be aligned, the framework calculates the embedding-based similarities between it and all candidate entities; the entity most similar to h12, namely h22, is output as its counterpart. We assume one-to-one alignments between testing source entities and testing target entities in this paper.

3.2 Joint entity and relation embedding module

3.2.1 Entity embedding

Figure 2 illustrates the forward propagation process of the GCN used in our paper: a number of hidden layers are inserted between the input and output layers, and the output of each layer is used as the input to the next. Specifically, let H_s^{(l)} denote the entity node representations in the l-th GCN layer; the hidden-state update is calculated as

H_s^{(l+1)} = \mathrm{ReLU}\left(L H_s^{(l)} W_s^{(l)}\right),   (1)

where L is the normalized adjacency matrix of the graph. To realize symmetric normalization, L is set to \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2}, where \hat{A} = A + I is the adjacency matrix A plus the identity matrix I (each entity adds a self-loop so that the characteristics of the node itself are considered), and \hat{D} is the diagonal degree matrix of \hat{A}. W_s^{(l)} \in \mathbb{R}^{d^{(l)} \times d^{(l+1)}} is the weight matrix of the l-th layer, where d^{(l+1)} is the number of features in the (l+1)-th layer. ReLU is the activation function.

The deeper the GCN, the more difficult the network training. To cope with this challenge, we use layer-wise highway gates as in RDGCN, which let more information pass directly from the input without a nonlinear transformation. Specifically, two gating layers are added: one is the transform gate and the other is the carry gate.
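To make Eq. (1) and the highway gates concrete: the paper does not release code, so the following is a minimal PyTorch sketch under our own naming (`normalize_adj`, `HighwayGCNLayer` are illustrative, not the authors' implementation). It assumes a dense adjacency matrix and equal input/output widths, so the carry term needs no extra projection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    """Build L = D^{-1/2} (A + I) D^{-1/2} from a dense adjacency matrix."""
    A_hat = A + torch.eye(A.size(0))          # add self-loops (A + I)
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)   # degrees of the self-looped graph
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]


class HighwayGCNLayer(nn.Module):
    """One GCN layer (Eq. 1) followed by a layer-wise highway gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)  # W_s^{(l)}
        self.gate = nn.Linear(dim, dim)                # transform gate T

    def forward(self, L: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        out = F.relu(L @ self.weight(h))               # Eq. (1)
        t = torch.sigmoid(self.gate(h))                # transform gate
        return t * out + (1.0 - t) * h                 # carry gate = 1 - T
```

Stacking two such layers with dim = 300 would match the setting reported in Sect. 4.1.2.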

Fig. 2 The block diagram of GCN, which consists of an input layer, an output layer and many hidden layers. ReLU is the activation function


3.2.2 Relation embedding

GCNs are able to capture the structural information of the knowledge graph to obtain a representation of each entity; however, they cannot directly obtain relation embeddings. For a relation r, there are many head entities and tail entities in the knowledge graph, and these head and tail entities can provide its semantics. To compute the relation embedding, and to further optimize the entity representation by combining it with the original entity embedding, we approximate the representation of a relation by the average embedding of the head and tail entities it connects, following HGCN. The head and tail entity representations of each relation can be learned from the GCNs to assist entity alignment. The vector representation of relation r is described as

r' = \mathrm{concat}\left(\frac{\sum_{m \in H_h} h_m}{\mathrm{card}(H_h)}, \frac{\sum_{n \in H_t} h_n}{\mathrm{card}(H_t)}\right),   (2)

where concat(·) concatenates vectors and card(·) is the number of elements in a set. H_h and H_t represent the sets of head and tail entities connected by the relation r, respectively, and h_m and h_n represent the vector representations of the head and tail entities, respectively.

Then, a transformation matrix W_r is applied to incorporate the relation semantics contained in relation triples:

r = \mathrm{matmul}(r', W_r),   (3)

where matmul(·) is matrix multiplication and W_r \in \mathbb{R}^{2d_e \times d_r}, with d_e and d_r the numbers of features of the entity representation and the relation representation, respectively.

3.2.3 Joint entity and relation embedding

To make full use of relation semantics, the information of the relations correlated with an entity e is aggregated first. The entity e can be either the head entity or the tail entity of a relation triple; therefore, when calculating the relevant relation representations of the entity, SSEA considers both the head and the tail perspectives. The relevant relation representations from the head-entity and tail-entity perspectives are denoted H_r^L and H_r^R, respectively. They can be computed by matrix multiplication of the relation representations r with the corresponding head or tail entities.

The related representations are first added together to aggregate the information of the relations correlated with the entity e. Next, the related relation representations are fused with the entity embedding h_e^s by concatenation:

h_e^{new} = \mathrm{concat}\left(h_e^s, \mathrm{matadd}(H_r^L, H_r^R)\right),   (4)

where concat(·) is concatenation and matadd(·) is matrix addition.

3.3 Attribute embedding

Attribute embeddings are also learned by GCNs. Entity embeddings and attribute embeddings are trained separately, so we set two different feature vectors for structure and attributes, respectively. Let H_a^{(l)} denote the attribute representations in the l-th layer; the convolution is computed in the same way as for entity embedding:

H_a^{(l+1)} = \mathrm{ReLU}\left(L H_a^{(l)} W_a^{(l)}\right),   (5)

where L is the combined Laplacian and W_a^{(l)} \in \mathbb{R}^{d^{(l)} \times d^{(l+1)}} is the layer-specific trainable weight matrix of the l-th layer, with d^{(l+1)} the number of features in the (l+1)-th layer.

We obtain the final entity representation h_e^{final} by combining the joint entity representation h_e^{new} described in the last section with the attribute representation h_a:

h_e^{final} = \mathrm{concat}\left(\beta h_e^{new}, (1-\beta) h_a\right),   (6)

where concat(·) is concatenation and β is an adjustable parameter that balances the importance of the joint embeddings and the attribute embeddings.

3.4 Entity alignment

The smaller the distance between two node vectors, the greater the probability that the two entities should be aligned. We employ the Manhattan distance between entities:

\mathrm{dis}(e_i, e_j) = \left\| h_{e_i} - h_{e_j} \right\|_1,   (7)

where h_{e_i} and h_{e_j} denote the vector representations of e_i and e_j, respectively.

A margin-based scoring function is used as our loss function:

\mathrm{Loss} = \sum_{(p,q) \in L} \sum_{(p',q') \in L'} \left[\mathrm{dis}(p,q) - \mathrm{dis}(p',q') + \gamma\right]_+,   (8)

where [x]_+ = max{x, 0}; γ > 0 is a margin hyper-parameter on the distance between positive and negative samples; and L and L' represent the sets of positive and negative pairs, respectively. Specifically, for an entity e to be aligned, the model computes the K nearest entities of e to limit the range of negative sampling.
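As a summary of Eqs. (2)-(6), here is a hedged sketch of how the three views could be composed. The realization of H_r^L and H_r^R as products of 0/1 incidence matrices with the relation embeddings is our reading of Sect. 3.2.3, and all names (`relation_embeddings`, `fuse_entity_views`, `M_head`, `M_tail`) are ours.

```python
import torch


def relation_embeddings(h_ent: torch.Tensor, heads: list, tails: list,
                        W_r: torch.Tensor) -> torch.Tensor:
    """Eqs. (2)-(3): approximate relation vectors from entity embeddings.

    h_ent:       (num_entities, d_e) GCN entity embeddings.
    heads/tails: heads[k] and tails[k] list indices of the head/tail
                 entities observed with relation k (the sets H_h and H_t).
    W_r:         (2 * d_e, d_r) transformation matrix.
    """
    rows = []
    for h_idx, t_idx in zip(heads, tails):
        head_avg = h_ent[h_idx].mean(dim=0)            # sum / card(H_h)
        tail_avg = h_ent[t_idx].mean(dim=0)            # sum / card(H_t)
        rows.append(torch.cat([head_avg, tail_avg]))   # r' (Eq. 2)
    return torch.stack(rows) @ W_r                     # r = matmul(r', W_r)


def fuse_entity_views(h_s, M_head, M_tail, r, h_a, beta=0.9):
    """Eqs. (4) and (6): joint entity-relation embedding plus attributes.

    M_head / M_tail: (num_entities, num_relations) incidence matrices whose
    (i, k) entry is 1 if entity i appears as head / tail of relation k;
    an assumed concrete form of H_r^L and H_r^R as matrix products.
    """
    HL_r = M_head @ r                                  # head-side relation view
    HR_r = M_tail @ r                                  # tail-side relation view
    h_new = torch.cat([h_s, HL_r + HR_r], dim=1)       # Eq. (4)
    return torch.cat([beta * h_new, (1 - beta) * h_a], dim=1)  # Eq. (6)
```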

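The alignment objective of Sect. 3.4 translates almost directly into code. Below is a small sketch of Eqs. (7)-(8); the function names are ours, and the choice of negative pairs (K nearest neighbours) is assumed to happen outside this function, as the paper describes.

```python
import torch


def manhattan(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Eq. (7): row-wise L1 (Manhattan) distance."""
    return (x - y).abs().sum(dim=-1)


def alignment_loss(h: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor,
                   gamma: float = 3.0) -> torch.Tensor:
    """Eq. (8): margin-based loss over positive and negative pairs.

    pos: (P, 2) seed alignment index pairs (the set L).
    neg: (N, 2) corrupted pairs (the set L'), drawn in practice from the
         K nearest entities of each entity to limit negative sampling.
    """
    d_pos = manhattan(h[pos[:, 0]], h[pos[:, 1]])      # dis(p, q)
    d_neg = manhattan(h[neg[:, 0]], h[neg[:, 1]])      # dis(p', q')
    # [x]_+ = max(x, 0), summed over every (positive, negative) combination
    return torch.clamp(d_pos[:, None] - d_neg[None, :] + gamma, min=0).sum()
```

The double broadcast mirrors the double sum in Eq. (8); gamma = 3 matches the setting given in Sect. 4.1.2.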

4 Experiments

4.1 Experimental settings

4.1.1 Datasets

By convention, we conduct experiments on DBP15K (Sun et al. 2017), which includes DBP15K_ZH-EN (Chinese-English), DBP15K_JA-EN (Japanese-English), and DBP15K_FR-EN (French-English).

4.1.2 System settings

We employ two-layer GCNs. We set the dimensions of the GCN hidden layers for the joint entity-relation embeddings and the attribute embeddings to 300 and 100, respectively. The number of negative samples K is 250, and γ is 3. The numbers of epochs for joint training and attribute training are 1000 and 2000, respectively, and β is 0.9. We use 30% of the equivalent entity pairs as training seeds.

4.2 Overall results

Table 1 reports the performance of all compared models; the results are percentages (%), and bold numbers mark the best results. We use Hits@k (k = 1, 10) as evaluation metrics: Hits@k is the proportion of correct alignments ranked in the top k, so higher Hits@k indicates better performance. The baseline approaches can be roughly divided into two categories: TransE-based models and GNNs-based models.

Table 1 Performance on entity alignment (results in %; best results per column in the original are set in bold)

Methods                        DBP15K_ZH-EN        DBP15K_JA-EN        DBP15K_FR-EN
                               Hits@1   Hits@10    Hits@1   Hits@10    Hits@1   Hits@10
JE (Hao et al. 2016)           21.27    42.77      18.92    39.97      15.38    38.84
JAPE (Sun et al. 2017)         41.18    74.46      36.25    68.50      32.39    66.68
BootEA (Sun et al. 2018)       62.94    84.75      62.23    85.39      65.30    87.44
MTransE (Chen et al. 2017)     30.83    61.41      27.86    57.45      24.41    55.55
DAEA (Sun et al. 2020)         56.76    88.30      57.59    89.23      58.04    91.16
AlignE (Sun et al. 2018)       47.18    79.19      44.76    78.89      48.12    82.43
RTEA (Jiang et al. 2019)       57.30    86.44      53.39    85.73      53.84    86.78
JTMEA (Lu et al. 2021)         42.18    75.85      38.73    72.42      37.08    75.62
JETEA (Song et al. 2021)       42.69    75.02      36.44    72.42      36.46    71.80
NAEA (Zhu et al. 2019)         65.01    86.73      64.14    87.27      67.32    89.43
GCN-EA (Wang et al. 2018)      41.25    74.38      39.91    74.46      37.29    74.49
GM (Xu et al. 2020)            67.93    78.48      73.97    87.15      89.38    95.24
HMEA (Guo et al. 2021)         54.04    87.88      53.06    87.47      48.40    86.49
GTEA (Jiang et al. 2021)       59.58    76.48      66.23    79.05      60.34    75.59
Inga (Pang et al. 2019)        50.45    79.42      51.46    79.46      50.45    79.42
RDGCN (Wu et al. 2019a)        70.75    84.55      76.74    89.54      88.64    95.72
HGCN (Wu et al. 2019b)         72.03    85.70      76.62    89.73      89.16    96.11
SSEA w/o HG                    69.21    83.32      69.46    82.13      78.57    87.62
SSEA w/o RE                    76.67    85.47      80.70    90.60      90.31    95.76
SSEA w/o AE                    71.10    84.01      77.17    89.50      89.87    94.31
SSEA w/o RE and AE             70.58    83.69      75.94    88.72      88.60    93.74
SSEA                           79.34    89.86      82.97    92.96      91.78    97.20

Among all TransE-based models, BootEA performs best. BootEA is a representative model that uses an iterative strategy to expand the size of the seed set, but its performance is still far from that of SSEA. This confirms that an iterative strategy helps performance in the entity alignment task but has its limits; the key lies in making better use of an entity's extensive structural information and semantic information.

HGCN performs best among all baselines; its improvements may come from the way it refines entity embeddings. Our proposed model SSEA further outperforms HGCN on all three datasets, which confirms that extensive structures combined with specific semantics can effectively improve the performance of entity alignment.

SSEA w/o HG refers to SSEA without highway gates. The experimental results suggest that highway networks are important for limiting noise propagation. SSEA w/o RE refers to SSEA without relation semantics; the results convey that relation semantics are necessary for better entity alignment. The ablation model SSEA w/o AE refers to SSEA without attribute semantics; the results indicate that although attribute values are external semantic information of an entity, the semantic information they contain cannot be ignored. When both relation and attribute semantics are removed, performance declines markedly. We built the ablation model SSEA w/o RE and AE to verify this: its results suggest that the positive influence of entity topology information alone is not enough, and that relation and attribute embeddings are effective.
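For reference, the Hits@k metric used throughout Table 1 can be computed as below; a small sketch under the one-to-one alignment assumption from Sect. 3.1 (we additionally assume ground-truth counterparts share the same row index, and the function name is ours).

```python
import torch


def hits_at_k(src: torch.Tensor, tgt: torch.Tensor, ks=(1, 10)) -> dict:
    """Hits@k: share of source entities whose true counterpart is ranked
    in the top k candidates by L1 distance (smaller = more similar)."""
    dist = torch.cdist(src, tgt, p=1)              # (n, n) pairwise L1
    ranks = dist.argsort(dim=1)                    # candidates, best first
    truth = torch.arange(src.size(0)).unsqueeze(1)
    return {k: (ranks[:, :k] == truth).any(dim=1).float().mean().item() * 100
            for k in ks}                           # in %, as in Table 1
```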

Fig. 5  The effect of GCN layers on Hits@1


4.3 Discussion

Figures 3 and 6 outline the Hits@1 and Hits@10 of SSEA


on three datasets when we evaluate our model by shifting
the ratio of seed sets from 10 to 40% with the step of 10%.
The horizontal axis represents the proportion of seeds and
the vertical axis represents the Hits@1 and Hits@10 met-
rics of the model respectively. As the proportion of seed
sets increases, the performance of SSEA on three datasets
gradually increases, which suggests that the amount of
training data can have a large impact on entity alignment
performance.
Knowledge graph topology embedding is the most funda-
mental part of entity alignment, while attribute information
Fig. 6  The effect of different seed proportions on Hits@10
is external information and is a further strategy to optimize
entity embedding. Figures 4 and 7 show the more weight
𝛽 is applied to the joint entity and relation embeddings, From Figures 5 and 8, we can find that the performance
the higher the value of Hits@1 and Hits@10 of the model. of our model varies depending on the number of GCN lay-
The horizontal axis represents the balance parameters of ers. The horizontal axis indicates the number of layers of
the internal and external information and the vertical axis GCNs and the vertical axis indicates the performance of the
represents the performance of the model. This shows the model. The performance of the model does not continue to
importance of our model in terms of obtaining joint entity grow as the number of layers of GCNs increases. When the
and relation representations to align entities. number of layers of GCNs is 3, the Hits@1 and Hits@10

5 Conclusion

In this paper, a new framework, SSEA, for cross-lingual KG entity alignment is proposed, which combines extensive structures and specific semantics to improve entity alignment. We first employ GCNs to capture global topological structure representations of the KGs. Then we approximate relation representations from the embeddings of head and tail entities. To make full use of attribute semantics to boost alignment performance, we also use GCNs to obtain attribute representations in a simple but effective way. Entity embeddings and relation embeddings are trained together, while attribute embeddings are trained independently of the joint entity and relation embedding. The structure, relation, and attribute representations are concatenated, and the final entity representations after concatenation are robust and accurate. We need only pre-aligned entity pairs as training data. The experiments on three real-world datasets and the accompanying discussion demonstrate that our method outperforms existing approaches.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s12652-022-04319-5.

Funding The project is sponsored by the National Natural Science Foundation of China (61872163, 61806084) and the Jilin Provincial Education Department Project (JJKH20190160KJ).

Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

References

Bizer C, Lehmann J, Kobilarov G et al (2009) DBpedia - a crystallization point for the web of data. J Web Semant 7:154-165. https://doi.org/10.1016/j.websem.2009.07.002
Bordes A, Usunier N, García-Durán A et al (December 2013) Translating embeddings for modeling multi-relational data. Paper presented at the 27th Advances in Neural Information Processing Systems, Lake Tahoe, Nevada, United States, 5-8
Chen M, Tian Y, Yang M et al (August 2017) Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. Paper presented at the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19-25
Dettmers T, Minervini P, Stenetorp P et al (February 2018) Convolutional 2D knowledge graph embeddings. Paper presented at the 32nd Association for the Advance of Artificial Intelligence, New Orleans, Louisiana, USA, 2-7
Guo H, Tang J, Zeng W et al (2021) Multi-modal entity alignment in hyperbolic space. Neurocomputing 461:598-607. https://doi.org/10.1016/j.neucom.2021.03.132
Hao Y, Zhang Y, He S et al (September 2016) A joint embedding method for entity alignment of knowledge bases. Paper presented at the 1st Knowledge Graph and Semantic Computing, Beijing, China, 19-22
Jiang S, Nie T, Shen D et al (September 2021) Entity alignment of knowledge graph by joint graph attention and translation representation. Paper presented at the 18th International Conference, Kaifeng, China, 24-26
Jiang T, Bu C, Zhu Y et al (August 2019) Two-stage entity alignment: combining hybrid knowledge graph embedding with similarity-based relation alignment. Paper presented at the 16th Pacific Rim International Conference on Artificial Intelligence, Cuvu, Yanuca Island, Fiji, 26-30
Kearnes SM, McCloskey K, Berndl M et al (2016) Molecular graph convolutions: moving beyond fingerprints. J Comput Aided Mol Des 30:595-608
Kipf TN, Welling M (April 2017) Semi-supervised classification with graph convolutional networks. Paper presented at the 5th International Conference on Learning Representations, Toulon, France, 24-26
Lu G, Zhang L, Jin M et al (2021) Entity alignment via knowledge embedding and type matching constraints for knowledge graph inference. J Ambient Intell Humaniz Comput 4:1-11
Pang N, Zeng W, Tang J et al (June 2019) Iterative entity alignment with improved neural attribute embedding. Paper presented at the 16th Extended Semantic Web Conference, Portoroz, Slovenia, 2
Song X, Zhang H, Bai L (August 2021) Entity alignment between knowledge graphs using entity type matching. Paper presented at the 14th Knowledge Science, Engineering and Management, Tokyo, Japan, 14-16
Sun J, Zhou Y, Zong C (December 2020) Dual attention network for cross-lingual entity alignment. Paper presented at the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), 8-13
Sun Z, Hu W, Li C (October 2017) Cross-lingual entity alignment via joint attribute-preserving embedding. Paper presented at the 16th International Semantic Web Conference, Vienna, Austria, 21-25
Sun Z, Hu W, Zhang Q et al (July 2018) Bootstrapping entity alignment with knowledge graph embedding. Paper presented at the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13-19
Trouillon T, Welbl J, Riedel S et al (June 2016) Complex embeddings for simple link prediction. Paper presented at the 33rd International Conference on Machine Learning, ICML 2016, New York City, USA, 19-24
Wang Z, Lv Q, Lan X et al (2018) Cross-lingual knowledge graph alignment via graph convolutional networks. Paper presented at the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October-4 November 2018
Wu Y, Liu X, Feng Y et al (August 2019a) Relation-aware entity alignment for heterogeneous knowledge graphs. Paper presented at the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10-16
Wu Y, Liu X, Feng Y et al (November 2019b) Jointly learning entity and relation representations for entity alignment. Paper presented at the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3-7
Xu K, Song L, Feng Y et al (February 2020) Coordinated reasoning for cross-lingual knowledge graph alignment. Paper presented at the Innovative Applications of Artificial Intelligence Conference, New York, USA, 7-12
Zhu Q, Zhou X, Wu J et al (August 2019) Neighborhood-aware attentional representation for multilingual knowledge graphs. Paper presented at the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10-16

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
