
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

NGAT: attention in breadth and depth exploration for semi-supervised graph representation learning

Abstract: Recently, graph neural networks (GNNs) have achieved remarkable performance in representation learning on graph-structured data. However, as the number of network layers increases, GNNs based on the neighborhood aggregation strategy deteriorate due to the problem of oversmoothing, which is the major bottleneck for applying GNNs to real-world graphs. Many efforts have been made to improve the process of aggregating feature information from directly connected nodes, i.e., breadth exploration. However, these models perform best only with three or fewer layers, and their performance drops rapidly at deeper layers. To alleviate oversmoothing, we propose a nested graph attention network (NGAT), which can work in a semi-supervised manner. In addition to breadth exploration, a k-layer NGAT uses a layer-wise aggregation strategy guided by the attention mechanism to selectively leverage feature information from the kth-order neighborhood, i.e., depth exploration. Even with a 10-layer or deeper architecture, NGAT can balance the need for preserving locality (including root node features and the local structure) with aggregating information from a large neighborhood. In experiments on standard node classification tasks, NGAT outperforms other recent models and achieves state-of-the-art performance.
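To make the depth-exploration idea in the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation): representations from every hop up to order k are kept, and each node fuses them with attention scores computed against its own root features, so locality can be preserved even when the receptive field is deep. The linear hop aggregator and the bilinear attention matrix `W_att` are simplifying assumptions for illustration.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops, a common GNN aggregation choice.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def layerwise_attention_readout(X, A, K, W_att):
    """Depth-exploration sketch: keep each hop's representation and fuse
    them per node with attention, instead of using only the last layer."""
    A_norm = normalize_adj(A)
    hops = [X]                              # hop 0 = root node features
    for _ in range(K):
        hops.append(A_norm @ hops[-1])      # hop-k neighborhood aggregation
    H = np.stack(hops, axis=1)              # shape (N, K+1, F)
    # Per-node, per-hop score: compatibility of hop features with root features.
    scores = np.einsum('nf,fg,nkg->nk', X, W_att, H)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # softmax over hops
    # Attention-weighted fusion across hop depths.
    return np.einsum('nk,nkf->nf', alpha, H), alpha
```

Because the softmax is computed per node, different nodes can weight shallow and deep neighborhoods differently, which is the mechanism the abstract credits with resisting oversmoothing at 10+ layers.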

Key words: Graph learning; Semi-supervised learning; Node classification; Attention

Chinese Summary (translated): NGAT: semi-supervised graph representation learning with an attention mechanism for breadth and depth exploration

胡荐苛, 张引
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Abstract: In recent years, graph neural networks (GNNs) have achieved remarkable results in representation learning on graph-structured data. However, as the number of network layers increases, the performance of GNNs based on neighborhood aggregation deteriorates due to oversmoothing, which is the main bottleneck for applying GNNs to real-world graphs. Many improvements have been made to the aggregation of feature information from directly connected nodes, i.e., breadth exploration. However, these models perform best only with three or fewer layers, and their performance drops rapidly at deeper layers. To alleviate oversmoothing, this paper proposes a nested graph attention network, NGAT, a multi-scale feature fusion model based on a dual attention mechanism, which can work in a semi-supervised manner. In addition to breadth exploration, a k-layer NGAT uses a layer-wise aggregation strategy guided by an attention mechanism to selectively leverage feature information from the kth-order neighborhood, i.e., depth exploration. Even with a 10-layer or deeper architecture, NGAT can balance preserving locality (including root node features and the local structure) with aggregating information from a large neighborhood. Experiments comparing NGAT with existing graph neural network models on public datasets show that the proposed model learns stronger node embeddings.

Key words: Graph learning; Semi-supervised learning; Node classification; Attention mechanism





DOI: 10.1631/FITEE.2000657
CLC number: TP391


On-line Access: 2022-03-22
Received: 2020-11-22
Revision Accepted: 2022-04-22
Crosschecked: 2021-01-10

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000 Journal of Zhejiang University-SCIENCE