Social Recommendation via Graph Attentive Aggregation

Bibliographic Details
Published in: Parallel and Distributed Computing, Applications and Technologies, Vol. 13148, pp. 369-382
Main Authors: Liufu, Yuanwei; Shen, Hong
Format: Book Chapter
Language: English
Published: Springer International Publishing AG, Switzerland, 2022
Series: Lecture Notes in Computer Science
ISBN: 9783030967710; 3030967719
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-96772-7_34

Summary: Recommender systems play an important role in helping users discover items of interest from large resource collections in various online services. Although deep graph neural network-based collaborative filtering methods have achieved promising performance in recommender systems, they still have some weaknesses. First, existing graph neural network methods take only user-item interactions into account, neglecting direct user-user interactions that can be obtained from social networks. Second, they treat the observed data uniformly, without considering fine-grained differences in importance or relevance among the user-item interactions. In this paper, we propose a novel graph neural network, Social Graph Attentive Aggregation (SGA), which is suitable for parallel training and thereby addresses the efficiency bottleneck common to deployed neural network models. The model obtains user-user collaborative information from social networks and uses a self-attention mechanism to model the differing importance of user-item interactions. We conduct experiments on two real-world datasets, and the results demonstrate that our method is effective and can be trained in parallel efficiently.
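
The chapter's full text is not part of this record. As a rough illustration of the mechanism the summary describes, below is a minimal, hypothetical PyTorch sketch of attention-weighted neighbor aggregation over both user-item and user-user (social) edges. The class names, the single-head additive attention, and the fusion layer are illustrative assumptions, not the authors' SGA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveAggregator(nn.Module):
    """Hypothetical single-head additive attention over a node's neighbors."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # scores a (target, neighbor) pair

    def forward(self, target: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # target: (dim,); neighbors: (n, dim)
        n = neighbors.size(0)
        pairs = torch.cat([target.expand(n, -1), neighbors], dim=-1)  # (n, 2*dim)
        weights = F.softmax(self.score(pairs).squeeze(-1), dim=0)     # (n,) attention weights
        return weights @ neighbors                                    # (dim,) weighted sum


class SocialAttentiveLayer(nn.Module):
    """Fuses a user's attentively aggregated item and social neighborhoods."""

    def __init__(self, dim: int):
        super().__init__()
        self.item_agg = AttentiveAggregator(dim)    # user-item interaction signal
        self.social_agg = AttentiveAggregator(dim)  # user-user (social) signal
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, user, item_neighbors, social_neighbors):
        h_item = self.item_agg(user, item_neighbors)
        h_social = self.social_agg(user, social_neighbors)
        return torch.relu(self.fuse(torch.cat([user, h_item, h_social], dim=-1)))


# Toy usage: one user embedding, 4 interacted items, 3 social neighbors.
dim = 16
layer = SocialAttentiveLayer(dim)
user = torch.randn(dim)
out = layer(user, torch.randn(4, dim), torch.randn(3, dim))
print(out.shape)  # torch.Size([16])
```

Because each user's aggregation depends only on its own neighborhoods, updates of this form can be computed for many users independently, which is consistent with the parallel-training suitability the summary claims.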