
Good news: undergraduate Jianqing Zhang's paper accepted by the leading international AI journal "Neurocomputing"

Published: 2021-03-17 14:55:37   Author: admin   Source: original

Jianqing Zhang is a 2016-cohort undergraduate of the Excellence Honors College, Hangzhou Dianzi University. Under the guidance of Dongjing Wang and Dongjin Yu of our laboratory, he conducted research on recommender systems and has published an SCI-indexed paper in the leading international AI journal "Neurocomputing".

Jianqing Zhang, Dongjing Wang*, Dongjin Yu. TLSAN: Time-aware Long- and Short-term Attention Network for Next-item Recommendation. Neurocomputing, 2021, 441: 179-191. (SCI, JCR Q2, IF = 4.438)

Paper: https://doi.org/10.1016/j.neucom.2021.02.015

Code: https://github.com/TsingZ0/TLSAN

Abstract: Recently, deep neural networks have been widely applied in recommender systems for their effectiveness in capturing and modeling users' preferences. In particular, the attention mechanism in deep learning enables recommender systems to incorporate various features in an adaptive way. For the next-item recommendation task, we make the following three observations: 1) users' sequential behavior records aggregate at time positions ("time-aggregation"), 2) users have personalized tastes related to the "time-aggregation" phenomenon ("personalized time-aggregation"), and 3) users' short-term interests play an important role in next-item prediction/recommendation. In this paper, we propose a new Time-aware Long- and Short-term Attention Network (TLSAN) to address these observations. TLSAN consists of two main components. First, TLSAN models "personalized time-aggregation" and learns user-specific temporal taste via trainable personalized time-position embeddings with category-aware correlations in long-term behaviors. Second, long- and short-term feature-wise attention layers are proposed to effectively capture users' long- and short-term preferences for accurate recommendation. In particular, the attention mechanism enables TLSAN to utilize users' preferences in an adaptive way, and its use in the long- and short-term layers enhances TLSAN's ability to deal with sparse interaction data. Extensive experiments are conducted on Amazon datasets from different fields (and of different sizes), and the results show that TLSAN outperforms state-of-the-art baselines in both capturing users' preferences and performing time-sensitive next-item recommendation.
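To illustrate the idea behind the abstract, the sketch below shows a minimal, simplified version of time-aware long- and short-term attention in plain NumPy. This is not the paper's actual model (the real implementation is in the linked GitHub repository); the variable names, the additive time-position embeddings, and the way the two preference vectors are combined are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # Scaled dot-product attention: query (d,), keys/values (n, d).
    scores = keys @ query / np.sqrt(query.shape[0])
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d, n_long, n_short = 8, 20, 5

# Hypothetical learned embeddings (random here for illustration).
item_emb_long = rng.normal(size=(n_long, d))    # long-term behavior sequence
time_pos_emb = rng.normal(size=(n_long, d))     # personalized time-position embeddings (assumed additive)
item_emb_short = rng.normal(size=(n_short, d))  # most recent behaviors
user_emb = rng.normal(size=d)

# Long-term layer: keys carry personalized time-position information,
# so attention weights reflect "personalized time-aggregation".
long_pref = attention(user_emb, item_emb_long + time_pos_emb, item_emb_long)

# Short-term layer: attend over recent items, conditioned on the long-term preference.
short_pref = attention(long_pref, item_emb_short, item_emb_short)

# Score a candidate item against the combined preference (simple sum here).
candidate = rng.normal(size=d)
score = (long_pref + short_pref) @ candidate
```

In the actual TLSAN, the attention layers are feature-wise and the embeddings are trained end-to-end; this sketch only conveys how time-position information and a two-stage long/short-term attention pipeline fit together.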