Time: Monday, October 10, 9:00–10:00 a.m.
Host: Wei Wu (吴炜)
Tencent Meeting ID: 534-494-879
Title: Large-Scale Graph Contrastive Learning
Abstract: Deep learning on graphs has attracted significant interest recently. However, most existing work has focused on (semi-)supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. However, current self-supervised learning methods, especially graph contrastive learning (GCL) methods, require heavy computation by design, which hinders them from scaling to large graphs. In this talk, I will introduce our recent developments in scaling up graph contrastive learning. By examining existing GCL methods and pinpointing their defects, we propose a new group discrimination method (to appear in NeurIPS 2022) that is orders of magnitude faster (10,000+ times faster than GCL baselines) while consuming much less memory. The new solution opens a new door to self-supervised learning for large-scale datasets in the graph domain, and potentially in other areas such as vision and language.
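To illustrate why pairwise contrastive objectives are expensive while a group-discrimination-style objective scales much better, the following minimal PyTorch sketch contrasts the two. It is only an illustration of the general idea under assumed embedding shapes, not the speaker's NeurIPS 2022 implementation; all function names and sizes are assumptions.

import torch
import torch.nn.functional as F

def pairwise_gcl_loss(z1, z2, tau=0.5):
    # InfoNCE-style graph contrastive loss over two augmented views.
    # Materializes an N x N similarity matrix, so time and memory grow
    # quadratically with the number of nodes N.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                      # N x N similarities
    targets = torch.arange(z1.size(0))           # positives on the diagonal
    return F.cross_entropy(sim, targets)

def group_discrimination_loss(h_clean, h_corrupt):
    # Group-discrimination-style objective: each node embedding is simply
    # classified as coming from the clean graph (label 1) or a corrupted
    # graph (label 0). Only one scalar per node, so cost is linear in N.
    logits = torch.cat([h_clean.sum(dim=1), h_corrupt.sum(dim=1)])
    labels = torch.cat([torch.ones(h_clean.size(0)),
                        torch.zeros(h_corrupt.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)

if __name__ == "__main__":
    n, d = 2048, 256                                   # assumed sizes
    z1, z2 = torch.randn(n, d), torch.randn(n, d)      # two augmented views
    h_clean, h_corrupt = torch.randn(n, d), torch.randn(n, d)
    print("pairwise GCL loss:", pairwise_gcl_loss(z1, z2).item())
    print("group discrimination loss:", group_discrimination_loss(h_clean, h_corrupt).item())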
Speaker Bio: Shirui Pan (潘世瑞) is a Distinguished Young Scholar of the Australian Research Council (ARC) and a Full Professor at Griffith University. He has been named one of the world's most influential AAAI/IJCAI scholars for two consecutive years, was listed among the World's Top 2% Scientists (2021), and received the 2021 Research Excellence Award (Early Career Researcher) from the Faculty of Information Technology at Monash University. Students under his supervision won the Best Student Paper Award at the data mining conference ICDM 2020 and received a Best Paper Award nomination at JCDL 2020. He has published 150 high-quality papers in venues such as NeurIPS, ICML, KDD, TPAMI, TKDE, and TNNLS. He serves as a reviewer for journals in the field such as TPAMI, TNNLS, and TKDE, and as a (senior) program committee member for conferences including IJCAI, AAAI, KDD, WWW, and CVPR. His work has received 11,000+ Google Scholar citations, and his H-index is 43. His main research interests are data mining and machine learning. His survey on graph neural networks published in TNNLS in 2021 has received 4,000+ citations, and a total of 8 of his papers published at top conferences such as KDD, IJCAI, AAAI, and CIKM have been recognized as Most Influential Papers.