TASER: Temporal Adaptive Sampling for Fast and Accurate Dynamic Graph Representation Learning

February 15, 2024

Abstract

Temporal Graph Neural Networks (TGNNs) have demonstrated state-of-the-art performance in high-impact applications such as fraud detection and content recommendation. Despite this success, TGNNs are vulnerable to the noise prevalent in real-world dynamic graphs, such as time-deprecated links and skewed interaction distributions. This noise causes two critical issues that significantly compromise the accuracy of TGNNs: (1) models are supervised by inferior interactions, and (2) noisy inputs induce high variance in the aggregated messages. However, current TGNN denoising techniques do not account for the diverse and dynamic noise patterns of individual nodes, and they suffer from excessive mini-batch generation overhead caused by traversing more neighbors. We believe the remedy for fast and accurate TGNNs lies in temporal adaptive sampling. In this work, we propose TASER, the first adaptive sampling method for TGNNs, optimized for accuracy, efficiency, and scalability. TASER adapts its mini-batch selection based on training dynamics, and its temporal neighbor selection based on the contextual, structural, and temporal properties of past interactions. To alleviate the bottleneck in mini-batch generation, TASER implements a pure GPU-based temporal neighbor finder and a dedicated GPU feature cache. We evaluate TASER with two state-of-the-art backbone TGNNs. On five popular datasets, TASER outperforms the corresponding baselines by an average of 2.3% in Mean Reciprocal Rank (MRR) while achieving an average 5.1× speedup in training time.
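
To make the idea of temporal adaptive neighbor selection concrete, the sketch below shows one plausible way a learned scorer could rank a node's past interactions by their contextual (edge features), structural (neighbor degree), and temporal (time since interaction) properties, then draw the top-k neighbors stochastically. This is a minimal illustration under assumed names (NeighborScorer, sample_neighbors) and assumed feature choices, not the paper's actual implementation; TASER additionally runs its neighbor finding entirely on the GPU and caches features there.

import torch
import torch.nn as nn

class NeighborScorer(nn.Module):
    # Scores each candidate temporal neighbor from contextual (edge features),
    # structural (degree), and temporal (time-delta) signals.
    # Hypothetical design; the paper's scorer may differ.
    def __init__(self, edge_dim: int, hidden_dim: int = 32):
        super().__init__()
        # +2 input dims for the scalar temporal and structural features
        self.mlp = nn.Sequential(
            nn.Linear(edge_dim + 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, edge_feats, time_deltas, degrees):
        # Log-scale the heavy-tailed scalars for numerical stability.
        temporal = torch.log1p(time_deltas).unsqueeze(-1)
        structural = torch.log1p(degrees).unsqueeze(-1)
        x = torch.cat([edge_feats, temporal, structural], dim=-1)
        return self.mlp(x).squeeze(-1)  # one logit per candidate neighbor

def sample_neighbors(scorer, edge_feats, time_deltas, degrees, k=10):
    # Gumbel top-k: stochastic during training, but biased toward
    # high-scoring (likely less noisy) past interactions.
    logits = scorer(edge_feats, time_deltas, degrees)
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    return torch.topk(logits + gumbel, min(k, logits.numel())).indices

if __name__ == "__main__":
    n, edge_dim = 64, 16  # 64 candidate past interactions of one node
    scorer = NeighborScorer(edge_dim)
    picked = sample_neighbors(
        scorer,
        edge_feats=torch.randn(n, edge_dim),
        time_deltas=torch.rand(n) * 1e5,           # seconds since interaction
        degrees=torch.randint(1, 100, (n,)).float(),
    )
    print(picked)  # indices of the k sampled temporal neighbors

Gumbel top-k is one common way to keep a discrete, learnable neighbor selection stochastic during training; it stands in here for whatever sampling distribution TASER actually uses.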

Written by

Danny Deng

Hongkuan Zhou

Hanqing Zeng

Yinglong Xia

Chris Leung

Jianbo Li

Rajgopal Kannan

Viktor Prasanna

Publisher

IEEE IPDPS

Research Topics

Ranking & Recommendations

Core Machine Learning

Related Publications

July 21, 2024

CORE MACHINE LEARNING

From Neurons to Neutrons: A Case Study in Mechanistic Interpretability

Ouail Kitouni, Niklas Nolte, Samuel Pérez Díaz, Sokratis Trifinopoulos, Mike Williams

July 08, 2024

THEORY

CORE MACHINE LEARNING

An Adaptive Stochastic Gradient Method with Non-negative Gauss-Newton Stepsizes

Antonio Orvieto, Lin Xiao

June 17, 2024

HUMAN & MACHINE INTELLIGENCE

COMPUTER VISION

D-Flow: Differentiating through Flows for Controlled Generation

Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman

June 17, 2024

COMPUTER VISION

CORE MACHINE LEARNING

Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models

Neta Shaul, Uriel Singer, Ricky Chen, Matt Le, Ali Thabet, Albert Pumarola, Yaron Lipman
