January 09, 2024
We propose an efficient fused matrix-multiplication kernel for W4A16 quantized inference, in which dequantization and GEMM are performed in a single fused kernel using a SplitK work decomposition. Our implementation speeds up the skinny matrix-matrix multiplications found in foundation model inference workloads; in particular, we study the multiplication of a skinny activation matrix by a square weight matrix. Across a range of matrix dimensions, including those found in a Llama-style model where m < n = k, our results show an average speedup of 65% on A100 and an average speedup of 124% on H100 (with a peak of 295%).
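To make the SplitK idea concrete, below is a minimal Triton-style sketch, not the paper's kernel: a 3D launch grid splits the K (reduction) dimension across SPLIT_K programs per output tile, and the partial products are combined with atomic adds. For brevity the sketch assumes fp16 inputs and omits the fused int4 dequantization (its place in the loop is marked in a comment); it also assumes dimensions divisible by the block sizes, and the names splitk_gemm_kernel and splitk_matmul are hypothetical.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def splitk_gemm_kernel(
    a_ptr, b_ptr, c_ptr,
    M, N, K,
    stride_am, stride_ak,
    stride_bk, stride_bn,
    stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
    BLOCK_K: tl.constexpr, SPLIT_K: tl.constexpr,
):
    # 3D launch grid: axes 0/1 pick the output tile, axis 2 picks the K split.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    pid_k = tl.program_id(2)

    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = pid_k * BLOCK_K + tl.arange(0, BLOCK_K)

    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn

    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    # Each program walks only its 1/SPLIT_K share of the K dimension,
    # striding past the slices owned by the other K splits.
    for _ in range(0, tl.cdiv(K, BLOCK_K * SPLIT_K)):
        a = tl.load(a_ptrs)
        # In the fused W4A16 kernel, b would instead be loaded as packed
        # int4, then shifted/masked and scaled to fp16 here before tl.dot.
        b = tl.load(b_ptrs)
        acc = tl.dot(a, b, acc)
        a_ptrs += BLOCK_K * SPLIT_K * stride_ak
        b_ptrs += BLOCK_K * SPLIT_K * stride_bk

    # Partial sums from the SPLIT_K programs covering this tile are
    # combined with atomic adds into a zero-initialized fp32 output.
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.atomic_add(c_ptrs, acc)


def splitk_matmul(a: torch.Tensor, b: torch.Tensor, split_k: int = 4) -> torch.Tensor:
    # Assumes M, N, K are multiples of the block sizes (masking omitted).
    M, K = a.shape
    _, N = b.shape
    c = torch.zeros((M, N), device=a.device, dtype=torch.float32)
    grid = (triton.cdiv(M, 16), triton.cdiv(N, 64), split_k)
    splitk_gemm_kernel[grid](
        a, b, c, M, N, K,
        a.stride(0), a.stride(1),
        b.stride(0), b.stride(1),
        c.stride(0), c.stride(1),
        BLOCK_M=16, BLOCK_N=64, BLOCK_K=32, SPLIT_K=split_k,
    )
    return c
```

The motivation for the split: when m is small and n = k, a standard data-parallel decomposition launches too few thread blocks to keep the GPU's SMs busy, so splitting the K dimension multiplies the number of programs by SPLIT_K at the cost of a final atomic reduction.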
Written by
Less Wright
Adnan Hoque
Publisher
arxiv.org
Research Topics
Core Machine Learning