
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts

From Papers Read on AI

Length: 49 minutes
Released: Dec 14, 2023
Format: Podcast episode

Description

We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, as users must choose between dropping tokens from the computation or wasting computation and memory on padding. To address these limitations, we reformulate MoE computation in terms of block-sparse operations and develop new block-sparse GPU kernels that efficiently handle the dynamism present in MoEs. Our approach never drops tokens and maps efficiently to modern hardware, enabling end-to-end training speedups of up to 40% over MoEs trained with the state-of-the-art Tutel library and 2.4x over DNNs trained with the highly-optimized Megatron-LM framework.

2022: Trevor Gale, D. Narayanan, C. Young, M. Zaharia



https://arxiv.org/pdf/2211.15841.pdf
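To make the tradeoff in the description concrete, here is a minimal NumPy sketch (not the MegaBlocks API; all names, shapes, and the top-1 routing are invented for illustration) contrasting fixed-capacity routing, which must drop or pad tokens, with variable-size per-expert grouping, which the paper maps to block-sparse GPU kernels.

```python
# Illustrative sketch only: compares (a) fixed-capacity MoE routing, which drops or
# pads tokens, with (b) dropless variable-size grouping per expert. All values invented.
import numpy as np

rng = np.random.default_rng(0)
num_tokens, hidden, num_experts = 16, 8, 4
tokens = rng.standard_normal((num_tokens, hidden))
# Top-1 routing: each token is assigned one expert (data-dependent, hence dynamic).
assignments = rng.integers(0, num_experts, size=num_tokens)

# (a) Fixed-capacity formulation: every expert gets exactly `capacity` slots.
capacity = num_tokens // num_experts  # capacity factor 1.0
dropped, padded = 0, 0
for e in range(num_experts):
    count = int((assignments == e).sum())
    dropped += max(0, count - capacity)   # overflowing tokens are dropped
    padded += max(0, capacity - count)    # underflow is filled with wasted padding
print(f"fixed capacity: {dropped} tokens dropped, {padded} slots padded")

# (b) Variable-size grouping: keep every token, no padding. Each expert processes
# exactly the tokens routed to it.
expert_weights = rng.standard_normal((num_experts, hidden, hidden))
output = np.empty_like(tokens)
for e in range(num_experts):
    idx = np.where(assignments == e)[0]
    output[idx] = tokens[idx] @ expert_weights[e]  # variable-length group per expert
print("dropless output shape:", output.shape)
```

In MegaBlocks, these variable-size per-expert groups are handled by block-sparse matrix multiplications on the GPU rather than a Python loop, which is what lets the system avoid both token dropping and padding.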


Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations, and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead for feedback and ideas.