Exponentially Faster Language Modelling
Length: 17 minutes
Released: Nov 27, 2023
Format: Podcast episode
Description
Language models only really need to use an exponential fraction of their neurons for individual inferences. As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.
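The conditional execution described above comes from organizing the layer's neurons as a balanced binary tree: each hidden state visits only the neurons on one root-to-leaf path, so a depth-11 tree of 4095 neurons engages just 12 per inference. The following is a minimal, hypothetical PyTorch sketch of that idea for a single vector; the class name, parameter shapes, and GELU placement are illustrative assumptions, not the paper's exact implementation.

import torch

class FastFeedForward(torch.nn.Module):
    # Sketch of a fast feedforward (FFF) layer: a binary tree of neurons
    # where each input visits only one root-to-leaf path
    # (12 of 4095 neurons for depth 11). Hypothetical simplification.
    def __init__(self, width: int, depth: int):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** (depth + 1) - 1             # 4095 when depth == 11
        self.w_in = torch.nn.Parameter(torch.randn(n_nodes, width) * width ** -0.5)
        self.w_out = torch.nn.Parameter(torch.randn(n_nodes, width) * width ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a single hidden state of shape (width,)
        y = torch.zeros_like(x)
        node = 0                                    # start at the root
        for _ in range(self.depth + 1):             # visit depth + 1 neurons
            act = self.w_in[node] @ x               # this neuron's pre-activation
            y = y + torch.nn.functional.gelu(act) * self.w_out[node]
            # the sign of the pre-activation selects the left or right child
            node = 2 * node + (1 if act.item() > 0 else 2)
        return y

layer = FastFeedForward(width=768, depth=11)
out = layer(torch.randn(768))                       # engages 12 of 4095 neurons

In this sketch the branching decision and the neuron's contribution to the output share the same pre-activation, which is what keeps the per-inference cost logarithmic in the layer width.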
2023: Peter Belcák, Roger Wattenhofer
https://arxiv.org/pdf/2311.10770v2.pdf
Titles in the series (100)
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates: Despite the dominance and effectiveness of scaling, resulting in large networks with hundreds of billions of parameters, the necessity to train overparametrized models remains poorly understood, and alternative approaches do not necessarily make it c... by Papers Read on AI