Ring Attention with Blockwise Transformers for Near-Infinite Context
Length:
27 minutes
Released:
Feb 26, 2024
Format:
Podcast episode
Description
Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby posing challenges in utilizing videos, actions, and other long-form sequences and modalities in complex environments. We present a novel approach, Ring Attention with Blockwise Transformers (Ring Attention), which leverages blockwise computation of self-attention and feedforward to distribute long sequences across multiple devices while fully overlapping the communication of key-value blocks with the computation of blockwise attention. Our approach enables training and inference of sequences that are up to the device count times longer than those achievable by prior memory-efficient Transformers, without resorting to approximations or incurring additional communication and computation overheads. Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of our approach in enabling context sizes of millions of tokens and in improving performance.
2023: Hao Liu, Matei Zaharia, Pieter Abbeel
https://arxiv.org/pdf/2310.01889v4.pdf
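
To make the mechanism in the abstract concrete, below is a minimal single-process sketch of the ring schedule in plain NumPy. It simulates devices as list entries: each "device" keeps one query block, key-value blocks rotate around the ring, and softmax is accumulated online (log-sum-exp style) so no device ever materializes the full attention matrix. The names here (ring_attention, n_devices) are illustrative rather than the authors' API, and the sketch omits causal masking and the communication/computation overlap that the real distributed implementation provides.

import numpy as np

def ring_attention(q, k, v, n_devices):
    """q, k, v: arrays of shape (seq_len, d). Returns (seq_len, d)."""
    seq_len, d = q.shape
    assert seq_len % n_devices == 0
    block = seq_len // n_devices
    # Split the sequence: each simulated device owns one query block and
    # starts with the matching key-value block.
    q_blocks = [q[i * block:(i + 1) * block] for i in range(n_devices)]
    kv_blocks = [(k[i * block:(i + 1) * block], v[i * block:(i + 1) * block])
                 for i in range(n_devices)]
    # Per-device running numerator, softmax denominator, and row-wise max.
    num = [np.zeros((block, d)) for _ in range(n_devices)]
    den = [np.zeros((block, 1)) for _ in range(n_devices)]
    mx = [np.full((block, 1), -np.inf) for _ in range(n_devices)]
    for _ in range(n_devices):        # one full trip around the ring
        for i in range(n_devices):    # each device attends to its current KV block
            ki, vi = kv_blocks[i]
            s = q_blocks[i] @ ki.T / np.sqrt(d)
            m_new = np.maximum(mx[i], s.max(axis=1, keepdims=True))
            scale = np.exp(mx[i] - m_new)   # rescale earlier accumulators
            p = np.exp(s - m_new)
            num[i] = num[i] * scale + p @ vi
            den[i] = den[i] * scale + p.sum(axis=1, keepdims=True)
            mx[i] = m_new
        # "Send" each KV block to the next device in the ring. On real
        # hardware this transfer overlaps with the blockwise compute above.
        kv_blocks = kv_blocks[-1:] + kv_blocks[:-1]
    return np.concatenate([num[i] / den[i] for i in range(n_devices)])

# Sanity check against naive attention on a toy input.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(8, 4)) for _ in range(3))
    s = q @ k.T / np.sqrt(4)
    w = np.exp(s - s.max(axis=1, keepdims=True))
    ref = (w / w.sum(axis=1, keepdims=True)) @ v
    assert np.allclose(ring_attention(q, k, v, n_devices=4), ref)

Because the online-softmax update is exact, the result matches full attention up to floating-point rounding, which is why the method needs no approximation; per-device memory scales with the block size rather than the full sequence length.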
Titles in the series (100)
LISA: Reasoning Segmentation via Large Language Model: Although perception systems have made remarkable advancements in recent years, they still rely on explicit human instruction to identify the target objects or categories before executing visual recognition tasks. Such systems lack the ability to acti... by Papers Read on AI