Transformers On Large-Scale Graphs with Bayan Bruss - #641
From The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Length: 39 minutes
Released: Aug 7, 2023
Format: Podcast episode
Description
Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we cover a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methods for getting around the computational barriers of scaling to large graphs.
The complete show notes for this episode can be found at twimlai.com/go/641.