
AI Transformers in context, with Aleksa Gordić

From London Futurists



Length: 28 minutes
Released: Sep 22, 2022
Format: Podcast episode

Description

Welcome to episode 5 of the London Futurist podcast, with your co-hosts David Wood and Calum Chace.

We're attempting something rather ambitious in episodes 5 and 6: explaining how today's cutting-edge artificial intelligence systems work, in language familiar to lay people rather than to people with maths or computer science degrees.

Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions. This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term AIs won't take all the jobs, but people who understand AI will take the best jobs.

We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić. Note that Aleksa is speaking in a personal capacity and is not representing DeepMind.

Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany

00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when machine learning started to work in AI
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning: a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 "Attention is All You Need"
08.16 Before this there were LSTMs (Long Short-Term Memory networks)
08.40 Why Transformers beat LSTMs
09.58 Tokenisation: splitting text into smaller units and mapping them into higher-dimensional spaces
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship (see the first sketch after these notes)
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance, the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog (see the second sketch below)
19.35 Large numbers of parameters
19.40 BERT, an early Transformer, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 DALL-E and Midjourney are diffusion models, which have replaced GANs
24.15 What are GANs, or Generative Adversarial Networks?
24.45 Two types of model: generators and discriminators. The first tries to fool the second (a minimal sketch follows these notes)
26.20 Simple text can produce photorealistic images
27.10 Aleksa's YouTube videos are available at "The AI Epiphany"
27.40 Close

Music: Spike Protein, by Koi Discovery, available under a CC0 1.0 Public Domain Declaration
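For readers who want to make the embedding discussion (09.58-17.20) concrete, here is a minimal sketch in Python. The three-dimensional vectors and their values are invented purely for illustration; real models learn embeddings with hundreds of dimensions from data.

import numpy as np

# Toy word embeddings. The dimension labels and numbers below are
# made up for illustration; real models learn them from data.
embeddings = {
    #          [royalty, masculinity, femininity]
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: how closely two "arrows" point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Man is to king as woman is to queen", done with arithmetic:
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(embeddings, key=lambda w: cosine(embeddings[w], target))
print(best)  # -> queen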
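The probability-distribution point at 18.30-19.20 can be sketched in a few lines too. The scores below are hypothetical; a real Transformer computes them from the surrounding context, then turns them into probabilities in essentially this way.

import numpy as np

candidates = ["slipper", "crown", "dog"]
scores = np.array([4.0, 3.0, 0.5])  # hypothetical model outputs ("logits")

# Softmax turns raw scores into a probability distribution over candidates.
probs = np.exp(scores) / np.exp(scores).sum()
for word, p in zip(candidates, probs):
    print(f'"The princess wore a ..." {word}: {p:.2f}')
# slipper gets roughly 0.72, dog only about 0.02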
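Finally, a minimal sketch of the generator-versus-discriminator game described from 24.15 onwards, assuming PyTorch is available. To stay short it generates numbers from a toy Gaussian rather than images; real image GANs use deep convolutional networks, but the two-player training loop is the same shape.

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: samples near 4.0
    fake = generator(torch.randn(64, 4))    # generator maps noise to samples

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator into outputting 1.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward 4.0.
print(generator(torch.randn(5, 4)).detach())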
