#80 AIDAN GOMEZ [CEO Cohere] - Language as Software
Length:
52 minutes
Released:
Nov 15, 2022
Format:
Podcast episode
Description
We had a conversation with Aidan Gomez, the CEO of language-based AI platform Cohere. Cohere is a startup which uses artificial intelligence to help users build the next generation of language-based applications. It's headquartered in Toronto. The company has raised $175 million in funding so far.
Language may well become a key new substrate for software building, both in its representation and how we build the software. It may democratise software building so that more people can build software, and we can build new types of software. Aidan and I discuss this in detail in this episode of MLST.
Check out Cohere -- https://dashboard.cohere.ai/welcome/register?utm_source=influencer&utm_medium=social&utm_campaign=mlst
Support us!
https://www.patreon.com/mlst
YT version: https://youtu.be/ooBt_di8DLs
TOC:
[00:00:00] Aidan Gomez intro
[00:02:12] What's it like being a CEO?
[00:02:52] Transformers
[00:09:33] DeepMind Chomsky Hierarchy
[00:14:58] Cohere roadmap
[00:18:18] Friction using LLMs for startups
[00:25:31] How different from OpenAI / GPT-3
[00:29:31] Engineering questions on Cohere
[00:35:13] François Chollet says that LLMs are like databases
[00:38:34] Next frontier of language models
[00:42:04] Different modes of understanding in LLMs
[00:47:04] LLMs are the new extended mind
[00:50:03] Is language the next interface, and why might that be bad?
References:
[Balestriero] Spline theory of NNs
https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf
[Delétang et al] Neural Networks and the Chomsky Hierarchy
https://arxiv.org/abs/2207.02098
[Fodor, Pylyshyn] Connectionism and Cognitive Architecture: A Critical Analysis
https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/docs/jaf.pdf
[Chalmers, Clark] The extended mind
https://icds.uoregon.edu/wp-content/uploads/2014/06/Clark-and-Chalmers-The-Extended-Mind.pdf
[Melanie Mitchell et al] The Debate Over Understanding in AI's Large Language Models
https://arxiv.org/abs/2210.13966
[Jay Alammar]
Illustrated stable diffusion
https://jalammar.github.io/illustrated-stable-diffusion/
Illustrated transformer
https://jalammar.github.io/illustrated-transformer/
https://www.youtube.com/channel/UCmOwsoHty5PrmE-3QhUBfPQ
[Sandra Kublik] (works at Cohere!)
https://www.youtube.com/channel/UCjG6QzmabZrBEeGh3vi-wDQ