83 min listen
E37: The Mind-Reading Revolution with Dr. Tanishq Mathew Abraham (Part 1 of 2)
From "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Length:
80 minutes
Released:
Jun 20, 2023
Format:
Podcast episode
Description
In this episode, Nathan sits down with Tanishq Mathew Abraham, a 19-year-old UC Davis graduate and one of the youngest people in the world to receive a Ph.D., his in biomedical engineering. Tanishq is the founder of the Medical AI Research Center (MedARC) and, with his teammates, recently published the paper Reconstructing the Mind's Eye, which presents their breakthrough research on reconstructing visual perceptions from fMRI scans into images. Nathan and Tanishq discuss the technology behind the fMRI-to-image project, how the model was developed, and future applications for this research.
Part 2 with Tanishq will be released as the next episode.
The Cognitive Revolution is a part of the Turpentine podcast network. To learn more: Turpentine.co
TIMESTAMPS:
(00:00) Episode Preview
(05:43) The MindEye Project
(09:06) Resemblance between the AI reconstruction of the mind's eye and the visual stimulus presented
(10:00) What is a voxel and which regions of the brain were studied?
(10:23) What would the raw data of a voxel be?
(11:44) Is there a time dimension to voxels?
(15:00) Sponsor: Omneky
(17:50) Goals for the MindEye project
(25:57) What is the starting point of the model?
(31:15) Aligning the model: reconstruction vs retrieval
(40:34) Would doing a full end-to-end training be fine for the reconstruction?
(42:15) The role of a limited data set
(43:09) Training separate models per subject
(45:07) Generalizability with a limited dataset
(47:20) Mapping from one high-dimensional space to another
(50:47) Stable Diffusion VAE encoding
(1:00:50) How long does it take to train the model?
(1:03:14) How similar or different are the subjects and their individual models?
(1:05:59) The future of this research: custom models for your brain?
(1:07:34) How much does this research contribute to brain research and wearables?
(1:11:15) Fuzzing data and future research applications
LINKS:
MedARC: medarc.ai
MindEye Paper: https://www.researchgate.net/publication/371136623_Reconstructing_the_Mind's_Eye_fMRI-to-Image_with_Contrastive_Learning_and_Diffusion_Priors
MP3 of this episode: https://chrt.fm/track/993DGA/traffic.megaphone.fm/RINTP1584997572.mp3?updated=1687271014
TWITTER:
@iScienceLuvr (Tanishq)
@MedARC_AI (MedARC)
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
SPONSOR:
Thank you Omneky (www.omneky.com) for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
MUSIC CREDIT:
MusicLM