2024 Big Ideas: Miracle Drugs, Programmable Medicine, and AI Interpretability
From a16z Podcast
Length:
65 minutes
Released:
Dec 8, 2023
Format:
Podcast episode
Description
Smart energy grids. Voice-first companion apps. Programmable medicines. AI tools for kids.
We asked over 40 partners across a16z to preview one big idea they believe will drive innovation in 2024. Here in our 3-part series, you’ll hear directly from partners across all our verticals, as we dive even more deeply into these ideas. What’s the why now? Who is already building in these spaces? What opportunities and challenges are on the horizon? And how can you get involved?

View all 40+ big ideas: https://a16z.com/bigideas2024

Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
More Episodes from a16z Podcast
Securing the Black Box: OpenAI, Anthropic, and GDM Discuss
Human nature fears the unknown, and with the rapid progress of AI, concerns naturally arise. Uncanny robocalls, data breaches, and misinformation floods are among the worries. But what about security in the era of large language models? In this episode, we hear from security leaders at OpenAI, Anthropic, and Google DeepMind. Matt Knight, Head of Security at OpenAI, Jason Clinton, CISO at Anthropic, and Vijay Bolina, CISO at Google DeepMind, are joined by Joel de la Garza, operating partner at a16z and former chief security officer at Box and Citigroup. Together, they explore how large language models impact security, including changes in offense and defense strategies, misuse by nation-state actors, prompt engineering, and more. In this changing environment, how do LLMs transform security dynamics? Let's uncover the answers.
by a16z Podcast