
LLM Evaluation with Arize AI's Aparna Dhinakaran // #210


From MLOps.community

Length: 56 minutes
Released: Feb 9, 2024
Format: Podcast episode

Description

Large language models have taken the world by storm. But what are the real use cases, and what are the challenges in productionizing them? At this event, you will hear from practitioners about how they are dealing with issues such as cost optimization, latency requirements, trustworthiness of output, and debugging. You will also get the opportunity to join workshops that will teach you how to set up your use cases and skip the headaches.

Join the AI in Production Conference on February 15 and 22 here: https://home.mlops.community/home/events/ai-in-production-2024-02-15

________________________________________________________________________________________
Aparna Dhinakaran is the Co-Founder and Chief Product Officer at Arize AI, a pioneer and early leader in machine learning (ML) observability.

MLOps podcast #210: LLM Evaluation, with Aparna Dhinakaran, Co-Founder and Chief Product Officer of Arize AI.

// Abstract
Dive into the complexities of large language model (LLM) evaluation, the role of the Phoenix evals library, and the importance of highly customized evaluations in software applications. The conversation explores the nuances of fine-tuning, the debate between open-source and private models, and the urgency of getting models into production for early identification of bottlenecks. It then examines the relevance of retrieved information, the legitimacy of model output, and the operational advantages of Phoenix for running LLM evaluations.

// Bio
Aparna Dhinakaran is the Co-Founder and Chief Product Officer at Arize AI, a pioneer and early leader in AI observability and LLM evaluation. A frequent speaker at top conferences and thought leader in the space, Dhinakaran is a Forbes 30 Under 30 honoree. Before Arize, Dhinakaran was an ML engineer and leader at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML Infrastructure platforms, including Michelangelo. She has a bachelor’s from Berkeley's Electrical Engineering and Computer Science program, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Arize-Phoenix: https://phoenix.arize.com/
Phoenix LLM task eval library: https://docs.arize.com/phoenix/llm-evals/running-pre-tested-evals
Aparna's recent piece on LLM evaluation: https://arize.com/blog-course/llm-evaluation-the-definitive-guide/
Thread on the difference between model and task LLM evals: https://twitter.com/aparnadhinak/status/1752763354320404488
Research thread on why numeric score evals are broken: https://twitter.com/aparnadhinak/status/1748368364395721128

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Aparna on LinkedIn: https://www.linkedin.com/in/aparnadhinakaran/


Weekly talks and fireside chats about everything that has to do with the new space emerging around DevOps for Machine Learning aka MLOps aka Machine Learning Operations.