
Economics & Optimization of AI/ML

From The Cloudcast

Length:
36 minutes
Released:
Aug 30, 2023
Format:
Podcast episode

Description

Luis Ceze (Co-founder & CEO @OctoML) talks about barriers to entry for AI & ML, the economics and different ways to think about funding, training, fine-tuning, inferencing, and optimization for Artificial Intelligence and Machine Learning.

SHOW: 749

CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw

NEW TO CLOUD? CHECK OUT - "CLOUDCAST BASICS"

SHOW SPONSORS:
- CloudZero – Cloud Cost Visibility and Savings. CloudZero provides immediate and ongoing savings with 100% visibility into your total cloud spend.
- Panoptica – Reduce the complexities of protecting your workloads and applications in a multi-cloud environment. Panoptica provides comprehensive cloud workload protection integrated with API security to protect the entire application lifecycle. Learn more about Panoptica at panoptica.app

SHOW NOTES:
- OctoML (homepage)
- OctoML makes it easier to put AI/ML models into production
- OctoML launches OctoAI

Topic 1 - Welcome to the show. You have an interesting background with roots in both VC markets and academia. Tell us a little bit about your background.

Topic 2 - Generative AI is now all the rage. But as more people dig into AI/ML in general, they quickly find there are a few barriers to entry. Let's address some of them, as you have an extensive history here. The first barrier I believe most people hit is complexity. The tools to ingest data into models and to deploy models have improved, but what about the challenges of implementing that in production applications? How do folks overcome this first hurdle?

Topic 3 - The next hurdle I think most organizations hit is where to place the models. Where to train them, where to fine-tune them, and where to run them could be the same or different places. Can you talk a bit about placement of models? Also, as a follow-up, how do GPU shortages play into this, and can models be fine-tuned to work around them?

Topic 4 - Do you see the AI/ML dependence on GPUs continuing into the future? Will there be an abstraction layer or another technology that will allow the industry to move away from GPUs for more mainstream applications?

Topic 5 - The next barrier, closely related to the previous one, is cost. There are some very real-world tradeoffs between cost and performance when it comes to AI/ML. What cost factors need to be considered besides hardware costs? Data ingestion and data gravity come to mind as hidden costs that can add up quickly if not properly considered. Another is latency: maybe you arrive at an answer, but at a slower rate that is more economical. How do organizations optimize for cost?

Topic 6 - Do most organizations tend to use an "off the shelf" model today? Maybe an open source model that they train with their private data? I would expect this to be the fastest way to production: why build your own model when the difference is in your data? How does data privacy factor into this scenario?

FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet

Titles in the series (100)

The Cloudcast is the industry's leading, independent Cloud Computing podcast. Since 2011, co-hosts Aaron Delp & Brian Gracely have interviewed technology and business leaders that are shaping the future of computing. Topics will include Cloud Computing | Open Source | AWS | Azure | GCP | Serverless | DevOps | Big Data | ML | AI | Security | Kubernetes | AppDev | SaaS | PaaS | CaaS | IoT.