#42: Meta’s Segment Anything Model (SAM) for Computer Vision, ChatGPT’s Safety Problem, and the Limitations of ChatGPT Detectors

From The Artificial Intelligence Show

Length: 39 minutes
Released: Apr 11, 2023
Format: Podcast episode

Description

One step forward, two steps back…or at least steps taken with caution. Meta announces its Segment Anything Model, and in the same breath we’re talking about ChatGPT’s safety problem and the limitations of tools that claim to detect ChatGPT usage. Paul and Mike break it down:
Meta AI announces its Segment Anything Model
An article from Meta introduces its Segment Anything project, which aims to democratize image segmentation in computer vision. The project includes the Segment Anything Model (SAM) and the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset released to date.
The model has wide-ranging applications across industries. Meta notes, for example, that it could be incorporated into augmented reality glasses to instantly identify the objects you’re looking at and surface reminders or instructions related to them.
In marketing and business specifically, Gizmodo calls the SAM demo a Photoshop Magic Wand tool on steroids; one of its reporters used it to perform sophisticated image edits on the fly, simply pointing and clicking to remove or adjust objects in an image.
Right now, the model is available only for non-commercial testing, but given the use cases, it could find its way into Meta’s platforms as a creative aid.
Paul and Mike discuss the opportunities for marketers and the business world at large.
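For listeners who want to try SAM hands-on, here is a minimal sketch of that point-and-click ("Magic Wand") workflow using Meta's open-source segment-anything package. The checkpoint filename matches the one distributed in Meta's GitHub repo, but the image path and click coordinates below are hypothetical placeholders:

```python
# Minimal sketch: point-prompted segmentation with Meta's segment-anything package.
# Assumes: pip install segment-anything opencv-python, plus a downloaded SAM
# checkpoint (sam_vit_h_4b8939.pth). "photo.jpg" and the click coordinates
# are placeholders for illustration.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H SAM model from a local checkpoint.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image and convert BGR (OpenCV's default) to RGB.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once per image

# A single foreground click at (x, y) -- the "Magic Wand" style prompt.
point = np.array([[500, 375]])
label = np.array([1])  # 1 = foreground click, 0 = background click

masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,  # SAM returns 3 candidate masks for one click
)
best_mask = masks[np.argmax(scores)]  # boolean HxW mask of the clicked object
```

Because the image embedding is computed once in set_image, each additional click resolves almost instantly, which is what makes the interactive demo feel like a supercharged Magic Wand.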
Does ChatGPT have a safety problem?
Is OpenAI's April 5 statement on its website a response to calls for increased AI safety, like the open letter signed by Elon Musk and others and Italy’s outright ban on ChatGPT?
A new article from WIRED breaks down why and how Italy’s ban could spur wider regulatory action across the European Union and call into question the overall legality of AI tools. When banning ChatGPT, Italy’s data regulator cited several major problems with the tool, but fundamentally its reasoning hinged on GDPR, the European Union’s wide-ranging General Data Protection Regulation privacy law.
Experts cited by WIRED said there are only two ways OpenAI could have collected that data legally under EU law. The first would be obtaining consent from each user affected, which it did not. The second would be arguing it has “legitimate interests” in using people’s data to train its models, a defense the experts say will be extremely difficult for OpenAI to prove to EU regulators. Italy’s data regulator has already been quoted by WIRED calling this defense “inadequate.”
This matters outside Italy because all EU countries are bound by GDPR. And data regulators in France, Germany, and Ireland have already contacted Italy’s regulator to get more info on their findings and actions.
This also isn’t just an OpenAI problem: plenty of other major AI companies have likely trained their models in ways that violate GDPR. It’s a conversation worth keeping an eye on. Will other countries follow suit?
Can we really detect the use of ChatGPT?
OpenAI, the maker of ChatGPT, just published what it’s calling “Our approach to AI safety,” an article outlining specific steps the company takes to make its AI systems safer, more aligned, and developed responsibly.
Steps listed include delaying the general release of systems like GPT-4 to make sure they’re as safe and aligned as possible before the public can access them, and protecting children by requiring users to be 18 or older (or 13 or older with parental approval). OpenAI is also looking into options for verifying users, and cites that GPT-4 is 82% less likely to respond to requests for disallowed content.
Listen for more: Why now? And are we confident OpenAI is developing AI responsibly?


The Marketing AI Show makes artificial intelligence actionable and approachable for marketers. Brought to you by the creators of the Marketing AI Institute and the Marketing AI Conference (MAICON), the show features weekly conversations with top authors, entrepreneurs, AI researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and career.