
Length: 10 minutes
Released: May 17, 2024
Format: Podcast episode

Description

How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG: www.instagram.com/creativeprocesspodcast

Titles in the series (100)

Ten-minute highlights of the popular The Creative Process & One Planet podcasts, exploring the fascinating minds of creative people. In conversations with writers, artists & creative thinkers across the Arts & STEM, we discuss their life, work & artistic practice. Winners of the Oscar, Emmy, Tony, and Pulitzer Prize, as well as leaders & public figures, share real experiences & offer valuable insights. Notable guests and participating museums and organizations include: Academy of Motion Picture Arts & Sciences, Neil Patrick Harris, Smithsonian, Roxane Gay, Musée Picasso, EARTHDAY-ORG, Neil Gaiman, UNESCO, Joyce Carol Oates, Mark Seliger, Acropolis Museum, Hilary Mantel, Songwriters Hall of Fame, George Saunders, The New Museum, Lemony Snicket, Pritzker Architecture Prize, Hans-Ulrich Obrist, Serpentine Galleries, Joe Mantegna, PETA, Greenpeace, EPA, Morgan Library & Museum, and many others. The interviews are hosted by founder and creative educator Mia Funk with the participation of students, universities, and collaborators from around the world. These conversations are also part of our traveling exhibition.
For full episodes, follow The Creative Process - Arts Culture & Society.
www.creativeprocess.info