
#17 Reparametrize Your Models Automatically, with Maria Gorinova

From Learning Bayesian Statistics



Length: 52 minutes
Released: Jun 4, 2020
Format: Podcast episode

Description

Have you already encountered a model that you know is scientifically sound, but that MCMC just wouldn’t sample? The model would take forever to run — if it ever ran — and you would be greeted with a lot of divergences in the end. Yeah, I know, my stress levels start rising too whenever I hear the word « divergences »…
Well, you’ll be glad to hear there are tricks to make these models run, and one of these tricks is called re-parametrization — I bet you’ve already heard about the poorly-named non-centered parametrization? (There is a short code sketch of it right after this description.)
Well, fear no more! In this episode, Maria Gorinova will tell you all about these model re-parametrizations! Maria is a PhD student in Data Science & AI at the University of Edinburgh. Her broad interests range from programming languages and verification to machine learning and human-computer interaction.
More specifically, Maria is interested in probabilistic programming languages, and in exploring ways of applying program-analysis techniques to existing PPLs in order to improve the usability of the language or the efficiency of inference.
As you’ll hear in the episode, she thinks a lot about the language aspect of probabilistic programming, and works on the automation of various “tricks” in probabilistic programming: automatic re-parametrization, automatic marginalization, automatic and efficient model-specific inference.
As Maria also has experience with several PPLs like Stan, Edward2 and TensorFlow Probability, she’ll tell us what she thinks a good PPL design requires, and what the future of PPLs looks like to her.
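
To make the re-parametrization trick concrete, here is a minimal sketch of the classic eight-schools hierarchical model written in both its centered and non-centered forms. It uses PyMC3 syntax (the Python library the show's host contributes to); the data, priors, and sampler settings are standard textbook choices for this example, not something taken from the episode or from Maria's paper.

import numpy as np
import pymc3 as pm

# Classic eight-schools data: estimated treatment effects and their standard errors.
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

# Centered parametrization: theta is drawn directly from Normal(mu, tau).
# When tau is small, this creates the "funnel" geometry that HMC/NUTS
# struggles with, which is where the divergences come from.
with pm.Model() as centered_model:
    mu = pm.Normal("mu", mu=0., sigma=5.)
    tau = pm.HalfCauchy("tau", beta=5.)
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=8)
    pm.Normal("obs", mu=theta, sigma=sigma, observed=y)

# Non-centered parametrization: sample a standardized offset and rebuild
# theta deterministically, decoupling its scale from tau.
with pm.Model() as non_centered_model:
    mu = pm.Normal("mu", mu=0., sigma=5.)
    tau = pm.HalfCauchy("tau", beta=5.)
    theta_offset = pm.Normal("theta_offset", mu=0., sigma=1., shape=8)
    theta = pm.Deterministic("theta", mu + tau * theta_offset)
    pm.Normal("obs", mu=theta, sigma=sigma, observed=y)

with non_centered_model:
    trace = pm.sample(1000, tune=1000, target_accept=0.9)

Both models define the same joint distribution; the non-centered version simply gives the sampler a geometry it can explore without divergences when the group-level scale tau is small, and automating exactly this kind of transformation is what Maria's work is about.
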
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ !
Links from the show:
Maria on the Web: http://homepages.inf.ed.ac.uk/s1207807/index.html
Maria on Twitter: https://twitter.com/migorinova
Maria on GitHub: https://github.com/mgorinova
Automatic Reparameterisation of Probabilistic Programs (Maria's paper with Dave Moore and Matthew Hoffman): https://arxiv.org/abs/1906.03028
Stan User's Guide on Reparameterization: https://mc-stan.org/docs/2_23/stan-users-guide/reparameterization-section.html
HMC for hierarchical models -- Background on reparameterization: https://arxiv.org/abs/1312.0906
NeuTra -- Automatic reparameterization: https://arxiv.org/abs/1903.03704
Edward2 -- A library for probabilistic modeling, inference, and criticism: http://edwardlib.org/
Pyro -- Automatic reparameterization and marginalization: https://pyro.ai/
Gen -- Programmable inference: http://probcomp.csail.mit.edu/software/gen/
TensorFlow Probability: https://www.tensorflow.org/probability/



This podcast uses the following third-party services for analysis:

Podcorn - https://podcorn.com/privacy


Are you a researcher or data scientist / analyst / ninja? Do you want to learn Bayesian inference, stay up to date, or simply want to understand what Bayesian inference is? Then this podcast is for you! You'll hear from researchers and practitioners of all fields about how they use Bayesian statistics, and how in turn YOU can apply these methods in your modeling workflow.

When I started learning Bayesian methods, I really wished there were a podcast out there that could introduce me to the methods, the projects and the people who make all that possible. So I created "Learning Bayesian Statistics", where you'll get to hear how Bayesian statistics are used to detect dark matter in outer space, forecast elections or understand how diseases spread and can ultimately be stopped. But this show is not only about successes -- it's also about failures, because that's how we learn best. So you'll often hear the guests talking about what *didn't* work in their projects, why, and how they overcame these challenges. Because, in the end, we're all lifelong learners!

My name is Alex Andorra, by the way, and I live in Paris. By day, I'm a data scientist and modeler at the PyMC Labs consultancy (https://www.pymc-labs.io/). By night, I don't (yet) fight crime, but I'm an open-source enthusiast and core contributor to the Python packages PyMC (https://docs.pymc.io/) and ArviZ (https://arviz-devs.github.io/arviz/). I also love election forecasting (https://www.pollsposition.com/) and, most importantly, Nutella. But I don't like talking about it -- I prefer eating it.

So, whether you want to learn Bayesian statistics or hear about the latest libraries, books and applications, this podcast is for you -- just subscribe! You can also support the show and unlock exclusive Bayesian swag on Patreon (https://www.patreon.com/learnbayesstats)!