Deep Learning Neural Networks: Building Trust and Breaking Bias

From DataCafé


Length: 51 minutes
Released: Apr 7, 2022
Format: Podcast episode

Description

We explore one of the key issues around deep learning neural networks: how can you prove that your neural network will perform correctly? This matters especially if the network in question sits at the heart of a mission-critical application, such as making a real-time control decision in an autonomous car. Similarly, how can you establish whether you have trained the neural network at the heart of a loan-decision agent with a built-in bias? How can you be sure that your black box will adapt to critical new situations?

We speak with Prof. Alessio Lomuscio about how Mixed Integer Linear Programs (MILPs) and Symbolic Interval Propagation can be used to capture and solve verification problems in large neural networks. Prof. Lomuscio leads the Verification of Autonomous Systems Group in the Department of Computing at Imperial College London; their results have shown that verification is feasible for models with millions of tunable parameters, which was previously not possible. Tools like VENUS and VeriNet, developed in their lab, can verify key operational properties of deep learning networks, which is particularly relevant for safety-critical applications in, for example, aviation, medical imaging and autonomous transportation. Particularly importantly, given that neural networks are only as good as the training data they have learned from, it is also possible to prove that a specific, defined bias does or does not exist in a given network. This latter case is, of course, important for many social or industrial applications: being able to show that a decisioning tool treats people of all genders, ethnicities and abilities equitably.

Interview Guest

Our interview guest, Alessio Lomuscio, is Professor of Safe Artificial Intelligence in the Department of Computing at Imperial College London. Anyone wishing to contact Alessio about his team's verification technology can do so via his Imperial College website, or via the Imperial College London spin-off Safe Intelligence, which will be commercialising the AI verification technology in the future.

Further Reading

Publication list for Prof. Alessio Lomuscio (via Imperial College London)
Paper on Formal Analysis of Neural Network-based Systems in the Aircraft Domain using the VENUS tool (via Imperial College London)
Paper on Scalable Complete Verification of ReLU Neural Networks via Dependency-based Branching (via IJCAI.org)
Paper on DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis (via IJCAI.org)
Team: Verification of Autonomous Systems Group, Department of Computing, Imperial College London
Tools: VENUS and VeriNet

Some links above may require payment or login. We are not endorsing them or receiving any payment for mentioning them; they are provided as is. Free versions of the papers are often available, and we would encourage you to investigate.

Recording date: 8 Feb 2022
Interview date: 31 Aug 2021
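For listeners curious about what "propagating intervals through a network" looks like in practice, here is a minimal sketch using plain interval bound propagation, a simpler relative of the symbolic interval propagation discussed in the episode. It is not the VENUS or VeriNet implementation; the toy network weights, the perturbation radius and the decision threshold are made-up values for illustration only.

# Illustrative sketch only (not VENUS/VeriNet): plain interval bound propagation,
# a simpler relative of the symbolic interval propagation discussed in the episode.
# All weights, the perturbation radius and the decision threshold are assumptions.
import numpy as np

def interval_affine(W, b, lo, hi):
    # For y = W @ x + b with x in the box [lo, hi], the tightest output box
    # is obtained by splitting W into its positive and negative parts.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so the bounds are simply clamped at zero.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-2-1 network with made-up weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Input region: every feature of a nominal input may be perturbed by up to 0.1.
x0, eps = np.array([0.2, 0.4]), 0.1
lo, hi = x0 - eps, x0 + eps

lo, hi = interval_relu(*interval_affine(W1, b1, lo, hi))
lo, hi = interval_affine(W2, b2, lo, hi)

# If the certified output interval lies entirely on one side of the decision
# threshold (0 here), the decision provably cannot flip for any input in the box.
print(f"certified output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")
print("decision provably stable" if lo[0] > 0.0 or hi[0] < 0.0 else "inconclusive: bounds too loose")

Roughly speaking, bounds from plain interval arithmetic become loose as networks grow, which is why the work discussed in the episode combines tighter symbolic bounds with MILP encodings and branching to make complete verification feasible at scale.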

Welcome to the DataCafé: a special-interest Data Science podcast with Dr Jason Byrne and Dr Jeremy Bradley, interviewing leading data science researchers and domain experts in all things business, stats, maths, science and tech.