Hugo Larochelle: Deep Learning as Science

1:48:28
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

In episode 80 of The Gradient Podcast, Daniel Bashir speaks to Professor Hugo Larochelle.

Professor Larochelle leads the Montreal Google DeepMind team and is an adjunct professor at Université de Montréal and a Canada CIFAR Chair. His research focuses on the study and development of deep learning algorithms.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (01:38) Prof. Larochelle’s background, working in Bengio’s lab

* (04:53) Prof. Larochelle’s work and connectionism

* (08:20) 2004-2009, work with Bengio

* (08:40) Nonlocal Estimation of Manifold Structure, manifolds and deep learning

* (13:58) Manifold learning in vision and language

* (16:00) Relationship to Denoising Autoencoders and greedy layer-wise pretraining

* (21:00) From input copying to learning about local distribution structure

* (22:30) Zero-Data Learning of New Tasks

* (22:45) The phrase “extend machine learning towards AI” and terminology

* (26:55) Prescient hints of prompt engineering

* (29:10) Daniel goes on a totally unnecessary tangent

* (30:00) Methods for training deep networks (strategies and robust interdependent codes)

* (33:45) Motivations for layer-wise pretraining

* (35:15) Robust Interdependent Codes and interactions between neurons in a single network layer

* (39:00) 2009-2011, postdoc in Geoff Hinton’s lab

* (40:00) Reflections on the AlexNet moment

* (41:45) Frustration with methods for evaluating unsupervised methods, NADE

* (44:45) How researchers thought about representation learning, toying with objectives instead of architectures

* (47:40) The Restricted Boltzmann Forest

* (50:45) Imposing structure for tractable learning of distributions

* (53:11) 2011-2016 at U Sherbrooke (and Twitter)

* (53:45) How Prof. Larochelle approached research problems

* (56:00) How Domain Adversarial Networks came about

* (57:12) Can we still learn from Restricted Boltzmann Machines?

* (1:02:20) The ~ Infinite ~ Restricted Boltzmann Machine

* (1:06:55) The need for researchers doing different sorts of work

* (1:08:58) 2017-present, at MILA (and Google)

* (1:09:30) Modulating Early Visual Processing by Language, neuroscientific inspiration

* (1:13:22) Representation learning and generalization, what is a good representation (Meta-Dataset, Universal representation transformer layer, universal template, Head2Toe)

* (1:15:10) Meta-Dataset motivation

* (1:18:00) Shifting focus to the problem—good practices for “recycling deep learning”

* (1:19:15) Head2Toe intuitions

* (1:21:40) What “universal representations” are, a manifold perspective on datasets, and what the right pretraining dataset is

* (1:26:02) Prof. Larochelle’s takeaways from Fortuitous Forgetting in Connectionist Networks (led by Hattie Zhou)

* (1:32:15) Obligatory commentary on The Present Moment and current directions in ML

* (1:36:18) The creation and motivations of the TMLR journal

* (1:41:48) Prof. Larochelle’s takeaways about doing good science, building research groups, and nurturing a research environment

* (1:44:05) Prof. Larochelle’s advice for aspiring researchers today

* (1:47:41) Outro

Links:

* Professor Larochelle’s homepage and Twitter

* Transactions on Machine Learning Research

* Papers

* 2004-2009

* Nonlocal Estimation of Manifold Structure

* Classification using Discriminative Restricted Boltzmann Machines

* Zero-data learning of new tasks

* Exploring Strategies for Training Deep Neural Networks

* Deep Learning using Robust Interdependent Codes

* 2009-2011

* Stacked Denoising Autoencoders

* Tractable multivariate binary density estimation and the restricted Boltzmann forest

* The Neural Autoregressive Distribution Estimator

* Learning Attentional Policies for Tracking and Recognition in Video with Deep Networks

* 2011-2016

* Practical Bayesian Optimization of Machine Learning Algorithms

* Learning Algorithms for the Classification Restricted Boltzmann Machine

* A neural autoregressive topic model

* Domain-Adversarial Training of Neural Networks

* NADE

* An Infinite Restricted Boltzmann Machine

* 2017-present

* Modulating early visual processing by language

* Meta-Dataset

* A Universal Representation Transformer Layer for Few-Shot Image Classification

* Learning a universal template for few-shot dataset generalization

* Impact of aliasing on generalization in deep convolutional networks

* Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning

* Fortuitous Forgetting in Connectionist Networks


Get full access to The Gradient at thegradientpub.substack.com/subscribe