Sunday, October 30, 2011

Unsupervised Feature Learning and Deep Learning

http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=ufldl

Course Description

Machine learning has seen numerous successes, but applying learning algorithms today often means spending a long time hand-engineering the input feature representation. This is true for many problems in vision, audio, NLP, robotics, and other areas. In this course, you'll learn about methods for unsupervised feature learning and deep learning, which automatically learn a good representation of the input from unlabeled data. You'll also pick up the "hands-on," practical skills and tricks-of-the-trade needed to get these algorithms to work well.

Basic knowledge of machine learning (supervised learning) is assumed, though we'll quickly review logistic regression and gradient descent.

I. INTRODUCTION


II. LOGISTIC REGRESSION

Representation
Batch gradient descent
Gradient descent in practice
Stochastic gradient descent
Exponentially weighted average
Shuffling data
Exercise 1: Implementation
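
As a quick reference for the gradient-descent topics listed above, here is a minimal NumPy sketch of batch gradient descent for logistic regression (the function and variable names are illustrative, not taken from the course materials or its exercises):

```python
import numpy as np

def sigmoid(z):
    """Logistic function, the hypothesis used in logistic regression."""
    return 1.0 / (1.0 + np.exp(-z))

def batch_gradient_descent(X, y, alpha=0.1, iters=1000):
    """Fit logistic-regression weights by batch gradient descent.

    X: (m, n) design matrix (include a column of ones for the intercept).
    y: (m,) labels in {0, 1}.
    alpha: learning rate; iters: number of full-batch updates.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(X @ theta)       # predictions for all m examples at once
        grad = X.T @ (h - y) / m     # average gradient of the log-loss
        theta -= alpha * grad        # one batch update
    return theta
```

Stochastic gradient descent differs only in computing the gradient from one example (or a small shuffled batch) per update rather than the full training set.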


III. NEURAL NETWORKS

Representation
Architecture
Examples and intuitions #1
Examples and intuitions #2
Parameter learning
Gradient checking
Random initialization
Vectorized implementation
Activation function derivative
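
The "Gradient checking" topic above refers to verifying a backpropagation implementation by comparing its analytic gradients against a centered finite-difference approximation. A minimal sketch (names are my own, not from the course):

```python
import numpy as np

def numerical_gradient(f, theta, eps=1e-4):
    """Approximate the gradient of scalar function f at theta with
    centered differences: (f(theta + eps*e_i) - f(theta - eps*e_i)) / (2*eps).
    Slow (one pair of evaluations per parameter), so use it only to
    spot-check an analytic gradient, not for training.
    """
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return grad
```

For example, for f(theta) = sum(theta^2) the analytic gradient is 2*theta, and the numerical estimate should match it to within roughly eps^2.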



IV. UNSUPERVISED FEATURE LEARNING AND SELF-TAUGHT LEARNING


V. APPLICATION TO CLASSIFICATION


VI. DEEP LEARNING WITH AUTOENCODERS


VII. SPARSE REPRESENTATIONS


VIII. WHITENING
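
One common form of this preprocessing step is PCA whitening: rotate zero-mean data into its principal axes and rescale each axis to unit variance, so the features become uncorrelated with equal scale. A minimal sketch, assuming plain PCA whitening with a small regularizer eps (names are illustrative, not from the course):

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    """PCA-whiten the rows of X.

    X: (m, n) data matrix. Returns (m, n) whitened data whose
    covariance is approximately the identity. eps guards against
    dividing by near-zero eigenvalues.
    """
    X = X - X.mean(axis=0)              # zero-mean each feature
    cov = X.T @ X / X.shape[0]          # empirical covariance
    eigvals, U = np.linalg.eigh(cov)    # eigendecomposition (cov is symmetric)
    return (X @ U) / np.sqrt(eigvals + eps)
```

ZCA whitening multiplies the result by U.T to rotate back into the original coordinate frame, which is often preferred for images.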


IX. INDEPENDENT COMPONENTS ANALYSIS (ICA)


X. SLOW FEATURE ANALYSIS (SFA)


XI. RESTRICTED BOLTZMANN MACHINES (RBM)


XII. DEEP BELIEF NETWORKS (DBN)
