- 1.Week_1_–_Lecture__History_motivation_and_evolution_of_Deep_Learning
- 2.Week_1_–_Practicum__Classification_linear_algebra_and_visualisation
- 3.Week_2_–_Lecture__Stochastic_gradient_descent_and_backpropagation
- 4.Week_2_–_Practicum__Training_a_neural_network
- 5.Week_3_–_Lecture__Convolutional_neural_networks
- 6.Week_3_–_Practicum__Natural_signals_properties_and_CNNs
- 7.Week_4_–_Practicum_Listening_to_convolutions
- 8.Week_5_–_Lecture_Optimisation
- 9.Week_5_–_Practicum__1D_multichannel_convolution_and_autograd
- 10.Week_6_–_Lecture_CNN_applications_RNN_and_attention
- 11.Week_6_–_Practicum_RNN_and_LSTM_architectures
We recommend a foundational deep learning course taught by deep learning pioneer Yann LeCun: NYU's 2020 course *Deep Learning (PyTorch)*.
The course covers the latest techniques in deep learning and representation learning, focusing on supervised and unsupervised deep learning, embedding methods, metric learning, and convolutional and recurrent networks, with applications to computer vision, natural language understanding, and speech recognition. Students are expected to have some background in data science and machine learning.
Contribution instructions
1. Week 1
1.1. Motivation of Deep Learning, and Its History and Inspiration
1.2. Evolution and Uses of CNNs and Why Deep Learning?
1.3. Problem Motivation, Linear Algebra, and Visualization
2. Week 2
2.1. Introduction to Gradient Descent and Backpropagation Algorithm
2.2. Computing gradients for NN modules and Practical tricks for Back Propagation
2.3. Artificial neural networks (ANNs)
3. Week 3
3.1. Visualization of neural network parameter transformation and fundamental concepts of convolution
3.2. ConvNet Evolutions, Architectures, Implementation Details and Advantages.
3.3. Properties of natural signals
4. Week 4
4.1. Linear Algebra and Convolutions
5. Week 5
5.1. Optimization Techniques I
5.2. Optimization Techniques II
5.3. Understanding convolutions and automatic differentiation engine
6. Week 6
6.1. Applications of Convolutional Network
6.2. RNNs, GRUs, LSTMs, Attention, Seq2Seq, and Memory Networks
6.3. Architecture of RNN and LSTM Model
7. Week 7
7.1. Energy-Based Models
7.2. SSL, EBM with details and examples
7.3. Introduction to Autoencoders
