- 02 History and Introduction of Computer Vision (2/3)
- 03 History and Introduction of Computer Vision (3/3)
- 04 Data-Driven Image Classification: k-Nearest Neighbor and Linear Classifiers (1/2)
- 05 Data-Driven Image Classification: k-Nearest Neighbor and Linear Classifiers (2/2)
- 06 Linear Classifier Loss Functions and Optimization (1/2)
- 07 Linear Classifier Loss Functions and Optimization (2/2)
- 08 Backpropagation and Introduction to Neural Networks (1/2)
- 09 Backpropagation and Introduction to Neural Networks (2/2)
- 10 Neural Network Training Details, Part 1 (1/2)
- 11 Neural Network Training Details, Part 1 (2/2)
- 12 Neural Network Training Details, Part 2 (1/2)
- 13 Neural Network Training Details, Part 2 (2/2)
- 14 Convolutional Neural Networks in Detail (1/2)
- 15 Convolutional Neural Networks in Detail (2/2)
- 16 Transfer Learning: Object Localization and Detection (1/2)
- 17 Transfer Learning: Object Localization and Detection (2/2)
- 18 Visualizing and Further Understanding Convolutional Neural Networks (1/2)
- 19 Visualizing and Further Understanding Convolutional Neural Networks (2/2)
Slides, English-language videos, subtitles, etc. by 愛可可-愛生活
Link: https://pan.baidu.com/s/1pKsTivp#list
Official site
Link: CS231n: Convolutional Neural Networks for Visual Recognition

Deep Learning and Computer Vision: A Classic Artificial Intelligence Course
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.
For questions/concerns/bug reports contact Justin Johnson regarding the assignments, or contact Andrej Karpathy regarding the course notes. You can also submit a pull request directly to our git repo.
We encourage the use of the hypothes.is extension to annotate and discuss these notes inline.
Spring 2019 Assignments
Assignment #1: Image Classification, kNN, SVM, Softmax, Neural Network
Assignment #2: Fully-Connected Nets, Batch Normalization, Dropout, Convolutional Nets
Assignment #3: Image Captioning with Vanilla RNNs, Image Captioning with LSTMs, Network Visualization, Style Transfer, Generative Adversarial Networks
Module 0: Preparation
Setup Instructions
Python / Numpy Tutorial
IPython Notebook Tutorial
Google Cloud Tutorial
AWS Tutorial
Module 1: Neural Networks
Image Classification: Data-driven Approach, k-Nearest Neighbor, train/val/test splits
L1/L2 distances, hyperparameter search, cross-validation
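A minimal nearest-neighbor sketch of the data-driven approach above, assuming hypothetical NumPy arrays `Xtr`/`ytr` (flattened training images and integer labels) and `Xte` (test images):

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=1, distance="l2"):
    """Label each row of Xte by majority vote among its k nearest rows of Xtr."""
    preds = np.empty(Xte.shape[0], dtype=ytr.dtype)
    for i, x in enumerate(Xte):
        if distance == "l1":
            dists = np.sum(np.abs(Xtr - x), axis=1)          # L1 (Manhattan) distance
        else:
            dists = np.sqrt(np.sum((Xtr - x) ** 2, axis=1))  # L2 (Euclidean) distance
        nearest = np.argsort(dists)[:k]                      # k closest training points
        preds[i] = np.bincount(ytr[nearest]).argmax()        # majority label
    return preds
```

The choice of k and of L1 vs. L2 is exactly the hyperparameter search the notes resolve with cross-validation on the validation split.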
Linear classification: Support Vector Machine, Softmax
parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo
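The two losses named above, sketched for a single example; `scores` and `y` are hypothetical stand-ins for the class-score vector f(x; W) and the correct class index:

```python
import numpy as np

def svm_loss(scores, y, delta=1.0):
    """Multiclass hinge loss: sum of margins that exceed the correct score."""
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0                      # the correct class contributes nothing
    return np.sum(margins)

def softmax_loss(scores, y):
    """Cross-entropy loss: negative log-probability of the correct class."""
    shifted = scores - np.max(scores)   # shift for numerical stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[y]
```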
Optimization: Stochastic Gradient Descent
optimization landscapes, local search, learning rate, analytic/numerical gradient
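A sketch of the numerical gradient (centered differences) against which the analytic gradient is checked, plus the vanilla SGD update; `f` is a hypothetical loss function taking a 1-D float weight vector:

```python
import numpy as np

def numerical_gradient(f, w, h=1e-5):
    """Centered finite differences: df/dw_i ~ (f(w+h) - f(w-h)) / (2h)."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        old = w[i]
        w[i] = old + h
        fp = f(w)
        w[i] = old - h
        fm = f(w)
        w[i] = old                      # restore the entry
        grad[i] = (fp - fm) / (2 * h)
    return grad

# vanilla SGD: step against the (analytic) gradient, scaled by the learning rate
# w -= learning_rate * grad
```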
Backpropagation, Intuitions
chain rule interpretation, real-valued circuits, patterns in gradient flow
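The chain rule on a small real-valued circuit, e.g. f(x, y, z) = (x + y) * z; the input values are illustrative only:

```python
# forward pass through the circuit f(x, y, z) = (x + y) * z
x, y, z = -2.0, 5.0, -4.0
q = x + y            # q = 3
f = q * z            # f = -12

# backward pass: apply the chain rule gate by gate
df_dq = z            # d(q*z)/dq = z
df_dz = q            # d(q*z)/dz = q
df_dx = 1.0 * df_dq  # d(x+y)/dx = 1, chained with the upstream gradient
df_dy = 1.0 * df_dq  # the add gate routes its upstream gradient to both inputs
```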
Neural Networks Part 1: Setting up the Architecture
model of a biological neuron, activation functions, neural net architecture, representational power
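A minimal two-layer forward pass with a ReLU activation; the layer sizes here are arbitrary assumptions, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                          # hypothetical 3-D input
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # hidden layer of 4 units
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)   # single output unit

h = np.maximum(0, W1 @ x + b1)                      # ReLU activation
out = W2 @ h + b2                                   # linear readout
```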
Neural Networks Part 2: Setting up the Data and the Loss
preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions
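A sketch combining three of the ingredients above: zero-centering, ReLU-scaled weight initialization, and inverted dropout. The shapes and keep probability `p` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))           # hypothetical (N, D) data batch

X -= X.mean(axis=0)                          # preprocessing: zero-center each feature

fan_in, fan_out = 20, 50
W = rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)  # ReLU-friendly init

p = 0.5                                      # inverted dropout at train time
H = np.maximum(0, X @ W)
mask = (rng.random(H.shape) < p) / p         # scale now so test time is a no-op
H *= mask
```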
Neural Networks Part 3: Learning and Evaluation
gradient checks, sanity checks, babysitting the learning process, momentum (+nesterov), second-order methods, Adagrad/RMSprop, hyperparameter optimization, model ensembles
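Two of the update rules listed above (momentum and RMSprop) written as single steps; `w` and `dw` are hypothetical parameter and gradient arrays:

```python
import numpy as np

w  = np.zeros(10)          # hypothetical parameters
dw = np.ones(10)           # hypothetical gradient for one step
lr = 1e-2

# SGD with momentum: velocity accumulates a decaying sum of gradients
v = np.zeros_like(w)
mu = 0.9
v = mu * v - lr * dw
w += v

# RMSprop: per-parameter step sizes from a moving average of squared gradients
cache = np.zeros_like(w)
decay, eps = 0.99, 1e-8
cache = decay * cache + (1 - decay) * dw ** 2
w -= lr * dw / (np.sqrt(cache) + eps)
```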
Putting it together: Minimal Neural Network Case Study
minimal 2D toy data example
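A compressed sketch of such a case study: full-batch gradient descent on a softmax linear classifier. The data and labels below are random placeholders (the notes use a spiral dataset), so the loss will not drop meaningfully here; the gradient math is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 100, 2, 3                         # 100 two-dimensional points, 3 classes
X = rng.standard_normal((N, D))             # placeholder data
y = rng.integers(0, K, N)                   # placeholder labels

W = 0.01 * rng.standard_normal((D, K))
b = np.zeros(K)
lr = 1.0

for step in range(200):
    scores = X @ W + b                                   # (N, K) class scores
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)         # softmax
    loss = -np.log(probs[np.arange(N), y]).mean()        # cross-entropy

    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1                        # dL/dscores
    dscores /= N
    W -= lr * (X.T @ dscores)                            # gradient descent step
    b -= lr * dscores.sum(axis=0)
```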
Module 2: Convolutional Neural Networks
Convolutional Neural Networks: Architectures, Convolution / Pooling Layers
layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies, computational considerations
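The sizing arithmetic behind "spatial arrangement" above: a conv layer's output width is (W - F + 2P)/S + 1 for input width W, filter size F, padding P, and stride S. A small helper makes the constraint explicit:

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv layer: (W - F + 2P) / S + 1."""
    out = (W - F + 2 * P) / S + 1
    assert out == int(out), "hyperparameters do not tile the input cleanly"
    return int(out)

# e.g. AlexNet's first layer: 227x227 input, 11x11 filters, stride 4, no padding
print(conv_output_size(227, 11, S=4, P=0))   # -> 55
```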
Understanding and Visualizing Convolutional Neural Networks
tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons
Transfer Learning and Fine-tuning Convolutional Neural Networks
