- Introduction - Probabilistic and Statistical Machine Learning 2020
- Part 1 - Machine learning and inductive bias
- Part 2 - Warmup- The kNN Classifier
- Part 3 - Formal setup risk consistency
- Part 4 - Bayesian decision theory
- Part 5 - The Bayes classifier
- Part 6 - Risk minimization- approximation and estimation error
- Part 7 - Linear least squares
- Part 7a - Introduction to convex optimization
- Part 8 - Feature representation
- Part 9 - Ridge regression
- Part 10 - Lasso
- Part 11 - Cross validation
- Part 12 - Risk minimization vs. probabilistic approaches
- Part 13 - Linear discriminant analysis
- Part 14 - Logistic regression
- Part 15 - Convex optimization- Lagrangian dual problem
- Part 16 - Support vector machines- hard and soft margin
- Part 17 - Support vector machines- the dual problem
- Part 18 - Kernels- definitions and examples
- Part 19 - The reproducing kernel Hilbert space
- Part 20 - Kernel SVMs
- Part 21 - Kernelizing least squares regression
- Part 22 - How to center and normalize in feature space
- Part 23a - Random forests- building the trees
- Part 23b - Random forests- building the forests
- Part 24 - Boosting
- Part 25 - Principal Component Analysis
- Part 26 - Kernel PCA
- Part 27 - Multidimensional scaling
- Part 28 - Random projections and the Theorem of Johnson-Lindenstrauss
- Part 29 - Neighborhood graphs
- Part 30 - Isomap
- Part 31 - t-SNE
- Part 32 - Introduction to clustering
- Part 33 - k-means clustering
- Part 34 - Linkage algorithms for hierarchical clustering
- Part 35 - Spectral graph theory
- Part 36 - Spectral clustering- unnormalized case
- Part 37 - Spectral clustering- normalized and regularized case
- Part 38 - Statistical learning theory- Convergence
- Part 39 - Statistical learning theory- finite function classes
- Part 40 - Statistical learning theory- shattering coefficient
- Part 41 - Statistical learning theory- VC dimension
- Part 42 - Statistical learning theory- Rademacher complexity
- Part 43 - Statistical learning theory- consistency of regularization
- Part 44 - Statistical learning theory- Revisiting Occam and outlook
- Part 45 - ML and Society- The general debate
- Part 46 - ML and Society- (Un)fairness in ML
- Part 47 - ML and Society- Formal approaches to fairness
- Part 48 - ML and Society- Algorithmic approaches to fairness
- Part 49 - ML and Society- Explainable ML
- Part 50 - ML and Society- The energy footprint of ML
- Part 51 - Low rank matrix completion- algorithms
- Part 52 - Low rank matrix completion- theory
- Part 53 - Compressed sensing
- Part 54 - ML pipeline- data preprocessing, learning
- Part 55 - ML pipeline- evaluation
Statistical learning is the science of having computers build probabilistic and statistical models from data and use those models to predict and analyze data. Statistical learning is also called statistical machine learning.
Preface: the major parts of machine learning are linear models, statistical learning, and deep learning. The linear part includes SVMs, compressed sensing, and sparse coding, all of which control the sparsity of the model while learning linear functions, and lean toward discriminative models; statistical learning mainly models the data with statistical methods and fits them by maximum likelihood, and leans toward generative methods; deep learning refers to neural models and is primarily nonlinear.
In machine learning, probability is mostly used as a subjective measure of uncertainty about events, rather than objectively as a long-run frequency.
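
To make the generative-vs-discriminative contrast from the preface concrete, here is a minimal sketch (not from the lecture notes; toy 1D data and all variable names are hypothetical): a generative model that fits class-conditional Gaussians by maximum likelihood and classifies via Bayes' rule, next to a discriminative logistic regression that models p(y|x) directly.

```python
# Minimal sketch: generative maximum-likelihood model vs. discriminative
# logistic regression on hypothetical 1D toy data (assumption: equal priors).
import numpy as np

rng = np.random.default_rng(0)

# Two classes with Gaussian features (toy data).
X0 = rng.normal(loc=-1.0, scale=1.0, size=200)
X1 = rng.normal(loc=+1.0, scale=1.0, size=200)
X = np.concatenate([X0, X1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Generative approach: estimate class-conditional Gaussians by maximum
# likelihood (sample mean and variance), then classify with Bayes' rule.
mu0, var0 = X0.mean(), X0.var()
mu1, var1 = X1.mean(), X1.var()

def log_gauss(x, mu, var):
    # Log-density of a univariate Gaussian.
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

def generative_predict(x):
    # Equal class priors assumed; pick the class with the higher likelihood.
    return (log_gauss(x, mu1, var1) > log_gauss(x, mu0, var0)).astype(float)

# Discriminative approach: logistic regression trained by gradient descent,
# modelling p(y|x) directly without modelling p(x).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # predicted probabilities
    grad_w = np.mean((p - y) * X)            # gradient of the mean log-loss
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

def discriminative_predict(x):
    return (1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5).astype(float)

print("generative accuracy:    ", np.mean(generative_predict(X) == y))
print("discriminative accuracy:", np.mean(discriminative_predict(X) == y))
```

On this toy problem both approaches reach similar training accuracy; the point of the sketch is only the difference in what gets modelled: the generative route estimates p(x|y) by maximum likelihood, the discriminative route parametrizes the decision rule for p(y|x) directly.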
