Regularization
This page contains resources about Regularization, Overfitting, and the Bias-Variance tradeoff.
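For quick reference, the standard identity underlying the bias-variance tradeoff (for squared error, assuming y = f(x) plus zero-mean noise of variance σ²) can be written as:

```latex
% Expected squared error at a point x; the expectation is taken over
% training sets and noise, and \hat{f} denotes the fitted model.
\mathbb{E}\left[(y - \hat{f}(x))^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\operatorname{Var}\!\left(\hat{f}(x)\right)}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```

Regularization trades a small increase in bias for a larger reduction in variance, which is why it can lower test error even while it worsens training fit.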
Subfields and Concepts
- Shrinkage Penalty / Regularization Term
- Regularization
- Regularized least squares (standard objectives are shown after this list)
- L0-regularization / Spike-and-slab prior
- L1-regularization / Least absolute shrinkage and selection operator (LASSO) / Laplace prior
- L2-regularization / Ridge Regression / Tikhonov Regularization / Gaussian prior (a closed-form sketch follows this list)
- Lp-regularization (where p is a positive real number)
- Max norm constraints
- Early Stopping (halting neural network training once validation performance stops improving; see the sketch after this list)
- Mini-Batches (in the training of Artificial Neural Networks)
- Total Variation (TV) Regularization (i.e., the L1-norm of the gradient)
- Dropout (see the sketch after this list)
- Matrix Regularization
- Elastic Net (combined L1 and L2 penalties)
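For quick reference, the penalized least-squares objectives behind ridge, LASSO, and the elastic net listed above take the standard forms (X is the design matrix, y the targets, λ ≥ 0 the regularization strength, and α ∈ [0, 1] the elastic-net mixing weight):

```latex
% Design matrix X, targets y, regularization strength \lambda \ge 0,
% elastic-net mixing weight \alpha \in [0, 1].
\hat{\beta}_{\mathrm{ridge}} = \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2
\hat{\beta}_{\mathrm{lasso}} = \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1
\hat{\beta}_{\mathrm{enet}}  = \arg\min_{\beta}\; \lVert y - X\beta \rVert_2^2
    + \lambda \left( \alpha \lVert \beta \rVert_1 + (1 - \alpha) \lVert \beta \rVert_2^2 \right)
```

The L1 penalty drives some coefficients exactly to zero (feature selection), while the L2 penalty only shrinks them; the elastic net combines both behaviors.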
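As a minimal sketch of L2/Tikhonov regularization in practice: the ridge objective above has a closed-form solution, solved here with plain NumPy (function and variable names are illustrative, not from any particular library):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge (Tikhonov) regression: minimizes ||y - X b||^2 + lam * ||b||^2."""
    n_features = X.shape[1]
    # Normal equations with an added lam * I term; solve rather than invert for stability.
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy usage: larger lam shrinks the coefficients toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.5, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(ridge_fit(X, y, lam=0.1))    # close to beta_true
print(ridge_fit(X, y, lam=100.0))  # visibly shrunk toward zero
```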
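Early stopping can likewise be sketched generically; the loop below (with hypothetical `step` and `validate` callables supplied by the caller) halts once validation loss has not improved for `patience` epochs:

```python
def train_with_early_stopping(step, validate, max_epochs=100, patience=5):
    """Generic early-stopping loop: stop when validation loss has not
    improved for `patience` consecutive epochs; return the best loss seen."""
    best, since_best = float("inf"), 0
    for epoch in range(max_epochs):
        step()                 # one training epoch (user-supplied)
        val_loss = validate()  # current validation loss (user-supplied)
        if val_loss < best:
            best, since_best = val_loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best

# Toy usage with a fake validation curve that bottoms out and then rises.
losses = iter([1.0, 0.8, 0.7, 0.72, 0.75, 0.8, 0.9, 1.0, 1.1, 1.2])
print(train_with_early_stopping(step=lambda: None, validate=lambda: next(losses), patience=3))
```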
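And a minimal sketch of inverted dropout as applied to a layer's activations during training (all names here are illustrative):

```python
import numpy as np

def inverted_dropout(activations, p_drop=0.5, rng=None):
    """Zero each unit with probability p_drop and rescale survivors by
    1 / (1 - p_drop) so the expected activation is unchanged; at test
    time dropout is simply disabled."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((2, 4))
print(inverted_dropout(h, p_drop=0.5, rng=np.random.default_rng(0)))
```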
Online courses
Video Lectures
Lecture Notes
Books
- Ito, K., & Jin, B. (2014). Inverse Problems: Tikhonov Theory and Algorithms. World Scientific.
- Engl, H. W., Hanke, M., & Neubauer, A. (1996). Regularization of Inverse Problems. Springer Science & Business Media.
Scholarly Articles
- Starck, J. L., & Fadili, M. J. (2009). An overview of inverse problem regularization using sparsity. In 16th IEEE International Conference on Image Processing (ICIP), 1453-1456.
Software
See also
Other Resources
- Why does shrinkage work? - Stack Exchange