Subfields and Concepts
- Shrinkage Penalty / Regularization Term
- Regularized least squares
- L0 penalization / Spike-and-slab prior
- L1-regularization / Least absolute shrinkage and selection operator (LASSO) / Laplace prior
- L2-regularization / Ridge Regression / Tikhonov Regularization / Gaussian prior
- Lp-regularization (where p is a positive real number)
- Max norm constraints
- Early Stopping (halting the training of Artificial Neural Networks after a limited number of epochs)
- Mini-Batches (in the training of Artificial Neural Networks)
- Total Variation (TV) Regularization (i.e. L1-norm of the gradient)
- Matrix Regularization
- Elastic Nets
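Several of the concepts above can be illustrated concretely for least squares. A minimal NumPy sketch (all variable names and hyperparameter values are illustrative, not from the source): ridge regression via its closed form w = (XᵀX + λI)⁻¹Xᵀy, and the LASSO via proximal gradient descent (ISTA) with the soft-thresholding operator, which produces exact zeros and hence feature selection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.standard_normal((n, d))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 1.0])  # sparse ground truth
y = X @ true_w + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    # Closed-form L2-regularized least squares: w = (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1; maps small entries to exactly 0
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso(X, y, lam, n_iter=500):
    # ISTA: gradient step on the squared loss, then soft-thresholding
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

w_ols = ridge(X, y, 0.0)      # ordinary least squares (no penalty)
w_ridge = ridge(X, y, 10.0)   # L2 penalty shrinks the coefficient norm
w_lasso = lasso(X, y, 5.0)    # L1 penalty shrinks and zeros out coefficients
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))
print(w_lasso)
```

The contrast shows the qualitative difference listed above: the L2 penalty shrinks all coefficients toward zero but keeps them nonzero, while the L1 penalty sets some coefficients exactly to zero.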
Further Reading
- Ito, K., & Jin, B. (2014). Inverse Problems: Tikhonov Theory and Algorithms. World Scientific.
- Engl, H. W., Hanke, M., & Neubauer, A. (1996). Regularization of Inverse Problems. Springer Science & Business Media.
- Starck, J. L., & Fadili, M. J. (2009). An overview of inverse problem regularization using sparsity. In 2009 16th IEEE International Conference on Image Processing (ICIP) (pp. 1453-1456). IEEE.
- Why does shrinkage work? - Stack Exchange