Cross-Entropy, Label Smoothing, and Focal Loss
Connections between cross-entropy loss, label smoothing, and focal loss.
The cross-entropy loss is one of the most popular loss functions in modern machine learning, often used with classification problems.
One way to derive the cross-entropy loss is by thinking in terms of a true but unknown data distribution $p$, and an estimated distribution $q_\theta$ parameterized by $\theta$. Using the KL-divergence to compare the two distributions, our learning objective is to find a distribution $q_\theta$ such that the KL-divergence is minimized w.r.t. $\theta$ as,

$$
\min_{\theta} \; \mathrm{KL}\left(p \,\|\, q_\theta\right).
$$
If $q_\theta$ perfectly models the true underlying data distribution $p$, then we achieve the global minimum $\mathrm{KL}\left(p \,\|\, q_\theta\right) = 0$.
Now, let us unpack the KL-divergence term starting with the definition,

$$
\mathrm{KL}\left(p \,\|\, q_\theta\right) = \mathbb{E}_{x \sim p}\left[\log \frac{p(x)}{q_\theta(x)}\right] = \underbrace{\mathbb{E}_{x \sim p}\left[\log p(x)\right]}_{-\mathcal{H}(p)} \; \underbrace{- \; \mathbb{E}_{x \sim p}\left[\log q_\theta(x)\right]}_{\mathcal{H}(p,\, q_\theta)},
$$

where $\mathcal{H}(p)$ is the entropy of distribution $p$, and $\mathcal{H}(p, q_\theta)$ is the cross-entropy loss between distributions $p$ and $q_\theta$.
It can now be seen that minimizing the KL-divergence is equivalent to minimizing the cross-entropy loss - the entropy term $\mathcal{H}(p)$ is a constant outside our control (a property of the true data-generating process), and more importantly independent of $\theta$ for optimization.
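As a quick numeric sanity check of this decomposition, here is a minimal NumPy sketch (the two distributions below are made up purely for illustration):

```python
import numpy as np

p = np.array([0.7, 0.1, 0.1, 0.1])   # stand-in for the true distribution
q = np.array([0.4, 0.3, 0.2, 0.1])   # stand-in for the modeled distribution

kl = np.sum(p * np.log(p / q))            # KL(p || q)
cross_entropy = -np.sum(p * np.log(q))    # H(p, q)
entropy_p = -np.sum(p * np.log(p))        # H(p)

# KL(p || q) = H(p, q) - H(p), up to floating point error.
assert np.isclose(kl, cross_entropy - entropy_p)
```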
In practice, for a dataset of input-label observations $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, we compute the average cross-entropy loss for a $K$-way classification problem as,

$$
\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} p(y = k \mid x_i) \log q_\theta(y = k \mid x_i),
$$

where the outer sum is over all the observations, and the inner sum is the cross-entropy between the true conditional distribution $p(y \mid x_i)$ and the modeled conditional distribution $q_\theta(y \mid x_i)$. $p(y \mid x_i)$ is represented as a delta distribution which puts all its mass on the true label, i.e. $p(y = k \mid x_i) = \mathbb{1}\left[k = y_i\right]$.
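As a minimal PyTorch sketch of this average (random logits and labels, purely for illustration), the explicit sum over one-hot targets matches the built-in `torch.nn.functional.cross_entropy`:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, K = 8, 5                          # 8 observations, 5-way classification
logits = torch.randn(N, K)           # unnormalized model outputs
labels = torch.randint(0, K, (N,))

# Explicit computation: delta-distribution targets, averaged over observations.
log_q = F.log_softmax(logits, dim=-1)     # log q_theta(y = k | x_i)
p = F.one_hot(labels, K).float()          # p(y = k | x_i) as one-hot vectors
loss_manual = -(p * log_q).sum(dim=-1).mean()

# Built-in equivalent.
loss_builtin = F.cross_entropy(logits, labels)

assert torch.isclose(loss_manual, loss_builtin)
```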
Label Smoothing
Label smoothing1 is a common trick used in training neural network classifiers to ensure that the network is not over-confident and is better calibrated.
Instead of the delta distribution we noted earlier, the key idea of label smoothing is to use a smoothed target distribution $p'$ such that with probability $\epsilon$, the target is resampled uniformly at random over the $K$ labels, i.e.

$$
p'(y = k \mid x_i) = (1 - \epsilon)\, \mathbb{1}\left[k = y_i\right] + \frac{\epsilon}{K}.
$$
The implied loss function now is

$$
\mathcal{L}_{\mathrm{LS}} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} \left[ (1 - \epsilon)\, \mathbb{1}\left[k = y_i\right] + \frac{\epsilon}{K} \right] \log q_\theta(y = k \mid x_i).
$$
Therefore, with a few rearrangements, what we get is a weighted objective,

$$
\mathcal{L}_{\mathrm{LS}} = \frac{1}{N} \sum_{i=1}^{N} \left[ \epsilon\, \mathcal{H}\!\left(u, q_\theta(y \mid x_i)\right) + (1 - \epsilon)\, \mathcal{H}\!\left(p(y \mid x_i), q_\theta(y \mid x_i)\right) \right],
$$

where the first term nudges our model towards the uniform distribution $u$ over the labels and the remainder is the same old cross-entropy loss but reweighted with $(1 - \epsilon)$.2
This objective makes sense intuitively. We want to match the true distribution $p$, but we regularize it such that our classifier is smoothed out by also matching the uniform distribution. Label smoothing demonstrably leads to better generalization and calibration, although it leads to worse model distillation due to loss of information at the penultimate layer, since it encourages the representations of inputs with the same label to cluster tightly.3
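Here is a minimal sketch of the smoothed objective under the uniform smoothing assumed above; PyTorch's `label_smoothing` argument to `F.cross_entropy` (available in recent versions) computes the same quantity:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, K, eps = 8, 5, 0.1
logits = torch.randn(N, K)
labels = torch.randint(0, K, (N,))

# Smoothed targets: (1 - eps) on the true label, eps spread uniformly over K labels.
p_smooth = (1 - eps) * F.one_hot(labels, K).float() + eps / K
log_q = F.log_softmax(logits, dim=-1)
loss_manual = -(p_smooth * log_q).sum(dim=-1).mean()

# Built-in equivalent.
loss_builtin = F.cross_entropy(logits, labels, label_smoothing=eps)

assert torch.isclose(loss_manual, loss_builtin)
```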
Focal Loss
Another proposal to improve calibration of neural networks is focal loss,4 originally proposed for object detection.5
Focal loss modifies the original cross-entropy loss, such that for $\gamma \geq 0$:6

$$
\mathcal{L}_{\mathrm{FL}} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{K} p(y = k \mid x_i) \left(1 - q_\theta(y = k \mid x_i)\right)^{\gamma} \log q_\theta(y = k \mid x_i).
$$
This objective implies that as soon as $q_\theta$ starts modeling the original distribution $p$ well, we will artificially downweight the loss incurred. Again, intuitively this makes sense since the cross-entropy loss has a tendency to keep fitting until we reach the degenerate distribution.
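Core PyTorch does not ship a focal loss, so the following is a small sketch of the multi-class version above with one-hot targets; the `focal_loss` helper is illustrative, with the default $\gamma = 2$ recommended in the original paper:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, labels: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Multi-class focal loss with one-hot (delta) targets, averaged over the batch."""
    log_q = F.log_softmax(logits, dim=-1)                             # log q_theta(y = k | x_i)
    log_q_true = log_q.gather(-1, labels.unsqueeze(-1)).squeeze(-1)   # log q at the true label
    q_true = log_q_true.exp()                                         # q at the true label
    return -((1.0 - q_true) ** gamma * log_q_true).mean()

torch.manual_seed(0)
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))

# With gamma = 0, focal loss reduces to the plain cross-entropy loss.
assert torch.isclose(focal_loss(logits, labels, gamma=0.0),
                     F.cross_entropy(logits, labels))
```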
With a bit of algebraic massaging, we can understand the connection of focal loss to the cross-entropy loss. Writing $p_k$ and $q_{\theta,k}$ as shorthand for the true and modeled conditional probabilities of label $k$ for a single observation,

$$
\begin{aligned}
\mathcal{L}_{\mathrm{FL}} &= -\sum_{k=1}^{K} p_k \left(1 - q_{\theta,k}\right)^{\gamma} \log q_{\theta,k} \\
&\geq -\sum_{k=1}^{K} p_k \left(1 - \gamma\, q_{\theta,k}\right) \log q_{\theta,k} \\
&= \mathcal{H}(p, q_\theta) - \gamma \left\lvert \sum_{k=1}^{K} p_k\, q_{\theta,k} \log q_{\theta,k} \right\rvert \\
&\geq \mathcal{H}(p, q_\theta) - \gamma\, \lVert p \rVert_{\infty} \left\lVert q_\theta \log q_\theta \right\rVert_{1} \\
&= \mathcal{H}(p, q_\theta) - \gamma\, \mathcal{H}(q_\theta),
\end{aligned}
$$

where the second line comes from Bernoulli's inequality, the third line comes by definition of the modulus operator (the terms inside the sum are always non-positive), and the fourth line comes from Hölder's inequality. $p$ represents the vector of probabilities from the true distribution such that the infinity norm $\lVert p \rVert_{\infty} = 1$ since we represent it as a one-hot encoded vector, and $q_\theta$ represents the vector constructed via our modeled distribution. In the last line we revert the modulus since each term is non-positive, such that the last term is simply $\gamma$ times the negative entropy of $q_\theta$.
Therefore, minimizing the focal loss minimizes an upper bound on the entropy-regularized cross-entropy loss $\mathcal{H}(p, q_\theta) - \gamma\, \mathcal{H}(q_\theta)$. Regularizing with the entropy of $q_\theta$ nudges the learned distribution towards higher entropy, leading to smoother learned distributions, which demonstrably leads to better calibration.4
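And a quick numeric check of this bound on random logits (illustrative only, with one-hot targets so the cross-entropy over each row reduces to the log-probability of the true label):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
gamma = 2.0
logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))

log_q = F.log_softmax(logits, dim=-1)
log_q_true = log_q.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
q_true = log_q_true.exp()

focal = -((1 - q_true) ** gamma * log_q_true).mean()     # focal loss
ce = F.cross_entropy(logits, labels)                      # H(p, q_theta)
entropy_q = -(log_q.exp() * log_q).sum(dim=-1).mean()     # H(q_theta)

# Focal loss upper-bounds the entropy-regularized cross-entropy.
assert focal >= ce - gamma * entropy_q
```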
Remarks
It is intuitive to expect calibration to improve by learning smoother classifier distributions. Both label smoothing and focal loss bear neat connections to the original cross-entropy loss, via a reweighted objective and an entropy-regularized objective respectively. More importantly, alongside calibration, these methods often improve generalization. I wonder what other objectives lead to similar enhancements.
Footnotes
1. Christian Szegedy et al. "Rethinking the Inception Architecture for Computer Vision." 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015): 2818-2826. https://ieeexplore.ieee.org/document/7780677
2. A canonical choice of $\epsilon$ is $0.1$.
3. Rafael Müller et al. "When Does Label Smoothing Help?" Neural Information Processing Systems (2019). https://arxiv.org/abs/1906.02629
4. Jishnu Mukhoti et al. "Calibrating Deep Neural Networks using Focal Loss." ArXiv abs/2002.09437 (2020). https://arxiv.org/abs/2002.09437
5. Tsung-Yi Lin et al. "Focal Loss for Dense Object Detection." 2017 IEEE International Conference on Computer Vision (ICCV) (2017): 2999-3007. https://arxiv.org/abs/1708.02002
6. A canonical choice of $\gamma$ is $2$.