The cross-entropy loss is one of the most popular loss functions in modern machine learning, often used with classification problems.
One way to derive the cross-entropy loss is by thinking in terms of a true but unknown data distribution $p$ and an estimated distribution $q$. Using the KL-divergence to compare the two distributions, our learning objective is to find the distribution $q^\star$ that minimizes the KL-divergence w.r.t. $q$,

$$q^\star = \arg\min_{q} KL(p \parallel q).$$
If $q^\star$ perfectly models the true underlying data distribution $p$, then we achieve the global minimum $KL(p \parallel q^\star) = 0$.
Now, let us unpack the KL-divergence term, starting with its definition,

$$KL(p \parallel q) = \mathbb{E}_{p}\left[\log{\frac{p}{q}}\right] = \underbrace{\mathbb{E}_{p}\left[\log{p}\right]}_{-H[p]} \underbrace{-\, \mathbb{E}_{p}\left[\log{q}\right]}_{CE(p \parallel q)} = CE(p \parallel q) - H[p],$$
where $H[p]$ is the entropy of distribution $p$, and $CE(p \parallel q)$ is the cross-entropy loss between distributions $p$ and $q$.
It can now be seen that minimizing the KL-divergence is equivalent to minimizing the cross-entropy loss: the entropy term $H[p]$ is a constant outside our control (a property of the true data-generating process) and, more importantly, independent of $q$ for optimization.
In practice, for a dataset $\mathcal{D}$ of input-label observations $\{x,y\}$, we compute the average cross-entropy loss for a $K$-way classification problem as,

$$CE(p \parallel q) = -\frac{1}{\lvert \mathcal{D} \rvert} \sum_{\{x,y\} \in \mathcal{D}} \sum_{k=1}^{K} p(y = k \mid x) \log{q(y = k \mid x)},$$
where the outer sum runs over all the observations, and the inner sum is the cross-entropy between the true conditional distribution $p(y \mid x)$ and the modeled conditional distribution $q(y\mid x)$. Here $p(y \mid x)$ is represented as a delta distribution which puts all its mass on the true label, i.e. $k = y$.
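As a minimal NumPy sketch of this average loss (with hypothetical predicted probabilities; a real model would produce these via a softmax):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Average cross-entropy for a K-way classifier.

    probs:  (N, K) predicted class probabilities q(y | x) per observation.
    labels: (N,) integer class indices; the delta target p(y | x) makes
            the inner sum collapse to -log q(y = label | x).
    """
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels]))

# Two observations, three classes: one confident, one uncertain prediction.
q = np.array([[0.90, 0.05, 0.05],
              [0.40, 0.30, 0.30]])
y = np.array([0, 1])
loss = cross_entropy(q, y)
```

The uncertain second prediction ($q = 0.3$ on the true label) dominates the average, as expected.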
Label smoothing^{1} is a common trick used when training neural network classifiers to ensure that the network is not over-confident and is better calibrated.
Instead of the delta distribution $p(y\mid x) = \delta_{y=k}$ we noted earlier, the key idea of label smoothing is to use a smoothed target distribution $\widetilde{p}(y\mid x)$ such that, with probability $\epsilon < 1$, the target is resampled uniformly at random, i.e.

$$\widetilde{p}(y = k \mid x) = (1 - \epsilon)\, \delta_{y=k} + \frac{\epsilon}{K}.$$
The implied loss function is now $CE(\widetilde{p} \parallel q)$. Since the cross-entropy is linear in its first argument,

$$CE(\widetilde{p} \parallel q) = \epsilon\, CE(U \parallel q) + (1 - \epsilon)\, CE(p \parallel q).$$
Therefore, after a few rearrangements, what we get is a weighted objective where the first term $CE(U \parallel q)$ nudges our model towards the uniform distribution over labels $U$, while the remainder is the same old cross-entropy loss, reweighted by $1-\epsilon$.^{2}
This objective makes intuitive sense: we want to match the true distribution $p$, but we regularize the classifier by also matching the uniform distribution, smoothing it out. Label smoothing demonstrably leads to better generalization and calibration, although it leads to worse model distillation due to loss of information at the penultimate layer, since it encourages the representations of the same label to cluster tightly.^{3}
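A short NumPy sketch (with an illustrative $\epsilon = 0.1$) showing that the cross-entropy against the smoothed target decomposes exactly into the weighted objective:

```python
import numpy as np

def smoothed_targets(labels, num_classes, eps=0.1):
    """Mix one-hot delta targets with the uniform distribution."""
    one_hot = np.eye(num_classes)[labels]
    return (1 - eps) * one_hot + eps / num_classes

def cross_entropy(targets, probs):
    # Full inner sum, since the smoothed target is no longer a delta.
    return -np.mean(np.sum(targets * np.log(probs), axis=1))

q = np.array([[0.90, 0.05, 0.05]])  # hypothetical model output
y = np.array([0])
eps = 0.1

p_tilde = smoothed_targets(y, num_classes=3, eps=eps)
smoothed = cross_entropy(p_tilde, q)

# Same value via the weighted objective:
# eps * CE(U || q) + (1 - eps) * CE(delta || q).
uniform = np.full_like(q, 1 / 3)
one_hot = np.eye(3)[y]
weighted = eps * cross_entropy(uniform, q) + (1 - eps) * cross_entropy(one_hot, q)
```

The two computations agree up to floating-point error, which is the decomposition by linearity.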
Another proposal to improve calibration of neural networks is focal loss,^{4} originally proposed for object detection.^{5}
Focal loss modifies the original cross-entropy loss such that, for $\gamma \geq 1$:^{6}

$$FL(p \parallel q) = -\mathbb{E}_{p}\left[(1 - q)^{\gamma} \log{q}\right].$$
This objective implies that as soon as $q$ starts modeling the original distribution $p$ well, we artificially downweight the incurred loss. Again, intuitively this makes sense, since the cross-entropy loss has a tendency to keep fitting until it reaches the degenerate $\delta_{y=k}$ distribution.
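A minimal NumPy sketch of this downweighting (using an illustrative $\gamma = 3$, and one-hot targets so the expectation reduces to the true-label term):

```python
import numpy as np

def focal_loss(probs, labels, gamma=3.0):
    """Focal loss with one-hot targets: mean of -(1 - q_y)^gamma * log(q_y)."""
    n = probs.shape[0]
    q_y = probs[np.arange(n), labels]
    return -np.mean((1 - q_y) ** gamma * np.log(q_y))

def cross_entropy(probs, labels):
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels]))

# A well-modeled example (q_y = 0.9) is downweighted by (1 - 0.9)^3 = 1e-3;
# a poorly-modeled one (q_y = 0.4) keeps (0.6)^3 ~ 22% of its loss.
q = np.array([[0.90, 0.05, 0.05],
              [0.40, 0.30, 0.30]])
y = np.array([0, 0])
fl = focal_loss(q, y)
ce = cross_entropy(q, y)
```

Since the modulating factor $(1 - q_y)^\gamma$ is always below one, the focal loss is always below the plain cross-entropy on the same batch.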
With a bit of algebraic massaging, we can understand the connection of focal loss to the cross-entropy loss,

$$
\begin{aligned}
FL(p \parallel q) &= -\mathbb{E}_{p}\left[(1 - q)^{\gamma} \log{q}\right] \\
&\geq -\mathbb{E}_{p}\left[(1 - \gamma q) \log{q}\right] \\
&= CE(p \parallel q) - \gamma \left\lvert \mathbb{E}_{p}\left[q \log{q}\right] \right\rvert \\
&\geq CE(p \parallel q) - \gamma \lVert P \rVert_{\infty} \lVert Q \rVert_{1} \\
&= CE(p \parallel q) - \gamma H[q],
\end{aligned}
$$
where the second step follows from Bernoulli's inequality, and the third by definition of the modulus $\lvert\cdot\rvert$ operator (the terms inside the expectation are always non-positive). $P = [p_1,\dots,p_K]$ represents the vector of probabilities from the true distribution, whose infinity norm is $\lVert P \rVert_{\infty} = 1$ since we represent it as a one-hot encoded vector, and $Q = [q_1\log{q_1},\dots,q_K\log{q_K}]$ represents the vector constructed from our modeled distribution $q$; together these let us apply Hölder's inequality. We can then revert the modulus, since each term is non-positive, such that the last term is simply $\gamma$ times the negative entropy of $q$.
Therefore, the focal loss minimizes an upper bound of the entropy-regularized cross-entropy loss. Regularizing with the entropy of $q$ nudges the learned distribution towards higher entropy, i.e. smoother learned distributions, which demonstrably leads to better calibration.^{4}
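We can sanity-check this bound numerically. The sketch below draws random predicted distributions and confirms that $FL \geq CE - \gamma H[q]$ holds pointwise for $\gamma \geq 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 3.0
K = 5

bound_holds = True
for _ in range(1000):
    q = rng.dirichlet(np.ones(K))  # random predicted distribution
    y = rng.integers(K)            # random true label (one-hot target p)

    fl = -(1 - q[y]) ** gamma * np.log(q[y])
    ce = -np.log(q[y])
    entropy_q = -np.sum(q * np.log(q))

    # Focal loss upper-bounds the entropy-regularized cross-entropy.
    bound_holds &= fl >= ce - gamma * entropy_q - 1e-12
```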
It is intuitive to expect calibration to improve by learning smoother classifier distributions. Both label smoothing and focal loss bear neat connections to the original cross-entropy loss, via a reweighted objective and an entropy-regularized objective respectively. More importantly, alongside calibration, these methods often improve generalization. I wonder what other objectives lead to similar enhancements.
Christian Szegedy et al. “Rethinking the Inception Architecture for Computer Vision.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 2818-2826. https://ieeexplore.ieee.org/document/7780677 ↩
A canonical choice of $\epsilon$ is $0.1$. ↩
Rafael Müller et al. “When Does Label Smoothing Help?” Neural Information Processing Systems (2019). https://arxiv.org/abs/1906.02629 ↩
Jishnu Mukhoti et al. “Calibrating Deep Neural Networks using Focal Loss.” ArXiv abs/2002.09437 (2020). https://arxiv.org/abs/2002.09437 ↩ ↩^{2}
Tsung-Yi Lin et al. “Focal Loss for Dense Object Detection.” 2017 IEEE International Conference on Computer Vision (ICCV): 2999-3007. https://arxiv.org/abs/1708.02002 ↩
A canonical choice of $\gamma$ is $3$. ↩