Differential Entropy
What is "differential" in differential entropy?
Entropy for continuous random variables is technically called differential entropy. I’ve always wondered what the differential means, and I finally have an answer.
Discrete Random Variables
Shannon’s groundbreaking work in information theory¹ defined information as a measure of surprise. Specifically, for a discrete random variable $X$, the information content of an outcome $x$ is $h(x) = -\log p(x)$, where $p(x)$ is the probability mass. Consequently, the average information, or entropy, is defined as²

$$H(X) = -\sum_x p(x) \log p(x).$$
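As a quick sanity check, here is a minimal Python sketch of this definition (using natural logarithms, so entropy is measured in nats; the helper name `entropy` is just illustrative):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum_x p(x) log p(x), in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # convention: 0 log 0 = 0
    return -np.sum(p * np.log(p))

print(entropy([0.5, 0.5]))  # fair coin: log 2 ≈ 0.693
print(entropy([0.9, 0.1]))  # biased coin is less surprising on average: ≈ 0.325
```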
Extending this definition to continuous random variables, however, is tricky as we’ll see next.
Continuous Random Variables
Discrete probability masses are often visualized as histograms. In a similar spirit, instead of thinking in terms of a continuous random variable $X$, we are going to think in terms of its discretized version $X^\Delta$, binned into buckets of width $\Delta$.³
To construct the entropy of such a discretized distribution, we need to define the probability $p_i$ of each bin. One way is to think in terms of the area of one bin relative to the total area occupied by all bins. For $n_i$ values in a bin, the area will be $n_i \Delta$ (a thin rectangle). For the total area across all bins, $\sum_j n_j \Delta$, we have the probability of a bin as

$$p_i = \frac{n_i \Delta}{\sum_j n_j \Delta}.$$

This construction satisfies the law of total probability such that $\sum_i p_i = 1$, i.e. the probabilities of all bins sum to $1$.
Now that we have a normalized histogram, we can instead work with normalized counts, which we denote by $p(x_i) = \frac{n_i}{\sum_j n_j \Delta}$. Under such a normalization, the area itself defines the probability of the bin:

$$p_i = p(x_i) \, \Delta.$$
Instead of our original continuous random variable $X$, let us now work with this definition of probability for the discretized version $X^\Delta$.
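Here is a small numerical sketch of this construction, assuming, purely for illustration, that $X$ is a standard normal and that we bin samples with NumPy's histogram:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)   # X ~ N(0, 1), purely illustrative
delta = 0.1                          # bin width Δ

# Raw counts n_i per bin of width Δ.
edges = np.arange(samples.min(), samples.max() + delta, delta)
counts, _ = np.histogram(samples, bins=edges)

# Bin probability as relative area: p_i = n_i Δ / Σ_j n_j Δ.
p_bin = counts * delta / np.sum(counts * delta)
print(p_bin.sum())                   # 1.0 (law of total probability)

# Normalized counts p(x_i) = n_i / (Σ_j n_j Δ): the area p(x_i) Δ is the bin probability.
density = counts / np.sum(counts * delta)
print(np.allclose(density * delta, p_bin))  # True
```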
Entropy of Discretized Random Variable
Let’s plug the definition of discretized probability into entropy. We have

$$H(X^\Delta) = -\sum_i p(x_i) \Delta \log \big( p(x_i) \Delta \big) = -\sum_i p(x_i) \Delta \log p(x_i) - \log \Delta,$$

using the fact that $\sum_i p(x_i) \Delta = 1$.
As the bin width $\Delta$ approaches zero, the sum approaches an integral and the entropy becomes

$$\lim_{\Delta \to 0} H(X^\Delta) = -\int p(x) \log p(x) \, dx - \lim_{\Delta \to 0} \log \Delta = \infty.$$
This result is trouble: the entropy of every continuous random variable is infinite. In principle, this result is not wrong. As the precision of our continuous quantity’s measurement increases (i.e. the bin width decreases), the average surprise in the measurement increases without bound. But it leaves us with an unworkable definition of entropy for continuous random variables, since we always need to know the bin width.
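A rough numerical sketch of this blow-up, again assuming a standard normal $X$ for illustration: every time the bin width shrinks by a factor of 10, the entropy of the discretized variable grows by roughly $\log 10 \approx 2.3$ nats.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=1_000_000)  # X ~ N(0, 1), purely illustrative

for delta in [1.0, 0.1, 0.01, 0.001]:
    edges = np.arange(samples.min(), samples.max() + delta, delta)
    counts, _ = np.histogram(samples, bins=edges)
    p = counts / counts.sum()          # bin probabilities p_i = p(x_i) Δ
    p = p[p > 0]
    H = -np.sum(p * np.log(p))         # entropy of the discretized X^Δ
    # H grows by roughly log(10) ≈ 2.30 nats each time Δ shrinks by 10x.
    print(f"Δ = {delta:<6} H(X^Δ) ≈ {H:.2f}   -log Δ = {-np.log(delta):.2f}")
```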
Differential Entropy
To work with the entropy of continuous random variables, the resolution is to keep only the interesting term and skip the constant width term $-\log \Delta$. The differential entropy is therefore given by

$$h(X) = -\int p(x) \log p(x) \, dx.$$
And therefore, the “differential” comes from ignoring the constant width term $-\log \Delta$, which otherwise forces the entropy to always be infinite.
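As a sketch, we can check this definition numerically against the well-known closed form for a Gaussian, $h(X) = \tfrac{1}{2}\log(2\pi e \sigma^2)$; the choice of $\sigma = 2$ and of SciPy's `quad` integrator is just illustrative:

```python
import numpy as np
from scipy import integrate

# Differential entropy h(X) = -∫ p(x) log p(x) dx for X ~ N(0, σ²), σ = 2 (illustrative).
sigma = 2.0

def pdf(x):
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Integrate over ±8σ; the tails contribute negligibly.
h_numeric, _ = integrate.quad(lambda x: -pdf(x) * np.log(pdf(x)), -8 * sigma, 8 * sigma)

# Well-known closed form for a Gaussian: h(X) = 0.5 log(2 π e σ²).
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

print(h_numeric, h_closed)  # both ≈ 2.112
```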
This distinction is often clear from context and not made explicit. Notably, in cases involving comparison of two continuous distributions (e.g. the KL divergence), the dropped $\log \Delta$ term cancels out and does not cause trouble.
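Here is a minimal sketch of that cancellation, assuming two unit-variance Gaussians for illustration: the KL divergence computed from bin probabilities stays close to the continuous closed form $(\mu_1 - \mu_2)^2 / (2\sigma^2)$ no matter the bin width, unlike the entropy above.

```python
import numpy as np

# Two unit-variance Gaussians, N(0, 1) and N(1, 1), purely illustrative.
mu1, mu2, sigma = 0.0, 1.0, 1.0
kl_closed = (mu1 - mu2) ** 2 / (2 * sigma**2)   # closed-form KL = 0.5

for delta in [0.5, 0.1, 0.01]:
    x = np.arange(-10, 10, delta) + delta / 2    # bin centres
    p = np.exp(-(x - mu1) ** 2 / (2 * sigma**2))
    q = np.exp(-(x - mu2) ** 2 / (2 * sigma**2))
    p, q = p / p.sum(), q / q.sum()              # bin probabilities; the Δ (and log Δ) cancel
    kl_disc = np.sum(p * np.log(p / q))
    print(f"Δ = {delta:<4} discrete KL ≈ {kl_disc:.4f}   continuous KL = {kl_closed}")
```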
Footnotes
1. Claude E. Shannon. “A mathematical theory of communication.” Bell Syst. Tech. J. 27 (1948): 623-656. https://ieeexplore.ieee.org/document/6773024
2. David J. C. MacKay. “Information Theory, Inference, and Learning Algorithms.” Cambridge University Press (2003). https://www.inference.org.uk/mackay/itila/
3. James V. Stone. “Information Theory: A Tutorial Introduction.” arXiv abs/1802.05968 (2015). https://arxiv.org/abs/1802.05968