Gradient Boosted Decision Trees: A Recap

A note on the big three gradient boosting algorithms


This technical note is a summary of the big three gradient boosting decision tree (GBDT) algorithms. Some notation has been slightly tweaked from the original papers to maintain consistency.

Regularized Gradient Tree Boosting

Gradient boosting is the process of building an ensemble of predictors by performing gradient descent in the functional space.

For an ensemble of $K$ predictors $\phi_{K}(\mathbf{x}) = \sum_{k=1}^K f_k(\mathbf{x})$ with weak predictors $f_k$ as decision trees, the typical learning objective is,

$$\mathcal{L}(\phi_K) = \sum_{i=1}^n \ell(y_i, \phi_K(\mathbf{x}_i)) + \sum_{k=1}^K \Omega(f_k),$$

for a differentiable loss function $\ell$ and a regularization term,

$$\Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^2,$$

where $T$ is the number of leaves in the tree $f$, and $w \in \mathbb{R}^T$ is the vector of continuous scores for each leaf. Note that classical GBDT does not include the regularization term.

This optimization problem cannot be solved by traditional optimization methods in Euclidean space, so we resort to boosting: greedily adding the single best function in each round. At round $k$, we solve the greedy objective,

$$\mathcal{L}(f_k) = \sum_{i=1}^n \ell(y_i, \phi_{k-1}(\mathbf{x}_i) + f_k(\mathbf{x}_i)) + \Omega(f_k).$$

Using a second-order Taylor expansion of $\ell$ around $\phi_{k-1}$ leads to a simplified objective. The optimal objective value for a given tree structure $q$ is then found to be,

$$\mathcal{L}^\star(q) = -\frac{1}{2}\sum_{t=1}^T \frac{\left(\sum_{i \in I_t} g_i\right)^2}{\sum_{i \in I_t} h_i + \lambda} + \gamma T,$$

where $g_i$ and $h_i$ are the first- and second-order gradients from the Taylor expansion, respectively, and $I_t$ is the instance set at leaf $t$.
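
For reference, a sketch of the intermediate step behind this expression (following the standard second-order derivation, with the notation above): dropping terms constant in $f_k$,

$$\mathcal{L}(f_k) \simeq \sum_{i=1}^n \left[ g_i f_k(\mathbf{x}_i) + \tfrac{1}{2} h_i f_k^2(\mathbf{x}_i) \right] + \Omega(f_k), \qquad w_t^\star = -\frac{\sum_{i \in I_t} g_i}{\sum_{i \in I_t} h_i + \lambda},$$

where $w_t^\star$ is the leaf score that minimizes the resulting per-leaf quadratic; substituting it back yields $\mathcal{L}^\star(q)$.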

Since it is practically impossible to enumerate all possible tree structures, we add another layer of greedy construction: start with a single leaf node and keep splitting. Each split candidate can then be evaluated in terms of its loss reduction under the objective above, as sketched below.
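
A minimal sketch of that loss-reduction computation, assuming the gradient and hessian sums of the proposed children have already been accumulated (the function and argument names are illustrative, not from any library):

```python
import numpy as np

def split_gain(G_left, H_left, G_right, H_right, lam, gamma):
    """Loss reduction of splitting a leaf into (left, right) children.

    G_*, H_* are sums of first- and second-order gradients over the
    instances routed to each child; lam and gamma are the L2 and
    per-leaf regularization terms from Omega(f).
    """
    def leaf_score(G, H):
        # Contribution G^2 / (H + lambda) of a single leaf (before the -1/2 factor).
        return (G ** 2) / (H + lam)

    parent = leaf_score(G_left + G_right, H_left + H_right)
    children = leaf_score(G_left, H_left) + leaf_score(G_right, H_right)
    # Gain = loss reduction minus the cost gamma of adding one extra leaf.
    return 0.5 * (children - parent) - gamma
```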

Decision Trees

Decision trees are built by recursive partitioning of the feature space into disjoint regions used for predictions. The main cost in building a decision tree comes from the split-finding algorithm.

The simplest approach to split-finding is a pre-sorted algorithm, which enumerates all possible split points over the pre-sorted feature values. This exact greedy algorithm finds the optimal split points, but it is inefficient in both training speed and memory consumption.
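
A minimal sketch of the exact greedy enumeration for a single feature, reusing the hypothetical split_gain helper above (names are illustrative):

```python
import numpy as np

def exact_greedy_best_split(x, g, h, lam, gamma):
    """Enumerate every split point of one feature via pre-sorting and prefix sums."""
    order = np.argsort(x)
    x_s, g_s, h_s = x[order], g[order], h[order]
    G, H = g_s.sum(), h_s.sum()
    G_left = H_left = 0.0
    best_gain, best_threshold = -np.inf, None
    for i in range(len(x_s) - 1):
        G_left += g_s[i]
        H_left += h_s[i]
        if x_s[i] == x_s[i + 1]:
            continue  # cannot split between equal feature values
        gain = split_gain(G_left, H_left, G - G_left, H - H_left, lam, gamma)
        if gain > best_gain:
            best_gain, best_threshold = gain, 0.5 * (x_s[i] + x_s[i + 1])
    return best_threshold, best_gain
```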

The alternative, approximate but much faster, approach is to instead build quantiles of the feature distribution, so that continuous features are split into discrete buckets. The quantiles can be built globally once, or locally at each level of the tree; local proposals are often more appropriate for deeper trees.
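
A minimal sketch of building global quantile bins for one continuous feature, assuming plain unweighted quantiles and NumPy only (names are illustrative):

```python
import numpy as np

def build_global_bins(feature_values, n_bins=256):
    """Bin edges taken from the quantiles of a single continuous feature."""
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.unique(np.quantile(feature_values, quantiles))

def to_histogram_index(feature_values, bin_edges):
    """Map each raw value to its bucket; splits are then searched over buckets."""
    return np.searchsorted(bin_edges, feature_values, side="right")
```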

High-cardinality categorical variables can be handled by clustering values into a smaller number of groups and one-hot encoding those, although it has generally been noted that converting high-cardinality categorical variables to numerical features is the most efficient method with minimal information loss.

The Big Three

XGBoost

TL;DR: (i) On the machine-learning side, XGBoost [1] aims to be smarter and faster about split-finding. (ii) On the software-engineering side, XGBoost relies on column blocks for parallelization, cache-aware access patterns to avoid interleaved read/write access, and block compression for out-of-core computation (similar in spirit to columnar storage).

Weighted Quantile Sketch: Ideally, we would like to select the $l$ candidate split points $\{s_{d1}, s_{d2}, \dots, s_{dl}\}$ for the feature in dimension $d$ such that they are distributed evenly over the data ($s_{d1}$ is always the minimum feature value and $s_{dl}$ always the maximum). The weights here are the second-order gradient values. The constraint is to keep the difference between successive rank-function values below some threshold $\epsilon$, so that there are roughly $1/\epsilon$ candidate points. This is available as the sketch_eps parameter in XGBoost when tree_method=approx.
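
A simplified sketch of picking candidates at roughly $\epsilon$-spaced hessian-weighted ranks; this ignores the merge/prune machinery of the actual sketch data structure, and the names are illustrative:

```python
import numpy as np

def weighted_split_candidates(x, h, eps=0.1):
    """Candidate split points spaced ~eps apart in hessian-weighted rank."""
    order = np.argsort(x)
    x_sorted, h_sorted = x[order], h[order]
    # Normalized rank function: fraction of total hessian mass at or below each point.
    ranks = np.cumsum(h_sorted) / np.sum(h_sorted)
    candidates = [x_sorted[0]]                      # s_{d1}: minimum feature value
    next_rank = eps
    for xv, r in zip(x_sorted, ranks):
        if r >= next_rank:
            candidates.append(xv)
            next_rank += eps
    candidates.append(x_sorted[-1])                 # s_{dl}: maximum feature value
    return np.unique(np.asarray(candidates))
```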

A version of the weighted quantile sketch for non-uniformly weighted data is also proposed, with theoretical guarantees.

Sparsity-aware Split Finding: To handle missing feature values, XGBoost learns an optimal default direction at each node from the data. The gain for each candidate direction is computed using the same loss-reduction formula above. This also works with the quantile-based buckets, where the statistics are computed only over non-missing values. This provides a unified way of handling all sparsity patterns.
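
A minimal sketch of learning the default direction, again reusing the hypothetical split_gain helper: route the gradient mass of the missing-value instances to each side in turn and keep whichever yields the larger gain.

```python
def best_default_direction(G_left, H_left, G_right, H_right,
                           G_missing, H_missing, lam, gamma):
    """Try sending the gradient mass of missing values left, then right."""
    gain_left = split_gain(G_left + G_missing, H_left + H_missing,
                           G_right, H_right, lam, gamma)
    gain_right = split_gain(G_left, H_left,
                            G_right + G_missing, H_right + H_missing, lam, gamma)
    return ("left", gain_left) if gain_left >= gain_right else ("right", gain_right)
```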


LightGBM

TL;DR: The efficiency and scalability of XGBoost still remain unsatisfactory for problems with large $n$ and large $d$. There are two ways to speed things up: (i) reduce the number of data instances, or (ii) reduce the number of features. But straightforward subsampling is highly non-trivial. LightGBM [2] essentially addresses (i) via Gradient-based One-Side Sampling (GOSS) and (ii) via Exclusive Feature Bundling (EFB).

Gradient-based One-Side Sampling: Part of the inspiration here is the classical boosting algorithm AdaBoost, where every instance is assigned a weight (starting from uniform weights).

The contention is that, when gradient magnitudes are used as a measure of instance importance, uniform subsampling can often lead to inaccurate gain estimates because instances with large gradients dominate. Instead, GOSS keeps the instances whose gradient magnitudes fall in the top $a \times 100\%$, uniformly samples a fraction $b$ only from the remainder of the data, and amplifies the gradients of those sampled instances by $\frac{1-a}{b}$ so as not to change the original data distribution by much.
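
A minimal sketch of the GOSS sampling step, assuming per-instance gradients g are available (names and tie-breaking details are illustrative):

```python
import numpy as np

def goss_sample(g, a=0.2, b=0.1, rng=np.random.default_rng(0)):
    """Keep the top-a fraction by |gradient|, sample a b fraction of the rest.

    Returns selected indices and per-instance weights; the sampled
    small-gradient instances are up-weighted by (1 - a) / b.
    """
    n = len(g)
    order = np.argsort(-np.abs(g))             # descending by gradient magnitude
    n_top, n_rest = int(a * n), int(b * n)
    top_idx = order[:n_top]
    rest_idx = rng.choice(order[n_top:], size=n_rest, replace=False)
    idx = np.concatenate([top_idx, rest_idx])
    weights = np.ones(len(idx))
    weights[n_top:] = (1.0 - a) / b             # amplify the sampled remainder
    return idx, weights
```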

LightGBM then uses a slightly modified information gain for split finding, which relies only on the reweighted first-order gradients over the subsampled instance set. Theoretical results show that the estimation error of this information gain decays at rate $\mathcal{O}(1/\sqrt{n})$.

Exclusive Feature Bundling: High-dimensional data is usually very sparse, which offers an opportunity for a nearly lossless reduction in the number of features by bundling features that are mutually exclusive (e.g. the dimensions of a one-hot encoding: if one is non-zero, the others must be zero). The objective is to arrive at roughly the same feature histograms using the feature bundles as we would with the individual features.

First, finding the optimal bundles is NP-hard (by equivalence to graph coloring, where vertices of the same color represent mutually exclusive features), so we fall back to an approximate greedy algorithm. To avoid an overly strict exclusivity constraint, a small degree of conflict is tolerated, which amounts to randomly polluting a small fraction of feature values. The graph is constructed with edge weights equal to the total number of conflicts between the corresponding features.

Second, to construct a bundle, the features are merged by adding offsets to their values so that different original features end up in different bins of the bundled histogram, as in the sketch below.
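
A minimal sketch of merging a bundle of (mostly) mutually exclusive, integer-binned features by offsetting their bin ranges (names are illustrative; conflict handling is omitted):

```python
import numpy as np

def merge_bundle(binned_features, n_bins_per_feature):
    """Merge exclusive features into one feature via per-feature bin offsets.

    binned_features: list of integer arrays, where 0 means "not present".
    n_bins_per_feature: number of non-zero bins each feature uses.
    """
    n = len(binned_features[0])
    merged = np.zeros(n, dtype=np.int64)
    offset = 0
    for values, n_bins in zip(binned_features, n_bins_per_feature):
        nonzero = values != 0
        # Shift this feature's bins into its own exclusive range of the bundle.
        merged[nonzero] = values[nonzero] + offset
        offset += n_bins
    return merged
```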

Also see LightGBM Features for a conceptual overview of many design choices.


CatBoost

TL;DR: The authors argue that existing implementations suffer from a shift in the predictive distribution caused by target leakage. To address this, CatBoost [3] [4] proposes ordered boosting, together with an efficient, permutation-based (ordered) target statistic for categorical features.

Greedy Target Statistic and Target Leakage: One way to convert a categorical variable into a numerical value is to compute some target statistic, which aims to estimate the expected target conditioned on the category value. The most straightforward way is to compute the empirical conditional mean, smoothed by a prior $p$ (e.g. the empirical average of the target over the full dataset) with strength $a > 0$. For feature dimension $d$ of input instance $i$,

$$\widehat{\mathbf{x}}_{id} = \frac{\sum_{j=1}^n \mathbb{1}_{\{\mathbf{x}_{jd} = \mathbf{x}_{id}\}}\, y_j + a\, p}{\sum_{j=1}^n \mathbb{1}_{\{\mathbf{x}_{jd} = \mathbf{x}_{id}\}} + a}.$$
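
A minimal sketch of this greedy statistic computed over the full training set (pandas-based; names are illustrative). Note that it uses $y_i$ itself when computing the statistic for instance $i$, which is exactly the leakage discussed next.

```python
import numpy as np
import pandas as pd

def greedy_target_statistic(categories, y, a=1.0):
    """Smoothed per-category mean target, computed over the whole dataset."""
    p = float(np.mean(y))                          # prior: global target mean
    df = pd.DataFrame({"cat": categories, "y": y})
    grouped = df.groupby("cat")["y"].agg(["sum", "count"])
    stats = (grouped["sum"] + a * p) / (grouped["count"] + a)
    return df["cat"].map(stats).to_numpy()
```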

The problem here is target leakage: the statistic for $\mathbf{x}_i$ is computed using its own target $y_i$. Leave-one-out does not fix this either. What we want is,

$$\mathbb{E}\left[\widehat{\mathbf{x}}_{d} \mid y \right] = \mathbb{E}\left[\widehat{\mathbf{x}}_{id} \mid y_i \right].$$

One way to achieve this is to simply use a held-out set (potentially even including only $\mathbf{x}_i$) to compute the target statistic. But this is wasteful, since part of the training data remains unused.

Ordered Target Statistic: A more effective strategy, inspired by online learning algorithms, is to rely only on the observed history; in this case, a permutation of the training data. For a permutation $\sigma$ of the dataset, the target statistic $\widehat{\mathbf{x}}_i$ is computed using only $\mathcal{D}_i = \{\mathbf{x}_j : \sigma(j) < \sigma(i)\}$. To avoid high-variance estimates for instances that appear early in the permutation, each boosting round uses a different permutation.
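
A minimal sketch of the ordered target statistic for a single permutation, maintaining running per-category sums and counts over the observed prefix (names are illustrative):

```python
import numpy as np
from collections import defaultdict

def ordered_target_statistic(categories, y, a=1.0, rng=np.random.default_rng(0)):
    """Target statistic for each instance using only instances earlier in sigma."""
    n = len(y)
    p = float(np.mean(y))                           # prior: global target mean
    sigma = rng.permutation(n)                      # random processing order
    sums, counts = defaultdict(float), defaultdict(int)
    ts = np.empty(n)
    for i in sigma:
        c = categories[i]
        # Statistic from the "history" D_i = {j : sigma(j) < sigma(i)} only.
        ts[i] = (sums[c] + a * p) / (counts[c] + a)
        sums[c] += y[i]
        counts[c] += 1
    return ts
```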

Prediction Shift: As a consequence of the target leakage above (and of computing gradients on the same instances the current model was fit on), all subsequent distributions are biased: the predictive distribution at a training point, $\phi_{K}(\mathbf{x}_i) \mid \mathbf{x}_i$, does not match that at a test point, $\phi_{K}(\mathbf{x}_\star) \mid \mathbf{x}_\star$. The same shift holds for the gradient $g(\mathbf{x}_i) \mid \mathbf{x}_i$ relative to its distribution on test instances.

Practical Ordered Boosting: In principle, the boosting procedure should compute each residual using a model that was never trained on that data point (paired with the Ordered TS). Done naively, this is impractical and increases the computational complexity by a factor of $n$. One practical trick is to approximate the gradients via a cosine-similarity-based score when evaluating splits; the other is to store the supporting models only at exponentially spaced positions of each permutation.


Footnotes

1. Tianqi Chen and Carlos Guestrin. "XGBoost: A Scalable Tree Boosting System." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016). https://arxiv.org/abs/1603.02754

2. Ke, Guolin et al. "LightGBM: A Highly Efficient Gradient Boosting Decision Tree." NIPS (2017). https://proceedings.neurips.cc/paper/2017/hash/6449f44a102fde848669bdd9eb6b76fa-Abstract.html

3. Ostroumova, Liudmila et al. "CatBoost: unbiased boosting with categorical features." Neural Information Processing Systems (2018). https://papers.nips.cc/paper_files/paper/2018/hash/14491b756b3a51daac41c24863285549-Abstract.html

4. Dorogush, Anna Veronika et al. "CatBoost: gradient boosting with categorical features support." arXiv:1810.11363 (2018). https://arxiv.org/abs/1810.11363