Occam's razor vs. machine learning

"Whenever possible, substitute constructions out of known entities for inferences to unknown entities."
— Occam's razor (Russell's version)

If Russell were studying machine learning these days, he'd probably throw out all of the textbooks.

The reason is that ML introduces too many terms that differ subtly or not at all. It's hard to understand the scale of the problem without a good example.

You can ask your ML-experienced friends: how many distinct entities are there in this list of terms?

  1. mean squared error (MSE)

  2. logistic loss

  3. cross-entropy

  4. Kullback-Leibler (KL) divergence

  5. logarithmic scoring rule

  6. quadratic scoring rule

  7. Brier scoring rule

  8. Gini splitting criterion (Gini impurity) for decision tree (not Gini coefficient)

  9. entropy splitting criterion for decision tree

  10. binomial / multinomial deviance

  11. ordinary least squares

  12. mutual information

  13. logarithmic loss

  14. information gain

  15. softmax classifier

Not to mention trivial things like the sum of squared errors being the same entity as MSE.

The answer is: 2.

This alone is enough to say that the terminology used in ML today is very unfriendly and calls for some minimization.

Group 1 contains different synonyms of MSE:

  • mean squared error (MSE)

  • quadratic scoring rule

  • Brier scoring rule

  • Gini splitting criterion (Gini impurity)

  • ordinary least squares

It is not obvious that Gini is a particular version of MSE splitting, but I demonstrate this below.
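As a quick sanity check of one of these identifications, here is a minimal sketch (assuming scikit-learn and NumPy are available; the toy labels and probabilities are made up) showing that the Brier score is literally the MSE of predicted probabilities:

```python
import numpy as np
from sklearn.metrics import brier_score_loss, mean_squared_error

# A toy binary problem with probabilistic predictions (made-up numbers).
y_true = np.array([0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.8, 0.6, 0.3, 0.9])

# The Brier score is the mean squared error of the predicted probabilities.
print(brier_score_loss(y_true, y_prob))    # 0.062
print(mean_squared_error(y_true, y_prob))  # 0.062
```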

Group 2 contains different versions of cross-entropy:

  1. logistic loss

  2. cross-entropy

  3. Kullback-Leibler (KL) divergence

  4. logarithmic scoring rule

  5. entropy splitting criterion for decision tree

  6. binomial / multinomial deviance

  7. mutual information

  8. logarithmic loss

  9. information gain

  10. softmax classifier

Entropy is the cross-entropy of a distribution with itself. KL divergence is the difference between cross-entropy and entropy.
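A minimal numerical sketch of these identities, using NumPy and a pair of made-up discrete distributions:

```python
import numpy as np

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum_i p_i * log(q_i)."""
    return -np.sum(p * np.log(q))

# Two made-up discrete distributions over three outcomes.
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.6, 0.3])

entropy_p = cross_entropy(p, p)    # entropy is cross-entropy of p with itself
kl_pq = np.sum(p * np.log(p / q))  # KL(p || q)

# KL divergence = cross-entropy minus entropy.
print(np.isclose(kl_pq, cross_entropy(p, q) - entropy_p))  # True
```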

There are many other ways to define the same concept in particular cases: the softmax classifier corresponds to optimizing the log-likelihood of a Bernoulli distribution (in the case of two classes) or a categorical distribution (in the case of more than two classes).
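Here is a small sketch of that correspondence for a single sample with made-up logits: the cross-entropy loss of a softmax classifier is exactly the negative log-likelihood of the categorical distribution defined by the softmax probabilities.

```python
import numpy as np

# Made-up logits for one sample over three classes; the true class is 2.
logits = np.array([1.0, 2.0, 0.5])
true_class = 2

# Softmax turns logits into categorical probabilities.
probs = np.exp(logits) / np.sum(np.exp(logits))

# Cross-entropy loss of the softmax classifier for this sample...
y_onehot = np.eye(3)[true_class]
cross_entropy_loss = -np.sum(y_onehot * np.log(probs))

# ...is exactly the negative log-likelihood of the categorical distribution.
negative_log_likelihood = -np.log(probs[true_class])

print(np.isclose(cross_entropy_loss, negative_log_likelihood))  # True
```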

The non-trivial thing here: the entropy splitting criterion is the same as logloss minimization.

Why Gini splitting = MSE?

The simplest case is binary classification: $y_i = 0 \text{ or } 1$. Let's first revisit the MSE splitting criterion.

Suppose we split the feature space into leaves of a tree: $\text{leaf} = 1, 2, \dots, L$.

In this case MSE is equal to:

$$
\text{MSE} = \sum_{i} \left(y_i - p_{\text{leaf}(i)}\right)^2,
$$

where $\text{leaf}(i)$ is the leaf observation $i$ falls into and $p_{\text{leaf}}$ is the constant prediction of the tree in that leaf.

We are going to group the summands according to the leaf the observations belong to. With the optimal constant prediction $p_{\text{leaf}} = \dfrac{n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}$, the penalty for a single leaf depends only on the number of samples $n_{\text{leaf}, 0}, n_{\text{leaf}, 1}$ from classes 0 and 1:

$$
\text{MSE}_{\text{leaf}} = n_{\text{leaf}, 0} \, p_{\text{leaf}}^2 + n_{\text{leaf}, 1} \, (1 - p_{\text{leaf}})^2 = \dfrac{n_{\text{leaf}, 0} \times n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}
$$

So, while building a decision tree, we greedily optimize the following global function (but each split optimizes only two terms in the sum):

$$
\text{MSE} = \sum_{\text{leaf}=1}^{L} \dfrac{n_{\text{leaf}, 0} \times n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}
$$

Each term equals $n_{\text{leaf}} \, p_{\text{leaf}, 0} \, p_{\text{leaf}, 1}$, i.e. half of the leaf size times its Gini impurity $2 \, p_{\text{leaf}, 0} \, p_{\text{leaf}, 1}$, so greedy minimization of MSE and Gini splitting select exactly the same splits.
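A small numerical sketch of this equivalence (the leaf counts are made up): the per-leaf squared-error penalty at the optimal constant prediction equals $\dfrac{n_{\text{leaf}, 0} \, n_{\text{leaf}, 1}}{n_{\text{leaf}, 0} + n_{\text{leaf}, 1}}$, which is the size-weighted Gini impurity up to a factor of 2.

```python
import numpy as np

# Hypothetical leaf contents: n0 samples of class 0, n1 samples of class 1.
n0, n1 = 30, 70
y = np.array([0] * n0 + [1] * n1)
n_leaf = n0 + n1

# Optimal constant prediction in the leaf and the resulting squared-error penalty.
p = y.mean()                     # = n1 / (n0 + n1)
mse_leaf = np.sum((y - p) ** 2)  # = n0 * n1 / (n0 + n1)

# Gini impurity of the same leaf, weighted by its size.
p0, p1 = n0 / n_leaf, n1 / n_leaf
gini_impurity = 1 - p0 ** 2 - p1 ** 2  # = 2 * p0 * p1 for two classes
weighted_gini = n_leaf * gini_impurity

print(np.isclose(mse_leaf, n0 * n1 / (n0 + n1)))  # True
print(np.isclose(weighted_gini, 2 * mse_leaf))    # True
```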

Why entropy splitting = logloss minimization?

LogLoss is defined as:

$$
\text{LogLoss} = - \sum_{i} \sum_{c} y_{i,c} \log p_{i,c}
$$

(where $y_{i,c}$ is 1 if event $i$ belongs to class $c$ and 0 otherwise; $p_{i,c}$ is the predicted probability of belonging to class $c$).

When selecting optimal predictions in the leaf, we optimize:

$$
\text{LogLoss}_{\text{leaf}} = - \sum_{i \in \text{leaf}} \sum_{c} y_{i,c} \log p_{\text{leaf}, c} = - \sum_{c} n_{\text{leaf}, c} \log p_{\text{leaf}, c}
$$

It is easy to prove that the optimal point is $p_{\text{leaf}, c} = \dfrac{n_{\text{leaf}, c}}{n_{\text{leaf}}}$, where $n_{\text{leaf}} = \sum_c n_{\text{leaf}, c}$.

Substituting this optimum back into the leaf's loss:

$$
\text{LogLoss}_{\text{leaf}} = - \sum_{c} n_{\text{leaf}, c} \log \dfrac{n_{\text{leaf}, c}}{n_{\text{leaf}}} = n_{\text{leaf}} \times \text{Entropy}(\text{leaf})
$$

… and we get the entropy impurity.
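A quick numerical check (with a made-up leaf of 30 + 70 samples) that plugging the class frequencies into the per-leaf log-loss gives exactly $n_{\text{leaf}} \times \text{Entropy}(\text{leaf})$, and that any other prediction does worse:

```python
import numpy as np

# Hypothetical leaf with 30 samples of class 0 and 70 of class 1.
counts = np.array([30, 70])
n_leaf = counts.sum()

# Optimal per-leaf prediction: the class frequencies.
p_opt = counts / n_leaf

# Log-loss of the leaf at the optimum.
logloss_leaf = -np.sum(counts * np.log(p_opt))

# Entropy impurity of the leaf.
entropy_leaf = -np.sum(p_opt * np.log(p_opt))

print(np.isclose(logloss_leaf, n_leaf * entropy_leaf))        # True

# Any other constant prediction gives a larger log-loss.
p_other = np.array([0.5, 0.5])
print(-np.sum(counts * np.log(p_other)) > logloss_leaf)       # True
```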