
Decision Tree Learning

Decision trees are a very popular tool for predictive analytics because they are relatively easy to use, perform well with non-linear relationships, and produce highly interpretable output. We discuss different methods for decision tree learning below.


Decision tree learning is a class of methods whose output is a list of rules that progressively partition a population into smaller segments that are homogeneous with respect to a single characteristic, or target variable. End users can visualize the rules as a tree diagram, which is very easy to interpret, and the rules are simple to deploy in a decision engine. These characteristics—transparency of the solution and rapid deployment—make decision trees a popular method.
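The output rules deploy naturally as nested conditional logic. As a minimal sketch, the segments, field names, and thresholds below are invented for illustration; a real tree's rules would come from the learning algorithm:

```python
# Hypothetical churn-scoring rules, in the form a learned decision tree
# might output them. Field names and thresholds are invented.
def assign_segment(customer: dict) -> str:
    """Walk the tree's rules top-down; each path ends in a leaf segment."""
    if customer["tenure_months"] < 12:
        if customer["contract"] == "month-to-month":
            return "high churn risk"
        return "medium churn risk"
    if customer["support_calls"] > 3:
        return "medium churn risk"
    return "low churn risk"

print(assign_segment({"tenure_months": 6, "contract": "month-to-month",
                      "support_calls": 0}))
# -> high churn risk
```

Because the deployed model is nothing more than if/else logic, it can run in almost any decision engine without specialized scoring libraries.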

Readers should not confuse decision tree learning with the decision tree method used in decision analysis, although the result in each case is a tree-like diagram. The decision tree method in decision analysis is a tool that managers can use to evaluate complex decisions; it works with subjective probabilities and uses game theory to determine optimal choices. Algorithms that build decision trees, on the other hand, work entirely from data and build the tree based on observed relationships rather than the user’s prior expectations.

You can train decision trees with data in many ways; the sections that follow describe the most widely used methods. The Ensemble Learning section covers advanced methods (such as bagging, boosting, and random forests).


CHAID

CHAID (Chi-Square Automatic Interaction Detection) is one of the oldest tree-building techniques; in its most widely used form, the method dates to a 1980 publication by Gordon V. Kass and draws on other methods developed in the 1950s and 1960s.

CHAID works only with categorical predictors and targets. The algorithm computes a chi-square test between the target variable and each available predictor and then uses the best predictor to partition the sample. It then proceeds, in turn, with each segment and repeats the process until no significant splits remain. The standard CHAID algorithm does not prune or cross-validate the tree.
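The core of each CHAID step—scoring every candidate predictor with a chi-square statistic against the target and keeping the strongest—can be sketched in a few lines. This is a simplified illustration with invented toy data, not a full CHAID implementation (it omits category merging and the significance-based stopping rule):

```python
from collections import Counter

def chi_square(rows, predictor, target):
    """Chi-square statistic for the contingency table of predictor x target."""
    obs = Counter((r[predictor], r[target]) for r in rows)
    row_tot = Counter(r[predictor] for r in rows)
    col_tot = Counter(r[target] for r in rows)
    n = len(rows)
    stat = 0.0
    for p in row_tot:
        for t in col_tot:
            expected = row_tot[p] * col_tot[t] / n  # under independence
            stat += (obs[(p, t)] - expected) ** 2 / expected
    return stat

# Invented toy data: "plan" separates the target perfectly, "region" does not.
data = [
    {"region": "N", "plan": "basic", "churn": "yes"},
    {"region": "S", "plan": "basic", "churn": "yes"},
    {"region": "N", "plan": "pro",   "churn": "no"},
    {"region": "S", "plan": "pro",   "churn": "no"},
    {"region": "N", "plan": "pro",   "churn": "no"},
    {"region": "S", "plan": "basic", "churn": "yes"},
]
best = max(["region", "plan"], key=lambda p: chi_square(data, p, "churn"))
print(best)  # -> plan
```

CHAID would then partition the sample by the winning predictor's levels and repeat the same scoring within each resulting segment.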

Software implementations of CHAID vary; typically, the user can specify a significance threshold for the chi-square test, a minimum cell size, and a maximum depth for the tree.

The principal advantages of CHAID are its use of the chi-square test (which is familiar to most statisticians) and its ability to perform multiway splits. The main weakness of CHAID is its limitation to categorical data.


CART

CART, or Classification and Regression Trees, is the name of a patented application marketed by Salford Systems, based on the eponymous 1984 publication by Leo Breiman. CART is a nonparametric algorithm that learns and validates decision tree models.

Like CHAID, the algorithm proceeds recursively, successively splitting the data set into smaller segments. However, there are key differences between the CHAID and CART algorithms:

  • CHAID uses the chi-square measure to identify split candidates, whereas CART uses the Gini rule.
  • CHAID supports multiway splits for predictors with more than two levels; CART supports binary splits only and identifies the best binary split for complex categorical or continuous predictors.
  • CART prunes the tree by testing it against an independent (validation) data set or through n-fold cross-validation; CHAID does not prune the tree.
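The Gini rule and the search for the best binary split on a continuous predictor can be illustrated together. This is a minimal sketch with invented data, not Salford Systems' proprietary implementation:

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared shares."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_binary_split(values, labels):
    """Return the threshold on a continuous predictor that minimizes the
    weighted Gini impurity of the two resulting segments."""
    order = sorted(zip(values, labels))
    n = len(order)
    best_thresh, best_score = None, float("inf")
    for i in range(1, n):
        if order[i - 1][0] == order[i][0]:
            continue  # cannot split between identical predictor values
        thresh = (order[i - 1][0] + order[i][0]) / 2
        left = [lab for v, lab in order[:i]]
        right = [lab for v, lab in order[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_thresh, best_score = thresh, score
    return best_thresh, best_score

# Invented toy data: age perfectly separates a yes/no target at 32.5.
ages = [22, 25, 30, 35, 40, 45]
target = ["yes", "yes", "yes", "no", "no", "no"]
print(best_binary_split(ages, target))  # -> (32.5, 0.0)
```

A weighted impurity of zero means both child segments are pure; in practice CART evaluates every predictor this way and splits on whichever yields the largest impurity reduction.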

CART works with either categorical targets (classification trees) or continuous targets (regression trees), as well as either categorical or continuous predictors. This is a key advantage of CART over CHAID, together with its ability to develop more accurate decision tree models. The principal disadvantage of CART is that the algorithm is proprietary.


ID3, C4.5, and C5.0

ID3, C4.5, and C5.0 are tree-learning algorithms developed by Ross Quinlan, an Australian computer science researcher.

ID3 (Iterative Dichotomiser) is similar to CHAID and CART, but uses entropy-based information gain to define splitting rules. ID3 works with categorical targets and predictors only.
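Information gain measures how much a split on a predictor reduces the entropy of the target. A minimal sketch, with invented toy data in the spirit of Quinlan's weather examples:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, predictor, target):
    """Reduction in target entropy from splitting on a categorical predictor."""
    total = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[predictor] for r in rows}:
        subset = [r[target] for r in rows if r[predictor] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return total - remainder

# Invented data: "outlook" perfectly predicts "play"; "windy" tells us nothing.
data = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "yes"},
    {"outlook": "rainy", "windy": "no",  "play": "no"},
    {"outlook": "rainy", "windy": "yes", "play": "no"},
]
print(information_gain(data, "outlook", "play"))  # -> 1.0
print(information_gain(data, "windy", "play"))    # -> 0.0
```

ID3 splits on the predictor with the highest information gain, then recurses into each resulting segment, just as CHAID does with its chi-square criterion.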

C4.5 is a successor to ID3, with several improvements. C4.5 works with both categorical and continuous variables, handles missing data, and enables the user to specify the cost of errors. The algorithm also includes a pruning function. C5.0, the most current commercial version, includes a number of technical improvements to speed tree construction and supports additional features (such as weighting, winnowing, and boosting).

ID3 and C4.5 are available as open source software. ID3 is available in C, C#, LISP, Perl, Prolog, Python, and Ruby, and C4.5 is available in Java. RuleQuest Research distributes a commercial version of C5.0 together with a single-threaded version available as open source software.

Hybrid Decision Trees

Methods such as CART and C5.0 are patented and trademarked. However, the general principles of decision tree learning (splitting rules, stopping rules, and pruning methods) are in the public domain. Hence, a number of software vendors support generic decision tree learning platforms that offer the user a choice of splitting rules, pruning methods, and visualization capabilities.
