This course introduces decision trees and decision forests.

Decision forests are a family of supervised machine learning models and algorithms. They provide the following benefits:

  • They are easier to configure than neural networks. Decision forests have fewer hyperparameters, and those hyperparameters come with good defaults.
  • They natively handle numeric, categorical, and missing features. This means you can write far less preprocessing code than when using a neural network, saving you time and reducing sources of error.
  • They often give good results out of the box, are robust to noisy data, and have interpretable properties.
  • They train and run inference on small datasets (<1M examples) much faster than neural networks.

Decision forests are practical, efficient, and interpretable. They produce great results in machine learning competitions and are heavily used in many industrial tasks. You can use decision forests for many supervised learning tasks, including classification and regression.
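To make the core idea concrete, here is a minimal, library-agnostic sketch of how a single decision tree turns an example into a prediction: each internal node tests one feature against a threshold and routes the example left or right until it reaches a leaf. The class and function names below are invented for this illustration and do not come from any particular library.

```python
class Leaf:
    """A terminal node holding a predicted label."""
    def __init__(self, label):
        self.label = label

class Node:
    """An internal node that tests one feature against a threshold."""
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # index of the feature to test
        self.threshold = threshold  # split value
        self.left = left            # subtree taken when feature < threshold
        self.right = right          # subtree taken when feature >= threshold

def predict(tree, example):
    """Route an example from the root to a leaf and return the leaf's label."""
    while isinstance(tree, Node):
        if example[tree.feature] < tree.threshold:
            tree = tree.left
        else:
            tree = tree.right
    return tree.label

# A hand-built tree: "is feature 0 below 2.5? predict A, otherwise predict B."
tree = Node(feature=0, threshold=2.5,
            left=Leaf("A"),
            right=Leaf("B"))

print(predict(tree, [1.4]))  # prints "A"
print(predict(tree, [4.7]))  # prints "B"
```

A decision forest is then an ensemble of many such trees, whose individual predictions are combined (for example, by voting or averaging) into a single, more robust prediction.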

The material in this course is generic to decision forests and agnostic to any specific library. However, orange boxes like this one contain code examples that use the TensorFlow Decision Forests (TF-DF) library. While specific to TF-DF, these examples are often easy to port to other decision forest libraries.


This course assumes you have completed the following courses or have equivalent knowledge:

Happy Learning!