Decision Forests
A decision forest is a generic term to describe models made of multiple
decision trees. The prediction of a decision forest is the aggregation of the
predictions of its decision trees. The implementation of this aggregation
depends on the algorithm used to train the decision forest. For example, in a
multi-class classification random forest (a type of decision forest), each tree
votes for a single class, and the random forest prediction is the most
represented class. In a binary classification gradient boosted tree (GBT) model
(another type of decision forest), each tree outputs a logit (a floating-point
value), and the GBT prediction is the sum of those logits passed through an
activation function (e.g., the sigmoid).
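The following sketch illustrates both aggregation strategies. The per-tree
outputs (`tree_votes` and `tree_logits`) are hypothetical placeholder values,
not the output of any real training algorithm.

```python
import math
from collections import Counter

# Random forest (multi-class classification): each tree votes for a class,
# and the forest predicts the most represented class.
tree_votes = ["red", "blue", "red", "red", "green"]  # one vote per tree
rf_prediction = Counter(tree_votes).most_common(1)[0][0]
print(rf_prediction)  # "red"

# Gradient boosted trees (binary classification): each tree outputs a logit,
# the logits are summed, and an activation function (here, the sigmoid)
# maps the sum to a probability.
tree_logits = [0.7, -0.2, 0.4, 0.1]  # one logit per tree
logit_sum = sum(tree_logits)  # 1.0
gbt_probability = 1.0 / (1.0 + math.exp(-logit_sum))
print(f"{gbt_probability:.3f}")  # ~0.731
```

The next two chapters cover these two decision forest algorithms in detail.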