[null,null,["最后更新时间 (UTC):2025-02-25。"],[[["Gradient boosting employs loss functions and trains weak models to predict the gradient of the loss, differing from simple signed error calculations."],["For regression with squared error loss, the gradient is proportional to the signed error, but this doesn't hold for other problem types."],["Newton's method, incorporating both first and second derivatives, can enhance gradient boosted trees by optimizing leaf values and influencing tree structure."],["YDF, a specific implementation, always applies Newton's method to refine leaf values and optionally uses it for tree structure optimization."]]],[]]