A machine learning technique that combines weak models step by step into a strong ensemble -- similar to an experienced mentor learning from every mistake their student makes.
Gradient Boosting is an ensemble learning technique in machine learning. The core idea: instead of training a single complex model, many simple models are built sequentially, and each new model corrects the residual errors of the ensemble built so far. The result is a "committee" of weak learners that together achieve very high prediction accuracy.
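To make the "committee of weak learners" concrete, here is a minimal from-scratch sketch in Python. It assumes squared-error loss (so the negative gradient is simply the residual) and uses shallow decision trees from scikit-learn as weak learners; the function names are illustrative, not from any particular library.

```python
# Minimal sketch of gradient boosting for regression with squared-error loss.
# Each new tree is fitted to the residuals of the current ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_rounds=100, learning_rate=0.1, max_depth=2):
    """Return the initial constant prediction and the list of fitted trees."""
    f0 = y.mean()                              # start from a constant model
    prediction = np.full_like(y, f0, dtype=float)
    trees = []
    for _ in range(n_rounds):
        residuals = y - prediction             # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                 # weak learner corrects current errors
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict(X, f0, trees, learning_rate=0.1):
    """Sum the constant start value and all scaled tree corrections."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```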
The name combines "Gradient" (the mathematical direction of steepest descent of an error function) and "Boosting" (the iterative strengthening of models). Rather than climbing a mountain, the algorithm goes down the mountain -- step by step in the direction where the error decreases fastest.
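In standard notation (following Friedman's formulation; the notation is added here, not taken from the article), one boosting round computes the pseudo-residuals and takes a small step "down the mountain":

```latex
% One round of gradient boosting: fit a weak learner h_m to the
% pseudo-residuals (the negative gradient of the loss L), then take a
% step of size \nu in that direction.
\[
r_{im} = -\left[ \frac{\partial L\bigl(y_i, F(x_i)\bigr)}{\partial F(x_i)} \right]_{F = F_{m-1}},
\qquad
F_m(x) = F_{m-1}(x) + \nu \, h_m(x),
\]
% For squared-error loss, r_{im} reduces to the plain residual y_i - F_{m-1}(x_i).
```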
The technique was formalized by Jerome Friedman in 1999. Modern implementations like XGBoost, LightGBM, and CatBoost are standard in data science competitions and production systems. Ben Kraiem et al. (2023) achieved 94% accuracy with Gradient Boosting when predicting the optimal project management methodology -- the best performance of all tested models.
Gradient Boosting is characterized by three properties that make it particularly suitable for complex prediction problems:
- High accuracy: on structured, tabular data it is consistently among the strongest methods available.
- Flexibility: it optimizes any differentiable loss function, so the same framework covers regression, classification, and ranking.
- Interpretable inputs: the fitted trees yield feature importances, showing which inputs drive the predictions.
The main disadvantage: Gradient Boosting is susceptible to overfitting if too many boosting iterations are run. Common countermeasures are regularization (L1/L2 penalties), a small learning rate (shrinkage), and early stopping, as sketched below.
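As one concrete illustration, scikit-learn's GradientBoostingClassifier supports early stopping out of the box via validation_fraction and n_iter_no_change; the values below are arbitrary example settings, not recommendations.

```python
# Early stopping with scikit-learn's GradientBoostingClassifier:
# hold out part of the training data and stop when it no longer improves.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=1000,        # upper bound on boosting rounds
    learning_rate=0.05,       # shrinkage: smaller steps reduce overfitting
    validation_fraction=0.1,  # hold out 10% to monitor generalization
    n_iter_no_change=10,      # stop if 10 rounds bring no improvement
    random_state=0,
)
model.fit(X, y)
print(f"Stopped after {model.n_estimators_} of 1000 rounds")
```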
Aversight uses gradient-boosting-like techniques for risk score calculation. The system combines up to 15 signals from various sources (SAP, Jira, SharePoint, etc.) and weights them iteratively according to their predictive power: a signal that has frequently predicted budget overruns in the past automatically receives a higher weight. The result is a risk model that improves continuously and stays transparent instead of acting as a black box.
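To illustrate the general idea (a purely hypothetical sketch with made-up signal names and synthetic data, not Aversight's actual pipeline), gradient-boosted trees expose exactly this kind of per-signal weight through their feature importances:

```python
# Hypothetical illustration only -- synthetic data and invented signal names.
# Shows how a gradient-boosted model surfaces per-signal weights.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
signals = ["sap_budget_variance", "jira_open_blockers", "sharepoint_doc_age"]
X = rng.normal(size=(500, len(signals)))         # synthetic signal values
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic overrun label

model = GradientBoostingClassifier(random_state=0).fit(X, y)
for name, weight in zip(signals, model.feature_importances_):
    print(f"{name}: {weight:.2f}")               # higher = more predictive
```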