Ben Kraiem et al. (2023) analyze 99 real IT projects with 7 machine learning models. Gradient Boosting achieves 94% accuracy. The key findings for your portfolio.
The Standish Group has documented the pattern for decades: only 31% of all projects are considered successful, 50% are "challenged" (over budget, late, or with quality issues), and 19% fail completely. One of the main reasons: the wrong choice of project management methodology.
Ines Ben Kraiem, Mouna Ben Mabrouk, and Lucas De Jode from Sogeti (Capgemini) investigated in their 2023 study presented at the KES International Conference whether machine learning can objectively predict this choice. Their dataset: 140 completed IT projects from France, of which 99 were valid after cleaning. Their question: Which factors determine whether Agile or Traditional is the better method — and can an algorithm reliably predict it?
The authors identified 33 variables in six categories: project factors, organization factors, customer factors, team factors, project success, and methodology selection. After statistical cleaning (Chi-square test, Cramér's V), 18 relevant variables remained.
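Cramér's V measures how strongly two categorical variables are associated (0 = independent, 1 = perfect association), which is how the study filtered 33 variables down to 18. A minimal sketch of the statistic, with hypothetical example data (the study's actual variables are not public):

```python
import numpy as np

def cramers_v(x, y):
    """Cramér's V for two categorical variables (0 = independent, 1 = perfect association)."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    # Contingency table of observed counts
    observed = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(observed, (xi, yi), 1)
    n = observed.sum()
    # Expected counts under independence, then the chi-square statistic
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    chi2 = ((observed - expected) ** 2 / expected).sum()
    return float(np.sqrt(chi2 / (n * (min(observed.shape) - 1))))

# Hypothetical example: does customer involvement associate with the chosen methodology?
involvement = ["high", "high", "low", "low", "high", "low", "high", "low"]
methodology = ["Agile", "Agile", "Trad", "Trad", "Agile", "Trad", "Agile", "Trad"]
print(round(cramers_v(involvement, methodology), 2))  # perfect association → 1.0
```

Variables scoring near zero on this statistic carry no information about the methodology choice and can be dropped.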
* Not statistically significant in this sample, although business experience suggests otherwise; likely due to the small sample size.
The most surprising result: Budget, project duration, and technological uncertainty were not significant predictors. What matters is human capital — team expertise and organizational culture. A project with clear requirements and engaged stakeholders tends toward Traditional. A project with iterative customer involvement and an agile culture tends toward Agile.
Seven machine learning algorithms were trained and evaluated using accuracy, precision, recall, and F1-score. The dataset was imbalanced (61 Traditional, 38 Agile), so SMOTEN resampling was applied.
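The evaluation setup can be sketched with scikit-learn. The data below is simulated (the study's 18 predictors are not public), with the same 61/38 class imbalance; a faithful replication would additionally apply `imblearn.over_sampling.SMOTEN` from the imbalanced-learn library before training:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

# Stand-in data: 99 projects, 18 predictors, roughly 61 Traditional vs 38 Agile
X, y = make_classification(n_samples=99, n_features=18,
                           weights=[0.62, 0.38], random_state=42)

# Cross-validate with the same four metrics the study reports
scores = cross_validate(
    GradientBoostingClassifier(random_state=42), X, y, cv=5,
    scoring=["accuracy", "precision", "recall", "f1"],
)
for metric in ["accuracy", "precision", "recall", "f1"]:
    print(metric, round(scores[f"test_{metric}"].mean(), 2))
```

The same loop would be repeated for each of the seven candidate algorithms, with and without resampling, to reproduce the study's comparison table.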
Gradient Boosting showed the best and most stable performance, both with and without resampling. Its learning curves showed no overfitting, which is remarkable for such a small dataset. KNN benefited strongly from SMOTEN resampling and jumped from average performance to 94% accuracy. Neural Networks, by contrast, showed high variance (wide 95% confidence intervals) and weaker performance on the Agile class.
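The no-overfitting claim can be sanity-checked with scikit-learn's `learning_curve`: if the gap between training and validation score shrinks as more data is added, the model is generalizing rather than memorizing. Again with simulated stand-in data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import learning_curve

# Simulated stand-in for the study's 99-project dataset
X, y = make_classification(n_samples=99, n_features=18, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    GradientBoostingClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.3, 1.0, 4), cv=5,
)
# Gap between mean training and mean validation accuracy at each training size;
# a large, persistent gap would indicate overfitting
gap = train_scores.mean(axis=1) - val_scores.mean(axis=1)
print(np.round(gap, 2))
```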
The study is methodologically sound, but not without weaknesses:
The study makes a prediction at project start and treats it as final. But projects evolve. A team with little Agile experience at the beginning can improve significantly within three months. A customer who was highly engaged initially may disengage due to internal restructuring. Methodology selection should be a dynamic, continuously updated process — not a one-time decision at project kickoff.
The results can be distilled into three principles that any portfolio team can apply:
Often, methodology selection is made by the project manager based on gut feeling — or dictated by the customer. The study shows that objective factors (team expertise, culture, customer involvement) can raise the hit rate to 94%. Systematize your methodology selection with a standardized questionnaire or — even better — an algorithm that uses your historical project data.
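A standardized questionnaire can be as simple as a weighted score. The factor names and weights below are illustrative only, loosely inspired by the significant predictors (team expertise, culture, customer involvement), not the study's actual model:

```python
# Hypothetical questionnaire: each factor is answered on a 0-2 scale
AGILE_SIGNALS = {
    "team_agile_expertise": 3,
    "agile_org_culture": 3,
    "iterative_customer_involvement": 2,
}
TRADITIONAL_SIGNALS = {
    "requirements_clarity": 3,
    "stakeholder_signoff_driven": 2,
}

def recommend(answers: dict) -> str:
    """Sum the weighted answers for each side and recommend the stronger one."""
    agile = sum(w * answers.get(k, 0) for k, w in AGILE_SIGNALS.items())
    trad = sum(w * answers.get(k, 0) for k, w in TRADITIONAL_SIGNALS.items())
    return "Agile" if agile > trad else "Traditional"

print(recommend({"team_agile_expertise": 2, "agile_org_culture": 2,
                 "iterative_customer_involvement": 1}))  # → Agile
```

In practice the weights would be fitted from historical project data rather than set by hand, which is exactly what the study's classifiers do.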
Budget, duration, and technological uncertainty were not significant in the study. What mattered was team expertise, culture, and stakeholder engagement. Invest in training your teams — not just in tools and processes. A team with high Agile expertise will deliver an agile project more successfully than a team with excellent tools but the wrong culture.
The study makes a prediction at project start. In reality, teams, customers, and requirements change. A method that was optimal in month 1 may be suboptimal in month 6. Implement continuous monitoring of methodology fitness — based on real-time data from your project management tools.
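Continuous monitoring can be sketched as a simple drift check: re-score the project whenever its signals are refreshed, and raise an alert when the recommendation flips. The scoring rule and signal names below are hypothetical placeholders for a trained classifier:

```python
def fitness(snapshot: dict) -> str:
    """Stand-in scoring rule; in practice this would be the trained classifier."""
    return "Agile" if snapshot["customer_engagement"] >= 0.5 else "Traditional"

# Hypothetical monthly snapshots of one project
history = [
    {"month": 1, "customer_engagement": 0.8},
    {"month": 4, "customer_engagement": 0.6},
    {"month": 6, "customer_engagement": 0.3},  # customer disengaged
]

alerts = []
previous = None
for snap in history:
    current = fitness(snap)
    if previous and current != previous:
        alerts.append((snap["month"], previous, current))
        print(f"Month {snap['month']}: recommendation flipped {previous} -> {current}")
    previous = current
```

Running this over the snapshots above flags month 6, when the customer's disengagement tips the recommendation from Agile back to Traditional.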
Aversight integrates the principles of the study into the daily workflow: Instead of a one-time methodology selection decision, the system continuously analyzes 15 signals from SAP, Jira, SharePoint, and other sources. Budget consumption, milestone progress, resource utilization, and risk scores are updated hourly. When a project transitions from a stable to a volatile phase, Aversight recognizes the pattern — and recommends a methodology adjustment before damage occurs. That is the difference between static prediction and dynamic intelligence.
Ben Kraiem et al. provide compelling evidence that machine learning can objectively support project management methodology selection. The 94% accuracy of Gradient Boosting shows that the decision does not have to be left to gut feeling.
But the study's limitations (small sample, subjective variables, static analysis, binary classification) also show where the next step lies: from one-time methodology prediction to continuous, data-driven methodology optimization. Those who take this step transform methodology selection from a gamble into a science.
30 seconds — and we will get back to you within 24 hours.
Start Free Maturity Check →