Agile or Traditional? How Machine Learning Predicts the Right Project Management Method

Ben Kraiem et al. (2023) analyze 99 real IT projects with 7 machine learning models. Gradient Boosting achieves 94% accuracy. The key findings for your portfolio.

Ben Kraiem et al. (2023) · Procedia Computer Science · Reading time: 6 min

Context: Why methodology selection makes or breaks a project

The Standish Group has documented it for decades: only 31% of all projects are considered successful, 50% are “challenged” (over budget, late, or with quality issues), and 19% fail completely. One of the main reasons: the wrong choice of project management methodology.

Ines Ben Kraiem, Mouna Ben Mabrouk, and Lucas De Jode from Sogeti (Capgemini) investigated in their 2023 study presented at the KES International Conference whether machine learning can objectively predict this choice. Their dataset: 140 completed IT projects from France, of which 99 were valid after cleaning. Their question: Which factors determine whether Agile or Traditional is the better method — and can an algorithm reliably predict it?

Key findings: What the data says

The authors identified 33 variables in six categories: project factors, organization factors, customer factors, team factors, project success, and methodology selection. After statistical cleaning (Chi-square test, Cramér's V), 18 relevant variables remained.
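The screening step described above can be sketched in a few lines. The helper below computes Cramér's V (and the Chi-square p-value) between a categorical variable and the methodology label; the data here is synthetic and illustrative, not the study's dataset, and the 0.05 threshold is the conventional cutoff, not one stated in the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    """Cramér's V association between two categorical arrays,
    plus the Chi-square p-value for the same contingency table."""
    table = np.array([[np.sum((x == a) & (y == b)) for b in np.unique(y)]
                      for a in np.unique(x)])
    chi2, p, _, _ = chi2_contingency(table)
    n = table.sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1))), p

# Hypothetical screening on 99 synthetic "projects":
# keep variables significantly associated with the label.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, 99)                                # Agile=1, Traditional=0
related = np.where(rng.random(99) < 0.8, label, 1 - label)    # correlated variable
noise = rng.integers(0, 2, 99)                                # unrelated variable

v1, p1 = cramers_v(related, label)   # high V, small p -> keep
v2, p2 = cramers_v(noise, label)     # low V -> drop
```

A variable like `related` survives the filter; a variable like `noise` is discarded, mirroring how the authors reduced 33 variables to 18.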

The strongest predictors

Team expertise in Agile: highly correlated
Team expertise in Traditional: highly correlated
Organization culture (is Agile accepted?): highly correlated
Customer role (involvement level): highly correlated
Stakeholder engagement: moderately correlated
Requirement clarity: moderately correlated
Project complexity: not correlated*
Communication frequency: not correlated*

* Not statistically significant, although business experience suggests otherwise. Likely due to small sample size.

The most surprising result: Budget, project duration, and technological uncertainty were not significant predictors. What matters is human capital — team expertise and organizational culture. A project with clear requirements and engaged stakeholders tends toward Traditional. A project with iterative customer involvement and an agile culture tends toward Agile.

The models compared

Seven machine learning algorithms were trained and evaluated using accuracy, precision, recall, and F1-score. The dataset was imbalanced (61 Traditional, 38 Agile), so SMOTEN resampling was applied.
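The paper used SMOTEN (SMOTE for nominal features, from the imbalanced-learn library) to balance the 61/38 split. As a dependency-free stand-in, the sketch below balances the classes by random oversampling of the minority class; this is a simplification, since SMOTEN synthesizes new nominal samples rather than duplicating existing ones.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until all classes are equally
    frequent. (Stand-in for SMOTEN, which generates new samples.)"""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.where(y == c)[0]
        extra = rng.choice(c_idx, size=n_max - len(c_idx), replace=True)
        idx.extend(c_idx)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]

# 61 Traditional (0) vs 38 Agile (1), matching the study's class split
X = np.arange(99).reshape(-1, 1)
y = np.array([0] * 61 + [1] * 38)
Xb, yb = random_oversample(X, y)   # now 61 vs 61
```

After resampling, a classifier no longer gains accuracy simply by favoring the majority (Traditional) class.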

Models and accuracy

Gradient Boosting (no resampling): 94%
K-Nearest Neighbors (with SMOTEN): 94%
SVM (with SMOTEN): 91%
Naïve Bayes (with SMOTEN): 88%
Neural Network: 88–90%
Decision Tree: average
Bayesian Network: average

Gradient Boosting showed the best performance and stability, with and without resampling. Its learning curves showed no overfitting, which is remarkable for such a small dataset. KNN benefited strongly from SMOTEN resampling and jumped from average to 94% accuracy. Neural Networks, in contrast, showed wider 95% confidence intervals and weaker performance on the Agile class.
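The overfitting check via learning curves can be reproduced in outline with scikit-learn. The data below is synthetic (generated to roughly match the study's 99 samples, 18 variables, and 62/38 class split); the point is the diagnostic, not the numbers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for the 99-project dataset (approx. 62/38 split)
X, y = make_classification(n_samples=99, n_features=18, n_informative=6,
                           weights=[0.62], random_state=42)

# Train and validate on growing subsets; a persistent gap between
# training and validation accuracy would indicate overfitting.
sizes, train_scores, val_scores = learning_curve(
    GradientBoostingClassifier(random_state=42), X, y,
    cv=5, train_sizes=np.linspace(0.3, 1.0, 4))

gap = train_scores.mean(axis=1)[-1] - val_scores.mean(axis=1)[-1]
```

If the gap shrinks as the training size grows, the model generalizes; on a dataset this small, that is the check worth running before trusting a 94% headline accuracy.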

Critique: The blind spots of the study

The study is methodologically sound, but not without weaknesses:

The missing variable: Continuous monitoring

The study makes a prediction at project start and treats it as final. But projects evolve. A team with little Agile experience at the beginning can improve significantly within three months. A customer who was highly engaged initially may disengage due to internal restructuring. Methodology selection should be a dynamic, continuously updated process — not a one-time decision at project kickoff.

Recommendation: What this means for your portfolio

The results can be distilled into three principles that any portfolio team can apply:

1. The right method is not a matter of faith

Often, methodology selection is made by the project manager based on gut feeling — or dictated by the customer. The study shows that objective factors (team expertise, culture, customer involvement) can raise the hit rate to 94%. Systematize your methodology selection with a standardized questionnaire or — even better — an algorithm that uses your historical project data.

2. Human factors outweigh technical parameters

Budget, duration, and technological uncertainty were not significant in the study. What mattered was team expertise, culture, and stakeholder engagement. Invest in training your teams — not just in tools and processes. A team with high Agile expertise will deliver an agile project more successfully than a team with excellent tools but the wrong culture.

3. Methodology selection is not a one-time event

The study makes a prediction at project start. In reality, teams, customers, and requirements change. A method that was optimal in month 1 may be suboptimal in month 6. Implement continuous monitoring of methodology fitness — based on real-time data from your project management tools.
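The idea of re-evaluating methodology fit as conditions change can be sketched as a toy scorer. Everything here is hypothetical: the signal names, the 0–1 scales, and the scoring rule are illustrative placeholders, not the paper's model or any real product logic.

```python
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    """Hypothetical snapshot of the factors the study found predictive.
    Names and scales are illustrative, not from the paper."""
    team_agile_expertise: float   # 0..1
    customer_involvement: float   # 0..1
    requirement_clarity: float    # 0..1

def methodology_fit(s: ProjectSignals) -> str:
    """Toy rule of thumb: re-run whenever the signals update,
    instead of deciding once at kickoff."""
    agile_score = s.team_agile_expertise + s.customer_involvement
    traditional_score = 2 * s.requirement_clarity
    return "Agile" if agile_score > traditional_score else "Traditional"

# Month 1: clear requirements, low customer involvement
m1 = ProjectSignals(0.3, 0.2, 0.9)   # -> "Traditional"
# Month 6: team trained up, customer now deeply engaged
m6 = ProjectSignals(0.8, 0.9, 0.6)   # -> "Agile"
```

The same project flips from Traditional to Agile as its signals evolve, which is exactly the case a one-time kickoff decision cannot capture.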

Aversight in the context of the study

Aversight integrates the principles of the study into the daily workflow: Instead of a one-time methodology selection decision, the system continuously analyzes 15 signals from SAP, Jira, SharePoint, and other sources. Budget consumption, milestone progress, resource utilization, and risk scores are updated hourly. When a project transitions from a stable to a volatile phase, Aversight recognizes the pattern — and recommends a methodology adjustment before damage occurs. That is the difference between static prediction and dynamic intelligence.

Conclusion

Ben Kraiem et al. provide compelling evidence that machine learning can objectively support project management methodology selection. The 94% accuracy of Gradient Boosting shows that the decision does not have to be left to gut feeling.

But the study's limitations (small sample, subjective variables, static analysis, binary classification) also show where the next step lies: from one-time methodology prediction to continuous, data-driven methodology optimization. Those who take this step transform methodology selection from a gamble into a science.
