Have we really improved over the years?

by Paula Holmberg

Paula will examine the ISBSG D&E repository to assess how productivity has improved over the years. She will discuss the contributing factors that have shaped today's productivity results, creating a better understanding of where they come from.

Parametric Joint Confidence Level Analysis: A Practical Cost and Schedule Risk Management Approach

by Sara Jardine, Christian Smart and Kimberly Roye

Joint Confidence Level (JCL) analysis has proven successful for NASA. Bottom-up resource-loaded schedules are the most common method for jointly analyzing cost and schedule risk. However, one of the authors has successfully used high-level parametrics and machine learning instead, an approach that has some advantages over the more detailed method. In this presentation, we discuss the use of parametric and machine learning methods. The parametric/machine learning approach involves developing mathematical models for cost and schedule risk. Parametric methods for cost typically use linear and nonlinear regression analysis; applied to schedule, these methods often do not provide the high R-squared values seen in cost models. We discuss the application of machine learning models, such as regression trees, to develop higher-fidelity schedule models. We then introduce a bivariate model that combines the results of the cost and schedule risk analyses, along with correlation, to create a JCL using the cost and schedule models as inputs. We present a previous case study of the successful use of this approach for a completed spacecraft mission and apply the approach to a large data set of cost, schedule, and technical information for software projects.
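As an illustration of the bivariate idea (a sketch only, not the authors' actual models), the snippet below combines lognormal cost and schedule distributions through a Gaussian copula and reads off the joint confidence level by Monte Carlo; every parameter value here is an invented assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parametric S-curves (lognormal); all values are made up
cost_mu, cost_sigma = np.log(100.0), 0.25      # cost, in $M
sched_mu, sched_sigma = np.log(36.0), 0.20     # schedule, in months
rho = 0.6                                      # cost-schedule correlation

# Gaussian copula: draw correlated standard normals, map to lognormals
n = 100_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
cost = np.exp(cost_mu + cost_sigma * z[:, 0])
sched = np.exp(sched_mu + sched_sigma * z[:, 1])

# JCL at a candidate budget/deadline: P(cost <= budget AND schedule <= deadline)
budget, deadline = 115.0, 40.0
p_cost = np.mean(cost <= budget)
p_sched = np.mean(sched <= deadline)
jcl = np.mean((cost <= budget) & (sched <= deadline))
print(f"P(cost ok) = {p_cost:.2f}, P(schedule ok) = {p_sched:.2f}, JCL = {jcl:.2f}")
```

Because the joint event is more restrictive than either marginal event, the JCL is always at or below the smaller of the two single-variable confidence levels, which is exactly why cost and schedule need to be analyzed jointly.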

Advance Preview – ICEAA’s Software Cost Estimation Body of Knowledge (SCEBoK)

by Carol Dekkers

The International Cost Estimating and Analysis Association has undertaken a major new project, the SCEBoK, which is now in the final draft stage and getting ready to launch.

This is an important initiative with implications for new customers for the ISBSG repository and subscription products, because the SCEBoK positions ISBSG as one of the leading sources of historical software project data – especially valuable for organizations that lack historical data of their own.

Join presenter and SCEBoK lead author Carol Dekkers for a preview of this exciting new initiative.

The Third Way and the emergence of historical data: from current ICT contracts to the post-COVID-19 years

by Luigi Buglione

In DevOps, the “Third Way” means “continuous experimentation & learning” and can be seen as the highest maturity level on an ordinal scale, implying that an organization stores and uses (any kind of) data, information and knowledge in its decision-making process.

Looking at the way current ICT contracts use and deal with measures, such maturity seems lower than expected: even a quick read of bids and technical documents reveals several inaccurate assumptions that lead all stakeholders to achieve less value than they could.

Best practices (and standards) from the benchmarking field, such as the ISO 29155 series, could be a valid ingredient for improving and learning.

This presentation will show examples of the ‘as-is’ situation and a path toward a ‘to-be’ one, considering the greater complexity that Digital Transformation brings to the post-COVID-19 period.

Agile Teams Performance Measurement – How to measure and benchmark team value creation

by Harold van Heeringen

Managing the IT function, in both management and development, is more important than ever. It is wrongly assumed that agile, DevOps or multidisciplinary teams do not need planning or leadership, that their performance cannot be measured, or that measuring it comes at the expense of agility or execution power.

The opposite is true: in practice, many projects and programs run almost blind, especially at a time when their size is increasing and their complexity is humanly incalculable. On a daily basis, management faces the hefty bill of uncontrollable journeys “beyond the point of no return”.

Active attention and direction are a precondition for success, but professional commissioning and facilitating leadership should not be confused with classical, oppressive micro-management. A strategic vision, a lived-in architecture with underlying principles, clear technology choices, prioritization, and solution-oriented approaches to practical challenges are therefore essential.

In addition, objectively substantiated insights are necessary to know what value development teams deliver and how that translates into size and quality, not least for the teams themselves.

There is a way to provide and unlock these insights for anyone who has, or wants to take, responsibility for them, regardless of whether they have a technical or non-technical profile. Data, extracted directly from the software code or the management systems around it, plays a key role here. Whether looking back to learn, or looking ahead to actually live up to ambitions and forecasts: the Plan-Do-Check-Act cycle is complete again.

In this presentation, I’ll show how the performance of agile teams can be measured in an objective, repeatable and verifiable way. The team performance metrics Productivity, Cost Efficiency, Delivery Speed, Sprint Quality and Product Quality can then be measured, compared to each other and benchmarked against industry data. I’ll show a recent study of four teams in one organization, each team in a different European country.

The performance measurement is also used to recalibrate long-term effort and cost estimates based on the actual productivity delivered. I’ll show how senior management can again understand the progress of their initiatives, enabling them to give active attention and direction, resulting in more value creation for the given budget and better organizational results.

Integrating databases from distinct sources to improve estimation models

by Francisco Valdés-Souto

One of the main problems organizations face when they start improvement programs in metric-based estimation is that they do not have historical databases, or the number of projects they have is not statistically sufficient.
Many studies that have developed estimation models are based on databases that are not always available, and even when they are, they do not always represent the behavior of the organization implementing the estimation improvement program.

This presentation describes a solution we have applied to generate reference databases with a greater amount of data by integrating different databases, including that of ISBSG, provided the required statistical assumptions are met, thereby guaranteeing applicability and improved results.
This technique has been consistently applied in Mexican industry to generate initial databases that help organizations cover their lack of data, which has allowed us to generate consistent and statistically significant estimation models.
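A minimal sketch of the gating idea behind such an integration, assuming one simple comparability check (a Mann-Whitney U test on project delivery rates) stands in for the statistical assumptions the authors actually verify; the data, sample sizes and the 0.05 threshold are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical delivery rates (hours per function point) from two sources:
# a small in-house database and a larger external benchmark extract.
local = rng.lognormal(mean=2.0, sigma=0.40, size=18)
external = rng.lognormal(mean=2.05, sigma=0.45, size=120)

# Check a statistical assumption before merging: do the two samples
# plausibly come from the same distribution? (alpha = 0.05)
u_stat, p_value = stats.mannwhitneyu(local, external, alternative="two-sided")

if p_value >= 0.05:
    merged = np.concatenate([local, external])  # assumption met: merge
else:
    merged = local  # assumption violated: keep the databases separate

print(f"p-value = {p_value:.3f}, merged sample size = {len(merged)}")
```

The point of the gate is that merging is only allowed when the external data behaves like the organization's own, so the enlarged database remains representative rather than merely bigger.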

Cloud Computing and Costing

by Bob Hunt, Dan Galorath, David DeWitt, Kimberly Roye and Karen McRitchie

Over the past decade, business leaders have increasingly chosen to move their IT systems and infrastructure into the cloud. Using the cloud allows them to avoid tying up capital in data center equipment and continually growing IT staff to maintain that infrastructure. This enables them to focus their efforts on getting business value from digital initiatives. Moving to the cloud is essentially an IT outsourcing decision, and effectively understanding the cost implications is key to measuring business value. Those tasked with costing digital transformation and cloud migration efforts must be able to answer key questions:
• What cloud services do I need to meet my requirements?
• What is the purchase price of required cloud services, and what internal costs do they offset?
• Is the current application portfolio cloud ready?
• Do applications need to be modified to be hosted in the cloud?
• What are the risks for any given vendor?
This presentation will discuss commercial models provided by major cloud services providers and will demonstrate how to use SEER IT to develop a complete life cycle cost for a cloud outsourcing decision.

Ave Caesar, for those about to govern their IT (we salute you)

by Andrés Gutiérrez and Julián Gómez

We will learn from great leaders who had in their hands feats of (almost) the same nature as the digital transformation of a company, of our company. From the Roman emperors, from Taiichi Ohno with his Toyota Production System (Lean Manufacturing), and from some more guests, we will learn the main practices that will allow us to get the most out of digital transformation: benchmarking. If we transform without getting the benefits we were promised, the transformation will be a failure. Learning the right, real-world practices, practices of recognized success, to help us in such a great task will be a determining factor in achieving success. We will see data from real cases to document the techniques shown. It will be a journey towards the success of our transformation, accompanied by the greatest experts in Leadership and Governance in history. I wouldn’t miss it. Alea jacta est!

Simple Function Point and Story Point integration in Agile Contract Management

by Roberto Meli

Simple Function Point (SFP) is a new IFPUG Functional Size Measurement Method. Story Point (SP) is an estimation technique widely used by Agile teams to predict the effort needed to implement User Stories within a specific sprint. SFP gives a product-oriented measurement, while SP gives a process-oriented estimation. They do not overlap and may be effectively integrated in software development governance.

The ratio between them (SP/SFP) is an expected productivity indicator. SPs are not an actual effort value but an estimated effort value for a user story. The ratio between the actual effort measurement and SFP is an actual productivity indicator. The ratio between SP and the actual effort measurement is an indicator of the accuracy of the effort estimation.
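A tiny worked example of the three ratios, using hypothetical sprint figures (the numbers, and measuring effort in hours, are illustrative assumptions, not something prescribed by the SFP or SP methods):

```python
# Hypothetical figures for one sprint; all values are illustrative only.
sfp_size = 40          # Simple Function Points delivered in the sprint
story_points = 60      # Story Points estimated for the same user stories
actual_hours = 300     # actual effort measured for the sprint

expected_productivity = story_points / sfp_size    # SP per SFP (expected)
actual_productivity = actual_hours / sfp_size      # hours per SFP (actual)
estimation_accuracy = story_points / actual_hours  # SP per actual hour

print(f"expected productivity (SP/SFP):  {expected_productivity:.2f}")
print(f"actual productivity (hours/SFP): {actual_productivity:.2f}")
print(f"estimation accuracy (SP/hour):   {estimation_accuracy:.2f}")
```

Tracked sprint over sprint, a stable SP-per-hour ratio indicates the team's SP estimates are consistent, while the hours-per-SFP ratio makes the team's delivery rate comparable across teams and against external benchmarks.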

Until now, the Agile community has often disliked using Function Points because the available methods were considered too complex to apply in a short-term iterative process, and the cost/effort models based on FP were not reliable for small FP sizes. With the availability of a lightweight method like SFP, it is now possible to integrate a product-oriented measure in the control dashboard, with great comparability advantages. The need for contractual management of Agile projects increases the importance of this integration, allowing higher explicit control over the classical market variables and practices.

This presentation will show the “Why” and the “How” of this integration from a contractual perspective.

Certification of AD&M benchmarking service providers

by Pierre Almén

There can be different reasons to benchmark the development and maintenance of an organization's applications: comparison of productivity, quality, time to market or cost efficiency, the need to improve project estimation capability, etc. ISO/IEC 29155 provides the overall framework model for IT project performance benchmarking and describes all the activities required for successful benchmarking. The IFPUG Benchmarking Certification is a standard method through which IFPUG confirms that a benchmarking service provider has fulfilled the requirements deemed necessary to be competent to conduct a benchmark analysis, through the investigation of evidence against criteria defined from the applicable ISO/IEC 29155 tasks and activities.