2021 IT Confidence Conference
Following on from last year, ISBSG held the IT Confidence conference on October 8 and October 14. Experts from around the world presented their latest findings on IT metrics topics.
Agile Teams Performance Measurement – How to measure and benchmark team value creation
Harold van Heeringen
Managing the IT function, in both management and development, is more important than ever. It is wrongly assumed that agile, DevOps or multidisciplinary teams need no planning or leadership, that their performance cannot be measured, or that measuring it comes at the expense of agility or execution power.
The opposite is true: in practice, many projects and programs fly almost blind, especially at a time when their size is increasing and their complexity is beyond human reckoning. On a daily basis, management faces the hefty bill of uncontrollable journeys “beyond the point of no return”.
Active attention and direction are a precondition for success, but professional commissioning and facilitating leadership should not be confused with classical, oppressive micro-management. A strategic vision, a living architecture with underlying principles, clear technology choices, prioritization, and solution-oriented approaches to practical challenges are therefore essential.
In addition, objectively substantiated insights are necessary to know what value development teams deliver and how that translates into size and quality. Not least for the teams themselves, in fact.
There is a way to provide and unlock this for anyone who has, or wants to take, responsibility for it, regardless of whether they have a technical or non-technical profile. Data, extracted directly from the software code or the management systems around it, plays a key role. Whether looking back to learn, or looking ahead to actually live up to ambitions and forecasts: the Plan-Do-Check-Act cycle is complete again.
In this presentation, I’ll show how the performance of agile teams can be measured in an objective, repeatable and verifiable way. In this way, the team performance metrics Productivity, Cost Efficiency, Delivery Speed, Sprint Quality and Product Quality can be measured, compared with each other and benchmarked against industry data. I’ll show a recent study of four teams in one organization, each team in a different European country.
The performance measurement is also used to recalibrate long-term effort and cost estimates based on the actual productivity delivered. The presentation will show how senior management can once again understand the progress of their initiatives, enabling them to give active attention and direction, resulting in more value creation for the given budget and better organizational results.
Harold van Heeringen is a business economics graduate from the University of Groningen.
He worked for IT service provider Sogeti for 17 years as a senior software metrics and cost estimation consultant. He then moved to METRI as a principal consultant and practice lead for IT Intelligence services (including Estimation & Performance Measurement).
Harold is a director on the ISBSG board and a board member of Nesma. He regularly publishes white papers, blogs and articles on software metrics and performance measurement.
The Third Way and the emergence of historical data: from current ICT contracts to the post-COVID-19 years
Luigi Buglione
In DevOps, the “Third Way” means “continuous experimentation & learning” and can be seen as the highest maturity level on an ordinal scale, implying that an organization stores and uses (any kind of) data, information and knowledge in its decision-making process.
Looking at the way current ICT contracts use and deal with measures, such maturity seems lower than expected: even a quick read of bids and technical documents reveals several issues and inaccurate assumptions that lead all stakeholders to achieve less value than they could.
Best practices (and standards) from the benchmarking field (such as the ISO/IEC 29155 series) could be a valid ingredient for improving and learning.
This presentation will present examples moving from the ‘as-is’ situation to a ‘to-be’ one, considering the higher complexity that Digital Transformation presents for the post-COVID-19 period.
Luigi Buglione is a Measurement & Process Improvement Specialist at Engineering Ing. Inf. SpA in Rome, Italy.
Luigi is currently the IFPUG Director for Sizing & International Standard (previously he was the Director for Conference/Education from 2013 to 2019) and the President of GUFPI-ISMA (the Italian Software Metrics Association).
He is a regular speaker at international conferences on software/service measurement, process improvement and quality, and takes an active part in international and national technical associations on these issues. He has achieved several certifications, including IFPUG CFPS, CSP and CSMS.
He received a Ph.D. in MIS and a degree cum laude in Economics.
Ave Caesar, for those about to governance their IT (we salute you)
We will learn from great leaders who had in their hands feats of (almost) the same nature as the feat of the digital transformation of a company, of our company. From the Roman emperors, from Taiichi Ohno with his Toyota Production System (Lean Manufacturing), and from a few more guests, we will learn the main practices that will allow us to get the most out of digital transformation: benchmarking. If we transform without obtaining the benefits we were promised, the transformation will be a failure. Learning the right, real practices, practices of recognized success, to help us in such a great task will be a determining factor in achieving success. We will see data from real cases to document the techniques shown. It will be a journey towards the success of our transformation, accompanied by the greatest experts in leadership and governance in history. I wouldn’t miss it. Alea jacta est!
Andrés has 19 years of professional experience as a software engineer and software development specialist, holds a master’s degree in Big Data, and has been a Professional Scrum Master I (PSM I) and Certified Function Point Specialist (CFPS) since 2012.
Since 2006 he has worked with Function Points, using them in retail, telecommunications, banking and government.
In rare moments of leisure, he is a science fiction writer.
Simple Function Point and Story Point integration in Agile Contract Management
Simple Function Point (SFP) is a new IFPUG Functional Size Measurement Method. Story Point (SP) is an estimation technique widely used by Agile teams to predict the effort needed to implement User Stories within a specific sprint. SFP gives a product-oriented measurement, while SP gives a process-oriented estimation. They do not overlap and may be effectively integrated in software development governance.
The ratio between them (SP/SFP) is an expected productivity indicator. SPs are not an actual effort value but an estimated effort value for a user story. The ratio between the actual effort measurement and SFP is an actual productivity indicator. The ratio between SP and the actual effort measurement is an indicator of the accuracy of effort estimation.
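As a minimal illustration of the three indicators just described, the ratios can be computed from a sprint’s raw numbers; all figures below are hypothetical, not data from the presentation.

```python
# Hedged sketch of the three SP/SFP indicators; the sprint figures
# (story_points, sfp_size, actual_hours) are hypothetical illustrations.

story_points = 34      # SP estimated for the sprint's user stories
sfp_size = 20          # Simple Function Points delivered (product size)
actual_hours = 280     # actual effort spent in the sprint

expected_productivity = story_points / sfp_size    # SP/SFP: expected productivity
actual_productivity = actual_hours / sfp_size      # effort/SFP: actual productivity
estimation_accuracy = story_points / actual_hours  # SP/effort: estimation accuracy

print(f"Expected productivity: {expected_productivity:.2f} SP per SFP")
print(f"Actual productivity:   {actual_productivity:.2f} hours per SFP")
print(f"Estimation accuracy:   {estimation_accuracy:.3f} SP per hour")
```

Tracking all three side by side separates how fast a team believed it would go (SP/SFP) from how fast it actually went (hours/SFP) and how well it estimated (SP per hour).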
Until now, the Agile community has often disliked using Function Points because the available methods were considered too complex to apply in short iterations, and the cost/effort models based on FP were not reliable for small FP sizes. With the availability of a lightweight method like SFP, it is now possible to integrate a product-oriented measure into the control dashboard, with great comparability advantages. The need for contractual management of Agile projects increases the importance of this integration, allowing more explicit control over the classical market variables and practices.
This presentation will show the “Why” and the “How” of this integration from a contractual perspective.
- PM & Software Management Consultant / Trainer (>35 years).
- President of SiFP Association
- Chairperson of the GUFPI-ISMA Board of Directors
- Coordinator of the GUFPI-ISMA Counting Practices Committee
- Member of the IFPUG Board of Directors; >80 published papers
- Chairperson of the COSMIC Measurement Practices Committee
- Developed the Early & Quick FP approximation method and the Simple Function Point measurement method
Certification of AD&M benchmarking service providers
There can be various reasons to benchmark the development and maintenance of an organization’s applications, such as comparing productivity, quality, time to market or cost efficiency, or the need to improve project estimation capability. ISO/IEC 29155 provides the overall framework for IT project performance benchmarking and describes all the activities required for successful benchmarking. The IFPUG Benchmarking Certification is a standard method through which IFPUG confirms that a benchmarking service provider has fulfilled the requirements deemed necessary to be competent to conduct a benchmark analysis, by investigating evidence against criteria defined from the applicable ISO/IEC 29155 tasks and activities.
Pierre is an experienced IT manager, management consultant and benchmarking specialist with a focus on software measurement.
He is an honorary fellow in the International Function Point Users Group (IFPUG) and founder of a software metrics network at the Swedish IT Association.
Parametric Joint Confidence Level Analysis: A Practical Cost and Schedule Risk Management Approach
Sara Jardine, Christian Smart and Kimberly Roye
Joint Confidence Level (JCL) analysis has proven successful for NASA. Bottom-up resource-loaded schedules are the most common method for jointly analyzing cost and schedule risk. However, high-level parametrics and machine learning have been applied successfully by one of the authors, and this approach has some advantages over the more detailed method. In this presentation, we discuss the use of parametric and machine learning methods. The parametric/machine learning approach involves developing mathematical models for cost and schedule risk. Parametric methods for cost typically use linear and nonlinear regression analysis; applied to schedule, these methods often do not provide the high R-squared values seen in cost models. We discuss the application of machine learning models, such as regression trees, to develop higher-fidelity schedule models. We then introduce a bivariate model that combines the results of the cost and schedule risk analyses, along with correlation, to create a JCL using the cost and schedule models as inputs. We provide a case study of the successful use of this approach on a completed spacecraft mission and apply the approach to a large data set of cost, schedule, and technical information for software projects.
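The bivariate combination step can be sketched with a simple Monte Carlo simulation over a correlated cost/schedule model. This is a minimal illustration only: the lognormal parameters, the correlation, and the budget and deadline targets below are assumptions chosen for the example, not values from the case study.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of a Joint Confidence Level (JCL)
# from a correlated bivariate lognormal model of cost and schedule.
# All parameters are illustrative assumptions, not from the presentation.

rng = np.random.default_rng(42)
n = 100_000

# Means and sigmas of the underlying normals (cost in $M, schedule in months)
mu = np.array([np.log(100.0), np.log(24.0)])
sigma = np.array([0.3, 0.2])
rho = 0.6  # assumed cost-schedule correlation

cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])
draws = np.exp(rng.multivariate_normal(mu, cov, size=n))

budget, deadline = 120.0, 28.0  # targets at which to evaluate the JCL
# JCL = probability that BOTH cost and schedule come in under target
jcl = np.mean((draws[:, 0] <= budget) & (draws[:, 1] <= deadline))
print(f"JCL at (${budget}M, {deadline} mo): {jcl:.1%}")
```

Because the two marginals are positively correlated, the joint probability sits between the product of the marginals and the smaller marginal, which is exactly why correlation must be carried into the combination rather than treating cost and schedule risk independently.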
Dr. Christian Smart is the Chief Data Scientist with Galorath. He is the author of the forthcoming book Solving for Project Risk Management: Understanding the Critical Role of Uncertainty in Project Management. Dr. Smart received the Frank Freiman Lifetime Achievement Award from ICEAA in 2021. He regularly presents at conferences and has won several best paper awards. Dr. Smart received an Exceptional Public Service Medal from NASA in 2010 and holds a PhD in Applied Mathematics.
Sara Jardine is an experienced Operations Research Analyst who has worked directly for a broad variety of government agencies, including the Army, Navy, Veterans Affairs, and OUSD AT&L. She is skilled in Cost Management, Project Management, Requirements Analysis, Cost Analysis, Contract Management, and Budget Management. She has an MS in Project Management from The George Washington University and a BS in Mathematics from the University of Michigan.
Kimberly Roye (CCEA®) is a Senior Cost Analyst for Galorath Federal. Starting her career as a Mathematical Statistician for the US Census Bureau, Kimberly transitioned to a career in Cost Analysis over 12 years ago. She has supported several Department of Defense hardware and vehicle programs. Kimberly earned an MS in Applied Statistics from Rochester Institute of Technology and a dual BS in Mathematics/Statistics from the University of Georgia.
Integrating distinct source databases to improve estimation models
Francisco Valdés Souto
One of the main problems organizations face when they start improvement programs for metrics-based estimation is that they have no historical databases, or the number of projects they have is not statistically sufficient.
Many studies that have developed estimation models are based on databases that are not always available or, even when they are, do not always represent the behavior of the organization implementing the estimation improvement program.
This presentation describes a solution we have applied to generate reference databases with a greater amount of data by integrating different databases, including ISBSG’s, provided the statistical assumptions for doing so are met, guaranteeing applicability and improved results.
This technique has been applied consistently in Mexican industry to generate initial databases that cover organizations’ lack of data, which has allowed us to generate consistent and statistically significant estimation models.
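A minimal sketch of the pooling idea, assuming a nonparametric two-sample test (Mann-Whitney U) as the statistical gate before merging; the simulated delivery rates below are illustrative, not real ISBSG data, and the specific test is this example’s choice, not necessarily the one used in the presentation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hedged sketch: merge an internal project database with an external one
# (e.g. an ISBSG extract) only if a two-sample test does not reject that
# the two delivery-rate samples behave alike. Data below are simulated.

rng = np.random.default_rng(7)
internal_pdr = rng.lognormal(mean=2.0, sigma=0.4, size=15)   # hours per FP
external_pdr = rng.lognormal(mean=2.05, sigma=0.4, size=60)  # hours per FP

stat, p_value = mannwhitneyu(internal_pdr, external_pdr,
                             alternative="two-sided")
if p_value > 0.05:
    # No evidence the distributions differ: pool to gain statistical power
    pooled = np.concatenate([internal_pdr, external_pdr])
    print(f"p = {p_value:.3f}: pooled {pooled.size} projects for estimation")
else:
    print(f"p = {p_value:.3f}: distributions differ; do not pool")
```

The point of the gate is that pooling only helps when the external projects plausibly come from the same productivity regime; merging dissimilar populations would bias the resulting estimation model rather than strengthen it.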
President of COSMIC, Dr. Francisco Valdés Souto is an Associate Professor in the Faculty of Sciences of the National Autonomous University of Mexico (UNAM). He holds a Doctorate in Software Engineering, specializing in Software Measurement and Estimation, from the École de Technologie Supérieure (ÉTS), and two master’s degrees, one from Mexico and one from France. He has more than 20 years of experience in critical software development and is the founder of SPINGERE, the first Mexican company specialized in software measurement, estimation and evaluation, as well as of the Mexican Association of Software Metrics (AMMS).
He is the main promoter of formal software metrics in Mexico, promoting COSMIC as a national standard. His research interests are software measurement and estimation applied to software project management, i.e. scope management and economics.
Have we really improved over the years?
Paula will examine the ISBSG D&E repository to understand productivity improvements over the years. She will discuss the contributing factors that make productivity results what they are today, creating a better understanding of how the industry has evolved.
1997 – 2011 IBM, including Measurement Program Manager for the Telstra account, service level agreement measurement and contract management, benchmarking with the IBM Worldwide Benchmarking Group, and measurement consulting to Australian government departments and to IBM internationally.
2015 ISBSG Metrics Consultant
2017 ISBSG Executive Director
Advance Preview – ICEAA’s Software Cost Estimation Body of Knowledge (SCEBoK)
The International Cost Estimating and Analysis Association (ICEAA) has launched a major new project called the SCEBoK, which is now in the final draft stage and getting ready to launch.
This is an important initiative with implications for new customers of the ISBSG repository and subscription products, because the SCEBoK positions ISBSG as one of the leading sources of historical software project data, especially valuable for organizations that lack their own.
Join presenter and SCEBoK lead author Carol Dekkers for a preview of this exciting new initiative.
- IFPUG CFPS (Fellow)
- Professional Engineer (Canada)
- Lead author of the International Cost Estimating and Analysis Association (ICEAA) SCEBoK
- IFPUG Past President
- ISO/IEC Project editor
- Published author of several books, including IT Measurement Compendium, The Program Management Toolkit, and others
Cloud Computing and Costing
Bob Hunt, Dan Galorath, David DeWitt, Kimberly Roye and Karen McRitchie
Over the past decade, business leaders have increasingly chosen to move their IT systems and infrastructure into the cloud. Using the cloud allows them to avoid tying up capital in data center equipment and continually growing IT staff to maintain that infrastructure. This enables them to focus their efforts on getting business value from digital initiatives. Moving to the cloud is essentially an IT outsourcing decision, and effectively understanding the cost implications is key to measuring business value. Those tasked with costing digital transformation and cloud migration efforts must be able to answer key questions:
• What cloud services do I need to meet my requirements?
• What is the purchase price of required cloud services, and what internal costs do they offset?
• Is the current application portfolio cloud ready?
• Do applications need to be modified to be hosted in the cloud?
• What are the risks for any given vendor?
This presentation will discuss commercial models provided by major cloud services providers and will demonstrate how to use SEER IT to develop a complete life cycle cost for a cloud outsourcing decision.
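The core of such a costing exercise is a discounted life-cycle comparison between staying on-premises and migrating. The sketch below is not SEER IT or any vendor model, and every figure in it (capex, operating costs, migration cost, discount rate) is an illustrative assumption:

```python
# Hedged sketch: discounted life-cycle cost of an on-premises workload
# versus cloud migration. All figures are illustrative assumptions;
# this is not SEER IT or any vendor's commercial model.

def npv(cash_flows, rate=0.05):
    """Net present value of a series of yearly costs (year 0 first)."""
    return sum(cf / (1 + rate) ** yr for yr, cf in enumerate(cash_flows))

years = 5
# On-prem: data-center capex in year 0, then ongoing staff/ops costs
on_prem = [500_000] + [180_000] * (years - 1)
# Cloud: one-off migration cost, then yearly subscription and support
cloud = [120_000] + [150_000] * (years - 1)

print(f"On-prem {years}-yr NPV: ${npv(on_prem):,.0f}")
print(f"Cloud   {years}-yr NPV: ${npv(cloud):,.0f}")
print("Cloud cheaper" if npv(cloud) < npv(on_prem) else "On-prem cheaper")
```

In practice the cash-flow lines would be populated from answers to the questions above (services needed, offsets, readiness modifications, vendor risk), which is precisely what a life-cycle costing tool automates.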
Bob Hunt has over 40 years of cost estimating and analysis experience. He has served in senior technical and management positions at Galorath (President, Galorath Federal, and Vice President, Galorath), CALIBRE Systems (Vice President), CALIBRE Services (President), SAIC (Vice President), the U.S. Army Cost and Economic Analysis Center (Chief of the Vehicles, Ammunition, and Electronics ICE Division), the U.S. Army Directorate of Cost Analysis (Deputy Director for Automation and Modeling), and other Army analysis positions.
He is the author of multiple technical papers published by ICEAA, IEEE, DCAS, and other professional journals. Mr. Hunt has provided IT systems and software program assessments, and IT and software cost estimating, for commercial and federal clients including the U.S. Army, Customs and Border Protection, the Department of Defense, NASA, and various commercial clients.
Mr. Hunt has held leadership positions and given technical presentations for the American Institute of Aeronautics and Astronautics (AIAA), where he served as Chairman of the Economics Technical Subcommittee, as well as for the National Association of Environmental Professionals (NAEP) and the IEEE.