2020 IT Conference

ISBSG hosted its first virtual IT conference on 28th August and 4th September 2020. Experts from around the world presented their latest findings on IT metrics topics.

Using Software Measurement in Outsourcing – Paula Holmberg

Paula shows how to use software measurement in managing outsourcing contracts, ensuring that the outsourcer meets or exceeds software industry standards. She examines the pros and cons of outsourcing, how measurement can help manage it, and how to adjust targets as the contracts progress.

Read/download Paula’s presentation

Watch Paula’s presentation 

Paula is the current Executive Director of ISBSG.  She worked in software measurement for 15 years, mainly at IBM.  She has been IFPUG certified three times, served on the board of the Australian Software Metrics Association and was involved in the infancy of ISBSG.  Paula has led a software measurement team delivering contractual metrics obligations and has consulted in the software measurement area.  She loves to explore data to see what insights can be gained.

Measuring Microservice Architectures and Cloud Applications – Marcello Sgamma

In recent years, software architectures have fragmented: from the monolithic approach to SOA architectures, then to microservice architectures and their deployment in the cloud, often mixing new development (PaaS) with configuration of existing functionality (SaaS).

Considering functional measurement of software complexity, the initial stages of FPA have gained importance: the type and scope of the measurement, appropriate identification of boundaries and the correct granularity of BFCs. These aspects often have different relevance and impact in the various architectures.

Counting examples in cloud architectures will be presented, highlighting critical aspects of functional measurement and their relevance to the accuracy of the results.

Read/download Marcello’s presentation.

Watch Marcello’s presentation

I graduated in Information Sciences at the University of Pisa, with a diploma from the Scuola Normale Superiore.  I chose industry, joining Tecsiel.  For almost 30 years I followed the company’s name changes, often changing “brand” but staying with the same colleagues and often the same customers.  I contributed to the construction of different systems, such as technical assistance, judiciary, governance, and various e-commerce portals.  Then came new professional interests, the opportunity to teach at the university and, recently, a strong commitment to metrics.

Automated Function Points, the Game Changer! – Harold van Heeringen

For decades, Functional Size Measurement has been considered best practice for mature organizations, facilitating crucial management processes such as project estimation, project control, performance measurement, benchmarking, vendor management, contracting and others.  Functional size measurement methods are ISO standardized and therefore objective, repeatable, verifiable, auditable, and defensible.

The industry needs a way to objectively size the amount of functionality provided to the business. The concept of Automated Function Points promises a solution. When functional size per sprint or release can be measured automatically, the teams’ ‘waste’ argument is no longer valid, while metrics based on automated function points can be used for comparisons outside the team, for instance in vendor management and contracts.

In this presentation, I’ll show how Automated Function Points have the potential to completely change the game, resulting in more mature IT organizations that are in control using standardized metrics instead of people’s opinions.

Read/download Harold’s presentation

Watch Harold’s presentation

Harold van Heeringen is a business economics graduate from the University of Groningen. He worked for IT service provider Sogeti for 17 years as a senior software metrics and cost estimation consultant. He then moved to METRI as a principal consultant and practice lead for IT Intelligence services (including Estimation & Performance Measurement). Harold is a past president of ISBSG and a director on its board. He is also a board member of Nesma, where he is responsible for international partnerships and cooperation. Harold regularly publishes white papers, blogs and articles on software metrics and performance measurement. He can be reached by e-mail at harold.van.heeringen@metrigroup.com. He is active on LinkedIn and invites everybody who is interested to link with him. Harold also shares his professional thoughts on Twitter: @haroldveendam.

IFPUG and SNAP FP Analysis of Wearable Devices (FitBit). New Questions demand New Solutions! – Sushmitha Anantha /  Saurabh Saxena

The Internet of Things is changing the way data and information are collected, processed, analyzed and consumed. In a way, devices are getting their own identity and their own limited intelligence, leading to analysis and decision making, however limited they currently are. The application of these concepts seems to be unlimited and certainly opens a huge stash of opportunities and challenges in every aspect of software engineering.

As device-centric applications continue to increase, it is essential to focus on the measurement of this class of applications. The challenge is to define approaches that establish the applicability of such measurements while ensuring their repeatability.

This study applies the IFPUG Function Point and SNAP methods to an IoT wearable device, taking Fitbit as an example.

We cover:

  • Establishing a generic model based on the Fitbit architecture and its communication with the external world
  • Applying the rules of the IFPUG Function Point and SNAP methods to establish the applicability of these methods
  • Identifying the challenges in each of the approaches and discussing possible solutions

Wearable devices such as Fitbit are revolutionizing fields such as the healthcare and insurance domains. With other IoT devices included, the set of applications that could be sized with the proposed approach is limitless. Studying these applications from an effort and cost perspective is another angle to explore. Questions that arise at this point: can the existing benchmarks be used directly for IoT-backed applications, or do they need special treatment? Some of these questions may not be readily answerable due to the lack of literature in this field; however, it’s worth attempting a discussion.

Though wearable devices form only a fraction of overall smart devices and internet-enabled appliances, the approach may still be applicable to the larger group with minor changes. If FPA must be done on any specific case, a detailed study of the communication model and architecture will help in identifying the general approach.

Read/download Sushmitha and Saurabh’s presentation

Watch Sushmitha and Saurabh’s presentation

Sushmitha is a Function Point Expert and Productivity Champion working for a leading services organization. She is the current IFPUG Partnerships & Events Committee Chair. Sushmitha has worked for more than a decade in the fields of function points, related metrics, and productivity measurement across different domains, methodologies and technologies. She has authored various white papers related to function point measurement and metrics.

Saurabh Saxena is the chairperson of the IFPUG International Membership Committee. He is also an active member of the SNAP and PEC (earlier CEC) committees. He is a certified Project Management Professional (PMP), an FP/SNAP consultant and trainer, and specializes in project estimation, productivity, and cost & quality analysis. Professionally he is a Program Manager at Amdocs and is based in Pune, India. His white papers on project management, estimation and process improvement have been published and well appreciated. He is also a regular speaker at international conferences. For more details about him, visit: https://www.linkedin.com/in/saurabh-saxena-pmp-cfps-a109266/

Benchmarking Test Density – Thomas Fehlmann

In automotive, but also in many other areas such as the Internet of Things (IoT), trains or airplanes, software testing is necessary to earn certification and trust. Would users ever dare to let an autonomous car drive them through traffic without trusting the software tests ensuring safety (SOTIF: Safety of the Intended Functionality, ISO 21448)? Drivers even switch off their ADAS (Advanced Driving Assistance System) because of mistrust. Airline passengers will likely avoid flights with new software releases of the Boeing 737 MAX unless the manufacturer can exhibit a test density that restores trust.

Test density can be measured and benchmarked. This talk is about how to measure tests, defects and test density, and how to predict reliability, even for software that relies on Artificial Intelligence (AI) and Support Vector Machines (SVMs), as many visual recognition systems do.

Read/download Thomas’s presentation

Watch Thomas’s presentation

Dr. Thomas Fehlmann is a senior expert in software metrics and testing, a Lean Six Sigma Black Belt for lean and agile software development and promoter of customer-oriented product design and testing. He spent most of his professional life as a quality manager for software organizations. As such, he has led a few companies to global market dominance using Quality Function Deployment (QFD) and Six Sigma for Software. He has run the Euro Project Office since 1999 and is internationally recognized as QFD expert and as software metrics expert. Since 2016, Thomas has been an academic member of the Athens Institute for Education and Research. Thomas is a frequent presenter at conferences on topics including Six Sigma, QFD, software quality, software testing, project management, software engineering and software metrics.

Application Portfolio Estimation – How to Get the Most Value Out of My Investment – Frank Vogelezang

Most organizations have an expanding software application portfolio. To keep it up to date, organizations must prioritize their software investments, since resources are limited. Better estimates enable those limited resources to be used in the most efficient way.

Improved estimation maturity helps organizations get the most value out of investments in their portfolio. The basic prioritization mechanism is WSJF (Weighted Shortest Job First) from the SAFe framework: the investment where [Value]/[Cost] gives the highest result should be done first. This principle can be used at any level, from the portfolio down to individual sprints. At the epic or portfolio level, formal techniques add the most value, while at the sprint level informal techniques can also be used.

With WSJF, organizations have a balanced mechanism to determine the value of investment proposals, expressed as Cost of Delay. The Cost of Delay is built up from the user-business value, time criticality, risk reduction value and opportunity enablement value. Most organizations understand the user-business value, but the other three are essential as well to keep the portfolio technologically healthy and ready for future developments.
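As a rough sketch of the mechanism described above (the proposal names and point values are invented for the example), Cost of Delay is the sum of the value components, and WSJF divides it by the job size or cost:

```python
# Illustrative WSJF prioritization (invented sample data, relative points).
proposals = [
    # (name, user-business value, time criticality,
    #  risk reduction / opportunity enablement value, job size)
    ("Modernize login", 8, 3, 5, 8),
    ("New reporting module", 13, 8, 2, 13),
    ("Platform upgrade", 3, 5, 13, 5),
]

def wsjf(business_value, time_criticality, risk_opportunity, job_size):
    """Cost of Delay divided by job size; the highest score goes first."""
    cost_of_delay = business_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

# Sort proposals by descending WSJF; "Platform upgrade" ranks first here
# because its high risk-reduction value and small job size dominate.
ranked = sorted(proposals, key=lambda p: wsjf(*p[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF = {wsjf(*scores):.2f}")
```

Note how a proposal with modest user-business value can still come out on top: exactly the point made above about the other three components keeping the portfolio healthy.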

To come up with a cost estimate, at least two, preferably three, different approaches need to be combined. The estimation approaches we see as the most valuable are:

  1. Sprint or team estimates (people times duration)
  2. Analogy comparison with previous investments
  3. Probabilistic estimate, based on historical data

All estimates should lead to a range or a three-point estimate. For organizations with low estimation maturity, that’s the first learning point. It is a change in thinking: going from a fixed-number estimate (with almost 100% probability of not hitting that number) to a range that reflects the (un)certainty of the estimate.

Once this change in thinking has sunk into the decision-making process, the use of probabilistic techniques is quickly accepted, since each probabilistic estimate is accompanied by a certainty percentage. Decision makers can then choose to claim critical resources with a high certainty percentage (usually 70% or up) and use a lower percentage for normal investment proposals.
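A minimal sketch of this step, with invented numbers: turn each work item's three-point estimate into a triangular distribution, run a Monte Carlo simulation of the total, and read off the percentile that matches the required certainty level:

```python
import random

random.seed(42)  # reproducible illustration

# Invented three-point estimates per work item, in person-days:
# (optimistic, most likely, pessimistic)
items = [(10, 15, 25), (5, 8, 14), (20, 30, 50)]

def simulate_totals(items, runs=10_000):
    """Sample each item from a triangular distribution and sum per run."""
    return sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in items)
        for _ in range(runs)
    )

def percentile(sorted_values, p):
    """Value at or below which p% of the simulated totals fall."""
    index = min(len(sorted_values) - 1, int(p / 100 * len(sorted_values)))
    return sorted_values[index]

totals = simulate_totals(items)
# A 70% certainty estimate: the actual total stays at or below this
# figure in roughly 70% of the simulated runs.
print(f"P50: {percentile(totals, 50):.0f}  P70: {percentile(totals, 70):.0f} person-days")
```

The gap between P50 and P70 is the price of certainty: claiming critical resources at 70% certainty reserves more capacity than a median estimate would.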

In the presentation I will show real (anonymised) examples of how this process works and what the effects are with increasing estimation maturity.

Read/Download Frank’s presentation

Watch Frank’s presentation

Frank is Senior Consultant IT Intelligence at Metri, where he advises major public and private organizations to help them get a grip on the value and cost of their IT. He has been active in the field of pricing, parametric estimating, functional sizing and cost calculation since 1999. Frank is President of the Board and a member of the Measurement Practices Committee of COSMIC, and a member of the Counting Practices Committee of Nesma. Frank was Program Chair of the International Workshop on Software Measurement in Rotterdam in 2014 and in Haarlem in 2019. He is a regular speaker at international conferences on the pricing, valuing and estimating of IT projects and services.

Cost and Data Issues facing Today’s Cybersecurity Analysts – Bob Hunt

Current studies imply that it is more costly to defend against a cybersecurity attack than to execute one. Studies also indicate that the typical breach is not detected until about 200 days after it occurs. We need to understand the scope of security: is it physical, computer, hardware, people, policy, or all of the above? This presentation evaluates three approaches to costing cybersecurity: economic, industrial engineering, and parametric. The strengths and weaknesses of each approach are discussed.

Read/download Bob’s presentation

Watch Bob’s presentation

Bob Hunt has over 40 years of cost estimating and analysis experience. He has served in senior technical and management positions at Galorath (President, Galorath Federal, and Vice President, Galorath), CALIBRE Systems (Vice President), CALIBRE Services (President), SAIC (Vice President), the U.S. Army Cost and Economic Analysis Center (Chief of the Vehicles, Ammunition, and Electronics ICE Division), the U.S. Army Directorate of Cost Analysis (Deputy Director for Automation and Modeling), and other Army analysis positions. He is the author of multiple technical papers published for ICEAA, IEEE, DCAS, and other professional journals. Mr. Hunt has provided IT systems and software program assessments, and IT and software cost estimating, for commercial and federal clients including the U.S. Army, Customs and Border Patrol, the Department of Defense, NASA, and various commercial clients. Mr. Hunt has held leadership positions in and made technical presentations for the American Institute of Aeronautics and Astronautics (AIAA), where he served as Chairman of the Economics Technical Subcommittee, the National Association of Environmental Professionals (NAEP), and the IEEE.

Validation Information in the Mexican Reference Database Using the ISBSG Database – Francisco Valdés-Souto

Since 2007, the use of functional size measurement has been promoted in Mexico using the COSMIC standard (ISO/IEC 19761). COSMIC is a Mexican standard (NMX-I-19761) and functional size is considered a significant metric. It has three qualifiers defined by the Mexican Association of Software Metrics (AMMS):

  • BASIC – an international standard that allows the generation of derived metrics in the future;
  • TRANSVERSAL – it serves all economic actors and software development roles in performing their functions (development, test, D&A, self-management, etc.) and transactions (buying and selling, bids, usage, etc.);
  • TRANSCENDENT – being basic, it is intended to allow comparison over time (forward and backward) and across different, ever-changing practices and technologies.

One of the first steps of the AMMS was to create a reference database for the Mexican Software Development Industry, which was carried out through a public call for projects to be measured using COSMIC.

The approach used to approximate the size of the collected projects was the only approximation method that does not require local calibration (the EPCU approximation approach), according to the COSMIC experts’ guide on early measurement and approximation.

The reason for using size approximation was that the full requirements of all the projects could not be accessed, sometimes because they did not exist or because they were confidential.

Once the database was assembled, with project, effort and cost data, as well as size approximated using the EPCU approach, the next question was whether the information was correct. Although the approximation method used has been broadly studied, how could we know that the information obtained was correct?

Some of the studies carried out by ISBSG were replicated using the database of the Mexican software development industry, with two purposes:

1. To validate that the behavior was equivalent, confirming that the information obtained was not counter-intuitive.

2. To provide formal reports to the agents of the Mexican software development industry, which serve as references to mitigate the market inefficiency called information asymmetry.

This presentation offers an analysis and comparison of some of the results obtained with the AMMS database against the studies carried out by ISBSG.

Read/download Francisco’s presentation

Watch Francisco’s presentation

Validation of Supplier Estimates Using COSMIC Method – Francisco Valdés-Souto

In the software development industry, it is well known that software development organizations (suppliers) need better and more formal estimation approaches in order to increase the success rate of the software projects they develop.

Taking a systemic view, the estimate provided by the supplier for any project requested by a customer needs to be validated, regardless of how formal the estimation method was.

However, very often customers do not know the information used by suppliers to make their estimates. Software decision-makers face an estimate-validation problem where the most common solution is expert judgment, which brings several problems of its own.

This talk describes a real case study in which an estimate-validation model was generated from a reference database based on the COSMIC method. The model, built on a density function, helps the customer define validation criteria considering the probability that the supplier’s estimate will be met, according to an industry reference database.
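One way such a criterion might be sketched (the reference rates and the quoted figures below are invented, and the actual model fits a density function rather than using a raw empirical fraction):

```python
# Illustrative estimate-validation check against a reference database.
# Historical delivery rates, in hours per COSMIC Function Point (invented).
reference_rates = [6.0, 7.5, 8.0, 8.2, 9.1, 10.0, 11.5, 12.0, 14.0, 18.0]

def probability_estimate_met(estimated_hours, size_cfp, rates):
    """Fraction of reference projects delivered at or below the supplier's
    implied rate: a low value flags an optimistic estimate."""
    implied_rate = estimated_hours / size_cfp
    met = sum(1 for rate in rates if rate <= implied_rate)
    return met / len(rates)

# A supplier quotes 1,000 hours for a 100 CFP project, i.e. 10 h/CFP.
p = probability_estimate_met(1000, 100, reference_rates)
print(f"Probability the estimate will be met: {p:.0%}")  # prints 60%
```

The customer can then set a threshold (e.g. reject quotes below a 30% probability of being met) without needing access to the supplier's internal estimation data.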

Read/download Francisco’s presentation

Watch Francisco’s presentation

President of COSMIC, Dr. Francisco Valdés Souto is an Associate Professor in the Faculty of Sciences of the National Autonomous University of Mexico (UNAM). He holds a Doctorate in Software Engineering, specializing in software measurement and estimation, from the École de Technologie Supérieure (ETS), and two master’s degrees from Mexico and France. With more than 20 years of experience in critical software development, he is the founder of SPINGERE, the first Mexican company specialized in software measurement, estimation and evaluation, as well as the founder of the Mexican Association of Software Metrics (AMMS). He is the main promoter of formal software metrics in Mexico, promoting COSMIC as a national standard. His research interests are software measurement and estimation applied to software project management (i.e. scope management and economics).

Agile Benchmarking in 2020 – Where Are We Today (and why you should care)? – Joe Schofield

At ISBSG’s first IT Confidence Conference in 2013, I presented “Using Benchmarks to Accelerate Process Improvement.” This year’s presentation will provide a high-level “refresh” of how organizations can benefit from benchmarking. A simple list of recommended do’s and don’ts will be offered.

The primary focus of this presentation, however, will be experience-based and research-based agile benchmarking data. The experience basis will incorporate client reactions to, and the limitations of, agile data. The research basis will trend industry-leading benchmarks, including the use of agile methods, practices, and benefits since 2016, with an obvious focus on the most recent data. Insights and cautions will be shared for the attendee’s consideration. Finally, recommendations on agile-related measures and metrics will be provided.

Read/download Joe’s presentation

Watch Joe’s presentation

Joe continues to enable enterprise-wide agile transformation through executive coaching; organization training, certification, and practice; policy and process codification; ongoing improvement; organizational alignment; collaborative teaming; and value delivery. He is a Past President of IFPUG. He was a Distinguished Member of the Technical Staff at Sandia National Laboratories. He served as the SEPG Chair for an organization of about 400 personnel that was awarded SW-CMM® Level 3, and continued as the migration lead to CMMI® Level 4 until his departure. Joe is an Authorized Training Partner with VMEdu and a Scrum Certified Trainer with SCRUMstudy™. He has facilitated ~200 teams in the areas of software specification, team building, and organizational planning using lean six sigma, business process reengineering, and JAD. Joe has taught over 100 college courses, 75 of those at graduate level. He was a certified instructor for the Introduction to the CMMI for most of the past decade. Joe has over 80 published books, papers, conference presentations and keynotes, and has presented several worldwide webinars for the Software Best Practices Webinar Series sponsored by Computer Aid, Inc.

Software Estimation: How to use ISBSG data for early software sizing? – Alain Abran

In software estimation, figuring out the expected software size early is the first critical step. This is quite a challenge when, at estimation time, the requirements are still fuzzy and incomplete!

It is even more challenging than figuring out the total volume of an iceberg when only its tip is visible: for software, there is no law of physics to help!

How can ISBSG data help? This presentation gives an overview of the early software sizing techniques available to practitioners to tackle this software-iceberg challenge.

Read/download Alain’s presentation

Watch Alain’s presentation

Dr. Abran’s research expertise includes software estimation, software quality measurement, software functional size measurement, software project & risk management and software maintenance management. Dr. Abran holds a Ph.D. in Electrical & Computer Engineering from École Polytechnique (Canada) and master’s degrees in Management Sciences and Electrical Engineering from the University of Ottawa (Canada).
His 20 years of work in software development and management within the Canadian banking industry were followed by more than 20 years of teaching and research at École de Technologie Supérieure (ETS) and Université du Québec à Montréal (UQAM) (Canada), where he graduated over 45 doctoral students in software engineering. Now retired from his full-time university position, he remains active in professional associations and in R&D as an adjunct professor at ETS. Dr. Abran’s industry-oriented research has influenced a number of international standards in software engineering, such as ISO 9126, ISO 15939, ISO 19759, ISO 19761 and ISO 14143-3. He has published more than 500 peer-reviewed papers with more than 11,000 citations (Google Scholar), as well as a number of books (including translations into Chinese, Japanese and Korean).

The 20M€ Tender Challenge, a Real Case Study of Estimation of a Very Big Agile Initiative – Manuel Buitrago

We received a request from a client that seemed impossible: they needed us to estimate the effort and the market-adjusted cost of a +20M€ agile development. This had to include a precise estimate of the tests to be carried out, and it had to be delivered within 10 days! Doesn’t that sound impossible?

This talk will be about what we did and how we successfully managed this challenge in just 10 days.

Since 2012, LedaMC has participated in several international IT conferences, presenting studies on function point price vs. rate behaviour in outsourced IT contracts, and on productivity vs. quality of software development projects.

This time, LedaMC will present a new case study that involves estimation, testing and agile initiatives.

LedaMC is a Spanish consulting company that helps big companies manage their IT development costs, software productivity & quality, and software vendor relationships. Thanks to its database of more than 60,000 projects, LedaMC is a reference in benchmarking studies and estimation & productivity models.

Read/download Manuel’s presentation

Watch Manuel’s presentation

Manuel Buitrago is a senior expert in software metrics, software functional size measurement, and software project & risk management. Manuel is also a member of the IFPUG Certification Committee and an IFPUG Certified Function Point Specialist. With over 15 years of experience, he has dedicated himself to maximizing the efficiency of software development teams and projects for international clients all across Europe. Manuel is currently working as service manager for a major telco client, leading a software measurement team to ensure that contractual metrics agreements are met by vendors providing both waterfall and agile development teams.