Using Software Measurement in Outsourcing – Paula Holmberg
Paula will show how to use software measurement in managing outsourcing contracts to ensure that the outsourcer's software development standards meet or exceed industry standards. She will look at the pros and cons of outsourcing, how to use measurement to manage them, and how to adjust targets as contracts progress.
Paula is the current Executive Director of ISBSG. She has worked in software measurement for 15 years, mainly for IBM. She has been IFPUG certified three times, served on the board of the Australian Software Metrics Association, and was involved in the infancy of ISBSG. Paula has led a software measurement team delivering contractual metrics obligations and has consulted in the software measurement area. She loves exploring data to see what insights can be gained.
Executive Director, ISBSG
Benchmarking Test Density – Thomas Fehlmann
In Automotive, but also in many other areas such as the Internet of Things (IoT), trains or airplanes, software testing is necessary to earn certification and trust. Would users ever dare to let an autonomous car drive them through traffic without trusting the software tests ensuring safety (SOTIF, Safety of the Intended Functionality, ISO 21448)? Drivers even switch off their ADAS (Advanced Driving Assistance System) because of mistrust. Airline passengers will likely avoid flights with new software releases of the Boeing 737 MAX unless the manufacturer can exhibit a test density that restores trust.
Test Density can be measured and benchmarked. This talk is about how to measure tests, defects, test density and predict reliability, even for software that relies on Artificial Intelligence (AI) and Support Vector Machines (SVM), as many visual recognition systems do.
[A similar presentation was given at EOQ 2019 in Lisbon; however, the ISBSG talk will focus much more on benchmarking and test metrics for product marketing, and will also include a note on using other, less technical sizing methods such as IFPUG and NESMA.]
Dr. Thomas Fehlmann is a senior expert in software metrics and testing, a Lean Six Sigma Black Belt for lean and agile software development and promoter of customer-oriented product design and testing.
He spent most of his professional life as a quality manager for software organizations. As such, he has led a few companies to global market dominance using Quality Function Deployment (QFD) and Six Sigma for Software. He has run the Euro Project Office since 1999 and is internationally recognized as a QFD expert and a software metrics expert.
Since 2016, Thomas has been an academic member of the Athens Institute for Education and Research.
Thomas is a frequent presenter at conferences on topics including Six Sigma, QFD, software quality, software testing, project management, software engineering and software metrics.
Dr. Thomas Fehlmann
Euro Project Office, AG
Automated Function Points, the Game Changer! – Harold van Heeringen
For decades, Functional Size Measurement has been considered a best practice for mature organizations, facilitating crucial management processes such as project estimation, project control, performance measurement, benchmarking, vendor management and contracting. This is possible because functional size measurement methods are ISO standardized and therefore objective, repeatable, verifiable, auditable and, as a result, defensible.
Nowadays, the agile way of working is conquering the world of software development, aiming to bring as much value to the business as early as possible through self-organizing teams. With agile, the concept of Story Points has also become popular. This is a relative effort estimation approach used by teams to estimate the amount of work to be done in the next one or two sprints. It is a rather subjective approach, and metrics based on Story Points cannot be used outside the team. Although Story Points are very useful at team level, senior management in many organizations makes the mistake of abandoning standardized functional size based metrics, mainly because the teams consider measuring functional size waste and ‘a thing of the past’. Senior management then loses its grip on the aforementioned processes, which can no longer be carried out due to a lack of standardized functional size based metrics.
The industry needs a way to objectively size the amount of functionality provided to the business that the teams do not consider waste. The concept of Automated Function Points promises a solution to this issue. When functional size per sprint or release can be measured automatically, the ‘waste’ argument of the teams is no longer valid, while the metrics based on automated function points can be used for comparison outside the team, for instance in vendor management and contracting.
In this presentation, I’ll cover the industry trends described above to provide context, using a few customer cases from my work as principal consultant at Metri, a fact-based IT advisory firm based in the Netherlands. I’ll show the Automated Function Points standard and compare it with the traditional manual standards, which will show that manual functional size methods will also profit from the increased popularity of Automated Function Points. The main message is how Automated Function Points has the potential to completely change the game, resulting in more mature IT organizations that are in control using standardized metrics instead of people’s opinions.
Drs. Harold van Heeringen graduated from the University of Groningen in business economics in 1997, after which he worked for IT service provider Sogeti for 17 years as a senior software metrics and software cost estimation consultant. In 2015, he moved to independent sourcing advice and benchmarking services provider METRI, where he works as a principal consultant and practice lead for IT Intelligence services (including Estimation & Performance Measurement).
Harold was president of the International Software Benchmarking Standards Group (ISBSG) from 2011 until 2019 and is still an active director on the board. He is also a board member of Nesma, the international organization focusing on size measurement and cost estimation, where he is responsible for international partnerships and cooperation.
Harold regularly publishes white papers, blogs and articles on software metrics and performance measurement. He can be reached by e-mail at email@example.com. He is active on LinkedIn and invites everybody who is interested to connect with him. Harold also shares his professional thoughts on Twitter: @haroldveendam.
Harold van Heeringen
ISBSG Data Collection and Product Manager
Application Portfolio Estimation – How to Get the Most Value Out of My Investment – Frank Vogelezang
Organizations increasingly rely on software. The result is that most organizations have an expanding application portfolio. To keep this portfolio up to date, organizations need to prioritize their investment in software, since the resources it requires are limited. They also need better estimates to be able to use those limited resources in the most efficient way.
What we have observed is that improved estimation maturity helps organizations get the most value out of investments in their portfolio. The basic prioritization mechanism is WSJF (Weighted Shortest Job First) from the SAFe framework. WSJF means that the investment where [Value]/[Cost] gives the highest result should be done first. This principle can be used at any level, from the portfolio level down to individual sprints. At the epic or portfolio level, formal techniques add the most value, whereas at the sprint level informal techniques can also be used.
With WSJF organizations have a balanced mechanism to determine the value of investment proposals. This is expressed as Cost of Delay. The Cost of Delay is built up from the user-business value, time criticality, risk reduction value and the opportunity enablement value. Most organizations understand the user-business value, but the other three are essential as well to keep the portfolio technologically healthy and ready for future developments.
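As a minimal sketch of the mechanism described above, WSJF can be computed by summing the four Cost of Delay components and dividing by job size. The proposal names, scores and the field names are hypothetical examples, not figures from the presentation; SAFe typically scores each component on a relative scale.

```python
# WSJF prioritization sketch: Cost of Delay divided by job size.
# All proposal names and scores below are invented for illustration.

def cost_of_delay(p):
    """Sum of the four Cost of Delay components."""
    return (p["user_business_value"] + p["time_criticality"]
            + p["risk_reduction"] + p["opportunity_enablement"])

def wsjf_ranking(proposals):
    """Return proposals sorted by WSJF, highest priority first."""
    return sorted(proposals,
                  key=lambda p: cost_of_delay(p) / p["job_size"],
                  reverse=True)

proposals = [
    {"name": "epic-A", "user_business_value": 8, "time_criticality": 5,
     "risk_reduction": 3, "opportunity_enablement": 2, "job_size": 13},
    {"name": "epic-B", "user_business_value": 5, "time_criticality": 8,
     "risk_reduction": 5, "opportunity_enablement": 3, "job_size": 5},
]

ranked = wsjf_ranking(proposals)
# epic-B scores (5+8+5+3)/5 = 4.2, epic-A scores (8+5+3+2)/13, so
# the smaller but more time-critical job is done first.
```

Note how the shorter job wins even though its user-business value is lower, which is exactly the balancing effect WSJF is meant to provide.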
To come up with a cost estimate, at least two, preferably three, different approaches need to be combined. The estimation approaches we see as the most valuable are:
- Sprint or team estimates (people times duration)
- Analogy comparison with previous investments
- Probabilistic estimate, based on historical data
All estimates should lead to a range or a three-point estimate. For organizations with a low estimation maturity, that is the first learning point. It is a change in thinking: going from a fixed-number estimate (with almost 100% probability of not hitting that number) to a range that reflects the (un)certainty of the estimate.
Once this change in thinking has sunk into the decision-making process, the use of probabilistic techniques is quickly accepted, since each probabilistic estimate is accompanied by a certainty percentage. Decision makers can then choose to claim critical resources with a high certainty percentage (usually 70% or higher) and use a lower percentage for normal investment proposals.
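One common way to turn a three-point estimate into a certainty percentage is Monte Carlo simulation. The sketch below assumes a triangular distribution over an invented optimistic / most likely / pessimistic estimate; the distribution choice and the figures are illustrative assumptions, not the method from the presentation.

```python
# Monte Carlo on a three-point estimate (optimistic / most likely /
# pessimistic), reporting the value at a chosen certainty percentage.
import random

def estimate_at_certainty(low, mode, high, certainty, n=100_000, seed=42):
    """Value such that `certainty` of simulated outcomes fall at or below it."""
    rng = random.Random(seed)
    samples = sorted(rng.triangular(low, high, mode) for _ in range(n))
    return samples[int(certainty * (n - 1))]

# Hypothetical investment: 300 to 900 person-days, most likely 450.
p50 = estimate_at_certainty(300, 450, 900, 0.50)
p70 = estimate_at_certainty(300, 450, 900, 0.70)  # claim for critical resources
```

A decision maker would then reserve capacity at the 70% value for critical proposals and accept the 50% value for normal ones, making the trade-off between certainty and committed resources explicit.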
In the presentation I will show real (anonymized) examples of how this process works and what the effects are with increasing estimation maturity.
Frank is Senior Consultant IT Intelligence at Metri, where he advises major public and private organizations to help them get a grip on the value and cost of their IT. Next to his professional work Frank is an active member of several communities in the areas of pricing, parametric estimating, functional sizing and cost calculation.
Frank is President of the Board and member of the Measurement Practices Committee of COSMIC and a member of the Counting Practices Committee of Nesma. Frank was Program Chair of the International Workshop on Software Measurement in Rotterdam in 2014 and in Haarlem in 2019. He is a regular speaker on international conferences on the pricing, valuing and estimating of IT projects and services.
Frank has been active in the field of pricing, valuing and estimating IT projects and services since 1999. His personal objective is the creation of insight. Insight is the starting point for making informed decisions, whether on pricing, estimating or managing IT projects and services. Decisions based on insight are easier to defend, less questionable and lead to better results.
Senior Consultant IT Intelligence, Metri
Software Estimation: How to use ISBSG data for early software sizing? – Alain Abran
In software estimation, figuring out the expected software size early is the first critical step. This is quite a challenge when, at estimation time, the requirements are still fuzzy and incomplete!
It is even more challenging than figuring out the total volume of an iceberg when only its tip is visible: for software, there is no law of physics to help!
How can ISBSG data help? This presentation will present an overview of early software sizing techniques available to practitioners to tackle this software-iceberg challenge.
Dr. Abran's research expertise includes software estimation, software quality measurement, software functional size measurement, software project and risk management, and software maintenance management.
Dr. Abran holds a Ph.D. in Electrical & Computer Engineering from École Polytechnique (Canada) and master's degrees in Management Sciences and Electrical Engineering from the University of Ottawa (Canada).
His 20 years of work in software development and management within the Canadian banking industry were followed by more than 20 years of teaching and research at École de Technologie Supérieure (ETS) and Université du Québec à Montréal (UQAM) in Canada, where he graduated over 45 doctoral students in software engineering. Now retired from his full-time university position, he remains active in professional associations and in R&D as an adjunct professor at ETS.
Dr. Abran's industry-oriented research has influenced a number of international standards in software engineering, such as ISO 9126, ISO 15939, ISO 19759, ISO 19761 and ISO 14143-3.
He has published more than 500 peer-reviewed papers with over 11,000 citations (Google Scholar), as well as a number of books (including translations into Chinese, Japanese and Korean).
Dr Alain Abran
Professor, École de Technologie Supérieure (ETS), Canada
IFPUG and SNAP FP Analysis of Wearable Devices (FitBit). New Questions demand New Solutions! – Sushmitha Anantha / Saurabh Saxena
The Internet of Things is changing the way data and information are collected, processed, analyzed and consumed. In a way, devices are acquiring their own identity and their own limited intelligence, leading to analysis and decision making, however limited these currently are. Applications of these concepts seem to be unlimited and certainly open up a huge stash of opportunities and challenges in every aspect of software engineering.
As device-centric applications continue to increase, it is essential to focus on the measurement of this class of applications. The challenge is to define approaches that establish the applicability and at the same time ensure the repeatability of such measurements.
This study applies the IFPUG Function Point and SNAP methods to IoT wearable devices, taking FitBit as an example.
We intend to cover:
- Establishing a generic model based on the FitBit architecture and its communication with the external world
- Applying the rules of the IFPUG Function Point and SNAP methods to establish their applicability
- Identifying the challenges in each of the approaches and discussing possible solutions
Wearable devices such as FitBit are revolutionizing fields such as healthcare and insurance. With other IoT devices included, the set of applications that could be sized with the proposed approach is limitless. Studying these applications from an effort and cost perspective is another angle to explore. Questions that arise at this point are: can existing benchmarks be used directly for IoT-backed applications, or do they need special treatment? Some of these questions may not be readily answerable due to the lack of literature in this field; however, it is worth attempting a discussion.
Though wearable devices form only a fraction of all smart devices and internet-enabled appliances, the approach may still be applicable to the larger group with minor changes. If FPA must be done for any specific case, a detailed study of the communication model and architecture will help in identifying the approach in general.
Sushmitha is a Function Point Expert and Productivity Champion working for a leading services organization. She is the current IFPUG Partnerships & Events Committee Chair. Sushmitha has worked for more than a decade in the fields of function points, related metrics, and function point productivity measurement across different domains, methodologies and technologies. She has authored various white papers on function point measurement and metrics.
Saurabh Saxena is the chairperson of the IFPUG International Membership Committee. He is also an active member of the SNAP and PEC (earlier CEC) committees. He is a certified Project Management Professional (PMP) and FP/SNAP consultant and trainer, and specializes in project estimation, productivity, and cost and quality analysis. Professionally, he is a Program Manager at Amdocs, based in Pune, India. His white papers on project management, estimation and process improvement have been published and well received. Besides that, he is a regular speaker at international conferences.
For more details about him, visit: https://www.linkedin.com/in/saurabh-saxena-pmp-cfps-a109266/
Validation Information in the Mexican Reference Database Using the ISBSG Database – Francisco Valdés-Souto
Since 2007, the use of functional size measurement has been promoted in Mexico using the COSMIC standard (ISO/IEC 19761), which is also a Mexican standard (NMX-I-19761). Functional size is considered a significant metric with three qualifiers, defined by the Mexican Association of Software Metrics (AMMS) under the acronym BTT:
- BASIC, because it rests on internationally accepted standards that allow the generation of derived metrics in the future;
- TRANSVERSAL, because it serves all economic actors and software development roles in performing their functions (development, test, D&A, self-management, etc.) and transactions (buying and selling, bids, usage, etc.);
- TRANSCENDENT, because, being basic, it is intended to allow comparison over time (forward and backward) and across different and changing practices and technologies.
One of the first steps of the AMMS was to create a reference database for the Mexican Software Development Industry, which was carried out through a public call for projects to be measured using COSMIC.
The approach used to dimension the size of the collected projects was the only approximation method that does not require local calibration (the EPCU approximation approach), according to the COSMIC experts guide on early measurement and approximation.
The reason for using size approximation was that the full requirements of all the projects could not be accessed, sometimes because they did not exist or because they were confidential.
Once the database was assembled, with data on project, effort and cost, as well as size dimensioned using the EPCU approximation approach, the next question was whether the information was correct. Although the approximation method used has been broadly studied, how could we know that the information obtained was correct?
Some of the studies carried out by ISBSG were replicated using the database of the Mexican software development industry, with two purposes:
1. To validate that the behavior was equivalent, confirming that the information obtained was not counter-intuitive;
2. To provide formal reports to the agents of the Mexican software development industry, serving as references to mitigate the market inefficiency known as information asymmetry.
This presentation offers an analysis and comparison of some of the results obtained with the AMMS database and the studies carried out by ISBSG.
Validation of Supplier Estimates Using COSMIC Method – Francisco Valdés-Souto
In the software development industry, it is well known that software development organizations (suppliers) need better, formal estimation approaches in order to increase the success rate of the software projects they develop.
From a systemic point of view, for any project requested by a customer, the estimate provided by the supplier needs to be validated, regardless of how formal the estimation method used was.
However, very often customers do not know the information used by suppliers to make their estimates. Software decision-makers thus face an estimate validation problem in which the most used solution is expert judgment, with the several problems that entails.
In this presentation, a real case study is described in which an estimate validation model was generated from a reference database based on the COSMIC method. The model, defined using a density function, helps the customer define validation criteria that consider the probability that the supplier's estimate will be met according to an industry reference database.
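The general idea of judging a supplier estimate against a reference database can be sketched empirically: derive the supplier's implied unit cost (hours per COSMIC Function Point) and see what fraction of reference projects were delivered at or below that rate. This is an illustrative simplification, not the density-function model from the case study, and all reference figures are invented.

```python
# Sketch of an empirical validation criterion: the probability that a
# supplier estimate will be met, judged against a reference database of
# delivered unit costs (hours per COSMIC Function Point, CFP).
# The reference rates below are invented for illustration only.

def probability_estimate_is_met(reference_rates, size_cfp, estimated_hours):
    """Fraction of reference projects delivered at or below the
    supplier's implied unit cost (hours per CFP)."""
    implied_rate = estimated_hours / size_cfp
    met = sum(1 for r in reference_rates if r <= implied_rate)
    return met / len(reference_rates)

reference_rates = [4.2, 5.0, 5.8, 6.5, 7.1, 8.0, 9.4, 11.2]  # hours/CFP
p = probability_estimate_is_met(reference_rates, size_cfp=200,
                                estimated_hours=1300)  # implied 6.5 h/CFP
```

A customer could then set a validation criterion such as "reject estimates with less than, say, a 50% probability of being met", turning the reference database into an objective acceptance threshold.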
President of COSMIC, Dr. Francisco Valdés Souto is an Associate Professor in the Faculty of Sciences of the National Autonomous University of Mexico (UNAM). He holds a doctorate in Software Engineering, specializing in software measurement and estimation, from the École de Technologie Supérieure (ETS), and two master's degrees, from Mexico and France. With more than 20 years of experience in critical software development, he is the founder of SPINGERE, the first Mexican company specialized in software measurement, estimation and evaluation, as well as the founder of the Mexican Association of Software Metrics (AMMS).
He is the main promoter of formal software metrics in Mexico, promoting COSMIC as a National Standard. His research interests are software measurement and estimation applied to software project management (i.e. scope management and economics).
Dr. Francisco Valdés Souto
President of COSMIC
Measuring Microservice Architectures and Cloud Applications – Marcello Sgamma
In recent years, the fragmentation of software architectures has led from the monolithic approach to SOA architectures, and then to microservice architectures and their deployment in the cloud, often mixing development (PaaS) with the configuration of existing functionality (SaaS).
Considering the functional measurement of software complexity, the initial stages of FPA have gained importance: the type and scope of the measurement, the appropriate identification of boundaries, and the correct granularity of BFCs.
These aspects often have different relevance and impact in the various architectures.
From this perspective, some counting examples in cloud architectures will be presented, highlighting critical aspects of functional measurement and their relevance to the accuracy of the results.
I graduated in Information Sciences at the University of Pisa, with a diploma from the Scuola Normale Superiore. I chose industry rather than an academic career, joining the little jewel of research and technology that Tecsiel was in those years. For almost 30 years I followed the transitions of the company name, often changing “brand” but essentially staying with the same colleagues and often the same customers. I contributed to the construction of different systems, such as technical assistance, judiciary, governance and various e-commerce portals.
A quiet, almost monotonous career, with years dedicated to the family and to raising two wonderful children (who are now graduating). On reaching my 50s, a couple of slaps of fate that hit both me and my wife, the greatest fortune of my life, made us rediscover aspects of life that had been buried by the dust of time. We threw ourselves onto the Via Francigena, 360 km on foot from San Miniato to Rome, just me and her, without any training: 20 days full of joy. Then came new professional interests, the opportunity to teach at the university and, recently, a strong commitment to metrics. In short, sometimes life starts again at 50.
Senior Consultant, NTT Data
Cost and Data Issues facing Today’s Cybersecurity Analysts – Bob Hunt
Current studies imply that it is more costly to defend against a cybersecurity attack than to execute one. Studies also indicate that the typical breach is not detected until about 200 days after it occurs. We need to understand the scope of security: is it physical, computer, hardware, people, policy, or all of the above? This presentation evaluates three approaches to costing cybersecurity: economic, industrial engineering, and parametric. The strengths and weaknesses of each approach are discussed.
Bob Hunt has over 40 years of cost estimating and analysis experience. He received his Society of Cost Estimating and Analysis (SCEA) certification in 1991. He has served in senior technical and management positions at Galorath (President, Galorath Federal, and Vice President, Galorath), CALIBRE Systems (Vice President), CALIBRE Services (President), SAIC (Vice President), and the U.S. Army Cost and Economic Analysis Center (Chief of the Vehicles, Ammunition, and Electronics ICE Division), as well as the U.S. Army Directorate of Cost Analysis (Deputy Director for Automation and Modeling) and other Army analysis positions. He is the author of multiple technical papers published for ICEAA, IEEE, DCAS, and other professional journals. His published works include How to Estimate and Manage Large Federal Software Development Programs, Price To Win, Should Cost, Cost Estimating for Agile Software Development, and Applying Earned Value to Agile Software Development. He has served as a track chair and technical presenter for multiple SCEA/ISPA/ICEAA conferences.
Mr. Hunt has provided information technology systems and software program assessments, and IT and software cost estimating for commercial and federal clients including the U.S. Army, Customs and Border Patrol, the Department of Defense, NASA, and various commercial clients. In addition to his ICEAA Treasurer responsibilities, Mr. Hunt has held leadership positions and made technical presentations for the American Institute of Aeronautic and Astronautics (AIAA), served as the Chairman of the Economics Technical Subcommittee of the AIAA, the National Association of Environmental Professionals (NAEP), and the IEEE.
Mr. Hunt received his master's degree from Virginia State University and his Bachelor of Science degree from Virginia Commonwealth University, both in Mathematics Education.
Mr. Hunt has been married to his wife Barbara for 50 years. They have two children and three grandchildren. He is an active member of Regester Chapel United Methodist Church and has been an active participant in local public service. He was elected to the Tri-County/City (TCC) District of the Virginia Soil and Water Conservation Board (SWCB), serving as Chairman of that Board in 2009 and Treasurer in 2010/2011. He was also elected to the Stafford County School Board from the Aquia District. He was appointed to the Stafford County Utilities Commission and served as its Chairman in 2008 and 2009. He serves on the Agricultural/Purchase of Development Rights Commission of Stafford County. Mr. Hunt has served as President and Secretary of the Stafford Education Foundation, a 501(c)(3) IRS charity dedicated to supporting public education in Stafford County.
President, Galorath Federal
The 20M€ Tender Challenge, a Real Case Study of Estimation of a Very Big Agile Initiative – Manuel Buitrago
We received a request from a client that seemed impossible. They needed us to estimate the effort and the market-adjusted cost of a €20M+ agile development. This had to include a precise estimate of the tests to be carried out, and it needed to be delivered within 10 days! Doesn't that sound impossible to achieve?
This talk will be about what we did and how we successfully managed this challenge in just 10 days.
Since 2012, LedaMC has participated in several international IT conferences, presenting studies on function point price versus rate behaviour in outsourced IT contracts, and on the productivity versus quality of software development projects.
This time, LedaMC will showcase a new presentation for a case study that involves estimation, testing and agile initiatives.
LedaMC is a Spanish consulting company that helps big companies manage their IT development costs, software productivity and quality, and software vendor relationships. Thanks to its database of more than 60,000 projects, LedaMC is a reference in benchmarking studies and in estimation and productivity models.
Manuel Buitrago is a senior expert in software metrics, software functional size measurement, and software project and risk management. Manuel is also a member of the IFPUG Certification Committee and an IFPUG Certified Function Point Specialist. With over 15 years of experience, he has been dedicated to maximizing the efficiency of software development teams and projects for international clients all across Europe.
Manuel is currently working as service manager for a major Telco client, leading a software measurement team to ensure that contractual metrics agreements are met for vendors that provide both waterfall and agile development teams.
Agile Benchmarking in 2020 – Where Are We Today (and why you should care)? – Joe Schofield
At ISBSG's first IT Confidence Conference in 2013, I presented “Using Benchmarks to Accelerate Process Improvement.” This year's presentation will provide a high-level “refresh” of how organizations can benefit from benchmarking. A simple list of recommended do's and don'ts will be offered.
The primary focus of this presentation, however, will be on experience-based and research-based agile benchmarking data. The experience basis will incorporate client reactions to, and the limitations of, agile data. The research basis will trend industry-leading benchmarks covering the use of agile methods, practices and benefits since 2016, with an obvious focus on the most recent data. Insights and cautions will be shared for the attendee's consideration. Finally, recommendations on agile-related measures and metrics will be provided.
Since 2012: Joe continues to enable enterprise-wide agile transformation through executive coaching; organization training, certification, and practice; policy and process codification; ongoing improvement; organizational alignment; collaborative teaming; and value delivery.
Selected key roles: Joe Schofield is a Past President of the International Function Point Users Group. He retired from Sandia National Laboratories as a Distinguished Member of the Technical Staff after a 31-year career. During twelve of those years he served as the SEPG Chair for an organization of about 400 personnel, which was awarded SW-CMM® Level 3 in 2005. He continued as the migration lead to CMMI® Level 4 until his departure.
As an enabler and educator: Joe is an Authorized Training Partner with VMEdu and a Scrum Certified Trainer with SCRUMstudy™. He has facilitated ~200 teams in the areas of software specification, team building, and organizational planning using lean six sigma, business process reengineering, and JAD. Joe has taught over 100 college courses, 75 of those at graduate level. He was a certified instructor for the Introduction to the CMMI for most of the past decade. Joe has over 80 published books, papers, conference presentations and keynotes, including contributions to the books The IFPUG Guide to IT and Software Measurement (2012), IT Measurement, Certified Function Point Specialist Exam Guide, and The Economics of Software Quality. Joe has presented several worldwide webinars for the Software Best Practices Webinar Series sponsored by Computer Aid, Inc.
Lifelong learning: Joe holds eight agile-related certifications: SAFe Agilist 5.0, SCT™, SSMC™, SSPOC™, SMC™, SDC™, SPOC™, and SAMC™. He is also a Certified Software Quality Analyst and a Certified Software Measurement Specialist. Joe was a CMMI Institute certified instructor for the Introduction to the CMMI®, a Certified Function Point Counting Specialist, and a Lockheed Martin certified Lean Six Sigma Black Belt. He completed his master's degree in MIS at the University of Arizona in 1980.
Community & family: Joe was a licensed girls' middle-school basketball coach in the state of NM for 21 seasons, the last five undefeated over a span of 50 games. He served seven years volunteering in his church's children's choir, eventually being called to coordinate 150 children and 20 staff. Joe is a veteran, having served four years in the U.S. Air Force and six more in the Air National Guard. He was appointed to serve on the state of New Mexico's Professional Standards Commission. By “others” he is known as a husband, father, and grandfather.
Agile Enterprise Transformation Coach | International Speaker | Author | Independent Consultant | Certification Enabler