Analysis of the Factors that Affect Productivity of Enterprise Software – T. Furuyama
This presentation reports the results of an analysis clarifying the factors that affect the productivity of enterprise software projects, as follows.
(1) Productivity is inversely proportional to the fifth root of the test case density and of the fault density, respectively.
(2) Projects that require software with a high security or reliability level have low productivity, while projects whose objectives and priorities are very clear, projects where documentation tools are used, and projects where sufficient work space is provided all have high productivity.
(3) Projects managed by a skillful project manager have low productivity, because such a manager tries to detect as many faults as possible.
(4) If a project that requires software with a high level of security, reliability, or performance and efficiency also has poor working conditions, such as cramped work space or unclear role assignments and individual responsibilities, its productivity is remarkably low.
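Finding (1) describes a power law; a minimal numerical sketch follows, assuming the relation productivity ∝ (test case density)^(-1/5) · (fault density)^(-1/5), with a purely illustrative constant of proportionality and hypothetical density values.

```python
# Illustrative sketch of finding (1): productivity modelled as inversely
# proportional to the fifth root of test case density and of fault density.
# The constant k and the density values are hypothetical, chosen only to
# show the shape of the power law.

def productivity(test_case_density, fault_density, k=100.0):
    """k * d_test**(-1/5) * d_fault**(-1/5)."""
    return k * test_case_density ** (-1 / 5) * fault_density ** (-1 / 5)

# Doubling the test case density divides productivity by 2**(1/5),
# i.e. a roughly 13% drop rather than a halving.
p_before = productivity(10.0, 2.0)
p_after = productivity(20.0, 2.0)
print(round(p_before / p_after, 3))  # 1.149
```

The fifth root makes the relationship quite flat: large changes in test case or fault density produce comparatively modest changes in productivity.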
Why Can’t People Estimate: Estimation Bias and Strategic Mis-Estimation – D. Galorath
Many people view an estimate as a quick guess that no one believes anyhow. But producing a viable estimate is core to project success as well as ROI determination and other decision making.
In decades of studying the art and science of estimating it has become apparent that:
- most people don’t like to and/or don’t know how to estimate;
- those who do estimate are almost always wildly optimistic, full of unintentional bias;
- strategic mis-estimating, when it occurs, deliberately produces misleading estimates.
However, it is also obvious that viable estimates can make projects successful, make outsourcing more cost-effective, and help businesses make the most informed decisions.
That is why metrics and models are essential to organizations: they provide the tempering “outside view” of reality recommended by Daniel Kahneman in his Nobel Prize-winning work on estimation bias and strategic mis-estimation.
Sizing for estimating, measurement and benchmarking – C. Green
This presentation discusses how sizing can be a normalising factor for estimating, measurement, and benchmarking. It introduces the need for a size measure covering both functional and non-functional size, utilising the IFPUG Function Point Analysis (FPA) method as well as the Software Non-functional Assessment Process (SNAP).
The presentation follows the path from estimating to measurement for projects, and on to benchmarking for organisations, utilising industry data as the competitive comparison.
The presentation touches on issues with requirements and examines how FPA and SNAP can be used to address them. It examines accuracy levels of size assessment for estimating and gives a high-level view of data other than size that should be collected; the focus, however, is on sizing as a measure, not on a full measurement programme.
Measuring Tests using COSMIC – T. Fehlmann & E. Kranich
Information and Communication Technology (ICT) is not limited to software development, mobile apps and ICT service management, but percolates into all kinds of products with the so-called Internet of Things.
ICT depends on software, where defects are common. Developing software is knowledge acquisition, not civil engineering; thus knowledge might be missing, leading to defects and failures to perform. In turn, operating ICT products involves connecting ICT services with human interaction, and is error-prone as well.
There is much value in delivering software without defects. However, up to now there exists no agreed method of measuring defects in ICT. The UML sequence diagram is a software model that describes data movements between actors and objects and allows for automated measurement using ISO/IEC 19761 COSMIC. Can we also use it for defect measurement, allowing standard Six Sigma techniques to be applied to ICT by measuring both functional size and defect density in the same model? This allows sizing of functionality and defects even if no code is available. ISO/IEC 19761 measurements are linear, and thus fit sprints in agile development as well as the use of statistical tools from Six Sigma.
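The COSMIC measurement idea above can be sketched in a few lines. This is a simplified illustration only: it assumes a sequence diagram has already been reduced to a list of data movement types, and the example process is hypothetical.

```python
# Minimal sketch of COSMIC (ISO/IEC 19761) sizing over data movements,
# assuming a simplified representation of a UML sequence diagram as a list
# of movement types. COSMIC assigns 1 CFP to each Entry (E), Exit (X),
# Read (R) and Write (W). The example process below is hypothetical.

COSMIC_MOVEMENTS = {"E", "X", "R", "W"}

def cosmic_size(movements):
    """Functional size in CFP: one point per data movement."""
    return sum(1 for m in movements if m in COSMIC_MOVEMENTS)

def defect_density(defects, movements):
    """Defects per CFP, so size and defects share the same model."""
    return defects / cosmic_size(movements)

# Hypothetical functional process: a query enters (E), stored data is
# read (R), and the result exits (X) -- 3 CFP in total.
process = ["E", "R", "X"]
print(cosmic_size(process))        # 3
print(defect_density(1, process))  # one defect over 3 CFP
```

Because each data movement contributes exactly one CFP, sizes add linearly across functional processes, which is what makes the measure compatible with sprint-by-sprint accounting.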
New topics of “IPA/SEC White Paper 2014-2015 on Software Development projects in Japan” – M. Saeki
By analyzing historical data from the software industry, it is possible to improve software productivity and quality through benchmarking and informed management decisions about software development practices.
The Software Reliability Enhancement Center (SEC) of the Information-technology Promotion Agency, Japan (IPA) continuously collects new data from software development projects, with the co-operation of more than twenty companies, and periodically publishes the results in the “IPA/SEC White Paper on Software Development projects in Japan”.
The White Papers report analyses of software development and maintenance projects in the Japanese IT industry, quantitatively demonstrating technological competence concerning software productivity and quality. IPA/SEC will publish the “IPA/SEC White Paper 2014-2015 on Software Development projects in Japan” and its addendum this autumn. The quantitative analyses are backed by a data set of 3,541 projects, and the new edition will contain more than 10 new analyses concerning software productivity and quality.
In this presentation, new analyses about the following topics will be shown:
(1) The relationship among function size, product size, and effort in each development phase.
(2) Productivity variation factors – Productivity (for example, development effort per function point) varies due to reliability requirement grades, number of pages of design documents per function point, and number of test cases per function point.
(3) Reliability variation factors – Reliability (for example, number of identified defects in service per function point) varies due to reliability requirement grades and maturity level of development organization (for example, quality assurance system).
Towards an Early Software Effort Estimation Based on the NESMA Method (Estimated FP) – S. Ohiwa, T. Oshino, S. Kusumoto & K. Matsumoto
The function point (FP) is a software size metric that is widely used in business application software development. Since FPs measure the functional requirements, the measured software size remains constant regardless of the programming language, design technology, or development skills involved. In addition, when planning development projects, FP measurement can be applied early in the development process. A number of FP methods have been proposed.
The International Function Point Users Group (IFPUG) method and the COSMIC method have been widely used in software organizations.
FP is considered one of the most promising approaches to software size measurement, but it has nevertheless not spread throughout the Japanese software industry. One of the reasons hindering the introduction of FPs into software organizations is that function point counting requires a lot of effort. According to the IPA/SEC White Paper on Software Development Projects in Japan 2010-2011, the penetration rate of FP in Japanese software development companies is only 43.8 percent. Also, the survey on Information System User Companies by JUAS disclosed that the penetration rate of FP in Japanese information system user companies is less than 30 percent.
NESMA provides some early function point counting methods. One of them is the estimated function point counting method (called NESMA EFP). In EFP, a counter first determines all functions of all function types (ILF, EIF, EI, EO, EQ) in the target specifications. Then, the counter rates the complexity of every data function (ILF, EIF) as Low and every transactional function (EI, EO, EQ) as Average, and calculates the total unadjusted function point count. The counting effort is quite small in comparison with the IFPUG method, but there are few articles that show the usefulness of the NESMA EFP based on actual software project data, especially for its application to software cost prediction.
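The EFP counting procedure described above can be sketched directly, assuming the standard IFPUG unadjusted weights for the fixed ratings (Low for data functions, Average for transactional functions). The function inventory in the example is hypothetical.

```python
# Sketch of the NESMA estimated FP (EFP) count described above, assuming
# standard IFPUG unadjusted weights: data functions rated Low (ILF = 7,
# EIF = 5) and transactional functions rated Average (EI = 4, EO = 5,
# EQ = 4). The function inventory below is hypothetical.

EFP_WEIGHTS = {"ILF": 7, "EIF": 5, "EI": 4, "EO": 5, "EQ": 4}

def nesma_efp(function_counts):
    """Total unadjusted FP count from a {function_type: count} inventory."""
    return sum(EFP_WEIGHTS[ftype] * n for ftype, n in function_counts.items())

# Hypothetical system: 3 ILFs, 1 EIF, 10 EIs, 4 EOs, 2 EQs.
total = nesma_efp({"ILF": 3, "EIF": 1, "EI": 10, "EO": 4, "EQ": 2})
print(total)  # 3*7 + 1*5 + 10*4 + 4*5 + 2*4 = 94
```

The saving over full IFPUG counting comes from skipping the per-function complexity assessment: only the function inventory needs to be identified.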
This paper aims to evaluate the validity of using the NESMA EFP as an alternative to the IFPUG FP in the early estimation of software development effort. In the evaluation, we used the software development data of 36 projects extracted from a software repository that maintains 115 data items of 512 software development projects collected by the Economic Research Association from 2008 through 2012. Common characteristics of these 36 projects are as follows:
• Software was newly developed.
• Software development includes the following five software-specific low-level processes: architectural design, detailed design, construction, integration, and qualification testing.
• Actual FP and total amount of effort are available.
• Actual functional size of each function type in all functions is available.
• The function types for each function have realistic functional sizes. For example, the average functional size of ILF of each function is from 7 to 15.
The main results of the empirical evaluation, and their contributions to software development, are as follows:
(1) There is an extremely high correlation between the IFPUG FP count and the NESMA EFP count
Figure 1 is a scatter plot showing the relationship between the IFPUG FP count and the NESMA EFP count in 36 software development projects. The coefficient of determination between these two FP counts is 0.970.
This result is consistent with the previous empirical evaluation by NESMA reported in the document “Early Function Point Counting.” In the NESMA evaluation, the upper bound of the FP count was about 3,000, whereas it is about 30,000 in this evaluation. This implies that the NESMA EFP can be used more widely than before as an alternative to the IFPUG FP in software development projects in Japan. The NESMA EFP may also help individuals and companies who are considering whether to use the IFPUG FP in their software development projects to evaluate the feasibility of applying it.
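The coefficient of determination quoted for Figure 1 is the standard R² of a least-squares linear fit, which can be computed from paired counts as sketched below. The sample data are hypothetical, since the 36-project data set is not reproduced here.

```python
# Sketch of computing the coefficient of determination (R^2) between two
# paired size measures, e.g. IFPUG FP vs NESMA EFP counts per project.
# The sample data are hypothetical; the paper's 36-project data set is
# not reproduced here.

def r_squared(xs, ys):
    """R^2 of a least-squares linear fit of ys on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return (sxy * sxy) / (sxx * syy)

# Counts that are exactly proportional give R^2 == 1.0; real project data
# scatter around the fit, giving values such as the reported 0.970.
fp = [100, 250, 400, 800]
efp = [110, 275, 440, 880]
print(r_squared(fp, efp))  # 1.0
```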
(2) There is a high correlation between the NESMA EFP count and the software development effort
Figure 2 is a scatter plot showing the relationship between the NESMA EFP count and the total amount of software development effort in the 36 software development projects. The coefficient of determination between the EFP count and the effort is 0.823. This implies that the NESMA EFP may be usable for predicting software development effort in the early stages of a software development project.
Early software effort estimation is one of the most important issues in software project management, so this result also encourages the many individuals and companies who are considering whether to use the IFPUG FP in their software development projects. Although the coefficient is high, further discussion and data analysis are needed to eliminate or adjust some outliers and improve the accuracy of effort prediction with the NESMA EFP.
Software Rates vs Price of Function Points: A cost analysis – R.D. Fernández, R. De La Fuente & D. Castelo
Implementing productivity models helps in understanding Software Development Economics, which up to now is not entirely clear. Most organizations believe that the only way to achieve improvements is to lower software rates. With a background of three years of statistical data from large multinational clients, LEDAmc presented at the UKSMA 2012 Conference a study showing how the relationship between software rates and cost per function point differs from what might be expected, sometimes considerably. The experience gained by LEDAmc through the implementation of software productivity models over the last two years brings new and updated insights to this study, which will be presented during the conference.
Beyond the Statistical Average: The KISIS Principle (Keeping it Simple is Stupid) – J. Ogilvie
Based on the speaker’s experience negotiating and managing many outsourcing contracts using Function Points as a Key Performance Indicator, this presentation describes the pitfalls that can be experienced if one takes too simplistic a view of the meaning and use of Function Point data and suggests ways in which they may be avoided.
Starting with a typical outsourcing scenario, and using ISBSG project data, techniques to improve the effectiveness of a Function Point program are demonstrated.
Particular emphasis is placed on the importance of setting baselines appropriate to the environment to be measured, and on deciding how to determine whether agreed performance targets have been achieved.
The use of statistical analysis beyond simple averages is demonstrated, enabling a more sophisticated and pragmatic interpretation of measurement data. The view that a little statistical analysis can actually uncover “lies and damn lies” is offered.
Finally, a template for design of a successful Function Point Program is presented.
New Look at Project Management Triangle – P. Forselius
Almost every project management book introduces the project management triangle, and almost every certified Project Manager thinks that she or he understands the relationships between the elements of the triangle correctly: “The larger the scope, the more cost and time needed”. However, especially in the ICT industry, the majority of projects overrun both budget and schedule, and deliver less functionality than expected. In this presentation we take another look at the project management triangle, to learn how to get more outcomes while spending less money and time.