
Function Points: Are They Really the Right Unit of Measure for Software Sizing?

Written By: Murali Chemuturi
Published On: December 15, 2008

Software estimation is one of the first steps in software project planning. Software estimation includes estimation of software size, development effort, cost, and schedule. Function points have become the most widely used unit of measure for software size measurement.

Function points (FPs) were developed at IBM by Allan J. Albrecht in the late 1970s to measure the size of software proposed to be developed. It appears that FP methodology was oriented toward the development of software on IBM platforms using COBOL for batch processing. From then until end-user computing (EUC) and graphical user interface (GUI) software came along, FPs were the right unit of measure for sizing software.

That was a good two decades ago, a very long time in the computer field indeed. Since then, FP methodology has become so popular that there is now an association called the International Function Point Users Group (IFPUG). IFPUG champions FP methodology, and it conducts examinations that offer adherents of FPs a Certified Function Point Specialist (CFPS) certificate.

Software development progressed from waterfall-style batch programs to event-driven GUI development. Flat files were replaced by database management system tables. Except for IBM's DB2, relational database management systems (RDBMSs) do not use the term "logical file." These developments made FPs difficult to interpret for software sizing.

Now Web applications are fast replacing client-server applications, but Web applications have brought with them quite a few complications of their own. For example, a hard-coded error message is to be considered a data element type (DET) in FP methodology. But developers are not able to digest that their confirmation screen, proudly displaying a "Thank You" message on an HTML Web page, is merely a DET, as considerable effort was spent to generate that message programmatically.

The discussions that take place on the IFPUG Bulletin Board bring to light the amount of confusion software developers face when mapping various aspects of their software to FP methodology. Given the changes the landscape of the software industry has undergone since FP methodology’s introduction nearly 30 years ago, the question arises as to whether FPs are still relevant for sizing software today.

Following is an examination of some of the aspects of FP methodology to try to answer this question. As you will see, what follows is mostly criticism of FPs. If you disagree with my discussion and inferences, please feel free to e-mail me and vent your anger—you are welcome.

What Is an FP?

Originally, an FP was considered a user-identifiable, distinct function, but this is a paradox. If this were true of FPs, then FP methodology should not include the Internal Logical File (ILF) and External Interface File (EIF) in its calculations, because a user who is not an expert in software development would not be able to identify these two FP components.

External Input (EI), External Output (EO), or External Query (EQ)?

Today, most output is directed to the screen; even reports are displayed on screen, with a facility to print them if desired. EI and EO are elementary processes in which data crosses the system boundary, either into or out of the system. EQ, also known as an external inquiry, explicitly involves both an input and an output. Since today's applications are event-driven, there is always an input associated with every process. So does that mean EI and EO processes are inquiries?

Add, Modify, Delete, and Inquire

In the past, these four functions needed different programs, and hence were counted as four distinct FPs. Now, given improvements in programming languages and RDBMS technology, all four are made possible through one single screen. Should we still count four separate FP transactions?
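
To make the point concrete, here is a minimal sketch (in Python, with an invented "customers" table and handler name) of how a single event-driven screen can cover all four maintenance functions that once required four separate programs:

```python
import sqlite3

# One hypothetical screen/handler covering add, modify, delete, and inquire
# against a single "customers" table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def handle_customer_screen(action, record_id=None, name=None):
    """Dispatch the four classic maintenance functions from one screen."""
    if action == "add":
        conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
    elif action == "modify":
        conn.execute("UPDATE customers SET name = ? WHERE id = ?", (name, record_id))
    elif action == "delete":
        conn.execute("DELETE FROM customers WHERE id = ?", (record_id,))
    elif action == "inquire":
        return conn.execute("SELECT id, name FROM customers WHERE id = ?",
                            (record_id,)).fetchone()
    conn.commit()

handle_customer_screen("add", name="Acme Ltd.")
print(handle_customer_screen("inquire", record_id=1))   # (1, 'Acme Ltd.')
```

Whether this one screen should be counted as one transaction or as four is exactly the kind of judgment call FP counters now have to make.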

External Input

In the good old COBOL days, a screen was an EI. Now with GUI, a screen is not just an EI; it could be an EQ or an EO too. So there is confusion as to what portion of the screen is EI and what portion is an EQ or an EO.

External Output

Again, a report directed to the printer or screen was considered an EO, and all columns were aligned through programs. Now we have report writers that take care of most of these programmer tasks. The programmer still needs to include some code to call the report file and pass the necessary parameters to it, but this additional code is just a few lines long: the programmer no longer writes code to derive data, align report columns, or compute group and grand totals. He simply designs the report, which simplifies data derivation to a great extent. The effort needed to produce reports has diminished so much that a simple multi-column report with group totals, drawing on a couple of tables, can be produced in half an hour with the help of a report writer. How valid, then, are the normalization and weight rules that FP methodology applies to EOs?

ILF and EIF

With COBOL, a file had to be either described in detail in every program that used it or brought in from a library. A change in a file caused a change in every program that used it. A mistake in the file description in the library caused program errors, and debugging the programs was an onerous task. Maintaining data files was also tedious. With RDBMSs now providing an interactive user interface for defining and maintaining tables, along with automatic management of indexes, it has become so easy to maintain data that most present-day programmers never use indexed sequential access method (ISAM) files, inverted files, sequential files, and so on. I have met quite a few present-day programmers with four to five years of experience who have never heard of flat files. Now, is assigning a high weight to ILFs and EIFs justified anymore?

Relevance of Complexity

Again, back in the good old COBOL days, computer hardware had limited capability, and computers had a limited number of peripherals, which restricted how many files could be open simultaneously. Now, with hard disks and RDBMSs, any number of files can be opened concurrently. Computers' RAM was also limited, which made it necessary to limit the size of a program and to use variables inside the program with great care. Now that massive amounts of RAM are available, these restrictions are no longer relevant.

Also in the past, programming was a complex activity because you had to keep track of the variables you declared, avoid declaring unnecessary ones, and insert statements to release the memory used by variables that were no longer required. When more files had to be opened concurrently, you had to break the functionality up into more programs. But today this is no longer the case, and therefore this kind of complexity no longer applies.

Complexity and Software Size

FP methodology uses three levels of complexity (low, average, and high) and normalizes each FP transaction accordingly. A simple transaction results in fewer FPs; as complexity increases, the number of FPs per transaction increases. There are two concerns with this:

  1. The term "complexity" itself is a misnomer here. We say something is complex if we do not understand it or if we are not trained to do it. But for persons trained in software development, how can the term complexity logically apply? Perhaps the term "density" is more appropriate, as it implies that more components are to be packed on a single screen.

  2. Complexity does not increase the size; it only affects the rate of achievement. For example, a mile is a mile irrespective of the terrain, paved or hilly. We can walk faster on a paved road than on hilly terrain, but the distance remains one mile. FP methodology, however, increases the size.

To me, increasing software size based on complexity seems untenable.
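
For reference, the standard IFPUG weight matrix is sketched below, with an invented sample inventory, to show how the same transaction contributes more FPs purely because of its rated complexity:

```python
# Standard IFPUG weights per function type and complexity level
# (low / average / high); the sample inventory below is invented.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},
    "EO":  {"low": 4, "average": 5, "high": 7},
    "EQ":  {"low": 3, "average": 4, "high": 6},
    "ILF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7, "high": 10},
}

def unadjusted_fp(inventory):
    """Sum weighted counts: the same screen counts 3, 4, or 6 FPs as an EI
    depending only on its rated complexity."""
    return sum(count * WEIGHTS[ftype][level]
               for (ftype, level), count in inventory.items())

sample = {("EI", "low"): 5, ("EI", "high"): 2,
          ("EO", "average"): 3, ("ILF", "average"): 4}
print(unadjusted_fp(sample))  # 5*3 + 2*6 + 3*5 + 4*10 = 82
```

A "high-complexity" EI is thus credited with twice the size of a "low-complexity" one, even though, by the argument above, only the rate of delivery should differ.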

General System Characteristics and Value Adjustment Factor (VAF)

FP methodology makes use of 14 general system characteristics (GSCs) to compute a VAF, which is then used to adjust the size of the software. My first criticism of this is that it is not right to use the VAF, or any other factor, to adjust software size, because size does not change; the VAF should only be used to influence the rate of achievement (productivity). My second criticism is that most of these characteristics are not right for today's environment; a massive revision is required if they are to be used at all.
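
For concreteness, the standard VAF computation is sketched below (the GSC ratings here are invented); note that the factor scales the size itself, which is precisely the objection above:

```python
def value_adjustment_factor(gsc_ratings):
    """Standard IFPUG formula: each of the 14 GSCs is rated 0-5 and
    VAF = 0.65 + 0.01 * (sum of ratings), so size is scaled by +/- 35%."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    return 0.65 + 0.01 * sum(gsc_ratings)

unadjusted_size = 82                    # e.g., the count from the previous sketch
ratings = [3] * 14                      # invented ratings: all "average influence"
vaf = value_adjustment_factor(ratings)  # 0.65 + 0.42 = 1.07
print(round(unadjusted_size * vaf, 2))  # adjusted size: 87.74 FPs
```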

Conversion of Software Size to Effort

FP practitioners suggest using a single productivity figure to convert FPs to effort. For example, let's say 10 hours per FP are needed and the size of a software product is estimated to be 100 FPs. The effort needed, then, is 100 multiplied by 10, or 1,000 person-hours. Software development work, however, is classified into four major classes, namely 1) requirements analysis, 2) software design, 3) coding, and 4) testing. These four classes differ significantly in nature from one another, and productivity varies from class to class. Therefore, it stands to reason that four different productivity figures ought to be used to obtain the effort required of the four different classes of software engineers.
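
A small sketch makes the contrast concrete; the blended rate comes from the example above, while the phase-wise rates are invented purely for illustration:

```python
SIZE_FP = 100

# Single blended productivity figure (hours per FP), as FP practitioners suggest.
BLENDED_RATE = 10
print(SIZE_FP * BLENDED_RATE)           # 1000 person-hours

# Phase-wise rates (hours per FP for each class of work); the figures are
# invented, and the point is only that each class gets its own rate.
PHASE_RATES = {
    "requirements analysis": 1.5,
    "software design": 2.5,
    "coding": 4.0,
    "testing": 2.0,
}
phase_effort = {phase: SIZE_FP * rate for phase, rate in PHASE_RATES.items()}
print(phase_effort)                     # person-hours per class of engineer
print(sum(phase_effort.values()))       # 1000.0 in this example, now broken down
```

The total may or may not match the blended figure; the gain is that each class of engineer gets an effort figure grounded in its own productivity.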

Conclusion

When it was first developed, FP methodology filled a vacuum; it was an excellent measure of software size. But FP methodology has become woefully out of date, mainly due to improvements in hardware and software development tools. Present-day software development needs a totally different paradigm for estimating the size of software. Other methodologies and units of measure are more relevant for today's environment. Examples include use case points (UCPs), which are based on unified modeling language (UML) methodology; object points (OPs), which simplify the counting and do not make use of any environmental factors to influence the software size; and software size units (SSUs), which use neither complexity nor environmental factors to influence software size.

About the Author

Murali Chemuturi is a Fellow of the Indian Institution of Industrial Engineering and a senior member of the Computer Society of India. He is a veteran in the software development industry and is presently leading Chemuturi Consultants, which provides consultancy in software process quality and training. Your feedback is greatly appreciated—please e-mail the author at murali@chemuturi.com.

 