Software estimation, namely the estimation of software size, effort, cost, and schedule (duration), causes frequent, animated discussions among the community of software estimators. Normally, it is the senior project leaders and project managers who carry out this activity.
Software development consists of a few disparate activities needing specialized knowledge, namely, in requirements gathering, analysis, and management; software design, coding, and independent verification and validation (IV&V); and rollout, deployment, installation, and commissioning. Each of these activities is carried out by a differently skilled person who uses various tools that range in complexity.
Productivity is defined as the rate of output for given inputs, or equivalently as the ratio of output to input. It is typically expressed as "so many units of output per day" or "so many units of output per hour."
In the context of this article, productivity will refer to the rate of producing a unit of output using a set of inputs in a defined time unit.
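As a minimal sketch of this definition, productivity as a rate can be computed directly from output and time. The function name and figures below are illustrative, not taken from any study:

```python
def productivity(units_of_output: float, hours_worked: float) -> float:
    """Productivity as the ratio of output to input (units per hour)."""
    return units_of_output / hours_worked

# Illustrative figures only: 12 units produced in an 8-hour day.
rate_per_hour = productivity(12, 8)   # 1.5 units per hour
rate_per_day = rate_per_hour * 8      # 12.0 units per day
```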
Concerns with Software Size Estimation
The present scenario in the software industry is that there are multiple units of software size measures. These units of measure include function points, use case points (UCP), object points, feature points, Internet points, test points, mark II function point analysis (FPA), lines of code (LOC), etc. There is no accepted way of converting software size from one unit of measure into another.
One odd aspect of these measures is that the size of the software is adjusted (increased or decreased) based on such factors as complexity, for example. Yet size is something that does not change. For example, a pound of cheese does not become heavier or lighter if the person weighing it is less or more experienced in weighing things, or if the scale is a mechanical one or an electronic one, right? As another example, the distance of one mile remains one mile whether a young person is walking it or an old person is walking it, or whether the mile runs along a freeway or a busy city street.
But the rate at which results are achieved changes. Taking the examples above, the old person will likely complete a one-mile walk more slowly than the younger person will, and one can travel more quickly on a freeway than on a busy city street.
Further, there is no agreement on how to count LOC. Should one count logical statements or physical statements? And how does one treat inline documentation? Should inline documentation be counted, or not?
These are some of the major issues with size measurement.
Concerns with Productivity
The software development world is obsessed with giving a single, empirical, all-activities-encompassing figure for productivity.
Attempts have been made to specify the productivity as 10 person-hours per function point, but with a rider that the person-hours per function point could vary from 2 to 135 depending on the product size, team size, and other factors. To "specify the productivity" here means to assign a number representing the effort in person-hours needed to develop one unit of software size, so that software size in function points can be converted into software development effort in person-hours. Sometimes ranges are given instead, such as fifteen to thirty hours per UCP. At other times, empirical formulae are worked out depending on a set of factors, as in the constructive cost model (COCOMO).
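The conversion just described can be sketched in a few lines. The rates used are the illustrative figures quoted above (10 person-hours per function point, 15 to 30 hours per UCP), not calibrated values:

```python
def effort_person_hours(size: float, rate: float) -> float:
    """Convert software size into development effort.

    size: size in some unit of measure (e.g. function points or UCP)
    rate: person-hours of effort per unit of size
    """
    return size * rate

# Single-figure conversion: 200 function points at 10 person-hours each.
effort = effort_person_hours(200, 10)        # 2000.0 person-hours

# Range-based conversion: 50 use case points at 15 to 30 hours each.
low = effort_person_hours(50, 15)            # 750.0 person-hours
high = effort_person_hours(50, 30)           # 1500.0 person-hours
```

The wide spread between the low and high figures is exactly the problem the article raises: a single all-encompassing rate can only yield a rough estimate.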
One of the concerns with these productivity figures is that they lump all activities—requirements analysis, design, review, testing, and so forth—into a single measure. Yet the skills required for these activities are different, as are the tools used, the inputs, and the outputs. Lumping them all together under the heading of "software development" and giving one single figure of productivity at best can only result in a very rough estimate—never an accurate one.
The Productivity Path
Software development includes the following activities:
- pre-project activities, including a feasibility study, financial budgeting, and approvals for the project (that is, financial and technical approval, and "project go-ahead")
- project start-up activities, such as identifying the project manager, allocating the project team, and setting up the development environment; project planning; setting up various protocols, namely, service level agreements and progress reporting formalities; and project-related training
- software engineering activities, namely, user requirements analysis; software requirements analysis; software design, coding, and unit testing; the various types of testing—integration, functional, negative, system, and acceptance; and preparation of the build and documentation
- rollout activities, including installing the hardware and system software; setting up the database; installing the application software; performing pilot runs; training users; performing parallel runs; and rollover
- project cleanup activities, including documenting good and bad practices; analyzing the project (project postmortem); archiving records; releasing resources; releasing the project manager; and initiating software maintenance
Now, when talking of industry "rules of thumb" (accepted, commonsense procedures) for productivity, it is not clear how many of the above activities are included in the productivity figure. Interestingly, no one would stake his or her life on such a productivity figure, an industry rule of thumb that is simply floating around.
Look at the nature of these activities:
- Requirements analysis—understanding and documenting what the user needs, wants, and expects so that the software designers fully understand and can design a system in strict conformance with the stated requirements. Dependence on external factors is high.
- Software design—considering the alternatives in hardware, system software, and development platforms; arriving at the optimal choice for each; and designing an architecture that will meet the stated requirements and fulfill customer expectations. The architecture must be compatible with the current technologies, and the design documented in such a way that the programmers understand and deliver a product that conforms to the original specifications of the user. Quite a few alternatives exist, and with software design being a key and strategic activity, errors here have serious consequences.
- Coding—developing software code that conforms to the design and that is as failure-free as possible (it is so easy to unintentionally leave "bugs" inside).
- Code review—walking through code written by another programmer, deciphering its functionality, and trying to predict the possible errors that the customer might come across when using the software.
- Testing—trying to unearth all the defects that could be left in the software. However, it is an accepted fact within the industry that finding 100 percent of the defects is impossible. Furthermore, testing 100 percent of the software is an impractical endeavor.
Now, with such variance in the nature of these activities, it is obvious that the productivity for all of them is not uniform (that is, not the same figure). The pace of work differs for each of these activities.
These activities do not depend on the amount of software code produced, but rather on other factors, such as
- requirements, which depend on the efficiency and clarity of their source (either users or documentation);
- design, which depends on the complexity of processing, alternatives available, and constraints within which the functionality is to be realized;
- code review, which depends on the style of coding;
- testing, which depends on how well the code is written (the more errors that are left, the more time it takes to test and to re-test); and
- the coding itself, which depends on the quality of design.
Therefore, we need to have separate productivity figures for each of these activities.
Drawing a parallel from the manufacturing industry—punching holes in a sheet—the following are the activities to be carried out: 1) machine set-up; 2) tool set-up; 3) loading the job; 4) punching the hole; 5) deburring the hole; 6) clean up; and 7) delivering the sheet for the next operation.
If multiple holes are punched, "per hole" time comes down, as set-up activities are one-time activities.
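The effect of the one-time set-up can be shown with a little arithmetic; the minute values below are invented purely for illustration:

```python
def per_hole_time(setup_minutes: float, punch_minutes: float, holes: int) -> float:
    """Average time per hole when set-up is incurred once for the whole batch."""
    return setup_minutes / holes + punch_minutes

# Invented figures: 30 minutes of machine/tool set-up, 2 minutes per hole.
single = per_hole_time(30, 2, 1)    # 32.0 minutes for a single hole
batch = per_hole_time(30, 2, 10)    # 5.0 minutes per hole over ten holes
```

As the batch grows, the set-up time is amortized and the per-hole time approaches the bare punching time.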
Therefore, if we look at coding a unit, for example, the activities to be carried out could be 1) receive the instructions; 2) study the design document; 3) code the unit; 4) test and debug the unit for functionality; 5) test and debug the unit for unintended usage; 6) delete the trash code from the unit; 7) regression-test the unit; and 8) release the unit for the next step.
Similarly, we can come up with micro-activities for each software development phase.
Productivity Figures: Empirical or Study-based?
Each of the above activities has a different rate of achievement. Standard times for each of these activities must be established. Once this is done, work study techniques, such as synthesis or analytical estimation, should be used to arrive at the overall time to complete the job.
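Synthesis, as used in work study, builds the total job time from the standard times of its micro-activities. A sketch, using the coding-a-unit micro-activities listed earlier with wholly hypothetical standard times:

```python
# Hypothetical standard times (hours) for the micro-activities of coding a unit.
# These figures are invented for illustration, not measured values.
STANDARD_TIMES = {
    "receive the instructions":            0.5,
    "study the design document":           2.0,
    "code the unit":                       6.0,
    "test and debug for functionality":    3.0,
    "test and debug for unintended usage": 2.0,
    "delete trash code":                   0.5,
    "regression-test the unit":            1.0,
    "release for the next step":           0.5,
}

def synthesized_time(standard_times: dict) -> float:
    """Overall job time synthesized from micro-activity standard times."""
    return sum(standard_times.values())

total = synthesized_time(STANDARD_TIMES)   # 15.5 hours for one unit
```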
To answer the question posed above: whether time study techniques are used to arrive at individual productivity figures or empirical data is gathered instead, software development must be acknowledged as neither totally mechanical nor totally creative in nature. It is impractical to time activities that have a creative component, and work study methods allow for this aspect of software development. A good deal of research is being conducted on "white-collar productivity," and methods to "time" software development may become available in the future. Presently, empirical data seems to be the solution of choice.
Where do we get empirical data? One way is through time studies using industrial engineering techniques. Another way, which is easier and more reliable, is from historic data from time sheets.
Most of the time sheet software available and used in the industry is oriented toward payroll and billing. It does not capture data at the micro-level needed to arrive at productivity data. Most time sheets capture data at two or three levels in addition to date and time: a project is always at the first level, and the second and third levels can be occupied by module and component, component and activity, or a similar combination. To be useful for productivity measurement, the time sheet needs to capture, in addition to the date and time worked of each employee, data at five levels, namely, project, module, component, development phase, and the task accomplished. Data would then be available to establish productivity figures empirically in a realistic manner.
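The five-level capture described above might be modeled as follows; the field names and records are illustrative, not drawn from any real time sheet product:

```python
from collections import defaultdict

# Each entry records the five levels -- project, module, component,
# development phase, task -- plus the hours worked (illustrative data).
entries = [
    ("PrjA", "Billing", "Invoice", "coding", "code the unit", 6.0),
    ("PrjA", "Billing", "Invoice", "coding", "unit-test",     3.0),
    ("PrjA", "Billing", "Reports", "design", "detail design", 4.0),
]

# Aggregate hours by development phase and task: this is the micro-level
# data from which per-activity productivity figures can be established.
hours = defaultdict(float)
for project, module, component, phase, task, hrs in entries:
    hours[(phase, task)] += hrs

for key, total in sorted(hours.items()):
    print(key, total)
```

With the phase and task dimensions present, actual hours can be matched against output per micro-activity rather than against the project as a whole.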
The present focus for all activities of software development is on macro-productivity. This needs to change, and we need to shift our focus from macro- to micro-productivity for all activities. The way to achieve this shift is to modify our time sheets and the depth of data they collect.
Benefits of studying productivity at the micro-level include the following:
- Better predictability of software development
- Better quality estimates for pricing assistance during project acquisition and sanction stages
- More precise target-setting while assigning work, which leads to better morale among the software developers
- More accurate cost estimation
It is important to understand the difference between the terms productivity and capacity. Productivity is the rate of achievement for a micro-activity of human endeavor; capacity is the rate of achievement for a facility (factory, organization, etc.), and multiple activities are included in the figure denoting the capacity. For the purpose of software estimation, the focus must shift from macro-productivity (capacity) to productivity (for micro-activity). Empirical data-gathering is preferred in order to arrive at productivity figures for various software development activities, as time-and-motion study techniques cannot provide satisfactory results where some component of creativity exists (as it does in software development). One way of gathering the empirical data is by improving the time sheet. This is the way forward for computing micro-level productivity figures.
About the Authors
Murali Chemuturi is a fellow of Industrial Engineering with the Indian Institution of Industrial Engineering. His career has spanned over thirty years of experience with professional organizations, including ECIL, TCS, Metamor, and Satyam. He worked initially in manufacturing, and then in IT. He is presently leading Chemuturi Consultants, focusing on software products for the software development industry. He has conducted a number of in-company training programs for software project management and software estimation. He can be reached at email@example.com.
Sarada Kaligotla has completed her Masters in computer applications, and is a certified project management professional (PMP) from the Project Management Institute (PMI) and a certified software quality analyst (CSQA) from the Quality Assurance Institute (QAI). She presently works for Blue Cross Blue Shield of Massachusetts. Kaligotla has six years of experience in the software industry, including development and project management experience. She can be reached at firstname.lastname@example.org.