
Program Testing Methodology Part One: Preparing for Testing

Written By: Dr. Edward J. Moskal
Published On: August 6, 2004

Introduction

Before any system can be completely implemented on a production computer, the analysts and programmers working on it must be able to state unequivocally that the programs work exactly as they were designed to work and that any errors or mistakes in the data to be processed by the system will be handled properly. Because testing is unpredictable, both in its results and, in some cases, in the availability of the hardware required, it is difficult to establish a detailed day-to-day testing schedule in advance. It should be possible, however, to estimate with some accuracy the time required to test any given program. Implementation deadlines are most likely to "slip" during program and systems testing, so when testing schedules are established, adequate time should be allocated for testing.

To effectively test a program, the systems analyst should establish procedures to be followed. The basic rules for program testing and debugging are summarized below.

  1. Individual programs should be compiled and all diagnostics removed prior to using test data.

  2. The programmer should create test data that first exercises all of the main routines.

  3. Additional test data should be created to ensure that every routine and every instruction has been executed correctly at least once.

  4. Program testing should include testing with as many types of invalid data as are likely to occur when the system is in production.

  5. After each program has been individually tested and debugged, related programs in the system should be tested as a unified group. This is called "link" or "string" testing (a brief sketch follows this list).

  6. A final "systems test" should be performed using data prepared by the systems analyst and, in some cases, data which has been previously processed through the "old" system.
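
To make the "string" testing idea in rule 5 concrete, the sketch below chains two hypothetical program units, an edit step that validates records and an update step that totals them, and checks the combined result. The function names and record layout are illustrative assumptions, not part of any particular system, and Python stands in here for whatever language the shop actually uses.

    # Two hypothetical program units, each already unit tested,
    # now exercised as a group: the edit step feeds the update step.

    def edit_step(records):
        # First program: pass valid records through, drop invalid ones.
        return [r for r in records if r["amount"] >= 0]

    def update_step(records):
        # Second program: accumulate a control total from edited records.
        return sum(r["amount"] for r in records)

    def string_test():
        raw = [{"amount": 100}, {"amount": -5}, {"amount": 250}]
        edited = edit_step(raw)       # output of program one...
        total = update_step(edited)   # ...becomes input to program two
        assert total == 350, f"expected 350, got {total}"
        print("string test passed: total =", total)

    string_test()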

This is Part One of a two-part note.

Part One will discuss the roles of programmers and analysts during testing; how to test individual programs; and what type of test data should be created to ensure a successful system implementation.

Part Two will discuss the modes of testing and management and user approval.

Testing Individual Programs

The testing of each individual program, called unit testing, should be handled by the programmer who wrote the program. How much testing is required to certify a program as ready for production, and how much reliability can then be claimed for it, can be controversial. From the programmer's standpoint, however, a program should never enter systems testing, much less be put into production, if the programmer has any doubt that it will work.
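
As a minimal illustration of what a unit test looks like, consider the following sketch. The parse_amount routine and its five-character field rule are hypothetical, standing in for any single routine a programmer would want to certify before systems testing; Python's unittest framework is used purely for illustration.

    import unittest

    def parse_amount(field: str) -> int:
        # Hypothetical routine under test: a five-character numeric
        # field is converted to an integer; anything else is rejected.
        if len(field) != 5 or not field.isdigit():
            raise ValueError(f"invalid amount field: {field!r}")
        return int(field)

    class ParseAmountTest(unittest.TestCase):
        def test_main_routine_with_valid_data(self):
            self.assertEqual(parse_amount("00125"), 125)

        def test_error_routine_with_invalid_data(self):
            # The error path deserves a test of its own (rule 4 above).
            with self.assertRaises(ValueError):
                parse_amount("12A45")

    if __name__ == "__main__":
        unittest.main()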

Programs should first be compiled, without test data, to eliminate all diagnostics caused by programming errors.

After a "clean" compilation is obtained, that is, one without any compilation diagnostics, the programmer should then desk check his or her program. Desk-checking refers to the process of "playing computer" with the source code/listing, that is, following every step in the program and analyzing what will take place as the routine is processed. Desk-checking is probably the most useful tool in debugging a program and it is the most neglected and abused. Many programmers, immediately upon obtaining a compilation with no diagnostics, resubmit the program for a test run with test data. This is not a good testing technique as time should be taken to review the source listing.

Desk-checking has the added benefit of re-familiarizing the programmer with the program. In a complex program, ten or more weeks may pass between the time coding is started and the time the program is compiled, and the programmer may have forgotten some of the routines written early on. Desk-checking forces the programmer to work completely through those routines again, refreshing them in his or her mind. This is a great aid if the program later fails, because the programmer can isolate the problem faster and more accurately than if the program had not been reviewed in detail.

Creating Test Data

After the program has been desk-checked, it must be tested on the computer using test data. Proper testing requires good test data. In all applications, the programmer should test the program with data designed specifically to exercise the routines within it. Test data should exercise the main routines first. Once the main routines produce the desired output, additional test data should be created for all other routines. This data should include both valid and invalid values, to test the normal processing routines as well as the error routines of the program. In addition, the test data should probe the limits within the program. For example, data should contain both the minimum and maximum values a field can hold, and should drive both minimum and maximum values into any intermediate fields that contain totals. There should be variations across the formats the program can process so that all possibilities are covered, and every code that can appear in a record should be present in the test data, so that the routines invoked by the different codes are tested.
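
The following sketch shows one way such test data might be generated. The record layout, the field limits, and the transaction codes are all assumptions made for illustration; the point is that the limits, every valid code, and deliberately invalid values each appear at least once.

    # Assumed record layout: a five-digit amount field (0 to 99999)
    # and a one-character transaction code ("A"dd, "C"hange, "D"elete).
    VALID_CODES = ("A", "C", "D")
    AMOUNT_MIN, AMOUNT_MAX = 0, 99999

    def build_test_records():
        records = []
        # Limit tests: the minimum and maximum values the field can
        # hold, combined with every code so each routine is driven.
        for amount in (AMOUNT_MIN, AMOUNT_MAX):
            for code in VALID_CODES:
                records.append({"amount": amount, "code": code})
        # Invalid data, to exercise the program's error routines:
        # an out-of-range amount and an unknown transaction code.
        records.append({"amount": AMOUNT_MAX + 1, "code": "A"})
        records.append({"amount": 42, "code": "X"})
        return records

    for record in build_test_records():
        print(record)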

Another important area which must be tested is the files used by the program. If an indexed file is used, the file must be loaded and then tested with data that adds, changes, and deletes records within it. When a direct file is used, the algorithm that determines record addresses must be tested, and the routine that handles synonyms must be tested heavily. Any time data is stored on a file, whether on disk, CD, or cartridge, and whether the organization is relational, sequential, indexed sequential, or direct, the file contents should be "dumped" with a utility program and examined in detail. A programmer cannot assume the data is correct simply because the file was built, or because the file was used successfully as input to another program. The data must be closely examined.
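
For the direct-file case, the sketch below assumes a simple division-remainder addressing algorithm and handles synonyms, that is, two keys that map to the same address, by chaining them in the same slot. The slot count, the keys, and the dump routine are illustrative stand-ins for a real file and its utilities.

    SLOTS = 7  # assumed file size; a real file would be far larger

    def address_of(key):
        # The addressing algorithm under test: division-remainder
        # mapping of a numeric key to a slot number.
        return key % SLOTS

    class DirectFile:
        def __init__(self):
            # Each slot holds a chain of (key, data) pairs so that
            # synonyms (keys sharing an address) can coexist.
            self.slots = [[] for _ in range(SLOTS)]

        def add(self, key, data):
            self.slots[address_of(key)].append((key, data))

        def read(self, key):
            for k, d in self.slots[address_of(key)]:
                if k == key:
                    return d
            return None

        def dump(self):
            # Crude equivalent of a file dump: list every slot's contents.
            for i, chain in enumerate(self.slots):
                print(f"slot {i}: {chain}")

    f = DirectFile()
    f.add(3, "first record")
    f.add(10, "synonym of key 3")   # 10 % 7 == 3: same slot, a synonym
    assert f.read(10) == "synonym of key 3"
    f.dump()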

Most of the responsibility for program testing rests with the programmer who wrote the program. The programmer should design the test data, conduct the tests, and check the output. The analyst, however, plays an important supporting role: first by ensuring that good testing techniques are followed, then by reviewing the test data and making suggestions. The analyst can look at on-line screens, reports, and file dumps early in program testing to confirm that the results correspond to what is expected; any variances can then be corrected. The analyst should also review the test data for areas that should be tested but have been overlooked by the programmer. The analyst should not dictate what data the programmer uses to test the program; the analyst serves in an advisory capacity, except perhaps when the programmer is having difficulty with the program.

Summary

To effectively test a program, the systems analyst should establish procedures for testing and debugging. The programmer who wrote the program should then be responsible for conducting unit tests, desk-checking, and creating the test data. With clearly defined roles, this critical aspect of implementation can be handled successfully.


About the Author

Dr. Moskal is a professor in the Computer Science Department at Saint Peter's College in Jersey City, New Jersey. Before becoming a professor three years ago, Dr. Moskal spent over twenty years at Fortune 100 companies directing and managing system implementation projects on mainframe, mid-range, and client-server computing platforms. The systems included financial, human resource, manufacturing, marketing, customer relationship management, e-commerce, enterprise resource planning, retail, legal, project management, shareholder, and administrative applications. His web link is http://edmoskal.tripod.com.

 