
Program Testing Methodology Part Two: Running Tests and Getting Approval

Written By: Dr. Edward J. Moskal
Published On: August 7, 2004

Introduction

Before any system can be completely implemented on a production computer, the analysts and programmers working on the system must be able to state unequivocally that the programs work exactly as they were designed to work and that any errors or mistakes found in data processed by the system will be handled properly. Program testing methodology should accommodate any scheduling slippage that may result from the test results. Once testing and data parameters have been determined, testing can begin. After link or string testing, system testing, and backup and restart testing have been conducted with confidence, approval from management and users should be sought.

This is Part Two of a two-part note.

Part One discussed the roles of programmers and analysts during testing; how to test individual programs; and what type of test data should be created to ensure a successful system implementation.

Part Two will discuss the modes of testing and management and user approval.

Link or String Testing

After the individual programs have been tested using data designed by the programmer, the programs within the system which depend upon one another must be tested together. This is called link or string testing. For example, in a system with an edit and an update program, the output of the edit program must be properly formatted and contain the correct data to input to the update program. The programmer who wrote the edit program may not be the same programmer who wrote the update program. Therefore, there may be some discrepancy between the format in which the data is created in the edit program and the format in which the data is expected in the update program. The series of programs must be tested together.
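The edit-to-update handoff described above can be sketched in a few lines. This is a hypothetical illustration, not the original system: the record layout (a 6-character account number followed by an 8-digit zero-padded amount in cents), the program names, and the input format are all assumptions made for the example.

```python
# Hypothetical sketch of a string test: the edit program's output record
# must match the layout the update program expects. Names and layouts
# are illustrative, not taken from any real system.

def edit_program(raw_line):
    """Validate a raw transaction and emit a fixed-layout record:
    6-char account number + 8-char amount (zero-padded cents)."""
    account, amount = raw_line.split(",")
    assert account.isdigit() and len(account) == 6, "invalid account"
    cents = int(round(float(amount) * 100))
    return f"{account}{cents:08d}"

def update_program(record):
    """Parse the fixed-layout record the edit program produces."""
    account = record[0:6]
    cents = int(record[6:14])
    return account, cents

def string_test(raw_lines):
    """Run the two programs back to back, as in link/string testing."""
    results = []
    for line in raw_lines:
        record = edit_program(line)             # output of program one ...
        results.append(update_program(record))  # ... is input to program two
    return results

print(string_test(["123456,19.95", "654321,0.50"]))
```

If the two programmers had disagreed on the layout (say, a 7-character account number in one program), the individual tests would still pass while the string test would fail, which is precisely the discrepancy this mode of testing exists to catch.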

Another reason for testing the series of programs is to ensure that the job stream is correct. In computer systems there is the operating system, and the features and functions of the operating system must be invoked through the use of the job stream. The job stream consists of those job control statements which are necessary to invoke the proper programs to be processed, to define the files to be created and processed, and to designate the devices which are to be used for the files. If files are defined incorrectly or programs are called improperly, the system will not function in the specified manner; therefore, the testing of the job stream is as important as the testing of the programs within the system.
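The role of the job stream can be sketched by analogy. A real job stream would be written in a job control language, not Python; the driver below is only a minimal stand-in showing the same idea, with each step naming its program and the files it reads and writes. The step names, file names, and programs are all invented for the example.

```python
# A minimal analogy to a job stream, sketched in Python rather than a
# real job control language: each step names the program to be invoked
# and the files it reads and writes, and the driver checks the file
# definitions before any program runs. All names are illustrative.

def edit_step(inputs, outputs):
    outputs["edited.dat"] = [r.upper() for r in inputs["raw.dat"]]

def update_step(inputs, outputs):
    outputs["master.dat"] = sorted(inputs["edited.dat"])

JOB_STREAM = [
    # (step name, program, input files, output files)
    ("EDIT",   edit_step,   ["raw.dat"],    ["edited.dat"]),
    ("UPDATE", update_step, ["edited.dat"], ["master.dat"]),
]

def run_job_stream(files):
    """Run each step in order, failing fast if a file is defined
    incorrectly -- the kind of job-stream error string testing catches."""
    for name, program, in_names, out_names in JOB_STREAM:
        missing = [f for f in in_names if f not in files]
        if missing:
            raise RuntimeError(f"step {name}: undefined input files {missing}")
        inputs = {f: files[f] for f in in_names}
        outputs = {}
        program(inputs, outputs)
        files.update(outputs)
    return files

files = run_job_stream({"raw.dat": ["b", "a"]})
print(files["master.dat"])
```

Running the stream with a misnamed or missing file fails before the program is ever invoked, which mirrors how an incorrect file definition in the real job stream prevents the system from functioning as specified.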

The data which is used in string or link testing should be developed by the systems analyst. This data should include data which will test certain conditions which may not have occurred in program testing. For example, three programs may process a certain field and the data in this field is accumulated in the three programs. Although each program may be able to process a maximum size for the field, the accumulation in the three programs may cause the field to be too small in the last program. When the programs were tested individually, each programmer allowed the proper size field, but when the actual processing takes place, the field size may have been improperly specified by the analyst. It must always be noted that not only is each program being tested by the string testing; the system itself is being tested, and all of the specifications developed by the analyst are also being tested. Therefore, data must be developed which not only tests the programs but also tests the specifications for the system which were designed by the analyst.
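The accumulation problem above can be made concrete with a small sketch. The five-digit field width and the sample values are assumptions invented for the illustration; the point is only that each program's own input fits the field while the running total does not.

```python
# Hypothetical illustration of the field-size problem: each program
# correctly handles a 5-digit field, but the total accumulated across
# three programs overflows that same field width. The width and values
# are assumptions made for this sketch.

FIELD_WIDTH = 5  # assumed field size from the analyst's specification

def fits(value):
    """True if the value can be stored in the specified field."""
    return len(str(value)) <= FIELD_WIDTH

def accumulate(values):
    """Simulate three programs each adding to the same field."""
    total = 0
    for v in values:
        assert fits(v), "each program's own input fits the field"
        total += v
    return total

# Each individual value fits, so every program passes its own test ...
per_program = [40000, 40000, 40000]
total = accumulate(per_program)

# ... but the accumulated total no longer fits the 5-digit field --
# an error only string testing across the programs would reveal.
print(total, fits(total))
```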

It is very important for the analyst to define what is to be tested during any given test run. It is inefficient and error-prone to attempt to test everything on the first test run. Therefore, the analyst should specifically designate portions of the system to be tested on each "pass" through the related programs within the system. It may be necessary to design 10 to 20 "passes" through the system in order to adequately test each operation, each file definition and size, and all the other variances which can be found in data and in the processing.

The testing procedure should run from the normal to the abnormal. That is, the first tests of the related programs within the system will process test data which represents those cases which occur most often within the system and present no abnormal situations. For example, in a payroll system, the first data tested may be for employees who work a standard forty-hour week with no overtime and with standard deductions. When this data is tested properly, the analyst should then mix this data with data which includes overtime and test the overtime routines. The analyst should then switch to test data which has some invalid times in the transaction input to be sure that this data is processed correctly.
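The normal-to-abnormal progression for the payroll example can be sketched as three passes. The pay rules here (straight time to 40 hours, time-and-a-half beyond, hours bounded at 168 per week) are illustrative assumptions, not the rules of any real payroll system.

```python
# A sketch of the normal-to-abnormal test progression for the payroll
# example. The pay rules and the three passes are assumptions made for
# this illustration.

def gross_pay(hours, rate):
    """Straight time up to 40 hours, time-and-a-half beyond."""
    if hours < 0 or hours > 168:
        raise ValueError("invalid hours on transaction input")
    overtime = max(hours - 40, 0)
    return (hours - overtime) * rate + overtime * rate * 1.5

# Pass 1: the normal case -- a standard forty-hour week.
assert gross_pay(40, 10.0) == 400.0

# Pass 2: mix in the abnormal-but-valid case -- overtime routines.
assert gross_pay(45, 10.0) == 400.0 + 5 * 15.0

# Pass 3: invalid transaction data, to confirm it is rejected
# rather than silently processed.
try:
    gross_pay(-8, 10.0)
except ValueError as e:
    print("rejected:", e)
```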

System Testing

After the link or string testing is completed, the entire system must be run through a new series of tests. In these "systems tests", the entire system should be tested, beginning with the preparation of on-line transactions, running all of the processing programs within the system, and generating output from the tests. The main objectives of the systems test are as follows:

  1. To perform a final test of the programs against the original programming specifications.

  2. To ensure that the computer operations staff has adequate documentation and instructions to operate the system properly on the computer and properly handle the incoming and outgoing data from the system.

  3. To ensure that the user departments are able to properly enter data into the system and to properly disburse and use the output information.

  4. To ensure that the "flow" of the system works properly, that is, the channels for the delivery of information from the user or other departments are established, the input data moves smoothly from the point where it is received through its preparation for use on the computer, and the output data is properly handled to allow for its distribution to the user departments.

As can be seen from the objectives of the system testing, more than just the testing of computer programs is involved, and, additionally, more personnel and departments are involved than just the Information Systems Department. Because of this, there are a number of elements which must be considered by the systems analyst, including the following:

  1. A determination must be made concerning what constitutes a satisfactory performance of the system. That is, at what point can the system be declared ready to go into production.

  2. The methods of evaluation must be determined, that is, how is the performance of the system to be measured.

  3. The different "cases" to be tested must be determined by the analyst and the data to test these cases must be designed and prepared.

  4. The tests must be scheduled, both in the Information Systems Department and in conjunction with the user departments which may take part in the tests.

  5. All efforts in terms of data preparation, scheduling, and evaluation must be coordinated so that the testing takes place on schedule and both the data processing department and the user departments can make accurate determinations concerning the readiness of the system.

During systems testing, the test data should be designed by the analyst exclusively to satisfy the objectives of the systems testing. After all of the testing has been satisfactorily completed, however, it may prove useful to run several complete systems tests using live data, that is, data which has been previously processed through the currently existing system.

System testing is one of the most important steps in ensuring that a reliable system is being placed in production yet is often one of the most neglected areas in testing a system. The primary reason it is neglected is that it is the last step before system implementation. Therefore, if any schedule slippage has occurred, it is likely that the system is behind schedule when systems testing is to take place and, to try to meet the schedule, systems testing is quickly performed, if at all. This can be a mistake because it means the complete system has not been adequately tested and, therefore, any errors which are in the system will not be found until it is actually in production. Thus, it is imperative that complete systems testing be performed, even if it means the system will not be implemented exactly on schedule.

Backup and Restart Testing

Backup and restart testing is performed to ensure that the files which are kept as backup are adequate, and that the programs used to copy the master files and other saved files, as well as the programs used to reload those files, work properly. Even though these programs are not part of the system in terms of daily, weekly, or monthly use, it is important that they work properly because without them, recovery will not be possible when recovery is necessary.
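A minimal backup test follows the shape described above: copy the master file, reload it, and verify the restored copy is identical. In-memory lists stand in for real files here, and the record layout and checksum approach are assumptions for the sketch.

```python
# A minimal sketch of backup testing: copy the master file, reload it,
# and verify the restored copy matches the original. In-memory lists
# stand in for real files; all names and layouts are illustrative.

import hashlib

def checksum(records):
    """Checksum a file's records so copies can be compared cheaply."""
    h = hashlib.sha256()
    for r in records:
        h.update(r.encode())
    return h.hexdigest()

def backup(master):
    """The copy program: write the master file to backup storage."""
    return list(master)

def restore(backup_copy):
    """The reload program: rebuild the master file from the backup."""
    return list(backup_copy)

master = ["0001,SMITH,1200", "0002,JONES,950"]
restored = restore(backup(master))

# The backup test passes only if reload reproduces the master exactly.
print(checksum(master) == checksum(restored))
```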

Restart testing must be performed to ensure that the restart points within the system are carefully and accurately defined. If they are not, then it will not be possible to restart the system with all files intact. Although in a batch system there are not many restart problems, this problem can become more difficult when the programs processing the data are very long running and checkpoints are used, or when an on-line application is used and there is not an accurate record of transactions received or updates processed. If a file is destroyed, then restarting is difficult, and it is in these areas that testing and organizing must take place if a system is to be restarted properly.
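The checkpoint idea for a long-running program can be sketched briefly. This is a hedged illustration under simplifying assumptions: the checkpoint is just the index of the last transaction applied, and a real system would write it to stable storage rather than return it from a function.

```python
# Sketch of restart testing for a long-running batch program: a
# checkpoint records the last transaction applied, so a rerun resumes
# there instead of reprocessing (and double-applying) earlier updates.
# The checkpoint format and transaction shape are assumptions.

def run_with_checkpoint(transactions, balance, checkpoint, fail_after=None):
    """Apply transactions from the checkpoint onward; optionally simulate
    a failure part-way through. Returns (balance, checkpoint)."""
    for i in range(checkpoint, len(transactions)):
        if fail_after is not None and i == fail_after:
            return balance, i  # crash: the checkpoint marks the restart point
        balance += transactions[i]
        # in a real system the checkpoint would be written to stable
        # storage after each transaction (or batch) is applied
    return balance, len(transactions)

txns = [100, -30, 50, -20]

# The first run fails after two transactions have been applied ...
balance, ckpt = run_with_checkpoint(txns, 0, 0, fail_after=2)

# ... and the restart resumes at the recorded point, not the beginning,
# so no transaction is applied twice.
balance, ckpt = run_with_checkpoint(txns, balance, ckpt)
print(balance)
```

Restart testing would deliberately force the failure, perform the restart, and confirm the final files match a run with no failure at all, which is exactly what the assertions on this sketch check.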

No matter what the backup and restart procedures are for the system, and no matter how simple or difficult they may be, they must be tested. Therefore, in addition to testing the entire system as a whole, the analyst must be sure that all backup and restart procedures are adequate. If they are not, and if the analyst does not fulfill his responsibility in this area, there could at some future time be an adverse result that impacts business operations because of a failure within the system. It is important that these backup and restart procedures be reliable before the system is implemented.

Management and User Approval

After the final systems testing is completed, the analyst should obtain management and user approval. The purpose of management and user approval is to show them what they are getting out of the system and to ensure that the functionality of the system is working properly. In addition, the procedures for processing the data through the system should be exhibited so that all concerned are aware of what is necessary in order to efficiently operate the system.

There is no way that changes in files or on-line screens, processing routines, or report formats can be accommodated at this time without substantially delaying the project. These changes should have been made when the presentation was given after the system design phase. The only changes which should be made at this time are corrections of errors which are discovered when the users and management review the output of the system. If there are errors in programming, then the errors will have to be corrected and the system re-tested. Again, however, it is imperative that no changes, other than corrections, be accepted by the analyst after the system has been extensively tested and before it has been implemented.

Changes which must be made to the programs in order to add additional features or restructure files, reports, etc. can be made to the system after it is in production and when there is time to properly make changes.

It is important to have the user's and management's approval of the system which is shown to them. As with previous meetings and review sessions with management and the users, it is quite helpful to the analyst to have approvals on everything which is going to be produced from the system. In this way, if there is a problem at a later date with some component or module in the system, the analyst can point out that whatever caused the problem was approved at the conclusion of the testing of the system. This provides some protection for the analyst, and it often causes the users to examine the system very closely.

Conclusion

Program testing and debugging is one of the most critical aspects of implementing a computer system because without programs which work properly, the system will never process information and produce the output for which it was designed.

This is Part Two of a two-part note.

Part One discussed the roles of programmers and analysts during testing; how to test individual programs; and what type of test data should be created to ensure a successful system implementation.

Part Two discussed the modes of testing and management and user approval.

About the Author

Dr. Moskal is a professor in the Computer Science Department at Saint Peter's College, located in Jersey City, New Jersey. Prior to becoming a professor three years ago, Dr. Moskal worked for over twenty years for Fortune 100 companies, directing and managing system implementation projects on mainframe, mid-range, and client-server computing platforms. These systems included financial, human resource, manufacturing, marketing, customer relationship management, e-commerce, enterprise resource planning, retail, legal, project management, shareholder, and administrative applications. His web link is http://edmoskal.tripod.com.


©2014 Technology Evaluation Centers Inc. All rights reserved.