
Quality Reassurance: How To Get What You Need from Acceptance Testing

Written By: James Lyndsay
Published On: July 4, 2006

Acceptance Testing

Some software is made by the people who use it. Some is bought in bulk. Taking the middle ground are systems that are built on a shared core, but are customized or configured for a unique purchaser. Large commercial information technology (IT) projects typically fall within this group. The software is neatly tailored to an organization, and is different enough to provide strategic value, but is more mature and cost-effective than software built from scratch.

If this scenario sounds familiar, then this article is for you. You already know that the cost of the system to your organization is greater than the license fees and customization charges paid to your supplier. In monetary terms, the value of a properly working system is greater to you than to your supplier—and you may well be exposed to greater risk should it fail.

If you are buying software, you may have little direct influence on the quality of the work done upstream. Your supplier may not have the means to predict the behavior of the system in your business environment, and only your organization can genuinely judge value and risk. Existing quality assurance strategies based around early prevention of problems may not fit well with your situation. How then can you know that your new system is going to be valuable? How will you know that you have taken reasonable steps to discover the risks?

Acceptance testing is the work that you do to examine your supplier's product. This work falls into three broad areas: assessing the value, assessing the risk, and, in vital support of both, assessing your supplier's working practices.

1 Assessing the Value

A key part of acceptance testing concentrates on showing that the system does what it should. At its most basic, this involves a list of expectations. Starting with the most important, the list is compared against the software. At the end of the list, you're done.
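If you choose to automate part of this comparison, the idea translates directly into code: a prioritized list of expectations, each checked in turn against the delivered system. The Python sketch below is illustrative only; the expectations and check functions are hypothetical stand-ins for whatever your own list contains.

# Illustrative sketch only: a prioritized expectations list, checked in order.
# The expectations and checks are hypothetical examples, not a prescription;
# substitute the items from your own acceptance list.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    description: str
    priority: int              # 1 = most important
    check: Callable[[], bool]  # returns True if the system meets it

def run_acceptance_checks(expectations: list[Expectation]) -> list[str]:
    """Compare each expectation against the software, most important first."""
    failures = []
    for item in sorted(expectations, key=lambda e: e.priority):
        if not item.check():
            failures.append(f"[P{item.priority}] {item.description}")
    return failures

# Hypothetical checks; in practice each would drive the real system.
expectations = [
    Expectation("Orders can be entered and saved", 1, lambda: True),
    Expectation("Monthly report matches legacy totals", 2, lambda: True),
    Expectation("Screens respond within two seconds", 3, lambda: False),
]

for failure in run_acceptance_checks(expectations):
    print("FAILED:", failure)

In practice each check would exercise the real system (or a test environment) rather than return a canned result; the point is the structure: an ordered list, worked through from the most important item down.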

It is easy to let this list of expectations become a simple list of functionality. However, the value of the system may lie more in how that functionality works together, and how it works in practice. Is it responsive? Clear to use? How much will we have to change our workflow? Can we take advantage of everything we expected to—and are there unexpected opportunities?

The system's value will also depend on its reliability, security, and flexibility. Without these qualities the value of the system will plummet; but while their absence is obvious, how vulnerable they are to circumstance cannot be judged by assessing value alone. That is a question of risk.

2 Discovering Risks

While your organization may not be making the software, it is certainly taking on all the system's issues. Discovering risks is an open-ended puzzle; you can't reliably predict its scope, and you can't say that you're finished simply because you've completed a set of tasks. Instead, you'll make a judgment based on how long (and how well) you've been looking for problems, and how many you've found.

To make the best of the available resources, it is important to look as widely as possible and to be able to round up problems when they are found. To sustain the effectiveness of your search, you need to recognize low-yield approaches and move on to viable alternatives.
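One simple way to recognize a low-yield approach is to track how many problems each line of investigation turns up per unit of effort. The Python sketch below illustrates the arithmetic; the approaches, figures, and threshold are invented for the example, and real numbers would come from your own test records.

# Illustrative sketch: flagging low-yield test approaches by problems found
# per hour of effort. The approaches, figures, and threshold are invented
# examples; real data would come from your own test records.

def yield_rate(problems_found: int, hours_spent: float) -> float:
    """Problems discovered per hour for one line of investigation."""
    return problems_found / hours_spent if hours_spent else 0.0

# Hypothetical test records: (approach, problems found, hours spent)
records = [
    ("custom report configuration", 9, 12.0),
    ("interface to legacy payroll", 5, 8.0),
    ("core order-entry screens", 1, 20.0),
]

THRESHOLD = 0.2  # problems per hour; an arbitrary cut-off for the example

for approach, found, hours in records:
    rate = yield_rate(found, hours)
    status = "low yield, consider moving on" if rate < THRESHOLD else "productive"
    print(f"{approach}: {rate:.2f} problems/hour ({status})")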

Some strategies concentrate primarily on problems in the core program or requirements. If you are buying mature software in use across many sites and are confident that such problems have already been found and fixed, you might concentrate your efforts elsewhere. It is productive to look for unexpected problems that would not pop up in other organizations. In particular, you will want to look for problems related to the changes and configuration that make your version of the software unique, and problems with the way the system interacts with your unique systems, processes, and teams.

3 Auditing Your Supplier's Work

Your supplier will be busy testing as they customize and extend their system to meet your particular requirements; testing is a key and substantial part of software development, and may well be separated out in the budget negotiations. Testing itself, however, produces no working deliverable. Instead, it produces information and intelligence about the product and the project. This information is valuable to both the supplier and the acquirer.

As the acquirer, you should expect to see some of this information, and to get particular value from the test plan (including details of test work completed) and from the ever-changing list of known, outstanding problems. You, as the bill payer, may also be involved in setting the overall test strategy, particularly if you are not directly involved in ongoing decisions about quality and timeliness during development.

Contracts and the Acceptance Period

Clearly, you're going to need some time to do this work. The longer you spend, the greater the delay between the delivery of the system and the point at which it has any chance of bringing value to your organization. On the other hand, a new system will bring risk as well as value, so there is a tradeoff to be made. Problems will turn up in testing or even in live operation. Questions about who should pay for diagnosing and fixing these should be covered in the contract, which introduces the important idea of an "acceptance period".

The key element of a contract that we're concerned with here is the schedule of payment, and particularly the decisions supporting that schedule. A proportion of the payment may be made on or before delivery, and the remainder is withheld until you are happy that the supplier has done a good job. There is clearly scope for negotiation, both in the contract and at the point of payment. That negotiation is typically around the length and flexibility of the period, and the way that the acquirer might assess the quality of the work done by the supplier.

Sometimes, delivery date and payment date are both agreed upfront. If the delivery date moves, the payment date should move in step, protecting the time you need to assess the software as delivered. Even so, a purely date-based arrangement is in the supplier's favor, particularly if payment is linked to a fixed live date, or if your organization is not experienced in assessing software.

Other approaches may be more specific about the quality of the deliverable, and may include measurements such as the number of outstanding severe problems, or the length of time that the deliverable must operate without failure. Sometimes penalties are imposed if the software fails too often, or if the supplier takes too long to fix the problems. This approach can work in the acquirer's favor, although imposing redevelopment costs and payment delays on the supplier can make for an adversarial relationship. If these costs leave the supplier with little commercial incentive to fix or complete the system, both sides lose.
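Where such measurements are written into the contract, the acceptance decision itself reduces to a comparison against the agreed thresholds. The Python sketch below assumes two hypothetical criteria, a cap on outstanding severe defects and a minimum failure-free run, purely to show the mechanics.

# Illustrative sketch: checking hypothetical contractual acceptance criteria.
# The thresholds and measurements are invented for the example; real criteria
# come from the contract itself.

from dataclasses import dataclass

@dataclass
class AcceptanceCriteria:
    max_severe_defects: int     # outstanding severe problems allowed
    min_failure_free_days: int  # required continuous operation without failure

def meets_criteria(criteria: AcceptanceCriteria,
                   open_severe_defects: int,
                   failure_free_days: int) -> bool:
    """Return True only if every contractual threshold is satisfied."""
    return (open_severe_defects <= criteria.max_severe_defects
            and failure_free_days >= criteria.min_failure_free_days)

criteria = AcceptanceCriteria(max_severe_defects=0, min_failure_free_days=30)
print(meets_criteria(criteria, open_severe_defects=2, failure_free_days=14))  # False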

You may build a "warranty" period into your contract. This period is typically substantially shorter than the warranty on capital items: a warranty on a property or vehicle might be measured in years, while the same term applied to software is often measured in months or weeks, if at all. A better term is "acceptance period," and this terminology exposes the underlying truth: after the acceptance period is up, the acquirer pays to fix the problems.

Conclusion

Your investment in a new system isn't about the license cost, or even the cost of implementation. It's about the opportunities that a new system will open, and about the risks that it introduces. In this article, I've outlined three key elements of acceptance testing, and highlighted some negotiable issues.

 