
CRM Testing Throughout Implementation

Written By: James Lyndsay
Published On: October 13, 2004

Introduction

You've selected your vendor and ordered your customer relationship management (CRM) system. Someone uttered the magic words "strategic partnership". Now contracts have changed hands, and there's a dialogue going on about your company's deep secrets, and, perhaps, money has started to move.

No matter how strategic your partnership, as an acquirer, you have a different set of values and desires than your supplier. This article is written mainly for those of you who are handing over the "green and folding." It is up to you to extract value from whatever you get in return—this is a swift introduction to some of the opportunities, pitfalls, and decisions you may come across.

Supplier or Acquirer

Once you are committed to using a particular CRM system, you are unlikely to have a viable alternative if it doesn't do what you expect. Your CRM data is a strategic asset at the heart of your business; you are going to want to take time and care, before you let a new system take over.

Your organization has already spent time selecting a vendor and a product. There may have been reference visits and demonstrations; perhaps you already have colleagues who use the very CRM system you have started to acquire. You may trust your supplier as your customers trust your organization—but you are going to have to justify your confidence that your business will continue to work with their software.

A supplier's test results are an important asset of their software, like a logbook is an important asset of a used car. As a general rule, the more you are paying for software and the less widely-used that software is, the more need you have for your supplier's test results. You'll want to cast an expert eye over the range of their testing to see if it is comprehensive enough to justify your trust—and if you are planning for bespoke software, you may already know how much testing, and of what type, you need to see before handing over that first installment. Age and rate of change are important factors in your assessment; if the tests are older than the software, the results aren't worth a bean.

It is also important to share knowledge about problems. Your supplier may be unwilling to disclose details of fixed problems, but should certainly tell you about problems that ship with your system. You will want to be able to tell your supplier clearly and promptly about problems you have found in implementation and live operation. A linked or shared defect tracker (or a shared view of each other's) can reduce friction by providing timely and coherent information about problems and their resolution.

However, there are always some risks—and some measurements—that can't be assessed by your supplier. It is hard for your supplier to judge, before delivery, how well the system will work on your equipment, on your site, with your staff, your customers, and your particular set of third-party applications. That responsibility is the acquirer's, which means you're going to need to do some testing.

Acceptance Testing

Testing something you are about to buy is generally called "acceptance testing". There are three basic flavors: user acceptance, operational acceptance, and contractual acceptance. The first tells you if your staff can live with the system, the second tells you if your business can live with the system, and the third tells your supplier when they can trouser the cash.

User acceptance testing (UAT) can and should happen from the earliest stages; fast feedback from the people who use a system back to the people who build it is a great time-saver, even if it introduces a few backward steps early on. Early UAT overlaps with design, particularly for customer-driven iterative approaches such as extreme programming.

On projects with long periods between deliveries, early UAT works well if a small number of expert users and reviewers test documents, data, workflows, and prototypes. This work is necessarily unscripted, and typically concentrates on finding problems before they get into a delivered system.

Later work, however, tends to involve a larger group of users teamed with professional testers and plenty of kit. Your suppliers should be expected to find and fix many code problems before delivery to this team; ideally, UAT at this stage will use predominantly scripted techniques to verify value by showing that workflows and paths through the system match business requirements. These scripts can also show up errors missed in earlier UAT, particularly in interactions with other systems. With appropriate tools, your scripts may be automated into valuable regression tests so that future releases can be assessed in the same way. Data-driven and model-based test techniques may allow you to reduce the cost of script maintenance while greatly extending their range.
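The data-driven technique mentioned above can be sketched in a few lines. This is a toy illustration, not any particular tool: the `create_customer` workflow and the case format are invented for the example, and a real harness would drive the system under test through its UI or API rather than call a function.

```python
import csv
import io

# Stand-in for one workflow through the CRM: creating a customer record.
# A real harness would replay this against the system under test.
def create_customer(name, country, credit_limit):
    if not name:
        return "ERROR: name required"
    if credit_limit < 0:
        return "ERROR: invalid credit limit"
    return f"OK: {name}/{country}"

# Test cases live in data rather than code, so widening coverage means
# adding rows, not writing new scripts.
CASES = """name,country,credit_limit,expected
Alice,UK,1000,OK: Alice/UK
Bob,DE,0,OK: Bob/DE
,FR,500,ERROR: name required
Carol,US,-1,ERROR: invalid credit limit
"""

def run_cases(case_text):
    failures = []
    for row in csv.DictReader(io.StringIO(case_text)):
        actual = create_customer(row["name"], row["country"],
                                 int(row["credit_limit"]))
        if actual != row["expected"]:
            failures.append((row["name"], actual))
    return failures

print(run_cases(CASES))  # an empty list means every scripted case passed
```

Because the cases sit in a plain table, the same file can be reviewed by business users, versioned alongside requirements, and regenerated or extended by a data tool.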

CRM systems impose their own requirements, and to adequately test some areas of the system you will need to step away from simply replaying a user's actions. Many CRM systems have a role in supporting strategic decisions. Errors in this functionality may not be easy to spot without carefully designed test data, but could have a significant impact on the survival of your organization. If your CRM system has been brought in to improve call center response time and yet has usability issues, the business case for its introduction may be lost. UAT may be your only timely opportunity to observe data corruption, or issues around the meaning of business values in data. You may want to do end-to-end testing, tracking a transaction or other information from the point where it enters your business (not just the new system) to the point where it leaves. You may need your tests to be synchronized across many platforms. You could examine the life cycle of a customer account, or the ways that the new system works with your existing systems. Test automation tools will help, but will struggle to provide a complete solution.

Your testers need not only to be observant, but also need to understand the operation and vision of the business. Exploratory and diagnostic skills will help them step away from the strictures of the scripts and toward an efficient way of uncovering and assessing unexpected risks. However, a really buggy system can cause great problems to scripted testing. If you find that, close to your live date, your testing and change management is still being driven by the discovery of new problems and unanticipated risks, you may be in trouble.

It is all too easy to concentrate on functional tests around data entry, as this is where many tools excel. If your attention during UAT is driven by your tools—or by your supplier's existing tests—you may miss areas that are important to your business. Don't let your suppliers dictate your test strategy.

No two testers seem to agree on precisely what operational acceptance testing (OAT) might actually be, but it is typically less to do with the business functions of the system, and more to do with service levels and overall fit with your technical and business environment.

Much of OAT involves tests and measurements that your supplier might be able to model, but can't actually do themselves. Many OAT activities will call for exclusive access to working software running on production-sized hardware and data, and testing is often carried out by specialists working closely with key members of your supplier's design team. OAT may be carefully planned, although the tests and measurements themselves tend to evolve from prior results. An alternative (sometimes, an unavoidable alternative) is an intensely monitored "bedding-in" period after the system has gone live.

CRM systems need to be secure and reliable, and generally have to support many simultaneous users. Assessments of performance, resilience, and security will be made as part of OAT, and the system is likely to undergo tuning, where subtle adjustments are made to hardware and software to achieve agreed service levels. Tuning and security modifications can have unexpected effects on functionality, and automated scripts from UAT may be used to check that nothing untoward has happened. UAT scripts can also help to simulate multiple users, although a separate tool may be needed for control and measurement.
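As a rough illustration of reusing a scripted action to simulate concurrent users: here `scripted_lookup` is a stub standing in for a replayed UAT script, and the timing harness around it is the sort of control-and-measurement layer a dedicated load tool would normally provide.

```python
import statistics
import threading
import time

# Stub for one scripted UAT action (e.g., looking up a customer record).
# In a real run this would replay the recorded script against the system.
def scripted_lookup(customer_id):
    time.sleep(0.01)  # simulated server response time
    return f"record-{customer_id}"

def simulate_users(user_count, actions_per_user):
    timings = []
    lock = threading.Lock()

    def user(uid):
        for n in range(actions_per_user):
            start = time.perf_counter()
            scripted_lookup(uid * 1000 + n)
            elapsed = time.perf_counter() - start
            with lock:  # timings list is shared across user threads
                timings.append(elapsed)

    threads = [threading.Thread(target=user, args=(u,))
               for u in range(user_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

timings = simulate_users(user_count=20, actions_per_user=5)
print(f"{len(timings)} actions, median {statistics.median(timings) * 1000:.1f} ms")
```

Even a sketch like this lets you compare medians before and after tuning; the point is to measure against agreed service levels, not simply to confirm the action still works.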

Contractual acceptance testing holds a dirty secret; it's not really about acceptance at all. You know from UAT and OAT whether your system is acceptable, but by the time you have reached that assessment, too much may be at stake to simply hand it back. Both supplier and acquirer are, naturally, interested in making each other happy—to a point. Beyond that point, it's all about pain and money. If your testers are under instruction to find as many lurid bugs as possible, and your supplier is batting every one of them back for ever-more unlikely reasons, neither of you is concentrating on getting the greatest value from your investment.

Contracts and negotiations, rather than testing, are central to this activity; if your contract has something along the lines of "fourteen days without failure" and no one is sure whether that means fourteen individual days or fourteen days between failures, a fun time will be had by all.

The Joy of Data

If you've got customers, you've got customer data. Your CRM system—whether it's the latest tool or a stack of sticky notes—exists to handle this information, and whatever you do to the system, you need to keep the data.

However, migrating it to a new system is not trivial. Data for one system will need to be translated before it gets to another to preserve meaning and relationships. Old data will need to be cleaned up and trimmed down. You'll want to do this systematically not only to increase your speed, but also to reduce the chance of error—many problems experienced in production are related to avoidable errors in data. As the data owner, your organization is responsible for data integrity and migration, and you will develop or configure your own tools to support the task. Don't forget to plan time to test these tools, too.
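A minimal sketch of the translate-and-clean step described above, with invented field names and a hypothetical legacy layout: the key design point is that records which cannot be translated are routed to a cleanup queue rather than silently dropped, so data integrity problems surface before go-live instead of in production.

```python
# Hypothetical legacy records, as they might come out of the old system.
LEGACY = [
    {"CUST_NM": "Acme Ltd", "CTRY": "GB", "PHONE": "020 7946 0000"},
    {"CUST_NM": "Globex",   "CTRY": "",   "PHONE": "555-0100"},  # missing country
    {"CUST_NM": "",         "CTRY": "US", "PHONE": ""},          # unusable row
]

# Translation table: old codes become the new system's values.
COUNTRY_NAMES = {"GB": "United Kingdom", "US": "United States"}

def migrate(records):
    migrated, rejects = [], []
    for rec in records:
        name = rec["CUST_NM"].strip()
        country = COUNTRY_NAMES.get(rec["CTRY"])
        if not name or country is None:
            rejects.append(rec)  # route to cleanup; never drop silently
            continue
        migrated.append({
            "name": name,
            "country": country,
            "phone": rec["PHONE"] or None,
        })
    return migrated, rejects

ok, bad = migrate(LEGACY)
print(f"migrated {len(ok)}, rejected {len(bad)} for cleanup")
```

Counting and inspecting the reject queue is itself a test: if the rate of rejects is high, your cleanup rules (or the legacy data) need attention before the real migration run.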

Test data brings an all-new set of challenges—but can be the solution to a whole bunch of headaches. Your UAT people will need a controllable set of data that fits into their environments, yet supports their scripts and has wide variations to allow them to test those one-in-a-million situations that crop up so often. Your OAT testers will need a large volume of data that matches your customer profile. Once word gets around, your trainers will probably want a go, too.

One solution is to use your existing production data, but this will need to be simplified, migrated, and further manipulated to ensure that

  • dates and business data are sensible and meaningful
  • data integrity is preserved and data faults minimized
  • customer information remains private, and data protection regulations are not broken

Flexible, powerful data load and analysis tools can address all these issues at once. Once established, analysis tools can be used for controlled data generation, which in turn brings further power and agility to UAT and OAT. Production data can be useful in testing, but is a clumsy and blunt tool compared with well-organized data sets that match your test requirements.

Going, Going, Gone Live

You may have taken a decision already on whether to go live all-at-once, or to use a phased approach. Each has different requirements from testing, and you will want to reflect this in your test strategy. Information from testing will not only help your organization avoid the risks inherent in each, it can also help decide if the chosen approach is feasible.

With a phased approach, you introduce your new system gradually to the business. A pilot phase keeps the costs of failure relatively low, and will help you see whether your tests missed anything, and whether your models and predictions from OAT make sense. However, throughout the transition, both your business and your technical people will also have to deal with the problems of having two systems running at once—and twice the options can mean way more than twice the trouble. You may need to carefully consider how to ensure that this dual system does not introduce systematic problems, particularly when dealing with interfaces and communications.

If you are planning a "big-bang" approach, you become more vulnerable to large consequences from small issues, so your testing needs to be meticulous. CRM systems with a global customer base may need twenty-four-hour availability, and it may not be acceptable to cut yourself off from your customers while you change over.

With either approach, you will need to consider what to do with new and changed data that arrives as you are migrating. You may be able to "freeze" data that is being migrated, but if you work with high rates of change (e.g., banking or telecoms), it may not be possible to hold new information back for much time at all.

Testing isn't over when the system goes live, although teams may be scaled back and responsibilities handed over with the closure of the implementation project. If you are within a warranty period, problems found may have a direct effect on the movement of money and personnel. Typically, a new CRM system is closely monitored to compare information with your supplier's benchmarks, and to get early warning of ongoing problems. In the longer term, continued monitoring and testing in the live environment can let you assess the impact of changes on service levels and response time, while a handful of "test" customers in your live system can let you pilot new configurations and check live fixes. Be sure they don't get into those strategic decision support reports.

Tool Up for Super Powers

You may feel that you need some kind of super-tester to do all that work. But while super-testers are hard to come by, super powers, oddly enough, are not. Good tools, used in the right way, can make the impossible an everyday occurrence.

Data manipulation tools pay great dividends if supplied with good data and used within a controlled environment. Sophisticated tools exist for large datasets and may be necessary for some aspects of CRM load testing, but you could find that spreadsheets and scripting languages give you everything you need for smaller volumes with no extra license cost.

Test automation tools, used skillfully, can give you extraordinary reach and speed, particularly in combination with good data tools. They are heavily promoted and aren't always as straightforward to use as they appear—but their flexibility and power cannot be denied. If you encourage your team to go beyond simply replaying user actions, these tools can give you access to opportunities that are quite impossible to reach otherwise.

Tools used to monitor a system for service levels in live operation are also useful in OAT, giving tremendous insight into trends and overall behaviors. The information they expose can be a vital resource for anticipating and diagnosing problems.

There are plenty of tools to choose from, and the variety is far wider than the three types illustrated above. Remember though, as with all super powers, tools can be a force for evil as well as good.

Conclusion

There is more to implementing a CRM system than testing it, and you can draw useful lessons from any prior experience of moving from an old to a new system and managing a supplier relationship. There are more pressures and needs than testing, but that is another story . . .

Testing is, fundamentally, a service that provides information; you can build code or buy a system without testing a thing. Without testing your new system, though, you don't know for sure whether it is genuinely valuable, or worse than useless. Decisions about what to test, and how thoroughly, rest with you, your test strategist, and your regulators.

Your supplier is responsible for building a system that works. As an acquirer, you are responsible for ensuring that your business continues to work with the new system.

About the Author

James Lyndsay is a software test strategist. He is the principal consultant at Workroom Productions, and is well-known in the testing industry as a consultant, speaker, and award-winning author. Involved in testing and test management for more than fifteen years, James is particularly recognized for his insight into agile techniques.

James can be contacted at jdl_tec@workroom-productions.com

Workroom Productions is a London-based consultancy that focuses primarily on test strategy. Formed in 1994, it works with interesting companies in a variety of business areas.

See www.workroom-productions.com for more details.

Links and Further Reading

Confused about the meaning of all these technical terms? So are your testers. A common glossary can help; here's one that has been adopted as a standard in the UK and is going worldwide: BS-7925-1 (official: http://bsonline.techindex.co.uk/)
(working copy: http://www.testingstandards.co.uk/bs_7925-1_online.htm)

For an idea of the range of tools that are out there and useful to testers, see my previous article: A Guide to Software Test Tools.

I stole the phrase "super powers" in the context of test tools from a great article by James Bach:
http://www.satisfice.com/articles/boost.htm

For deep information on model-based testing, see Harry Robinson's model-based testing web site
http://www.model-based-testing.org/

For information on test data, see my article The Importance of Data in Functional Testing at
http://www.workroom-productions.com/papers.html


©2014 Technology Evaluation Centers Inc. All rights reserved.