The childhood rhyme “Eeny, Meeny, Miny, Mo” was useful when, as a child, you were choosing an ice cream flavor. But this simple counting-out game is not likely to give you great results when you are about to spend several hundred thousand dollars or more on a new piece of software. In my experience, though, the decision processes of many organizations are not a great deal more sophisticated. Criteria are identified and software is assessed, but which criteria are selected, and how the assessments are weighted, is not clear. There is a sense that, at the end of the day, everything is thrown up in the air with the hope that there will be a clear result when it all lands.
Is there a better way to select the right software for your company? Yes, there is. Following are four basic rules that can improve an organization's decision-making process.
Don't Confuse Criteria with Functions
Criteria are the decision factors—the elements that will lead you to choose one piece of software and reject others. Functions are what the software does—specific tasks it can carry out or automate. Many decision processes confuse the two.
When you have a large number of “criteria,” look closely to see whether you are truly describing criteria, or if you are just providing a list of functions you would like—that is, reciting the requirements document. A common trap for many decision makers is to add functions as criteria to make sure nothing is missed. So, for example, a team may include as a criterion, “Commission rates can be assigned by product.” This may be a real requirement, but setting it up as a criterion may not be helpful to the decision process.
In some instances, one of the criteria is that certain functions must be present or be easy to use. The way to deal with these “musts” is to create a list of required functions, and set a criterion that all required functions must be present.
Distinguish between functions and criteria, and your decision process will be easier.
Distinguish between Gatepost and Poker Chip Criteria
In general, there are two broad types of criteria to consider: gateposts and poker chips.
Gateposts are non-negotiable criteria. They must be met, or the alternative is abandoned. For example, in one recent selection process, the budget needed to be less than $190,000 (CAD). The organization didn't really care what the cost was, so long as it was below that gatepost level. Other examples of gatepost criteria include the ability to track lots (used in the food and pharmaceutical industries) and the ability to calculate commissions.
Gatepost criteria are binary; they are either true or false. Ask whether a piece of software will be ruled out if the criterion you are considering is not met, regardless of how great the rest of the application is.
As an example, one client set as a requirement—a gatepost—that the reassignment of warehouse locations for stock must be easy. The issue wasn't that reassignment had to be done—it had to be easy to do. The client knew that almost all enterprise resource planning (ERP) solutions will allow you to reassign warehouse locations, but it was also aware that this task happened frequently in its business, and knew that the warehouse staff was not at ease with computers. If reassignment was difficult, it wouldn't get done. Since this engineer-to-order (ETO) firm was highly dependent on knowing the availability and location of its components, it was willing to drop any solution where reassigning locations was difficult, regardless of the rest of the application's features.
The number of gatepost criteria should not be large, and the number of very specific criteria should be much smaller. In one recent analysis, a client had sixty-three gatepost criteria. This number approaches the recommended upper limit. Most of the criteria were general statements. For example, one criterion was “Capable of supporting ISO 9001 documentation requirements.” Notice that this criterion states nothing about how easy it is, only that it is possible. Of the sixty-three criteria, only a dozen or so were very specific—and this is about the right number to have. If more than sixty-five to seventy gatepost criteria are listed, you probably should re-evaluate whether they are all truly necessary.
Remember: the test of a gatepost criterion is that if it is not true, you will abandon the option. Most selection teams, when challenged by this stipulation, see that many of their “musts” are really just strong preferences. They realize that if enough of the other criteria are met, they could accept a solution that misses on that one item (that one “must”).
The other type of criterion can be described as a poker chip. Poker chip criteria are tradable. With these criteria, you will be willing to give up performance in one aspect of the software in return for better performance in another. Deciding on the trade-offs is where the hard work happens.
Poker chip criteria are assessed on a continuous or stepped scale rather than the binary evaluation of gateposts. The assessments are what you use to make the trade-offs. If a software solution is twice as easy to use, how much can you give up in terms of the reliability of the vendor? If an application performs only half of the desired functions (but all the necessary ones) and costs half as much, is that a good trade-off?
Measuring the performance of an option requires a structured approach. For example, in a recent evaluation, ease of use was one of the criteria. The team was given an established test plan for the software. For each task, it rated how intuitive the software was. In a discussion at the start of the testing, the team decided what the expectations were and what would count as intuitive. The team decided to consider how many errors it made while executing the test, how often team members had to ask for help, and how many screens they had to navigate through. The average rating across all the test tasks was the measured score for ease of use. The differences among the scores of the different alternatives provided compelling input into the decision.
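The averaging step in the evaluation above can be sketched in a few lines. The task names and ratings below are hypothetical illustrations, not figures from the evaluation described.

```python
# Hypothetical ease-of-use scoring: each test-plan task gets an
# intuitiveness rating (here 1-5) using the expectations the team agreed
# on before testing; the criterion score is the average across all tasks.

def ease_of_use_score(task_ratings):
    """Average the per-task intuitiveness ratings for one alternative."""
    return sum(task_ratings.values()) / len(task_ratings)

# Illustrative ratings for two alternatives on the same test plan
ratings_a = {"enter order": 4, "reassign location": 2, "run report": 5}
ratings_b = {"enter order": 3, "reassign location": 5, "run report": 4}

print(round(ease_of_use_score(ratings_a), 2))  # 3.67
print(round(ease_of_use_score(ratings_b), 2))  # 4.0
```

The point of the structure is that the same test plan and the same agreed definition of “intuitive” are applied to every alternative, so differences in the averages are attributable to the software rather than to the raters.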
Distinguishing between gatepost criteria and poker chip criteria will make your job easier. But how do you account for all the requirements that are not gateposts? One effective approach is to assess what percentage of the requirements is met by the alternatives being considered. If necessary, you could divide your requirements into two different priority levels and assess each one independently.
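The coverage assessment just described can be sketched as follows, assuming requirements are split into two priority levels. The requirement names are hypothetical.

```python
# Hypothetical requirements coverage: split the non-gatepost requirements
# into two priority levels and report the percentage met at each level.

requirements = {
    "high":   {"lot tracking": True, "commission calc": True, "EDI": False},
    "medium": {"custom reports": True, "mobile access": False},
}

def coverage(reqs):
    """Percentage of requirements met, computed per priority level."""
    return {level: 100 * sum(met.values()) / len(met)
            for level, met in reqs.items()}

for level, pct in coverage(requirements).items():
    print(f"{level}: {pct:.0f}% met")
```

Reporting the two levels separately keeps a solution that misses several low-priority items from masking a gap in the high-priority ones.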
Use No More than Ten Poker Chip Criteria
How many criteria should you consider? Eight to ten. Use too few, and the complexity of the decision will probably not be captured effectively. Use more than ten, and one of two things will happen. One possible outcome is that you will measure the same aspect of the solution more than once. The other possible outcome is that you will measure elements that really don't matter much. If you have ten evenly weighted criteria, then each is only worth ten points. But criteria are rarely weighted evenly, so adding an eleventh criterion will probably not improve your chances of making the correct decision.
In fact, there is a good statistical reason why adding extra criteria actually makes it harder to decide. The law of large numbers states that as the number of independent variables grows, their average converges on the average of their means. In essence, with large numbers of independent variables, the normal variations of the variables cancel each other out. So with large numbers of variables, the ability to discriminate between alternatives actually declines. If the variables aren't independent, you're back to the problem of measuring attributes of the solution twice.
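A small simulation makes the dilution effect concrete. Assume one alternative holds a genuine advantage on a fixed handful of criteria, while every additional criterion contributes only noise; the numbers below are illustrative, not from any real evaluation.

```python
# As purely noisy criteria are added, the advantage held on a few real
# criteria is averaged away: the mean gap between alternatives shrinks
# faster than the noise does, so discrimination declines.
import random
import statistics

random.seed(0)

def trial_gap(n, k_real=3, edge=1.0):
    """One simulated evaluation of two alternatives across n criteria.

    Alternative A is genuinely better (by `edge` points) on k_real
    criteria; every criterion also carries independent rating noise.
    Returns the difference in average score, A minus B.
    """
    a = sum(random.gauss(5, 1) + (edge if i < k_real else 0)
            for i in range(n)) / n
    b = sum(random.gauss(5, 1) for _ in range(n)) / n
    return a - b

for n in (5, 10, 50, 200):
    gaps = [trial_gap(n) for _ in range(2000)]
    print(f"{n:>3} criteria: mean gap {statistics.mean(gaps):.2f}, "
          f"noise {statistics.stdev(gaps):.2f}")
```

With five criteria the real advantage stands out against the noise; with two hundred, the expected gap is an order of magnitude smaller than the run-to-run variation, and the alternatives become statistically indistinguishable.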
How do you exclude criteria? A simple example illustrates the reasoning. If you are looking to buy a new car, it is possible to assess the styling of the headlights on each alternative. However, at the end of the day, it is highly unlikely that headlight style will actually drive your decision. The fact that you have preferences and can rate the alternatives does not mean headlight style should be formally included as a criterion. Similarly, you may have preferences for the typeface used by the application or the background color of the screen, but it would be a very rare situation where either of these features would bear directly on what computer software to purchase. One exception is that if the typeface was difficult to read, or if the background didn't have enough contrast, then these could affect an ease of use rating. The focus, then, is not on the aesthetics, but rather on ease of use.
Getting down to the eight to ten criteria takes work. The team must spend time working through which factors really matter. And the criteria need to be practical. Table 1 indicates five tests for determining effective criteria.
Table 1. Tests to determine effective criteria
The ease-of-use evaluation described above met all these tests.
Limit the Weighting of Criteria
Excluding alternatives that don't meet the gatepost criteria and using eight to ten important poker chip criteria make reaching a decision on software selection significantly easier than it would be otherwise. The work required to identify the criteria helps the selection team reach a common understanding of which alternative will be the most suitable solution. As a result, the decision is often more obvious. Because the team is only considering issues that matter, there is less clutter from data that should not affect the decision in the first place.
There is often debate about whether to weight the scores of the various criteria when making the decision. Weighting turns out to be not that critical. Research in the mid-1970s demonstrated that a decision outcome was far more influenced by the selection of criteria than by how the criteria were weighted. A key finding was that independence of criteria was essential. It is more important to get the criteria right.
If you do decide to weight the criteria, in my experience, it is sufficient to use three different weighting levels: 1, 1.5, and 2. This gives enough of a spread to place emphasis on certain criteria that can support broader organizational strategies without making any of the criteria under- or over-weighted.
Easier and Faster Decisions that Stand Up to Scrutiny
Software selection decisions will be easier and faster when teams don't confuse functions and criteria, distinguish between gatepost and poker chip criteria, and limit the number of poker chip criteria and the weighting of the various criteria.
There is one more significant benefit from this approach to software selection decisions in addition to its speed and ease. The focus on getting the criteria right at the outset makes it much easier to defend the solution. For example, one firm selected software from a third-party firm despite its general preference for an integrated solution. But because the criteria were clearly described and the assessment methods clear, there was no significant debate about the project team's conclusions when they were presented to management. What could have been a source of controversy became a non-issue because the management team felt comfortable with the decision process.
Although the methods described here are not difficult, they are rarely practiced. Software decisions have historically been focused on features and functions, so the decision process has tended to result in comparisons of checklists. Limited consideration of how to assess the decision criteria has often resulted in incomplete or inconclusive assessments that seem to depend more on individual preferences than an evaluation of evidence. And confusion of functions and criteria has made it difficult for decision teams to know what to focus on.
The structured approach to selecting criteria offers immediate help in the software selection process. Distinguishing between functions and criteria means the team can focus on what matters to the organization. Making the distinction between gatepost and poker chip criteria simplifies the trade-offs the team will face. Well-selected poker chip criteria will be easy to assess and will limit the influence of weighting. Furthermore, the process as a whole will increase the confidence of those who must eventually write the check.
About the Author
Hugh R. Alley, PEng, a project management professional, is a senior consulting manager with Grant Thornton LLP, chartered accountants and management consultants, in Vancouver, British Columbia (Canada). With twenty years' experience in process improvement work, Alley has been advising management about setting requirements for IT systems for fifteen years. Educated at the University of Waterloo (Ontario, Canada) in systems design engineering and at Cornell University (Ithaca, New York, US) in resource economics, Alley has taught at both Simon Fraser University and the University of British Columbia (Canada). He writes and presents frequently on the subjects of decision processes, project risk management, lean manufacturing, and lean processes. Alley can be contacted at HAlley@GrantThornton.ca.