
Software Evaluation and Software Selection

Written By: TEC staff
Published On: December 17, 2007

It is daunting for corporate IT buyers to discern the true capabilities, strengths, and weaknesses of a given enterprise application suite. Buyers' project teams are inundated with marketing information from vendors struggling to differentiate themselves. Functional cross-over and software integration have caused product overlap and a lot of confusion in the market. Mergers and acquisitions are also creating problems, as companies cannibalize the competition to gain access to a client base or functionality, which may result in solution overlap or forced migration. As a result, organizations are surrounded by ambiguity when making their implementation decisions.

Evaluating and selecting enterprise software is a complex process characterized by both striking potential and dramatic risks. Executed properly, this process and its outcomes can deliver exceptional benefits. If executed poorly, however, the results can range from disappointing to devastating. Organizations that select the wrong hardware, middleware, or software learn the hard way that the money they've lost is the result of inadequate vendor information and evaluation processes. Such losses are increasingly apparent among price-sensitive small and medium enterprises, which need accurate IT information collected quickly and cost-effectively during the software evaluation process. Vendors' hype, consultants' conflicts of interest, user doubt, tediously long selection processes, and unclear decision rationales are the unfortunate watchwords of most selection processes.

Problem Overview

Generally, when an enterprise seeks to implement a new software solution, it assembles a team of stakeholders, evaluates internal processes, defines needs, and then sends requests for information (RFIs). Eventually, the enterprise sends requests for proposals (RFPs) to a shortlist of vendors. Shortlisted vendors are typically invited to demonstrate their solutions; from there, bids are placed and a decision is made. Unfortunately, throughout this process, selection teams face a number of difficulties in their projects.

First, because of the heavy emphasis placed on product functionality and product technology requirements, other criteria that can determine the eventual success or failure of a new system are left out. For example, vendor corporate strategy; service and support capabilities; financial viability; cost and qualitative measures with regard to process fit, ease of use, and navigation; market feedback; vendor diligence; and product flexibility are all important, yet difficult to measure.

Second, when the selection team organizes and ranks criteria and measures product functionality, it usually creates an RFI using spreadsheets. Despite the best efforts of spreadsheet designers, the choice between giving evaluators and clients flexibility and ease of use, and protecting fields to maintain integrity, can lead to risky trade-offs in spreadsheet design. Spreadsheets can neither be easily adjusted nor effectively analyzed to give reliable results. They are also notorious for propagating calculation errors: the formulas used to measure criteria are often hidden, causing errors when cells are moved or fields are accidentally typed over. Broken links between spreadsheets can invalidate results. Simple mistakes, such as copying or rewriting a formula into the wrong cell, are common. Moreover, the spreadsheet model makes it easy for spoiler companies to game the evaluation. All of this leads to processing delays, retraining of client personnel, and the need for added security in the data audit process, as well as a decline in the relationship between consultant and client.

Third, because software selection teams often have poor access to quality, unbiased information, their evaluation methodologies are saddled with serious flaws from the start. As a result, teams find it difficult, if not impossible, to justify the rationale behind selecting a particular vendor's solution. Most decision makers rely on “gut feelings,” executive mandates, or tiresome spreadsheet compilations that fail to discern the best solutions. Thus, the majority of enterprise technology evaluations run over schedule and accumulate costs well beyond the original budget. Once finally selected, most of these software implementations fail to meet functional expectations and miss their return on investment (ROI) and total cost of ownership (TCO) targets.

To summarize, during the enterprise software selection process, prospective customers typically struggle with the following issues:

  • Project selection teams have no effective way to define their business requirements, and cannot effectively gather critical vendor and product questions (criteria) from all the stakeholders.

  • Gathering all required criteria from stakeholders is time-consuming and becomes a problem at the early stages of the project. There is a push for the project to get underway and for a fast implementation, because delays mean lost revenue and lost user buy-in.

  • Once criteria are selected and submitted to the vendor, the project team often lacks the ability to effectively prioritize the different criteria relative to their supporting business requirements. As a result, priorities are derived from internal political agendas rather than the true needs of the company. Without a professional decision support tool to organize priorities and keep them in check, and to run simulations and analyses of the vendors' performance or fit-to-purpose after priorities change, one department's “needs” can give certain criteria an unreasonably high weight in the decision.

  • Project teams do not have the ability to obtain objective, vetted data on available vendor solutions. Vendor demonstrations are often marketing pitches that fail to focus on the solution's ability to satisfy user needs, and are often misleading. Unfortunately, most project teams lack the ability to separate fact from hype, especially when the strategic technology selection is the organization's first attempt of its kind, or its first in an extended period.

The problem is clear: having inadequate fit-gap information (as identified above) for the selection phase means an inability to properly plan and execute the implementation.

Measuring Criteria

Measuring criteria is crucial when determining the viability of a software solution. Numerous variables must be considered and measured against different offers. Technology Evaluation Centers (TEC) suggests that organizations engage modern decision support tools to compare solutions before structuring their RFPs. Doing so will help create an appropriate shortlist of vendors to consider in more detail. How a vendor ranks, however, depends on how well it can satisfy an organization's priorities. There are a number of analyst firms and other companies offering comparison and evaluation tools and services, including The CRM Matrix, Gartner, and 2020 Software. The following, however, will focus on how to use TEC's tools to solve the problems identified. TEC's Shortlist Wizard helps create a profile of needs through basic questions, and compiles a list of vendors that should be considered in a more detailed evaluation.
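
To make the idea concrete, consider the minimal sketch below (in Python), which filters a list of vendors against a profile of must-have answers. The vendor names and capability tags are hypothetical, invented for the example; TEC's actual Shortlist Wizard draws on analyst-vetted data and a far richer set of questions.

```python
# Hypothetical vendors and capability tags (not TEC data), used only to
# illustrate profile-based shortlisting.
vendors = {
    "Vendor A": {"multi-site", "process-manufacturing", "unicode-support"},
    "Vendor B": {"multi-site", "discrete-manufacturing"},
    "Vendor C": {"process-manufacturing", "unicode-support"},
}

# Answers to the wizard's basic questions, expressed as required tags.
profile = {"multi-site", "process-manufacturing"}

# Keep only vendors whose capabilities cover every profile requirement.
shortlist = [name for name, caps in vendors.items() if profile <= caps]
print(shortlist)  # ['Vendor A']
```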

TEC's evaluation centers (online decision support tools) support the analysis and comparison of thousands of criteria on hundreds of vendor solutions that have been vetted by TEC's analysts, using industry standards and benchmarks. Because vendors respond to TEC's RFIs without a project in mind, the responses are more typical of their capabilities.

The backbone of an evaluation center is a knowledge base—a combination of criteria and methods required to analyze one or more enterprise solutions to make a decision. The knowledge bases use industry-standard terminology and match the features, functionality, and services offered by the largest variety of vendors. Each knowledge base is organized in a hierarchical tree structure, and contains a breakdown of the detailed features and functions of a vendor's solution, as well as the business priorities that users assign to these features. This structure allows users to specify requirements from a high-level business view, and then refine the functional details by drilling down the tree structure to more detailed levels of criteria.
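
To illustrate, a minimal sketch of such a hierarchical knowledge base might look like the following in Python. The module and criterion names are hypothetical placeholders, not TEC's actual taxonomy.

```python
# Hypothetical knowledge base: modules contain submodules, which contain
# leaf criteria. Names are placeholders, not TEC's taxonomy.
knowledge_base = {
    "Financials": {
        "General Ledger": ["Multi-currency support", "Budgeting"],
        "Accounts Payable": ["Invoice matching", "Payment scheduling"],
    },
    "Human Resources": {
        "Payroll": ["Tax table updates", "Direct deposit"],
    },
}

def drill_down(tree, path=()):
    """Print every criterion with its full path, mirroring how a user
    refines a high-level business view into detailed criteria."""
    if isinstance(tree, dict):
        for name, subtree in tree.items():
            drill_down(subtree, path + (name,))
    else:
        for criterion in tree:
            print(" > ".join(path + (criterion,)))

drill_down(knowledge_base)
# Financials > General Ledger > Multi-currency support
# Financials > General Ledger > Budgeting
# ...
```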

Priorities express the importance assigned to various business criteria, which allows vendors or providers to be ranked on how well they can meet those priorities. The Shortlist Wizard helps the user create a project, which is an instance of the knowledge base that is specific to the user, including user permissions, priorities, vendor selections, and scores. At this point, users are guided through a more detailed analysis where they can select vendors; prioritize needs; compare results; conduct what-if scenarios; view ratings and graphics; and generate reports (see figure 1).


Figure 1. The user is guided through a detailed analysis where they can select vendors.

Users can view all the vendors available in the selected knowledge base, and the vendors selected by the Shortlist Wizard will already be highlighted. Users can also modify the list of selected vendors, or add a custom vendor, if these features have been enabled in their projects. Comments about vendors or details about implementation or licensing costs can also be added.

Prioritizing Needs and Comparing Results

To effectively match user needs to vendor solutions, users need to prioritize their needs in a manner that gives greater weight to criteria that are considered more important. The weighted average (WA) is a common technique because it is simple to calculate. However, it only appears to estimate performance reasonably well: it does not indicate which pattern of capabilities is ideal for an enterprise's needs, nor does it identify or measure deviations from that pattern as risks inherent to the choice. Consequently, the WA falls short of discerning a vendor's true capabilities because it uses one number to describe a pattern of criteria without compensating for any imbalances. A weighted average composite index (WACI) allows greater priority to be attributed to criteria that are considered more important, and a composite index can be used to better understand data that is obscured by the WA. The WACI combines the composite index and the WA, and measures a vendor's overall performance against the pattern match. In TEC's case, the resultant number is called the BestMatch Factor™ (BMF). The BMF is a powerful companion to the weighted average.
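
The BMF formula itself is proprietary, but the gap it addresses can be shown with a toy calculation: two vendors can share the same WA while one of them fails badly on the top-priority criterion. The sketch below pairs the WA with a simple imbalance penalty as an illustrative stand-in for a composite index; it is not TEC's actual BMF.

```python
def weighted_average(scores, weights):
    """Plain WA: one number that can hide an uneven pattern of scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def match_score(scores, weights):
    """Illustrative composite: WA minus a weighted mean absolute deviation,
    so imbalances on high-priority criteria drag the score down.
    A stand-in for a composite index, NOT TEC's proprietary BMF."""
    wa = weighted_average(scores, weights)
    penalty = sum(abs(s - wa) * w for s, w in zip(scores, weights)) / sum(weights)
    return wa - penalty

weights = [10, 8, 2]      # criterion priorities
vendor_a = [80, 80, 80]   # balanced performer
vendor_b = [60, 100, 100] # weak on the top-priority criterion

print(weighted_average(vendor_a, weights))  # 80.0
print(weighted_average(vendor_b, weights))  # 80.0 -- identical WAs
print(match_score(vendor_a, weights))       # 80.0
print(match_score(vendor_b, weights))       # 60.0 -- imbalance exposed
```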

Users set their needs using a scale. For example, the scale in the TEC evaluation system is

Critical (10 + a threshold condition for the minimum acceptable rating)
Must Have (10)
Very Important (8)
Important (6)
Nice to Have (4)
Not Important (2)
No Need (0)

The numbers in parentheses are the numerical equivalents that TEC's evaluation centers use internally for user priorities in their calculations. Priorities need to be properly grouped and nested within a hierarchy in order to best determine their weights, allowing users to drill down into the details of each module and submodule. For example, a pie chart (see figure 2) shows the distribution of the current priorities for the selected level in the knowledge tree, alongside a corresponding list of priorities arranged into modules. As needs are set, the pie chart changes to reflect the priority settings.
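
As an illustration, the sketch below maps the scale to numeric values and rolls priorities down a hypothetical hierarchy. The normalization scheme (each node takes a share of its parent's weight in proportion to its priority among its siblings) is an assumption chosen for simplicity, not TEC's published method.

```python
# TEC's priority scale as numeric equivalents (Critical adds a minimum
# acceptable-rating threshold on top of the Must Have value of 10).
PRIORITY_SCALE = {
    "Must Have": 10, "Very Important": 8, "Important": 6,
    "Nice to Have": 4, "Not Important": 2, "No Need": 0,
}

# Hypothetical nested priorities; leaves hold numeric priority values.
priorities = {
    "Financials": {
        "General Ledger": {"Multi-currency support": PRIORITY_SCALE["Must Have"],
                           "Budgeting": PRIORITY_SCALE["Important"]},
        "Accounts Payable": {"Invoice matching": PRIORITY_SCALE["Very Important"]},
    },
    "Human Resources": {
        "Payroll": {"Tax table updates": PRIORITY_SCALE["Nice to Have"]},
    },
}

def node_priority(node):
    # A branch's priority here is the sum of its children's priorities --
    # an assumed rollup rule, chosen for simplicity.
    if isinstance(node, (int, float)):
        return node
    return sum(node_priority(child) for child in node.values())

def effective_weights(node, parent_weight=1.0, path=()):
    """Distribute weight down the tree: each node takes a share of its
    parent's weight proportional to its priority among its siblings."""
    if isinstance(node, (int, float)):
        return {" > ".join(path): parent_weight}
    total = node_priority(node)
    weights = {}
    for name, child in node.items():
        share = node_priority(child) / total if total else 0.0
        weights.update(effective_weights(child, parent_weight * share, path + (name,)))
    return weights

for criterion, weight in effective_weights(priorities).items():
    print(f"{weight:.3f}  {criterion}")
```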


Figure 2. Prioritize Needs

Once all of the user's priorities have been defined, the WA and BMF are used to illustrate how the selected vendors measure up to the user's requirements. Because this data has been collected and analyzed in a knowledge tree, users can pinpoint the results in different functional categories and see how the vendors performed in a particular area. Costing can also be calculated and compared using WA and BMF (see figure 3). Ultimately, these scores allow the user to compare vendor solutions, side by side, accurately, and without the pitfalls of using a spreadsheet.


Figure 3. Compare results using the BMF and the WA.

Product Comparison Graphs and Reports

TEC's online decision support tool offers three key graphs that users can leverage to understand how vendor solutions compare to one another. As previously explained, technical and functional criteria are not the only variables that need to be considered when evaluating and selecting software; other criteria include service and support. Whenever priorities change, the results are immediately recalculated, and the vendors' strengths and weaknesses are depicted graphically in the Product Comparison graph (see figure 4) across the selected level of the knowledge tree. This graph provides users with a detailed overview of the selected systems' strengths and weaknesses. Users can immediately pinpoint key differentiators between the solutions and drill down to investigate where the gaps between the solutions are most prominent, and what their sources are. Calculating what-if scenarios enables users to determine the impact of these variables on different vendors (see figure 5).


Figure 4. Product comparison graph


Figure 5. What-if scenarios

The Product Differences graph (see figure 6) illustrates the gap in performance (the WA) between the selected baseline product and the others. The selected vendor is the base option, and acts as the zero line in the graph. For every criterion at the currently selected level of the knowledge tree, the WA of each other option is compared to the WA of the base option, and the difference is what is displayed on the graph.
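
The computation behind the graph can be sketched in a few lines of Python; the per-criterion WAs below are hypothetical, and assumed to be already calculated for the currently selected level of the tree.

```python
def product_differences(was_by_vendor, baseline):
    """Per-criterion WA gap between each vendor and the baseline option;
    the baseline itself plots as the zero line."""
    base = was_by_vendor[baseline]
    return {
        vendor: {c: round(scores[c] - base[c], 1) for c in scores}
        for vendor, scores in was_by_vendor.items()
        if vendor != baseline
    }

# Hypothetical per-criterion WAs at the currently selected tree level.
was_by_vendor = {
    "Vendor A": {"Financials": 82.0, "Human Resources": 74.0},
    "Vendor B": {"Financials": 77.5, "Human Resources": 80.0},
}
print(product_differences(was_by_vendor, baseline="Vendor A"))
# {'Vendor B': {'Financials': -4.5, 'Human Resources': 6.0}}
```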

The Sensitivity Analysis graph (see figure 7) allows users to compare solutions against a long-term business vision, to verify whether a software package will still be an ideal solution based on future needs. It uses the currently selected criterion as a baseline, and varies the priority for that criterion between 0 and 100 in intervals of 10. For each priority setting, the overall WA is calculated and displayed for each option selected for the project. Alternatively, the Sensitivity Analysis graph can be used to determine whether a product's current capabilities can accommodate a range of volatility or uncertainty in certain key business requirements.
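
The recalculation loop behind this graph can be sketched as follows; the flat criterion-to-priority mapping is a simplification of the knowledge tree, used only to demonstrate the 0-to-100, step-10 sweep described above.

```python
def sensitivity_curve(scores, weights, criterion):
    """Vary one criterion's priority from 0 to 100 in steps of 10 and
    recompute the overall WA at each setting."""
    curve = []
    for p in range(0, 101, 10):
        w = dict(weights)
        w[criterion] = p
        wa = sum(scores[c] * w[c] for c in scores) / sum(w.values())
        curve.append((p, round(wa, 2)))
    return curve

# Hypothetical flat view of one vendor's scores and current priorities.
scores = {"Financials": 85, "Human Resources": 60, "Reporting": 70}
weights = {"Financials": 10, "Human Resources": 6, "Reporting": 4}
for priority, wa in sensitivity_curve(scores, weights, "Human Resources"):
    print(priority, wa)
```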


Figure 6. Product Differences graph


Figure 7. Sensitivity Analysis

One of the most crucial aspects of the software evaluation process is management buy-in. Given the large expenditure and the impact an enterprise system can have on an organization, it is essential that there is ample support to justify selecting a particular vendor and to illustrate that the selection process involved rigorous business analysis. The valuable information that has been compiled during this process is available in a number of reports that can be printed out and delivered to upper management and decision makers. As a result, the selection team will have quantifiable information to justify a selection. Additionally, these graphics and reports can be used during vendor demonstrations to separate the marketing pitches from product functionality. Scripted scenarios can be created, where vendors will have to demonstrate their products' functionality in relation to the organization's needs. Consequently, the selection team can further validate its selection.

The right enterprise solution can lead to numerous benefits for an organization, and there is a trickle-down effect when information is accurate. By using accurate and relevant criteria on enterprise software functionality, organizations are better informed of their options. By avoiding spreadsheet calculations, the selection team can make accurate assessments about how well a vendor can satisfy the organization's needs, which should ensure the most appropriate enterprise software is selected.

 