
Managing Risks, Managing Measures: Decision Support Methodologies in Business Part 1: Guidelines

Written By: Eddie Robins, PhD
Published On: February 28, 2002

Executive Summary

When it comes to making decisions with a tool that purports to measure value, it is wise to know something about the tool, what it is measuring, and why. Poor measurement methods contribute, at least in part, to project failures and corporate inefficiencies that can strip millions or more from bottom lines; poor technology selections alone have contributed tens of billions of dollars in unnecessary costs. The methodologies implicit in decision aid tools can produce values for their users, but the question is whether those values really represent what the stakeholders think they are being shown, or what they actually need. This article provides some guidance on these issues, and on how a manager may avoid making an expensive mistake.

This is Part One of a two-part article.
 
Part Two concerns Decision Aids in Business with a listing of some of the tools for business decision making.

The Point of Measuring

Why measure? Probably the biggest fallacy in measuring anything is failing to understand what it is you are trying to estimate. Most people take it to be performance, or expected performance; in reality it is neither. It is an attempt to estimate and predict the result if a particular action is taken. This difference in understanding what measuring is all about, though subtle, has major implications.

To underline this difference consider:

If you think it is a measure of performance, your expectation is that this is what will be delivered. This is not the case: the measure indicates what can be achieved, not what will be achieved.

If you think of it as an attempt at predicting the outcome of a decision, then you can ask the questions relevant to all the likely influences that could derail the outcome or improve its likelihood. You will also be less surprised when you don't get everything that appeared to be promised, and you can be prepared to re-prioritize toward what can be accomplished, if necessary. Predicting means understanding the likely risks and possible consequences, and having contingency plans in place should key elements look like they cannot be achieved.

Determining the Tools

When a major, complex decision is at hand, executives tend to search for assistance both to make the decision and to create team buy-in. There are of course many options; calling in consultants is one, and if the consultants are worth their salt, they'll come with a toolkit of methods intended to help the executive and his or her team chart their way through the issue at hand. So, who are these consultants and what are the tools they carry? In some cases they may be toting a decision aid tool, and though such tools may be touted as having great academic or long-serving credentials, neither of those pillars is a reliable judge of the real usefulness of the methodology implicit in the tool, nor of whether it is valid in your circumstances.

Making an informed decision on whether the tool is right for you, by asking a few telling questions up front, can mean the difference between reaching any one of a group of solutions and reaching the solution that is most appropriate and most likely to succeed. Methodologies used inappropriately can lead to wrong, or lower-value, decisions. Among the first questions to ask about any tool-toting consultant are:

  1. Is a measurable process what's needed? That is, is there something to measure, or is it something to understand and decide on using a descriptive ("this solution does not have this, so I can eliminate it") approach; and
     
  2. If there is a measurable quantity (usually there are many), what is it, and does the methodology actually help measure it? Or is it going to measure something else, or lack a key component needed to understand the implications of the solution proposed by the tool?

The first point simply states that some decisions really don't need any measure: they become obvious once you can see your way through the issue. The second is much less obvious; how to measure soft issues and put them in the context of your business direction is quite another matter. Most managers worth their salt will not hand 100 percent of the decision making to a tool, simply because it is impossible to measure everything, and it may not make sense to try (we'll discuss this below).

First, what method can you select to make these measurements? Take the case of a procurement decision. In this context there are a number of issues you may want to consider for 'measurement'. These can be divided roughly between the items that deliver value and the items that represent a risk or cost. The method generally adopted breaks down as follows (a small, hypothetical scoring sketch follows the list):

  1. What are the requirements?
     
  2. Who are the vendors?
     
  3. How do the vendors meet the requirements?
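
As a rough illustration of this three-step breakdown, the Python sketch below scores two hypothetical vendors against weighted requirements. The requirement names, weights, ratings, and gap threshold are invented for illustration only; they are not drawn from any real selection.

    # A minimal sketch of the three steps above, with invented requirement
    # weights and vendor ratings (0-10); none of the names or numbers are real.
    requirements = {  # requirement -> importance weight (weights sum to 1)
        "order management": 0.40,
        "inventory control": 0.35,
        "reporting": 0.25,
    }

    vendor_ratings = {  # vendor -> how well each requirement is met (0-10)
        "Vendor A": {"order management": 8, "inventory control": 6, "reporting": 9},
        "Vendor B": {"order management": 7, "inventory control": 9, "reporting": 5},
    }

    def weighted_fit(ratings, weights, gap_threshold=7):
        """Weighted-average fit score, plus the requirements rated below threshold."""
        score = sum(weights[r] * ratings[r] for r in weights)
        gaps = [r for r in weights if ratings[r] < gap_threshold]
        return score, gaps

    for vendor, ratings in vendor_ratings.items():
        score, gaps = weighted_fit(ratings, requirements)
        print(f"{vendor}: fit {score:.1f}/10, gaps to close: {gaps if gaps else 'none'}")

The point of the gap list is the same as point 3 above: the aggregate score alone hides which requirements each vendor fails to meet.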

Each vendor that can be selected represents a risk in terms of its deliverables. These risks can be classified into four parts:

  1. What are the gaps in the deliverables: i.e., what will be delivered that will not meet the full requirement and needs to be fixed prior to implementation?
     
  2. How is the vendor going to deliver the solution: what is the vendor's strategy for delivering it? Is this going to leave the business down while the implementers tinker, or can it be integrated smoothly?
     
  3. How certain are you that it is the best solution? Is there a clear 'winner', and is that winner good enough (a sub-question might be: is the deliverable adequate, or will it cause more trouble than it is worth)? Some tools can provide relative comparisons among solutions without the perspective of real value against required major goals and sub-objectives. Meeting a major goal may be misleading if important sub-goals are missed.
     
  4. What do you mean by the best solution? There are a number of dimensions to this question. Most business managers may regard cost as the major differentiator, but in reality the cost of ownership is more important, since it must take into account all solution lifecycle risks (a rough roll-up sketch follows the list below). These can include, in no particular order:
     
    a. fit to future business needs
       
    b. support costs
       
    c. vendor strategic interests, which may differ from your own
       
    d. disruption of business processes during solution implementation
       
    e. security issues related to data
       
    f. poor project management leading to implementers being 'out of control', or later, 'in control' of key elements of your business
       
    g. creeping requirements, which build as a project continues, leading to a never-ending project, cost overruns, and continual work disruption, not to mention opportunity costs as managers must deal with frustrating technology issues rather than focusing on the business
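
To make the cost-of-ownership point concrete, here is a minimal, hypothetical roll-up in Python: each lifecycle risk from items (a) through (g) is given an assumed probability and cost impact, and the expected risk cost is added to the licence price. The figures are invented for illustration and are not benchmarks.

    # Hypothetical lifecycle cost-of-ownership roll-up: the licence price is
    # only one line; items (a) to (g) above are estimated as expected costs
    # (probability x impact) and added to it. All numbers are made up.
    licence_price = 500_000

    lifecycle_risks = {  # risk item -> (probability, cost impact if it occurs)
        "poor fit to future business needs":   (0.30, 400_000),
        "support costs above plan":            (0.50, 150_000),
        "vendor strategic divergence":         (0.20, 250_000),
        "business disruption during rollout":  (0.40, 600_000),
        "data security issues":                (0.10, 800_000),
        "implementer out of control":          (0.25, 300_000),
        "creeping requirements":               (0.45, 350_000),
    }

    expected_risk_cost = sum(p * impact for p, impact in lifecycle_risks.values())
    total_cost_of_ownership = licence_price + expected_risk_cost

    print(f"Licence price:            ${licence_price:,.0f}")
    print(f"Expected lifecycle risks: ${expected_risk_cost:,.0f}")
    print(f"Cost of ownership:        ${total_cost_of_ownership:,.0f}")

Even with rough probabilities, this kind of roll-up makes it obvious when the lifecycle risks dwarf the purchase price itself.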

Of these, (d) is the most overlooked, or is overly optimistically considered a small part of the cost/benefit equation. The risks here should be examined much more carefully, as in the extreme the disruption can lead to closure of the business. Often the cause is insufficient information on the gaps that need to be closed (discussed in 1. above) for a successful implementation, sometimes the result of the implementer not knowing your business in detail: suddenly that home-spun inventory system needs to become a JD Edwards ERP system for the whole implementation to work.

A good measurement system doesn't necessarily measure value: it should identify the issues that can cause risk and uncertainty in the outcome. Anyone who offers one number as the means of determining a solution is doing only a small part of the job that measurement should do.
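
A small, hypothetical example of why one number is not enough: the two alternatives below produce the same weighted aggregate score, yet one of them hides a serious weakness on a single criterion. The criteria, weights, and scores are invented for illustration.

    # Two alternatives with identical aggregate scores but different risk profiles.
    weights = {"functionality": 0.5, "implementation risk": 0.3, "support": 0.2}

    scores = {
        "Alternative 1": {"functionality": 8, "implementation risk": 7, "support": 9.5},
        "Alternative 2": {"functionality": 10, "implementation risk": 4, "support": 9},
    }

    for name, s in scores.items():
        total = sum(weights[c] * s[c] for c in weights)
        weakest = min(s, key=s.get)
        print(f"{name}: aggregate {total:.1f}, weakest area: {weakest} ({s[weakest]})")
    # Both aggregate to 8.0, but Alternative 2 hides a serious implementation risk.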

Of Paradoxes and Fallacies

Balanced against trying to discriminate among alternative solutions by measurement are Fredkin's paradox and McNamara's fallacy, which state, respectively:

  • Fredkin's paradox: The more equally attractive two alternatives seem, the harder it can be to choose between them no matter that, to the same degree, the choice can only matter less.
     
  • McNamara Fallacy: "The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide."

Your first step is to decide what you need to measure, and what you need to leave offline as part of a descriptive process. As a rough guide, the partition falls between where the solution can deliver or improve a business capability measurable in ROI terms, and the details needed to ensure that value is actually delivered.

In procurement, for example, business capabilities are provided by the functional components of the product or service, such as a button that sends e-mail to a fixed list of recipients, or the design features of a particular engine: these are 'fixed' assets of the vendor offering that directly feed value into enabling a business capability. However, it may also be possible to place 'measures' on the quality of the vendor's management team that will deliver the solution. In practice, you need to have some real faith that they can carry it through, and either you treat this as a value proposition in itself, for example by setting minimum standards for the team from 'measurables' such as level of education, years of experience, and kind of experience, or you treat it offline in a negotiation process to ensure you have high-quality management (the variable or negotiable component of the offering).

This means the tool and the methodology you need should (a) provide a real measure of the team's likely capability to deliver, not just a relative measure compared with other teams, (b) provide the capability to set minimum performance requirements, and (c) identify issues that need to be raised with a vendor for corrective action to minimize risk, or for negotiated price reductions, and so on.
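
As a sketch of what minimum performance requirements on such measurables might look like, the Python fragment below flags every shortfall as an issue to raise with the vendor rather than merely ranking teams against one another. The measurable names and thresholds are invented for illustration.

    # Hypothetical minimum standards for the vendor's delivery team: a team
    # either clears each threshold or the shortfall becomes a negotiation item.
    minimum_standards = {  # 'measurable' -> minimum acceptable level
        "avg_years_experience": 5,
        "similar_projects_delivered": 3,
        "certified_staff_fraction": 0.6,
    }

    def assess_team(team):
        """Return each measurable on which the proposed team falls short."""
        return {k: (team.get(k, 0), minimum)
                for k, minimum in minimum_standards.items()
                if team.get(k, 0) < minimum}

    proposed_team = {"avg_years_experience": 7,
                     "similar_projects_delivered": 2,
                     "certified_staff_fraction": 0.5}

    shortfalls = assess_team(proposed_team)
    if shortfalls:
        print("Raise with the vendor before contract:", shortfalls)
    else:
        print("Team meets the minimum performance requirements.")

The design choice here is the one described above: an absolute threshold per measurable, not a score relative to the other bidders' teams.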

Tools and methods that allow for this provide a higher value return than tools that measure only relative value, i.e., that simply compare one alternative to another. In some cases, it can mean the difference between the success or failure of a project. In IT, an estimated $30 billion (USD) in failed projects is a direct result of poor selection procedures, and this is only part of the hole created by poor decision methodologies.

Summary

If your team is likely to need guidance to measure values, it needs to consider what guidance, and what tools, it should be equipped with. The tools often have inherent methodologies, which more often than not determine what they should be used for.

The values that you see are always predictions, not absolutes. To paraphrase the statement the SEC requires, current estimated performance does not guarantee future delivered performance. View these values as the best possible outcomes of the decision, and look at the risks along the way. A good tool gives some idea of those risks, and you should use it not only to identify those risks but to prepare you to mitigate them, with your solution providers as well as internally in your organization.

The table in Part Two summarizes some tools that have been used for business decision-making. Of these, two methods stand out as the most common at the executive level: multi-attribute utility theory (MAUT) and, somewhat as a subset derived from MAUT, the analytic hierarchy process (AHP). Both claim to measure value, and the main intrinsic difference between them is that MAUT is probably more adept at measuring 'absolute' value, while AHP is more resource-allocation oriented and tends to measure relative value among the solutions. Both have their place.
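
For a feel of that intrinsic difference, the sketch below contrasts a MAUT-style weighted additive value (an 'absolute' score per alternative) with an AHP-style priority vector derived from pairwise comparisons (a relative share across alternatives). The criteria, weights, and judgements are hypothetical, the AHP priorities use the common geometric-mean approximation of the principal eigenvector, and this is not TEC's TESS engine.

    import math

    # MAUT-style: absolute value as a weighted sum of single-criterion utilities (0-1).
    weights = {"cost": 0.4, "functionality": 0.4, "vendor viability": 0.2}
    utilities = {  # alternative -> utility on each criterion, already scaled 0-1
        "Solution X": {"cost": 0.7, "functionality": 0.9, "vendor viability": 0.6},
        "Solution Y": {"cost": 0.9, "functionality": 0.6, "vendor viability": 0.8},
    }
    for alt, u in utilities.items():
        print(alt, "MAUT value:", round(sum(weights[c] * u[c] for c in weights), 3))

    # AHP-style: relative priorities of the alternatives on ONE criterion, from a
    # pairwise comparison matrix (geometric-mean approximation of the eigenvector).
    # The result is a share of 1.0 across alternatives, not an absolute value.
    pairwise = [  # how strongly the row alternative is preferred to the column one
        [1.0, 3.0],      # Solution X vs (X, Y)
        [1 / 3.0, 1.0],  # Solution Y vs (X, Y)
    ]
    geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    priorities = [g / sum(geo_means) for g in geo_means]
    print("AHP priorities (relative):", [round(p, 3) for p in priorities])

The MAUT values stand on their own for each alternative, while the AHP priorities only say how the alternatives divide preference between themselves, which is the absolute-versus-relative distinction drawn above.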
 

About The Author

Dr. Eddie Robins, formerly Chief Scientist of Technology Evaluation Centers (TEC), was responsible for the scientific development of TESS, the software engine that drives the selection models developed by TEC research analysts.

Dr. Robins has over twenty years of industry experience, and since 1995 has been instrumental at Arlington Software, the forerunner of TEC, in developing new mathematical algorithms that have led to patented methods of decision analysis and novel decision-making processes. These provide increased accuracy of risk assessment in selection processes. The work was based on experience garnered from over 200 client organizations, including Fortune 1000 companies and government departments.

Dr. Robins regularly presents at academic meetings and conferences in Operations Research and Management Science. He has managed and led multidisciplinary development teams in a variety of scientific and technical projects, including the development of pre-press equipment, life-test and reliability of integrated circuits for a Trans-Atlantic fiber-optic cable system, data processing and data acquisition systems at the Canadian Center for Magnetic Fusion, and power device development at GEC Hirst Research Centre, UK. He has also been a consultant to the International Civil Aviation Organization (ICAO) under its Technology Assistance Program.

Dr. Robins is a physics honors graduate from Imperial College, London, and the University of Manchester Institute of Science and Technology.

Dr. Robins currently consults for start-up and leading edge technology organizations, related to business analytics, soft computing, CRM, Decision Support and Knowledge Management, and can be reached at esro@attbi.com.

 