
Why don’t the Potential Benefits of Spend Analysis Come Easily?

Written By: Predrag Jakovljevic
Published On: August 30, 2011

For years (if not decades) now, but especially during tough economic times, companies have been trying to better analyze their enterprise spend over their comprehensive pools of sourcing categories (and individual items and commodities within these categories) and suppliers.

The idea here is to find room for improvement and savings by pinpointing opportunities for strategic centralized (consolidated) procurement and thus better negotiating power, by discovering better- (and worse-) performing contracts and their individual terms and clauses, by eliminating costly maverick spending, and by dealing only with the best and most reliable suppliers. For more information, see my previous blog series entitled “Are Spend Management (or SRM) apps Suited for the Mid-market?” and TEC’s article entitled “Thou Shalt Manage (and Cherish) Thy (Best) Suppliers.”

While many companies have experienced significant benefits and improvements by deploying spend analysis solutions from specialists such as Ariba, BravoSolution, Emptoris, Oracle, Proactis, SAP, SAS, and Zycus, those benefits do not come easily or cheaply. Namely, every comprehensive spend analysis implementation is, in fact, an implementation of a sophisticated business intelligence (BI) solution.

Traditionally, BI solutions have focused on corporate departments that serve strategic functions such as financial planning, sourcing, and procurement. Because of the targeted user population, traditional BI tools focus on historical data analytics such as online analytical processing (OLAP) analysis and other “slice-and-dice” and “drill-down (and/or up)” tools where users typically evaluate large volumes of historic data. As a result, traditional BI solutions have been associated with OLAP cubes, data warehouses (DWs), data marts (DMs), and other centralized and aggregated data stores.

Needless to say, these massive data stores are pre-defined and have rigid (static) database schemas, with little room for new data fields or on-the-fly amendments. OLAP cubes might be able to overcome relational databases’ inability to provide near-instantaneous analysis and display of large amounts of data, but they are still not able to accommodate an ever-growing number of new (perhaps esoteric) data attributes and dimensions.

As an anecdotal example, OLAP-based reports can produce impressive historical data and trends on the percentages of NBA players coming from various major colleges, but will choke on a specific ad hoc query such as “find all point guards who played in Europe during high school.”

Why Is Spend Analysis (and BI in General) Hard?

Traditionally, a BI or OLAP tool has been modeled, designed, and engineered to address important strategic business decisions. Accordingly, a typical BI implementation project scenario has been as follows:

  • A dedicated expert team builds the front-end BI tools and back-end data warehouse, often using an OLAP server. In the case of spend management, spend data has to be acquired from disparate enterprise resource planning (ERP) systems from multiple countries and thousands of purchase orders (POs), suppliers’ invoices, etc.

  • Acquired data needs to be extracted, transformed, and loaded (ETL) from those different sources into a central data warehouse and OLAP cubes. Data quality issues should be addressed in this ETL process. In the case of spend management, some refer to this step as “Cleanse, Classify, and Enrich,” via algorithms for some sort of automated data classification, such as to the United Nations Standard Products and Services Code (UNSPSC) eight-digit taxonomy, the four-digit eCl@ss product classification (in Germany’s energy sector), Dun & Bradstreet’s (D&B) Standard Industrial Classification (SIC) codes, North American Industry Classification System (NAICS) codes, etc.

  • Data models are structured for clearly defined static requirements. For example, a spend management DW or data mart schema will typically include accounts payable (A/P), PO, inventory, supplier, and purchasing card (p-card) data. Changes, such as to inject data on, say, buyer, risk, currency exchange rates, budget, forecast, industry trends/indices, etc., typically require high-level expertise and lengthy customization. In the case of permanently including these data fields in the DW, one can only imagine the prohibitively bloated size of the data storage.

  • Analysis and presentations are concentrated on high-level views from aggregated historic data using an OLAP tool (or Microsoft Excel-based extensions). OLAP users are highly trained analysts and managers who value computing power far more than flexibility or ease of use.
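The “Cleanse, Classify, and Enrich” step from the list above can be sketched in a few lines of code. This is a deliberately minimal illustration: real classification engines use far richer rules, and the code-to-keyword mappings shown here are illustrative placeholders, not actual UNSPSC assignments.

```python
import re

# Hypothetical mini-taxonomy. Real UNSPSC codes are eight digits; these
# keyword-to-code mappings are illustrative only, not official assignments.
TAXONOMY_RULES = [
    (re.compile(r"\b(toner|cartridge)\b", re.IGNORECASE), "44100000"),  # office machine supplies (illustrative)
    (re.compile(r"\b(laptop|notebook)\b", re.IGNORECASE), "43210000"),  # computer equipment (illustrative)
]

def cleanse(description: str) -> str:
    """Normalize raw A/P or PO text: trim, collapse whitespace, uppercase."""
    return " ".join(description.strip().split()).upper()

def classify(description: str) -> str:
    """Return the first matching taxonomy code, or 'UNCLASSIFIED'."""
    for pattern, code in TAXONOMY_RULES:
        if pattern.search(description):
            return code
    return "UNCLASSIFIED"

clean = cleanse("  HP 305A   toner cartridge ")
print(clean, "->", classify(clean))  # -> HP 305A TONER CARTRIDGE -> 44100000
```

Even this toy version hints at the core difficulty: every transaction that matches no rule lands in the “UNCLASSIFIED” bucket and must be handled by hand.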

Needless to say, OLAP tools are too complex for non-IT-savvy business users to learn. This cumbersome process clearly has its disadvantages of cost and effort. While traditional information management philosophy requires IT departments to be the gatekeepers of a “single truth” of enterprise data definitions, it is unrealistic to expect that this ideal can ever be fully achieved.

Moreover, by limiting end-user abilities and initiatives, IT departments burden their staff with unnecessary change requests, work backlogs, and administrative overhead.

The static and tardy nature of spend management data is another problem of traditional spend analysis systems. It is typical practice that an initial project (and data upload) takes about six months, with subsequent data refreshes taking place monthly or quarterly. This leads to the phenomenon of running a business by looking in the rear-view mirror.

Can Expert Systems Help Here?

These shortcomings of traditional, heavy-lifting spend analysis deployments have prompted some vendors to come up with innovative approaches. Some vendors have espoused artificial intelligence (AI) pattern-matching inference engines to classify messy A/P and PO data into a commodity coding taxonomy (e.g., UNSPSC in the US).

The problem with such an expert system (i.e., auto-classification, a global database of rules, triangulation, etc.) is that, without room for each customer’s idiosyncrasies, it is a “black box” and a “one-size-fits-all” attempt. These vendors “sell” this powerful automated capability as something that works like a charm, “magically” auto-classifying transactions accurately, but the truth is that it doesn’t always work in the real world.

The reason is that whatever universal algorithm the vendor builds must work for all of its customers all of the time. When one tries to apply a one-size-fits-all approach to every customer, it must break at some point. For example, if you try to automate the classification of a transaction description that contains the word “Polaris,” is the rule/AI engine going to classify it as:

  1. An all-terrain vehicle (ATV),

  2. A heart implant device, or

  3. A pool cleaner?

All three options can be correct in theory, so in essence the rule could classify Polaris inaccurately about two thirds of the time. And how many more brand names with multiple possible meanings can we think of (e.g., search for the word “Aurora”)?
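The structural flaw is easy to see in code: a global, one-rule-per-keyword engine can store exactly one answer per term, so every other meaning of an ambiguous brand name is guaranteed to be misclassified. The categories below are hypothetical examples, not any vendor’s actual rules.

```python
# A one-size-fits-all keyword map can hold only ONE category per term.
GLOBAL_RULES = {
    "POLARIS": "All-terrain vehicles",  # what about heart implants or pool cleaners?
    "AURORA": "Lighting equipment",     # an equally ambiguous brand name
}

def classify(description: str) -> str:
    """Classify on the first recognized keyword; there is no second opinion."""
    for token in description.upper().split():
        if token in GLOBAL_RULES:
            return GLOBAL_RULES[token]
    return "UNCLASSIFIED"

# A hospital buying "Polaris" implants still gets the ATV category:
print(classify("Polaris implantable device, qty 2"))  # -> All-terrain vehicles
```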

In another anecdotal example, a customer sent its data to a spend analysis vendor for cleansing and classification, and the AI tool encountered a transaction description containing “Bus. Unit” (meaning “business unit”). Unfortunately, the non-human AI engine interpreted “Bus” as a vehicle and “Unit” as a military group, and classified the transaction as a military vehicle. Because the tool is a black box, no matter how much the (now former) customer complained and tried to get the vendor to change how the “Bus. Unit” transactions were classified, the vendor would not (and could not) change the hard-wired/hard-coded system, because any change would affect all of its customers.

So what ends up happening is that these vendors resort to “tried-and-true” manual processing to try to accurately classify any misclassified spend data (such as the aforementioned “Bus. Unit” or “Polaris” occurrences). But that opens another can of worms: the typical inefficiency problems of manual off-shore processing of spend data.

To somehow appease their customers, these vendors might offer a nice reporting and analysis user interface (UI). But at the end of the day, that is the same approach many of the aforementioned spend analysis vendors take: much time spent on the reporting and analysis UI, while the problems on the data acquisition side of the equation are ignored. Unfortunately, if the data is inaccurate (“garbage in, garbage out”), no one cares what the reporting and analysis tool looks like.

In summary, for AI engines and expert systems to work well, they need to be combined with additional business-specific rules. In this one-on-one setup, customers must have the ability to influence the rules and processing. In other words, spend data cannot be processed in a vacuum without customer input. There are some indications that Zycus has been making major strides in this regard.
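One way to picture this one-on-one setup is a customer-specific rule layer consulted before the shared global engine, so each customer’s idiosyncrasies win without touching the rules every other customer depends on. This is a minimal sketch of the idea, not any vendor’s actual architecture; all rules shown are hypothetical.

```python
# Shared global rules (hypothetical), applied to every customer by default.
GLOBAL_RULES = {"POLARIS": "All-terrain vehicles", "BUS": "Vehicles"}

def classify(description, customer_rules=None):
    """Customer-specific entries override the global map for this customer only."""
    rules = dict(GLOBAL_RULES)
    rules.update(customer_rules or {})
    for token in description.upper().split():
        if token in rules:
            return rules[token]
    return "UNCLASSIFIED"

# A hospital (hypothetical customer) teaches the engine its own meaning of "Polaris":
hospital_rules = {"POLARIS": "Cardiac implants"}
print(classify("Polaris valve", hospital_rules))  # -> Cardiac implants
print(classify("Polaris valve"))                  # -> All-terrain vehicles
```

The key design point is that the override lives with the customer, so fixing a “Bus. Unit”-style misclassification for one customer cannot break anyone else.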

What About Search-based Spend Analysis?

For its part, Endeca has tried to retrofit its nifty search and discovery engine into a spend visibility tool (among other applications; see my earlier blog post on the vendor). The solution works as follows: the vendor builds a custom UI with key terms listed on each Web page, which users may want to link together (such as categories and commodities); once users search on (or select) a term, the search engine finds all similarities, patterns, and trends in the data.
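The select-a-term, narrow-the-data interaction just described is essentially faceted (guided) navigation. The toy sketch below shows the mechanic on a handful of records; the field names and data are hypothetical, and a real engine of this kind indexes millions of rows.

```python
# Faceted navigation sketch: selecting a term filters the record set,
# and counts are recomputed for every other attribute. Data is hypothetical.
records = [
    {"category": "Office supplies", "supplier": "Acme",    "country": "UK"},
    {"category": "Office supplies", "supplier": "Initech", "country": "US"},
    {"category": "IT hardware",     "supplier": "Acme",    "country": "UK"},
]

def select(rows, facet, value):
    """Narrow the result set to rows matching the chosen facet value."""
    return [r for r in rows if r[facet] == value]

def facet_counts(rows, facet):
    """Count the remaining values of another facet (what the UI displays next)."""
    counts = {}
    for row in rows:
        counts[row[facet]] = counts.get(row[facet], 0) + 1
    return counts

uk = select(records, "country", "UK")
print(facet_counts(uk, "supplier"))  # -> {'Acme': 2}
```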

And there have been successful Endeca spend analysis implementations. For example, Euroffice, the UK’s largest online supplier of stationery and office supplies, has been using Endeca to find, e.g., all customers that have bought printer cartridges worth 100 pounds sterling three or more times but have never bought printing paper. Several of the world’s largest car manufacturers use the offering to analyze the costs of honoring their warranties.

I was also able to confirm that Panjiva has purchased a license from Endeca to enable client search capabilities within the Panjiva application. Panjiva is the leading intelligence platform for sourcing executives, with detailed information on 1.5 million companies that do business across borders. Sourcing executives use Panjiva to create short-lists of potential suppliers (with their risk profiles) and to keep tabs on both competitors and existing suppliers.

Not too shabby for a solution that was only recently unveiled. Information discovery is an important component of analytics, which in itself is an important component of the spend analysis application suite. But they’re only components, i.e., “necessary, but not sufficient.”

At this stage, Endeca is still a search technology expert, without many sourcing and spend management experts to help customers with data cleansing and classification. The spend data is searched on an “as-is” basis, so if the customer has misclassified transactions, the resulting information can quickly become corrupt.

With no seasoned in-house category spend experts, Endeca cannot do much about optimal spend data classification. The vendor might suggest the use of an ETL solution such as SilverStream Software before deploying its search engine on the customer’s spend data. But that ETL tool will face the exact same challenges outlined previously.

If one thinks about it, any search and discovery engine is only intended to identify and summarize data that is in a particular system. Without a domain expert’s input, the engine is not designed to change that data, provide trending information, identify opportunities based on cross-sections or discrepancies in the data, or incorporate spend metrics or contracts for performance measuring or tracking maverick spending. It’s just a search engine, after all: remember that Microsoft Bing “information overload” commercial where the guy pretending to be a search engine starts blabbing out all sorts of unrelated and useless facts once someone says a single word to him?

So, What Might Be the Solution Here?

Lately, innovative up-and-coming spend analytics vendors such as Spend Radar and Rosslyn Analytics have been increasingly making it onto spend analysis short-lists, especially in selections with smaller IT budgets to spend (no pun intended). These specialist BI tools are agile enough to access both OLAP cubes and operational databases in their native format. In fact, even more value can be derived from a BI solution when multiple data sources are accessed and composite data mashups are produced.

These vendors’ BI platforms enable end users to combine disparate data sources that are not already mapped within a data warehouse schema. For its part, Spend Radar leverages a dynamic on-demand database that can alter its database schema (adding and altering fields and/or tables as needed) in order to accommodate the needs of the system as those needs change.
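To make the “dynamic schema” idea concrete, here is a toy record store whose column set widens as records with new attributes arrive, instead of being fixed up front. This is purely an illustration of the concept, not Spend Radar’s actual implementation.

```python
# Toy dynamic-schema store: the schema grows on the fly as new fields appear,
# unlike a static DW schema that requires lengthy customization to extend.
# Not Spend Radar's actual implementation -- an illustration of the idea only.
class DynamicStore:
    def __init__(self):
        self.columns = set()
        self.rows = []

    def insert(self, record):
        self.columns |= record.keys()  # schema widens automatically
        self.rows.append(record)

    def table(self):
        """Render all rows against the current (union) schema."""
        cols = sorted(self.columns)
        return [{c: r.get(c) for c in cols} for r in self.rows]

store = DynamicStore()
store.insert({"supplier": "Acme", "amount": 120.0})
store.insert({"supplier": "Initech", "amount": 80.0, "risk_score": "B"})  # new field
print(sorted(store.columns))  # -> ['amount', 'risk_score', 'supplier']
```

Earlier rows simply show the new field as empty, so injecting, say, risk or currency data does not require re-engineering the warehouse.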

Look for a separate article on Spend Radar’s value proposition. For this post, it suffices to say that the vendor provides system rules, algorithms, intelligence, etc. that are tailored on a customer-by-customer basis. This ensures the highest level of spend data accuracy and flexibility for ongoing growth/changes, and enables the vendor to support direct spend or raw goods and materials used in the manufacture of products (whereas its peers can really only handle indirect materials or office supplies and other expenses that do not go into a finished product).

I might be wrong, but I am not currently aware of another spend data management tool like Spend Radar’s that provides system-managed auto-processing and accommodates unlimited spend data taxonomies. By solving this data acquisition problem and by making reporting and analysis intuitive and user friendly via Google-like analytic tools, Spend Radar has made spend analysis fast, accurate, inexpensive, repeatable, and easy to use for hundreds of enterprises (thus solving the aforementioned challenges that most other vendors still face).


Dear readers, what are your opinions in regard to spend analysis solutions? Have you found them useful and beneficial or rather an overkill that has forced you to resort back to manual and semi-automated (Excel-based) spend analysis?


©2014 Technology Evaluation Centers Inc. All rights reserved.