“Big Data” in the Age of Mass Customization and Product Proliferation

Many organizations struggle to make sense of their big data, owing in part to product proliferation. Traditional analytics and data management tools simply do not provide visibility into the sheer number of product variants and options on offer, or into customer buying trends. See how Emcien’s pattern-based analytics enables analysis of customer buying patterns so that companies can improve the customer experience and increase profits.
For decades, individual business users and enterprises have been trying to unlock their transactional and other data volumes to extract the coveted “actionable intelligence.” Yet it is no secret that traditional analytics tools have not been friendly to ordinary (i.e., non-techie, non-power-user) business users. I for one will admit to still grappling with the concepts of pivot tables in Microsoft Excel, let alone using PowerPivot and other recent add-on features. The other issue is that users must know what they are looking for beforehand (based on experience, intuition, and whatnot) so that pesky online analytical processing (OLAP) cubes and queries can even be set up.
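For readers still wrestling with the concept, a pivot table (or an OLAP cube, in its grown-up form) is essentially a rollup along dimensions chosen in advance. The minimal pandas sketch below, with invented data and column names, shows exactly why one must know what to look for beforehand: region and product have to be picked as the axes before any summary exists.

```python
import pandas as pd

# Hypothetical transactions; real data would come from a database or
# data warehouse rather than an inline frame.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [100, 150, 200, 120, 80],
})

# A pivot table rolls revenue up along dimensions chosen in advance --
# the user must already know that region and product are the axes of interest.
summary = pd.pivot_table(sales, values="revenue",
                         index="region", columns="product",
                         aggfunc="sum", fill_value=0)
print(summary)
```

Anything not named as an index or column up front (say, sales rep or order channel) simply never shows up in the answer.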

To address this conundrum, some vendors have come up with nifty business intelligence (BI) visualization tools such as dashboards and scorecards that non-information technology (IT) business users, particularly executives, can personalize and use without much training. But again, to configure a dashboard or scorecard, one must know what to look for beforehand, and the offered sets of parameters to choose from can quickly become irrelevant in today’s fast-moving business environment.

In summary, databases/data warehouses, OLAP cubes, and other BI tools, as well as Apache Hadoop “big data” parsing processes, are designed for IT departments, not business unit managers. These processes only look for specific data; they do not discover new relevant data or hidden correlations. They cannot solve specific business problems, reveal actionable insight, or recommend specific courses of action, for the following reasons:

  • Their focus is on data storage and reporting (again, they are designed as tools for seasoned IT staff).
  • They don’t offer much out-of-the-box (OOTB) functionality.
  • They require massive IT support and/or an experienced team of data scientists.
  • They will not automatically reveal critical business information that users don’t think to ask for.
  • They typically tackle only one of the three dimensions of the big-data challenge: velocity, volume, or variety—but not all three.
  • Not all business problems are “hadoop-able,” i.e., amenable to data-intensive distributed applications.

To illustrate the last bullet point, large sparse graphs are not “hadoop-able.” There is no obvious way to cut them up, yet cutting data up is exactly what Hadoop does in big-data situations: it breaks the problem (the data) into more manageable pieces and sends them out to different processors. That parsing inevitably severs the naturally occurring connections, correlations, and patterns, and distributed analytics cannot recapture them afterward.
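A toy sketch in plain Python (not actual Hadoop or MapReduce code, and with an invented edge list) makes the point concrete: once a graph’s edges are split across independent workers, no single worker can see that nodes 1 through 4 form one connected chain.

```python
# Toy illustration: splitting a graph's edge list into independent chunks
# loses any path that crosses a chunk boundary.
edges = [(1, 2), (3, 4), (2, 3), (5, 6)]  # 1-2-3-4 is one connected chain

def components(edge_list):
    """Union-find over one chunk of edges; sees only local connectivity."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edge_list:
        parent[find(a)] = find(b)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

# Whole graph: {1, 2, 3, 4} and {5, 6}.
print(components(edges))

# Split across two "workers": the 2-3 edge lands in the second chunk, and
# neither chunk alone can tell that 1..4 belong to a single component.
chunk_a, chunk_b = edges[:2], edges[2:]
print(components(chunk_a), components(chunk_b))
```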

Data Discovery Apps Help (to a Degree)

Jorge García’s recent blog post discusses a slew of data discovery tools specifically aimed at connecting users to a wide variety of data source types (structured, semi-structured, and unstructured) and letting them freely explore the data within. There are no predefined drill paths, so business users can interact with the data the way they want and easily create visualizations that suit their own purposes. As such, these tools boast a flexibility and freshness that traditional BI solutions, which tend to follow a stricter methodology with structured exploration paths and well-defined rules, find hard to match.

Data discovery tools, aka “agile BI” tools, give non-technical users some freedom to search, locate, and explore information in a friendly but serious way to uncover insight. But even these solutions cannot handle the sheer volume, variety, and velocity of the so-called big data phenomenon in some business situations. Think of law enforcement and anti-terrorism work, which must both tactically and strategically gather information on criminally predicated people, groups, locations, and events. Imagine the ability to monitor real-time streaming text, in the form of social media and other data streams, to surface critical correlations among the kinds of words and activities that merit serious attention. Then imagine that a non-IT user could identify and geolocate “persons of interest” and their networks, zooming in on the kinds of connections and conversations that are cause for concern. Traditional BI and discovery tools cannot really assess and pinpoint persons of interest, revealing their identities, associations, communications, and locations.
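To make the streaming-text idea slightly more concrete, here is a deliberately minimal Python sketch; the watchlist, the messages, and the co-occurrence logic are all invented for illustration, and a real system would consume live feeds with far richer entity extraction than simple word matching.

```python
from collections import Counter
from itertools import combinations

# Invented watchlist for illustration only.
WATCHLIST = {"shipment", "meetup", "warehouse", "transfer"}
pair_counts = Counter()

def ingest(message: str) -> None:
    """Count watchlist terms that co-occur within a single message."""
    hits = sorted({w for w in message.lower().split() if w in WATCHLIST})
    pair_counts.update(combinations(hits, 2))

for msg in ["Warehouse meetup tonight",
            "Transfer after the meetup",
            "warehouse transfer confirmed"]:
    ingest(msg)

# Pairs that keep recurring are the correlations that merit a closer look.
print(pair_counts.most_common(3))
```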

Variety: A Big Data Problem for Supply Chains

One real-life example is handling the proliferation of product variants in this era of mass customization and fastidious, fickle customers. Pleasing every customer whim and wish comes at a hefty cost for multiple departments in an enterprise, including sales, product development/engineering, and supply chain operations (manufacturing and inventory management). Traditional data management tools don’t factor in parts, configurations, and other data dimensions based on customer buying patterns. They cannot monitor customers’ buying patterns by, say, product line, option/variant, region, and market segment, nor compute the most popular feature groups by market segment. Nor can they detect trends by product mix, market segment, and features for a product line with hundreds or even thousands of possible features.
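To give a mechanical sense of what monitoring buying patterns by segment involves, here is a minimal pandas sketch with invented order data and column names; it shows the kind of multi-dimension rollup at stake, though a product line with thousands of optional features quickly outgrows such a simple cut.

```python
import pandas as pd

# Hypothetical order lines; segments, features, and numbers are invented.
orders = pd.DataFrame({
    "segment":       ["retail", "retail", "fleet", "fleet", "retail"],
    "region":        ["NA", "EU", "NA", "NA", "EU"],
    "feature_group": ["sunroof+awd", "sunroof", "awd+tow",
                      "awd+tow", "sunroof+awd"],
    "units":         [3, 5, 12, 9, 4],
})

# Roll units up by market segment and feature group -- the "most popular
# feature groups by market segment" described above, as a two-dimension cut.
popularity = (orders.groupby(["segment", "feature_group"])["units"]
                    .sum()
                    .sort_values(ascending=False))
print(popularity)
```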

Yet the product mix is the single biggest driver of material cost, inventory, and sales velocity. The lack of visibility into actual sales of mushrooming product variants and options, and into buying trends, typically leads to poor demand planning and slow response times. When supply chain operations are disconnected from actual customer buying patterns, volatility can spread across the entire enterprise, with a potential revenue reduction of up to 25 percent.

Too many choices overwhelm both the seller and the buyer (customer), reduce sales productivity, and create unneeded stock-keeping unit (SKU) proliferation. Sales representatives can spend a significant part of their time defining, configuring, and pricing solutions, and then tracking deliveries. Rampant SKU proliferation with no visibility into actual customer buying patterns also hurts the supply chain through increased total overhead cost, high inventory exposure, high supplier risk, high volatility, and poor consumption of forecast demand by actual orders. Factor in the effect of handling all of those product variants on product management, and the total impact on revenue can be even more substantial.

All too often, supply chain managers, product managers, and sales executives don’t keep up with the number of product variants within their organization. The automotive industry, for example, will brag about the ability to manufacture a custom car in a few days. Yet if that car sits on the lot for weeks or months, what’s the point of making so many so quickly? SKU rationalization, i.e., determining the smallest possible set of configurations that captures demand most efficiently, can improve product availability, reduce inventory, improve parts usage, and raise capital utilization (inventory turns).
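One simple way to frame SKU rationalization is as a coverage problem: keep the fewest configurations that still capture most of the historical demand. The greedy Python sketch below uses invented numbers and a naive heuristic; it is one way to approach the problem, not Emcien’s (or any vendor’s) actual method.

```python
# Invented demand history: configuration -> units ordered.
demand = {
    "base":             500,
    "base+sunroof":     300,
    "base+awd":         150,
    "base+sunroof+awd":  40,
    "base+tow":          10,
}

def rationalize(demand, coverage=0.95):
    """Greedily keep the best-selling configurations until the kept set
    covers the target share of total unit demand."""
    total = sum(demand.values())
    kept, covered = [], 0
    for config, units in sorted(demand.items(), key=lambda kv: -kv[1]):
        kept.append(config)
        covered += units
        if covered / total >= coverage:
            break
    return kept

# Three of five configurations cover 95% of demand in this toy data,
# so the remaining long-tail SKUs are candidates for retirement.
print(rationalize(demand))
```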
