Standardizing Data Migration

In the motion picture industry, studios separate the responsibilities for creating content from the responsibilities for distributing it. The people who make the movies option the scripts, hire the talent, and film the scenes. The distributors, on the other hand, figure out how to package and deploy the films. They need to know which theaters require 35 millimeter versus 70 millimeter prints, or even IMAX. They also deal with DVD packaging, including the different international DVD formats. The industry understands the importance of a supply chain that differentiates between the roles of content creation, content packaging, and distribution.
In IT we’re very quick to point to our operational systems as creators and owners of data. But maybe the solution is that IT establishes a functional team that’s responsible for data packaging and distribution, just like the movie industry.
Traditionally data formats and standards have fallen into the realm of the architecture team. Unfortunately this is typically a paper-only activity without teeth. A data distribution team wouldn’t focus on paperwork. They would be focused on data logistics, receiving content from the various source systems and packaging the data for consumption by other systems. This isn’t about implementing a specific platform to store or move data. It’s about active management of corporate data content.
One of the biggest development challenges is the hunting expedition that developers go on to find and acquire the data they need. Most aren’t aware of all their choices, let alone the optimal systems of record.
Currently, every application, data mart, data warehouse, or reporting system that needs data from another system follows its own set of procedures to obtain that data. Each system requests different data formats, different delivery schedules, and different content. Everything is custom, there are few if any standards, and there are no economies of scale.
Separating these roles would also relieve the various application teams of building and maintaining a never-ending volume of custom extract requests. The only way to stop the madness is to separate content creation from data packaging and distribution: establish a data supply chain that distinguishes data creators from data distributors from data consumers. Who knew IT infrastructure was just like the movies?
Improving BI Development Efficiency: Standard Data Extracts

A few years ago, a mission to Mars failed because one team's software reported values in U.S. measurement units while the receiving software expected metric units, and the conversion was never made.
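The failure is a textbook example of what happens when an interface doesn't carry its units. Here is a minimal sketch of the same class of bug; the numbers, function names, and the miles-to-kilometers pairing are purely illustrative, not the actual flight software:

```python
MILES_TO_KM = 1.609344

def reported_distance_miles():
    """One team's system reports a value in U.S. units (miles)."""
    return 120.0

def plan_maneuver(distance_km):
    """The receiving system assumes the value is already metric (kilometers)."""
    return f"maneuver planned at {distance_km:.1f} km"

# The silent failure: nothing converts the value, and nothing complains.
print(plan_maneuver(reported_distance_miles()))                 # off by a factor of ~1.6

# The boring, procedural fix: standardize the unit at the interface.
print(plan_maneuver(reported_distance_miles() * MILES_TO_KM))
```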
I thought of this fiasco when reading a recent blog post that insisted the only reasonable approach for moving data into a data warehouse is to position the data warehouse as the "hub" in a hub-and-spoke architecture. The assumption is that data is formatted differently on diverse source systems, so the only practical approach is to copy all of this data onto the data warehouse, where other systems can retrieve it.
I’ve written about this topic in the past, but I wanted to expand a bit. I think it’s time to challenge this paradigm for the sake of BI expediency.
The problem is that the application systems aren't responsible for sharing their data. Consequently, little or no effort goes into pulling data out of an operational system and making it available to others. That forces every data consumer to understand the unique data in every system, which is neither efficient nor scalable.
Moreover, the hub-and-spoke architecture itself is neither efficient nor scalable. The way manufacturing companies address their distribution challenges is by insisting on standardized components. Thirty-plus years ago, every automobile seemed to have a set of parts unique to that model. Auto manufacturers soon realized that if they established specifications under which parts could be applied across models, they could reuse those parts, giving them scalability not only across different cars but across different suppliers.
It's interesting to me that application system owners aren't measured on these two responsibilities:
- Business operation processing: ensuring that business processes are automated and supported effectively
- Supplying data to other systems
No one would dispute that the integrated nature of most companies requires data to be shared across multiple systems. The data those systems generate should be standardized: application systems should extract data and package it in a consistent, uniform fashion so that it can be used by many other systems, including the data warehouse, without the consumer struggling to understand the idiosyncrasies of the system it came from.
Application systems should be obligated to establish standard processes whereby their data is made available on a regular basis (weekly, daily, etc.). And since most extracts are column-record oriented, the individual values should be standardized: formatted and named the same way everywhere.
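To make that concrete, here is a minimal sketch of what such an extract contract could look like. The field names, formats, and CSV packaging are illustrative assumptions, not a prescribed standard; the point is simply that every source system publishes the same columns, under the same names, in the same formats.

```python
import csv
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ExtractField:
    name: str          # the one agreed-upon name, reused by every source system
    dtype: str         # "string", "decimal", "date" (ISO 8601), ...
    description: str

# A hypothetical standard customer extract, version 1.
CUSTOMER_EXTRACT_V1 = [
    ExtractField("customer_id",   "string",  "enterprise-wide customer identifier"),
    ExtractField("customer_name", "string",  "legal name, not the billing alias"),
    ExtractField("open_date",     "date",    "ISO 8601 (YYYY-MM-DD)"),
    ExtractField("credit_limit",  "decimal", "USD, two decimal places"),
]

def write_extract(rows, path):
    """Every source system writes the same columns, in the same order, with the
    same formats, so consumers never have to reverse-engineer the layout."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[f.name for f in CUSTOMER_EXTRACT_V1])
        writer.writeheader()
        writer.writerows(rows)

# Example: the nightly extract from a (hypothetical) billing system.
write_extract(
    [{"customer_id": "C-1001",
      "customer_name": "Acme Corp",
      "open_date": date(2009, 4, 1).isoformat(),
      "credit_limit": "25000.00"}],
    "customer_extract.csv",
)
```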
Can you modify every operational system to produce a clean, standard extract file on Day 1? Of course not. But as new systems are built, standard extracts should be part of the design. For every operational system that complies, a company can save hundreds or even thousands of hours of development and processing time every week. Think of what your BI team could do with the resulting time (and budget money)!
Your Company’s Data Supply Chain

At Baseline Consulting we've been talking for several years about the concept of a data supply chain. But IT executives are only now starting to catch on to its importance.
Over the past 15 years there has been a big push to standardize on off-the-shelf software, which allowed IT organizations to buy instead of build. We've migrated from proprietary architectures to Windows and Linux standards. We've gone from custom-built applications to packaged CRM and ERP applications. IT adopted this approach because its value lies in automating business processes and supporting analysis, not in inventing new technologies. The problem is that moving data between all of these "packaged systems" still requires custom code.
There's no question that middleware provides value: it delivers the pre-built data pipes. Unfortunately, these are toolkits requiring developers to write code to connect their packages to the pipes. Most CIOs are blissfully unaware of the amount of custom coding middleware requires. Trust me: IT spends an enormous amount of money on supporting such data migration solutions. Many IT shops still view middleware as sacred ground.
The data warehousing world has enthusiastically adopted ETL tools to reduce custom coding so teams can focus on data accuracy and usability. One fact lost in translation is that ETL integrates data; it's more than just a pipe. The application world has adopted EAI, ESB, and orchestration to move data more quickly. However, there's no integration: each application is responsible for integrating the data it receives.
So, there's even more custom code. Code to connect an application to the pipes. Code to integrate and cleanup the data they receive from the pipes.
Custom code to move data around isn't the answer. Orchestration, message passing, and data movement just create a labyrinth of pipes. There are no economies of scale. The data doesn't get better.
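To illustrate the distinction the last few paragraphs draw, here is a minimal sketch, using hypothetical field names and mappings, of the difference between a pipe that merely moves a payload and an integration step that standardizes it on the way through.

```python
# A "pipe": the payload arrives exactly as the source system sent it, so every
# consumer still has to decode that system's idiosyncrasies on its own.
def pipe(message: dict) -> dict:
    return message

# Hypothetical mapping from one source system's column names to the standard ones.
SOURCE_TO_STANDARD = {"cust_no": "customer_id", "nm": "customer_name"}

# Integration: rename, reformat, and clean once, near the source, so every
# downstream consumer sees the same standard record.
def integrate(message: dict) -> dict:
    record = {SOURCE_TO_STANDARD.get(key, key): value for key, value in message.items()}
    record["customer_name"] = record["customer_name"].strip().title()
    return record

raw = {"cust_no": "C-1001", "nm": "  acme corp "}
print(pipe(raw))       # {'cust_no': 'C-1001', 'nm': '  acme corp '}
print(integrate(raw))  # {'customer_id': 'C-1001', 'customer_name': 'Acme Corp'}
```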
Walmart learned years ago that it was impractical to have a custom (and separate) distribution system for every supplier. It knew the cost benefits of a standard distribution system, which meant standardizing the size of the trailers, the size of the boxes, and the way the boxes were packed and shipped. The benefit of a supply chain is that standardization occurs at the most cost-effective point: the source. Walmart's distribution success was measured by its ability to take on new suppliers and manage more shipments.
Most CIOs don't recognize that they have a data supply chain. Instead of building a custom distribution system for each suppler (each business application), they should be focused on a single data supply chain. Middleware supports the creation of custom distribution solutions, but not the standardization of data. A data supply chain can only be successful if the data is standardized. Otherwise everyone is forced to write custom code to standardize, clean, and integrate the data.