Archive | May 2010

So You Think You’re Ready for a Data Warehouse Appliance, Part 2

Forklift by Bien Stephenson via Flickr (Creative Commons license)

As I wrote in last week’s blog post, a data warehouse appliance simplifies platform and system resource administration. It doesn’t simplify the traditionally time-intensive work of managing and integrating disparate data, or of tuning the performance of the various applications that contend for the same resources.

Many data warehouse appliance vendors offer sophisticated parallel processing environments, query optimization, and specialized storage structures to improve query processing (e.g., columnar-based engines). It’s naïve to think that taking data from an SMP (Symmetric Multi-Processing) relational database and moving it into a parallel processing environment will scale effectively without any adjustments or changes. Moving onto an appliance can be likened to moving into a new house. When you move into a new, larger house, you quickly learn that it’s not as simple as dumping all of your stuff into it. The different dimensions of the new rooms make you realize that some of your old furniture and rugs simply don’t fit. You inevitably have to make adjustments if you want to truly enjoy your new home. The same goes for a data warehouse appliance: it likely has numerous features to support growth and scalability, but you have to make adjustments to leverage them.

Companies that expect to simply dump their data from a few legacy data marts onto a new appliance should expect to make some adjustments, or they’re likely to experience some unpleasant surprises. Here are some that we’ve already seen.

Everyone agrees that the biggest cost in building a data warehouse is ETL design and development. Hoping to migrate existing ETL jobs into a new hardware and processing environment without any rework is short-sighted. While you can probably force-fit your existing job streams, you’ll inevitably misuse the new system, waste system resources, and dramatically reduce the lifespan of the appliance. Each appliance has its own way of handling the intensive resource requirements of data loading, in much the same way that each incumbent database product addresses these same situations. If you’ve justified an appliance through the benefits of consolidating multiple data marts (that contain duplicate data), it only makes sense to consolidate and integrate the ETL processes to prevent processing duplication and waste.
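To make the loading point concrete, here’s a minimal sketch in Python against a generic DB-API connection (the table and column names are hypothetical, and I’m assuming a driver that uses %s placeholders). It contrasts the row-at-a-time habit that often gets carried over from legacy ETL jobs with the set-based pattern that parallel appliances are actually built for:

```python
def load_row_by_row(conn, rows):
    """The habit carried over from legacy ETL: one INSERT per row.
    On a parallel appliance each statement pays coordination overhead,
    so this pattern wastes the hardware you just paid for."""
    cur = conn.cursor()
    for row in rows:
        cur.execute(
            "INSERT INTO sales_fact (sale_id, store_id, amount) "
            "VALUES (%s, %s, %s)",
            row,
        )
    conn.commit()

def load_via_staging(conn, staged_table="sales_stage"):
    """The pattern appliances favor: land the raw data in a staging
    table with the vendor's bulk loader, then transform it with one
    set-based statement that every node executes in parallel."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO sales_fact (sale_id, store_id, amount) "
        "SELECT sale_id, store_id, amount FROM " + staged_table
    )
    conn.commit()
```

Most appliances also ship a dedicated bulk-load utility for landing flat files, and the specifics vary by vendor, which is exactly why the existing job streams deserve review rather than a straight migration.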

Assuming that you won’t have to review the underlying ETL architecture just because you built it with the latest and greatest ETL software is also misguided. While there’s no question that migrating tool-based ETL jobs to a new platform can be much easier than migrating lower-level code, the issue at hand isn’t the source and destination; it’s the underlying table structures. Not every table will change in definition on a new platform, but the largest (and most heavily used) tables are the most likely candidates for review and redesign. Each appliance handles data distribution and database design differently. Consequently, since the underlying table structures are likely to require adjustment, plan on redesigning the associated ETL processes too.
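As an illustration of how differently platforms approach database design, here is roughly how the same fact table might be declared on a traditional SMP database versus two MPP appliances. The dialect details below are approximate and from memory, so treat them as illustrative rather than definitive, and check the vendor documentation:

```python
# Illustrative only: the "same" table, three platforms, three designs.

SMP_ORACLE_STYLE = """
CREATE TABLE sales_fact (
    sale_id   NUMBER,
    store_id  NUMBER,
    amount    NUMBER(12,2)
)
-- single-server design: indexes and tablespaces, not data
-- distribution, drive performance
"""

TERADATA_STYLE = """
CREATE TABLE sales_fact (
    sale_id   INTEGER,
    store_id  INTEGER,
    amount    DECIMAL(12,2)
) PRIMARY INDEX (sale_id)
-- the primary index hashes rows across parallel units
"""

NETEZZA_STYLE = """
CREATE TABLE sales_fact (
    sale_id   INTEGER,
    store_id  INTEGER,
    amount    DECIMAL(12,2)
) DISTRIBUTE ON (sale_id)
-- the distribution column spreads rows across the blades
"""
```

The point isn’t the syntax; it’s that the distribution choice is a design decision the old SMP schema never had to make, and the ETL that populates the table has to respect it.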

I’m also surprised by the casual attitude toward technical training. After all, it’s just a SQL database, right? But application developers and data warehouse development staff need to understand how the appliance product differs (after all, it’s a different database version or product). While most of this knowledge can be gained by reading the manuals, when was the last time your DBAs or database developers actually had a full set of manuals, much less the time required to read them? The investment in training isn’t significant, usually just a few days of classes. If you’re going to give your developers a product that claims to be bigger, better, and faster than its competitors, doesn’t it make sense to prepare them adequately to use it?

There’s also an assumption that, since most data warehouse appliance vendors are software-only, there are no hardware implications. On the contrary, you should expect to change your existing hardware. The way memory and storage are configured on a data warehouse appliance can differ from a general-purpose server, yet it’s still rare for the hardware costs to be factored into the development plan. And the belief that older servers can be repurposed has turned out to be a myth. If you’re attempting to support more storage, more processing, and more users, how can using older equipment (with its higher maintenance costs) make financial sense?

You could certainly fork-lift your data, leave all the ETL jobs alone, and not change any processing. Then again, you could skip the new data warehouse appliance entirely, save a fortune, and simply do nothing. After all, no one argues with the savings associated with doing nothing, except, of course, the users who need the data to run your business.

photo by Bien Stephenson via Flickr (Creative Commons License)

So You Think You’re Ready for a Data Warehouse Appliance

Toasters by meddygarnet via Flickr

Many of our clients have asked us whether it’s time to consider replacing their aging data warehouses with data warehouse appliance technologies. I chalk up this emerging interest to the reality that data warehouse life spans are 3 to 4 years and platforms need to be refreshed, a point underscored by the recent crop of announcements from vendors like Oracle and Teradata, along with the high visibility of newer players like Netezza, Paraccel, and Vertica.

The benefit of a data warehouse appliance is that it includes all of the hardware and software in a preconfigured solution that dramatically simplifies running and managing a data warehouse. (Some vendors have taken that one step further and actually sell software that is set up to work with specially defined commodity hardware configurations.) Given the price/performance differences between the established data warehouse products and the newer data warehouse appliances, it only makes sense to consider these products as alternatives to simply upgrading the hardware.

The data warehouse appliance market is arguably not new. In the 1980s, companies like Britton-Lee and Teradata argued that database processing was different and would perform better with purpose-designed hardware and software. Many have also forgotten that these pioneers argued that commodity microprocessors vastly exceeded the price/performance of their mainframe competitors.

The current-generation appliance vendors have been invited to the table because of the enormous costs that have grown up around managing the enormous data volumes and operational access associated with today’s data warehouses. Most IT shops have learned that database scalability doesn’t just mean throwing more hardware and storage at the problem. The challenge in managing these larger environments is understanding the dynamics of the data content and the associated processing. That’s why partitioning the data across multiple servers or simply removing history doesn’t work: for every shortcut taken to reduce the data quantity, there’s an equal impact to user access and the single version of truth. This approach also makes data manipulation and even system support dramatically more complicated.
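A quick way to see what “understanding the dynamics of the data” means in practice: parallel platforms typically hash each row to a server based on a distribution key, and a poorly chosen key piles the data onto a few nodes, which then gate every query. Here’s a small Python sketch (the table and column names are invented for illustration) that shows the effect:

```python
from collections import Counter

def node_for(value, nodes=8):
    """Hash-distribute a row to a node by its distribution-key value,
    the way MPP databases spread data across servers."""
    return hash(value) % nodes

def rows_per_node(rows, key_index, nodes=8):
    """Count rows landing on each node for a candidate distribution key.
    A low-cardinality key (e.g., region) concentrates the data, and the
    busiest node then limits every parallel scan."""
    counts = Counter(node_for(row[key_index], nodes) for row in rows)
    return [counts.get(n, 0) for n in range(nodes)]

# Hypothetical rows: (order_id, region)
rows = [(i, "NORTH" if i % 10 else "SOUTH") for i in range(100_000)]

print(rows_per_node(rows, key_index=0))  # order_id: spread roughly evenly
print(rows_per_node(rows, key_index=1))  # region: only two nodes get data
```

Knowing which columns make good keys requires knowing the data and its usage, which is exactly the analysis the shortcut-takers skip.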

It’s no surprise that these venture-capital-backed firms would focus on delivering a solution that is simpler to configure and manage. The glossy sales message of data warehouse appliance vendors comes down to something like: “We’ve reduced the complexity of running a data warehouse. Just install our appliance like a toaster, and watch it go!” There’s no question that many of these appliance vendors have delivered when it comes to simplifying platform management and configuration; the real challenge is addressing the management and configuration issues that impact a growing data warehouse: scalable load processing, a flexible data architecture, and manageable query processing.

We’ve already run into several early adopters who think all that’s necessary is to simply fork-lift their existing data warehouse structures onto their new appliance. While this approach may work initially, the longevity of the appliance, or at least its price/performance rationale, will soon evaporate. These new products can’t work around bad data, poor design habits, and the limitations of duplicate data; their power is in providing scalability across enormous data and processing volumes. An appliance removes the complexities of platform administration, but no matter what appliance you purchase, and no matter how much horsepower it has, data architecture and data administration are still required.

To leverage the true power of an appliance, you have to expect to focus effort on integrating data in a structure that plays to the scalability strengths of the product. While the appliances are SQL-based, the way they process loads, organize data, and handle queries can be dramatically different from the incumbent data marts and data warehouses they replace. It’s naïve to think that a new appliance can provide processing scalability without any adjustments; if it were that simple, the incumbent vendors would have already packaged that into their existing products.
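As one toy illustration of “organizing data differently,” consider a single-column aggregate against a row-oriented layout versus a column-oriented one. This is a deliberately simplified Python sketch, not how any real engine works, but it shows why the same query can touch radically different amounts of data depending on the storage design:

```python
import time

NUM_ROWS, NUM_COLS = 200_000, 20

# Row store: each row is one tuple, so summing one column still
# touches every 20-column row.
row_store = [tuple(range(NUM_COLS)) for _ in range(NUM_ROWS)]

# Column store: each column lives in its own list, so a one-column
# aggregate reads only that column.
column_store = {c: [c] * NUM_ROWS for c in range(NUM_COLS)}

start = time.perf_counter()
total_row = sum(r[5] for r in row_store)   # walks all rows
row_time = time.perf_counter() - start

start = time.perf_counter()
total_col = sum(column_store[5])           # walks a single column
col_time = time.perf_counter() - start

assert total_row == total_col              # same answer, different work
print(f"row layout: {row_time:.4f}s  column layout: {col_time:.4f}s")
```

A schema and ETL stream designed for one layout rarely exploits the other, which is why adjustments are unavoidable.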

In Part 2 of this post, I’ll elaborate on the faulty assumptions of many companies that acquire data warehouse appliances, and warn you against making these same mistakes.

photo by meddygarnet via Flickr (Creative Commons License)
