
Is Your IT Team Prepared for a Data-Driven Business?

I wrote last time about the challenges that companies face in their transition to becoming data-driven.  Much has been written about the need for the business audience to embrace change. I thought I'd spend a few words discussing the other participant in a company's data-driven transition: the Information Technology (IT) organization.

One of the issues that folks rarely discuss is that many IT organizations haven't positioned themselves to support a data-driven culture.  While most have spent a fortune on technology, the focus is always on installing hardware, building platforms, acquiring software, developing architectures, and delivering applications.  IT environments focus on streamlining the construction and maintenance of systems and applications.  While this is important, it's only half the solution for a data-driven organization.  A data-driven culture (or philosophy) requires that all of a company's business data be accessible and usable.  Data has to be packaged for sharing and use.

Part of the journey to becoming data-driven is ensuring that there’s a cultural adjustment within IT to support the delivery of applications and data.  It’s not just about dropping data files onto servers for users to copy. It’s about investing in the necessary methods and practices to ensure that data is available and usable (without requiring lots of additional custom development). 

Some of the indicators that your IT organization isn't prepared or willing to be data-driven include:

  • There’s no identified Single Version Of Truth (SVOT).

There should be one place where the data is stored.  While this is obvious, the lack of a single agreed-upon data location creates the opportunity for multiple data repositories and multiple (and conflicting) sets of numbers.  Time is wasted disputing accuracy instead of focusing on business analysis and decision making.

  • Data sharing is a courtesy, not an obligation.

How can a company be data-driven if finding and accessing data requires multiple meetings and multiple approvals for every request? If we're going to run the business by the numbers, we can't waste time begging or pleading for data from the various system owners.  Every application system should have two responsibilities: processing transactions and sharing data.

  • There’s no investment in data reuse.

The whole idea of technology reuse has been a foundational philosophy for IT for more than 20 years: build once, use often.  While most IT organizations have embraced this for application development, it's often overlooked for data.  Unfortunately, data sharing activities are often built as one-off, custom endeavors.  Most IT teams manage hundreds or thousands of file extract programs (for a single system) and have no standard for moving data packets between applications.  There's no reuse; every new request gets its own extract or service/connection.

  • Data accuracy and data correction are no one's responsibility.

Most IT organizations have invested in data quality tools to address data correction and accuracy, but few ever use them.  It's surprising to me that any shop allows new development to occur without requiring the inclusion of a data inspection and correction process.  How can a business person become data-driven if they can't trust the data?  How can you expect them to change if IT hasn't invested in fixing the data when it's created (or at least when it's shared)?

Enabling IT to support a data-driven transition isn't realistic without investment.  You can't expect staff members who are busy with their existing duties to absorb additional responsibilities (after all, most IT organizations have a backlog).  If a company wants to transition to a data-driven philosophy, you have to allow the team members to learn new skills to support the additional activities.  And there need to be staff members available to do the work.

There's only one reason to transition to being a data-driven organization: more profit, more productivity, and more business success.  Consequently, there should be funds available to allow IT to support the transition.

Your Company's Problem with Data May Be Fear

I just read an article by Ethan Knox, "Is Your Company Too Dumb to be Data Driven?", and was intrigued to see him raise issues that many people have discussed for years.  I've spent nearly half my career helping clients make the transition from running the business by tribal knowledge and gut instinct to running the business by facts and numbers.  It's a hard transition, one that takes vision, motivation, discipline, and courage to change.  It also takes a willingness to learn something new.

While this article covers a lot of ground, I wanted to comment on one of the points made in the article: the mistake of "build it and they will come".  This occurs when an organization is enthusiastic about data and decides to build a data warehouse (or data lake) and load it with all the data from the company's core application systems (sales, finance, operations, etc.).  The whole business case depends on users flocking to the system, using new business intelligence or reporting tools, and uncovering numerous high-value business insights.  All too often, the result is a large monolithic data platform that contains lots of content but hasn't been designed to support analysis or decision making by the masses.

There are numerous problems with this approach, and the path to data and analytics enlightenment is littered with companies that took it.  Don't assume that successful companies that have embraced data and analytics didn't make this mistake (it's a very common one).  Successful companies were those willing to learn from their mistakes, and they have a culture where new project efforts are carefully scoped to allow mistakes, learning, and evolution.  It's not that they're brilliant; successful companies understand that transitioning to being a data-driven company requires building knowledge.  And the process of learning takes time, includes mistakes, requires self-analysis, and must be managed and mentored carefully.  They design their projects assuming mistakes and surprises will occur, so they fail fast and demand continual measurement and corrective action.  It's not about the methodology or development approach; a fail-fast philosophy can work with any development methodology (agile, iterative, waterfall).  The path to data enlightenment will include lots of mistakes.

Do you remember high school math?  When you were presented with a new concept, you were given homework that allowed you to learn, gain experience, and understand the concept through the act of “doing”.  Homework was often graded based on effort, not accuracy (if you did it, you got credit, whether or not it was correct).  Where is it written that (upon graduation) learning something new wouldn’t require the act of “doing” and making mistakes to gain enlightenment?  By the way, who has ever succeeded without making mistakes?

The point the article references frequently is that business engagement is critical.  It's not about the users participating only a few times (requirements gathering and user acceptance testing); it's about users being engaged to review results and participate in the measurement and corrective action.  It's about evolving from a culture where the relationship is customer/provider to a team where everyone succeeds or fails based on business measurement.

It's not that a company is too dumb to succeed with data; it's that the company is often too fearful of mistakes to succeed.  And in a world of imperfect data, exploding data volumes, frequent technology changes, and a competitive business environment, mistakes are an indication of learning.  Failure isn't a reflection of mistakes; it's a reflection of poor planning, lack of measurement, and an inability to take corrective action.

Data Strategy Component: Assemble


This blog is the 4th in a series focused on reviewing the individual Components of a Data Strategy.  This edition discusses the component Assemble and the numerous details involved with sourcing, cleansing, standardizing, preparing, integrating, and moving the data to make it ready to use.

The definition of Assemble is:

“Cleansing, standardizing, combining, and moving data residing in multiple locations and producing a unified view”

In the Data Strategy context, Assemble includes all of the activities required to transform data from its host-oriented application context to one that is “ready to use” and understandable by other systems, applications, and users.

Most data used within our companies is generated by the applications that run the company (point-of-sale, inventory management, HR systems, accounting).  While these applications generate lots of data, their focus is on executing specific business functions; they don't exist to provide data to other systems.  Consequently, the data they generate is "raw" in form; it reflects the specific aspects of the application (or system of origin).  This often means that the data hasn't been standardized, cleansed, or even checked for accuracy.  Assemble is all of the work necessary to convert data from a "raw" state to one that is ready for business usage.

I've identified 5 facets to consider when developing your Data Strategy that are commonly employed to make data "ready to use".  As a reminder (from the initial Data Strategy Component blog), each facet should be considered individually.  And because your Data Strategy will address future aspirations as well as current needs, you'll likely want to consider different options for each.  Each facet can target a small organization's issues or expand to focus on a large company's diverse needs.

Identification and Matching

Data integration is one of the most prevalent data activities occurring within a company; it’s a basic activity employed by developers and users alike.   In order to integrate data from multiple sources, it’s necessary to determine the identification values (or keys) from each source (e.g. the employee id in an employee list, the part number in a parts list).  The idea of matching is aligning data from different sources with the same identification values.   While numeric values are easy to identify and match (using the “=” operator), character-based values can be more complex (due to spelling irregularities, synonyms, and mistakes). 
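
To make the matching idea concrete, here's a minimal Python sketch that pairs an exact match on a numeric key with a fuzzy comparison of a character field, using only the standard library.  The sample records, field names, and the 0.75 threshold are illustrative assumptions, not a prescription for any particular matching tool.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a 0..1 similarity score after basic normalization."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

# Numeric keys can be matched exactly; character-based values need a tolerance
# for spelling irregularities, synonyms, and simple mistakes.
employees_hr      = [{"emp_id": 1001, "name": "Jonathan Smith"}]
employees_payroll = [{"emp_id": 1001, "name": "Jon Smyth"}]

MATCH_THRESHOLD = 0.75  # illustrative cutoff; tune it against real data

for hr in employees_hr:
    for pay in employees_payroll:
        if hr["emp_id"] == pay["emp_id"]:                # exact match on the key
            score = similarity(hr["name"], pay["name"])  # fuzzy check on the name
            status = "match" if score >= MATCH_THRESHOLD else "needs review"
            print(hr["emp_id"], round(score, 2), status)
```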

Even though it's highly tactical, identification and matching is important to consider within a Data Strategy to ensure that data integration is processed consistently.  One of the main reasons that data variances continue to exist within companies (despite their investments in platforms, tools, and repositories) is that the need for standardized identification and matching has not been addressed.

Survivorship

Survivorship is a pretty basic concept: the selection of the values to retain (or survive) from the different sources that are merged.  Survivorship rules are often unique to each data integration process and typically determined by the developer.  In the context of a data strategy, it's important to identify the "systems of reference" because identifying these systems gives developers and users clarity about which data elements to retain when integrating data from multiple systems.
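
A simple way to picture survivorship is a priority list of systems of reference: when the same attribute arrives from several sources, the value from the highest-priority source survives.  This Python sketch assumes a made-up ordering and field names purely for illustration.

```python
# Illustrative priority order: the CRM is the system of reference for this attribute.
SYSTEM_PRIORITY = ["crm", "erp", "legacy"]

def survive(candidates):
    """candidates maps source system -> proposed value for a single attribute."""
    for system in SYSTEM_PRIORITY:
        value = candidates.get(system)
        if value:               # first non-empty value from the preferred source wins
            return value
    return None

email = survive({"legacy": "jsmith@old.example.com",
                 "crm": "john.smith@example.com",
                 "erp": ""})
print(email)  # -> john.smith@example.com
```

Documenting a rule like this once, rather than leaving it to each developer, is exactly the clarity the strategy should provide.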

Standardize / Cleanse

The premise of data standardization and cleansing is to identify inaccurate data and to correct and reformat the data to match the requirements (or the defined standards) for a specific business element.  This is likely the single most beneficial process for improving the business value (and the usability) of data.  The most common challenge with data standardization and cleansing is that it can be difficult to define the requirements.  The other challenge is that most users aren't aware that their company's data isn't standardized and cleansed as a matter of practice.  Even though most companies have multiple tools to clean up addresses, standardize descriptive details, and check the accuracy of values, the use of these tools is not common.
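
As a small illustration, here's a Python sketch of the kind of rules involved: stripping a phone number down to a standard form and mapping state spellings to an agreed code.  The rules and lookup values are invented for the example; real cleansing work would typically lean on the data quality tools mentioned above.

```python
import re

STATE_STANDARDS = {"calif.": "CA", "california": "CA", "ca": "CA"}  # illustrative rules

def cleanse_phone(raw):
    """Strip formatting and keep only values that meet the 10-digit standard."""
    digits = re.sub(r"\D", "", raw)
    return digits if len(digits) == 10 else None  # reject values that fail the standard

def standardize_state(raw):
    return STATE_STANDARDS.get(raw.strip().lower())

print(cleanse_phone("(415) 555-0123"))  # -> 4155550123
print(standardize_state("Calif."))      # -> CA
```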

Reference Data  

Wikipedia defines reference data as data that is used to classify or categorize other data.  In the context of a data strategy, reference data is important because it ensures the consistency of data usage and meaning across different systems and business areas.  Successful reference data means that details are consistently identified, represented, and formatted the same way across all aspects of the company (if the color of a widget is "RED", then the value is represented as "RED" everywhere, not "R" in the product information system, 0xFF0000 in the inventory system, and 0xED2939 in the product catalog).  A reference data initiative is often aligned with a company's data strategy initiative because of its impact on data sharing and reuse.
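
Using the widget example above, a reference data sketch in Python might look like the following: one agreed list of values plus crosswalks that translate each system's local representation back to the reference value.  The system and value names simply restate the illustration above, not a real catalog.

```python
REFERENCE_COLORS = {"RED", "GREEN", "BLUE"}   # the agreed reference values

COLOR_CROSSWALK = {
    "product_info": {"R": "RED"},         # product information system uses single letters
    "inventory":    {"0xFF0000": "RED"},  # inventory system stores a hex code
    "catalog":      {"0xED2939": "RED"},  # product catalog uses a different hex shade
}

def to_reference(system, local_value):
    value = COLOR_CROSSWALK.get(system, {}).get(local_value, local_value)
    if value not in REFERENCE_COLORS:
        raise ValueError(f"{local_value!r} from {system} has no reference mapping")
    return value

print(to_reference("inventory", "0xFF0000"))  # -> RED
```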

Movement Tracking

The idea of movement is to record the different systems that a data element touches as it travels (and is processed) after the data element is created.  Movement tracking (or data lineage) is quite important when the validity and accuracy of a particular data value is questioned.  And in the current era of heightened consumer data privacy and protection, the need for data lineage and tracking of consumer data within a company is becoming a requirement (and it’s the law in California and the European Union).
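
A lineage record doesn't have to be elaborate to be useful.  Here's a minimal Python sketch that appends a "hop" entry each time a data element moves between systems; the system and process names are hypothetical.

```python
from datetime import datetime, timezone

lineage_log = []

def record_hop(element, source, target, process):
    """Append one movement record so the element's path can be reconstructed later."""
    lineage_log.append({
        "element": element,
        "from": source,
        "to": target,
        "process": process,
        "moved_at": datetime.now(timezone.utc).isoformat(),
    })

record_hop("customer.email", "crm", "data_lake", "nightly_extract")
record_hop("customer.email", "data_lake", "marketing_mart", "campaign_load")

for hop in lineage_log:   # answers "where has this value traveled?"
    print(hop["element"], hop["from"], "->", hop["to"], hop["process"])
```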

The dramatic increase in the quantity and diversity of data sources within most companies over the past few years has challenged even the most technologically advanced organizations.  One of the most visible areas of user frustration is often access to new (or additional) data sources.  Much of this frustration occurs because of the challenge of sourcing, integrating, cleansing, and standardizing new data content to be shared with users.  As is the case with all of the other components, the details are easy to understand but complex to implement.  A company's data strategy has to evolve and change when data sharing becomes a production business requirement and users want data that is "ready to use".

Data Strategy Component: Store


This blog is the 3rd in a series focused on reviewing the individual Components of a Data Strategy.  This edition discusses storage and the details involved with determining the most effective method for persisting data and ensuring that it can be found, accessed, and used.

The definition of Store is:

“Persisting data in a structure and location that supports access and processing across the user audience”

Information storage is one of the most basic responsibilities of an Information Technology organization, and it's an activity that nearly every company addresses effectively.  On its surface, the idea of storage seems like a pretty simple concept: set up and install servers with sufficient storage (disk, solid state, optical, etc.) to persist and retain information for a defined period of time.  And while this description is accurate, it's incomplete.  In the era of exploding data volumes, unstructured content, 3rd party data, and the need to share information, the actual media that contains the content is the tip of the iceberg.  The challenge with this Data Strategy component is addressing all of the associated details involved with ensuring the data is accessible and usable.

In most companies, the options for where data is stored are overwhelming.  The core application systems use special technology to provide fast, highly reliable, and efficiently positioned data.  The analytics world has numerous databases and platforms to support the loading and analyzing of a seemingly endless variety of content that spans the entirety of a company's digital existence.  Most team members' desktops can expand their storage to handle 4 terabytes of data for less than $100.  And there are cloud options that provide a nearly endless set of alternatives for small and large data content and processing needs.  Unfortunately, this high degree of flexibility has introduced a whole slew of challenges when it comes to managing storage: finding the data, determining if the data has changed, navigating and accessing the details, and knowing the origin (or lineage).

I've identified 5 facets to consider when developing your Data Strategy and analyzing data storage and retention.  As a reminder (from the initial Data Strategy Component blog), each facet should be considered individually.  And because your Data Strategy will address future aspirations as well as current needs, you'll likely want to consider the different options for each.  Each facet can target a small organization's issues or expand to focus on a large company's diverse needs.

Stored Content

The most basic facet of storing data is to identify the type of content that will be stored:  raw application data, rationalized business content, or something in between.  It’s fairly common for companies to store the raw data from an application system (frequently in a data lake) as well as the cooked data (in a data warehouse).  The concept of “cooked” data refers to data that’s been standardized, cleaned, and stored in a state that’s “ready-to-use”.   It’s likely that your company also has numerous backup copies of the various images to support the recovery from a catastrophic situation.  The rigor of the content is independent of the platform where the data is stored.

Onboarding Content

There's a bunch of work involved with acquiring and gathering data to store it and make it "ready-to-use".  One of the challenges of having a diverse set of data from numerous sources is tracking what you have and knowing where it's located.  Any type of inventory requires that the "stuff" be tracked from the moment of creation.  The idea of Onboarding Content is to centrally manage and track all data that is coming into and distributed within your company (in much the same way that a receiving area works within a warehouse).  The core benefit of establishing onboarding as a single point of data reception (or gathering) is that it ensures there's a single place to record (and track) all acquired data.  The secondary benefits are significant: it prevents unnecessary duplicate acquisition, provides a starting point for cataloging, and allows for the checking and acceptance of any purchased content (which is always an issue).
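
Here's a minimal Python sketch of what a central onboarding register could capture; the fields and dataset names are assumptions meant only to show the single-point-of-reception idea.

```python
from datetime import date

onboarding_register = {}

def onboard(dataset, supplier, owner, accepted):
    """Record an incoming dataset once, at a single point of reception."""
    if dataset in onboarding_register:
        raise ValueError(f"{dataset} was already acquired; reuse the existing copy")
    onboarding_register[dataset] = {
        "supplier": supplier,                  # internal system or purchased feed
        "owner": owner,                        # who to contact about the content
        "received": date.today().isoformat(),
        "accepted": accepted,                  # result of checking the content on arrival
    }

onboard("q3_prospect_list", "ListVendorCo", "marketing_ops", accepted=True)
print(onboarding_register["q3_prospect_list"]["owner"])  # -> marketing_ops
```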

Navigation / Access

All too often, business people know the data they want and may even know where the data is located; unfortunately, the problem is that they don't know how to navigate and access the data where it's stored (or created).  To be fair, most operational application systems were never designed for data sharing; they were configured to process data and support a specific set of business functions.  Consequently, accessing the data requires a significant level of system knowledge to navigate the associated repository and retrieve the data.  In developing a Data Strategy, it's important to identify the skills, tools, and knowledge required for a user to access the data they require.  Will you require someone to have application interface and programming skills?  SQL skills and relational database knowledge?  Or spreadsheet skills to access a flat file, or some other variation?

Change Control

Change control is a very simple concept: plan and schedule maintenance activities, identify outages, and communicate those details to everyone.  This is something that most technologists understand.  In fact, most Information Technology organizations do a great job of production change control for their application environments.  Unfortunately, few if any organizations have implemented data change control.  The concept for data is just as simple: plan and schedule maintenance activities, identify outages (data corruption, load problems, etc.), and communicate those details to everyone.  If you're going to focus any energy on a data strategy, data change control should be among the top 5 items included as a goal and objective.
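
A data change-control record can borrow directly from the application world.  This Python sketch shows the handful of details worth planning and communicating for each data maintenance event; the dataset, dates, and audiences are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DataChange:
    dataset: str
    window_start: datetime
    window_end: datetime
    description: str   # e.g. reload, recast of history, corruption fix
    notify: list       # audiences to inform before and after the change

change = DataChange(
    dataset="sales_fact",
    window_start=datetime(2021, 1, 9, 22, 0),
    window_end=datetime(2021, 1, 10, 2, 0),
    description="Reload December territory assignments",
    notify=["finance_analysts", "bi_team"],
)
print(f"{change.dataset}: {change.description} ({change.window_start:%Y-%m-%d %H:%M})")
```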

Platform Access

As I've already mentioned, most companies have lots of different options for housing data.  Unfortunately, the criteria for determining the actual resting place for data often come down to convenience and availability.  While many companies have architecture standards and recommendations for where applications and data are positioned, all too often the selection is based on either programmer convenience or resource availability.  The point of this facet isn't to argue what the selection criteria should be, but to identify them based on core strategic (and business operation) priorities.

In your Data Strategy effort, you may find the need to include other facets in your analysis.  Some of the additional details that I’ve used in the past include metadata, security, retention, lineage, and archive access.  While simple in concept, this particular component continues to evolve and expand as the need for data access and sharing grows within the business world.

Data Strategy Component: Provision


This blog is the 2nd in a series focused on reviewing the individual Components of a Data Strategy.  This edition discusses the concept of data provisioning and the various details of making data sharable.

The definition of Provision is:

“Supplying data in a sharable form while respecting all rules and access guidelines”

One of the biggest frustrations that I have in the world of data is that few organizations have established data sharing as a responsibility.  Even fewer have set up their data to be ready for others to share and use.  It's not uncommon for a database programmer or report developer to have to retrieve data from a dozen different systems to obtain the data they need.  And the data arrives in different formats and files that change regularly.  This lack of consistency generates large ongoing maintenance costs and requires an inordinate amount of developer time to re-transform, prepare, and fix data before it can be used (numerous studies have found that ongoing source data maintenance can consume as much as 50% of a database developer's time after the initial programming effort is completed).

Should a user have to know the details (or idiosyncrasies) of the application system that created the data to use the data? (That’s like expecting someone to understand the farming of tomatoes and manufacturing process of ketchup in order to be able to put ketchup on their hamburger).   The idea of Provision is to establish the necessary rigor to simplify the sharing of data.

I've identified 5 of the most common facets of data sharing in the illustration above; there are others.  As a reminder (from last week's blog), each facet should be considered individually.  And because your Data Strategy goals will focus on future aspirational goals as well as current needs, you'll likely want to review the different options for each facet.  Each facet can target a small organization's issues or expand to address a diverse enterprise's needs.

Packaging

This is the most obvious aspect of provisioning: structuring and formatting the data so it is clear and understandable to the data consumer.  All too often data is packaged at the convenience of the developer instead of the convenience of the user.  So, instead of sharing data as a backup file generated by an application utility in a proprietary (or binary) format, the data should be structured so that every field is labeled and delivered in an open format (text, XML) that a non-technical user can access using readily available tools.  The data should also be accompanied by metadata to simplify access.
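
As a small sketch of the difference, the Python snippet below packages a handful of rows as labeled CSV plus a metadata "packing slip" instead of a proprietary dump file.  The dataset, field definitions, and steward name are illustrative.

```python
import csv
import io
import json

rows = [{"order_id": "10001", "order_date": "2020-06-01", "amount_usd": "149.95"}]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["order_id", "order_date", "amount_usd"])
writer.writeheader()   # every field is labeled for the consumer
writer.writerows(rows)

metadata = {   # the "packing slip" that accompanies the data
    "dataset": "daily_orders",
    "fields": {"order_id": "text", "order_date": "ISO-8601 date", "amount_usd": "decimal"},
    "refreshed": "daily by 06:00",
    "steward": "sales_ops",
}

print(buffer.getvalue())
print(json.dumps(metadata, indent=2))
```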

Platform Access

This facet works with Packaging and addresses the details associated with the data container.  Data can be shared via a file, a database table, an API, or one of several other methods.  While sharing data in a programmer-generated file is better than nothing, a more effective approach is to deliver data in a well-known file format (such as Excel) or within a table contained in an easily accessible database (e.g. a data lake or data warehouse).

Stewardship

Source data stewardship is critical in the sharing of data.  In this context, a Source Data Steward is someone who is responsible for supporting and maintaining the shared data content (there are several different types of data stewards).  In some companies, there's a data steward responsible for the data originating from an individual source system.  Some companies (focused on sharing enterprise-level content) have positioned data stewards to support individual subject areas.  Regardless of the model used, the data steward tracks and communicates source data changes, monitors and maintains the shared content, and addresses support needs.  This particular role is vital if your organization is undertaking any sort of data self-service initiative.

Acceptance Checking

This item addresses the issues that are common in the world of electronic data sharing: inconsistency, change, and error.  Acceptance checking is a quality control process that reviews the data prior to distribution to confirm that it matches a set of criteria, ensuring that all downstream users receive content as they expect.  This item is likely the easiest of all details to implement given the power of existing data quality and data profiling tools.  Unfortunately, it rarely receives attention because of most organizations' limited experience with data quality technology.
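
Even without a commercial tool, the concept is easy to sketch: check the content against the agreed criteria and release it only when the checks pass.  The columns, thresholds, and sample row below are assumptions for illustration.

```python
def accept(rows, expected_columns, min_rows):
    """Return a list of problems; an empty list means the content is safe to release."""
    problems = []
    if len(rows) < min_rows:
        problems.append(f"only {len(rows)} rows; expected at least {min_rows}")
    for i, row in enumerate(rows):
        missing = expected_columns - set(row)
        if missing:
            problems.append(f"row {i} is missing {sorted(missing)}")
        if not str(row.get("customer_id", "")).strip():
            problems.append(f"row {i} has an empty customer_id")
    return problems

issues = accept([{"customer_id": "C-100", "region": "WEST"}],
                expected_columns={"customer_id", "region"}, min_rows=1)
print("release" if not issues else issues)
```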

Data Audience

In order to succeed in any sort of data sharing initiative, whether in supporting other developers or an enterprise data self-service initiative, it's important to identify the audience that will be supported.  This is often the facet to consider first, and it's valuable to align the audience with the timeframe of data sharing support.  It's fairly common to focus on delivering data sharing support for developers first, followed by technical users, and then the larger audience of business users.

In the era of "data is a business asset", data sharing isn't a courtesy, it's an obligation.  Data shouldn't be shared at the convenience of the data producer; it should be packaged and made available for the ease of the user.

Do You Need A Data Strategy?


During my time teaching Data Strategy in the classroom, I'm frequently asked the question, "How do I know if I need a data strategy?"  For those of you who are deep thinkers, business strategists, or even data architects, I suspect your answer is either "yes!" or "why not?".

When I’m asked that question, I actually think there’s a different question at hand, “Should I invest the time in developing a data strategy instead of something else?”

In today's business world, there's no shortage of "to do" list items.  So, prioritizing the development of a Data Strategy means deprioritizing some other item.  In order to understand the relative priority and benefit of a Data Strategy initiative, take a look at the need, pain, or problem you're addressing along with the number of people affected.  Your focus should be understanding how a Data Strategy initiative will benefit the team members' ability to do their jobs.

To get started, I usually spend time up front interviewing folks to understand the strengths, weaknesses, challenges, and opportunities that exist with data within a company (or organization).  Let me share 5 questions that I always ask.

  1. Is the number of users (or organizations) building queries/reports to analyze data growing?
  2. Are there multiple reports containing conflicting information?
  3. Can a new staff member find and use data on their own, or does it require weeks or months of staff mentoring?
  4. Is data systematically inspected for accuracy (and corrected)? Is anyone responsible for fixing “broken data”?
  5. Is anyone responsible for data sharing?

While you might think these questions are a bit esoteric, each one has a specific purpose.  I’m a big fan of positioning any new strategy initiative to clearly identify the problems that are going to be solved.  If you’re going to undertake the development of a Data Strategy, you want to make certain that you will improve staff members’ ability to make decisions and be more effective at their jobs.  These questions will help you identify where people struggle getting the job done, or where there’s an unquantified risk with using data to make decisions.

So, let me offer an explanation of each question.

  1. "Is the number of users (or organizations) building queries/reports to analyze data growing?"

The value of a strategy is directly proportional to the number of people that are going to be affected.  In the instance of a data strategy, it's valuable to understand the number of people that use data (hands-on) to make decisions or do their jobs.  If the number is small or decreasing, a strategy initiative may not be worth the investment in time and effort.  The larger the number, the greater the impact on the effectiveness (and productivity) of the various staff members.

  2. "Are there multiple reports containing conflicting information?"

If you have conflicting details within your company, that means decisions are being made with inaccurate data.  That also means there's mistrust of information and team members are spending time confirming details.  That's business risk and a tremendous waste of time.

  3. "Can a new staff member find and use data…"

If a new staff member can't be self-sufficient after a week or two on the job (when it comes to data access and usage), you have a problem.  That's like someone joining the company and not having access to office supplies, a parking space, and email.  And if the only way to learn is to beg for time from other team members, you're spending time with two people not doing their jobs.  It's a problem that's being ignored.

  4. "Is data systematically inspected for accuracy (and corrected)? …"

This item is screaming for attention.  If you’re in a company that uses data to make decisions, and no one is responsible for inspecting the content, you have a problem.  Think about this issue another way:   would you purchase hamburger at the grocery store if there was a sign that stated “Never inspected.  May be spoiled.  Not our responsibility”?

  5. "Is anyone responsible for data sharing?"

This item gets little attention in most companies and is likely the most important of all the questions.  If data is a necessary ingredient in decision making and there isn’t anyone actively responsible for ensuring that new data assets are captured, stored, tracked, managed, and shared, you’re saying that data isn’t a business asset.  (How many assets in the company aren’t tied to someone’s responsibilities?)

If the answer to all of the questions is "no", great.  You're in an environment where data is likely managed in a manner that supports a multitude of team members' needs across different organizations.  If you answered "yes" to a single question, it's likely that an incremental investment in a tactical data management effort would be helpful.  If more than one question is answered "yes", your company (and the team) will benefit from a Data Strategy initiative.

 

Data Strategy: Why It Matters

I’ve been consulting in the data management space for quite a few years, and I’m often asked about the importance and need for a Data Strategy.   

All too often, the idea of "strategy" brings to mind images of piles of paper, academic-style charts, and a list of unachievable goals that identify the topic at hand but don't reflect reality.  Developing a strategy isn't about identifying perfection; it's about identifying a set of goals that address problems and needs that require attention.  A solid data strategy identifies a set of goals that are achievable and good enough to improve your data environment, along with the tasks and activities necessary to achieve those goals.  A data strategy is more than the finish line; it's about the path of the journey.  And it's about making sure the journey and the goal are possible.

Companies spend a fortune on data.  They purchase servers and storage farms to store the data, database management systems to manage the data, transformation tools to convert and transform the data, data quality tools to fix and standardize the content, and a treasure trove of analytical tools to present content that can be understood by business people.  Given all of the activities, the players, and the content, why would you not want a plan?

Unfortunately, few organizations have a Data Strategy.  They have lots of technology plans and roadmaps.  They have platform and server plans; they have DBMS standards; they have storage strategies; they likely have analytical tool plans.  While these are valuable, they are typically focused on a single organization or function with minimal concern for all of the related upstream and downstream activities (how usable is a data warehouse if the data exists as multiple copies with different names and different formats, and hasn't been checked or fixed for accuracy?).  A data strategy is a plan that ensures that data is easy to find, easy to identify, easy to use, and easy to share across the company and across multiple functions.

Information technologists are exceptionally strong in the world of applications, tools, and platforms.  They understand the importance of ensuring “reusability” and the benefit of an “economies-of-scale” approach. These are both just nice sound bites focused on making sure that new development work doesn’t always require reinvention.  Application strategies include identifying standards (tools, platforms, storage locations, etc.) and repeatable methods to ensure efficient construction and delivery of data that can be serviced, maintained, and upgraded. An assembly line of sorts.

The challenge with most data environments is that a data strategy rarely exists; there are no repeatable methods and practices.  Every new request requires building data and the associated deliverables from scratch.  And once delivered, there's a huge testing and confirmation effort to ensure that the data is accurate.  If you had a data strategy, you'd have reusable data, repeatable methods, and details that are referenceable online instead of through tribal knowledge.  And delivery efficiency and cost would improve over time.

Why do you need a data strategy?  Because the cost of data is growing, and it should be shrinking.  The cost of data processing has shrunk and the cost of data storage has decreased dramatically, but the cost of data delivery continues to grow.  A data strategy focuses on delivering data that is easy to find, easy to use, and easy to share.

Data Quality, Data Maintenance


I read an interesting tidbit about data the other day: the United States Postal Service processed more than 47 million changes of address in the last year.  That's nearly 1 in 6 people.  In the world of data, that factoid is a simple example of the challenge of addressing stale data and data quality.  The idea of stale data is that as data ages, its accuracy and associated business rules can change.

There are lots of examples of how data in your data warehouse can age and degrade in accuracy and quality: people move, area codes change, postal/zip codes change, product descriptions change, and even product SKUs can change.  Data isn't clean and accurate forever; it requires constant review and maintenance.  This shouldn't be much of a surprise for folks that view data as a corporate asset; any asset requires ongoing maintenance in order to retain and ensure its value.  The challenge with maintaining any asset is establishing a reasonable maintenance plan.

Unfortunately, while IT teams are exceptionally strong in planning and carrying out application maintenance, it's quite rare that data maintenance gets any attention.  In the data warehousing world, data maintenance is typically handled in a reactive, project-centric manner.  Nearly every data warehouse (or reporting) team has to deal with data maintenance issues whenever a company changes major business processes or modifies customer or product groupings (e.g. new sales territories, new product categories, etc.).  This happens so often that most data warehouse folks have even given it a name: Recasting History.  Regardless of what you call it, it's a common occurrence, and there are steps that can be taken to simplify the ongoing effort of data maintenance.

  • Establish a regularly scheduled data maintenance window.  Just like the application maintenance world, identify a window of time when data maintenance can be applied without impacting application processing or end user access
  • Collect and publish data quality details.  Profile and track the content of the major subject area tables within your data warehouse environment. Any significant shift in domain values, relationship details, or data demographics can be discovered prior to a user calling to report an undetected data problem
  • Keep the original data.  Most data quality processing overwrites original content with new details.  Instead, keep the cleansed data and place the original values at the end of your table records. While this may require a bit more storage, it will dramatically simplify maintenance when rule changes occur in the future (see the sketch after this list)
  • Add source system identification and creation date/time details to every record.  While this may seem tedious and unnecessary, these two fields can dramatically simplify maintenance and troubleshooting in the future
  • Schedule a regular data change control meeting.  This too is similar in concept to the change control meeting associated with IT operations teams.  This is a forum for discussing data content issues and changes
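
Here's a minimal Python sketch of the two record-level suggestions above (keeping the original value alongside the cleansed one, and stamping each record with its source and creation time).  The field names and cleansing rule are illustrative.

```python
from datetime import datetime, timezone

def prepare_record(raw, source_system):
    """Return a load-ready record that preserves the original value and its origin."""
    cleansed_zip = raw["postal_code"].strip().replace(" ", "")[:5]
    return {
        "customer_id": raw["customer_id"],
        "postal_code": cleansed_zip,             # cleansed, ready-to-use value
        "postal_code_orig": raw["postal_code"],  # original kept for future recasts
        "src_system_id": source_system,          # where the record came from
        "record_created_ts": datetime.now(timezone.utc).isoformat(),
    }

print(prepare_record({"customer_id": "C-77", "postal_code": " 02138-1701 "}, "pos_east"))
```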

Unfortunately, I often find that data maintenance is completely ignored. The problem is that fixing broken or inaccurate data isn’t sexy; developing a data maintenance plan isn’t always fun.   Most data warehouse development teams are buried with building new reports, loading new data, or supporting the ongoing ETL jobs; they haven’t given any attention to the quality or accuracy of the actual content they’re moving and reporting.   They simply don’t have the resources or time to address data maintenance as a proactive activity.

Business users clamor for new data and new reports; new funding is always tied to new business capabilities.  Support costs are budgeted, but they’re focused on software and hardware maintenance activities.  No one ever considers data maintenance; it’s simply ignored and forgotten.

Interesting that we view data as a corporate asset – a strategic corporate asset – and there’s universal agreement that hardware and software are simply tools to support enablement.  And where are we investing in maintenance?  The commodity tools, not the strategic corporate asset.


Advanced Data Virtualization Capabilities


In one of my previous blogs, I wrote about Data Virtualization technology, one of the more interesting pieces of middleware that can simplify data management.  While most of the commercial products in this space share a common set of features and functions, I thought I'd devote this blog to discussing the more advanced features.  There are quite a few competing products, and the real challenge in differentiating them is understanding those advanced features.

The attraction of data virtualization is that it simplifies data access.  Most IT shops have one of everything, and this includes several different brands of commercial DBMSs, a few open source databases, a slew of BI/reporting tools, and the inevitable list of emerging and specialized tools and technologies (Hadoop, Dremel, Cassandra, etc.).  Supporting all of the client-to-server-to-repository interfaces (and the associated configurations) is both complex and time consuming.  This is why the advanced capabilities of Data Virtualization have become so valuable to the IT world.

The following details aren't arranged in any particular order.  I've identified the ones that I've found to be the most valuable (and interesting).  Let me also acknowledge that not every DV product supports all of these features.

Intelligent data caching.  Repository-to-DV Server data movement is the biggest obstacle in query response time.  Most DV products are able to support static caching to reduce repetitive data movement (data is copied and persisted in the DV Server).  Unfortunately, this approach has limited success when there are ad hoc users accessing dozens of sources and thousands of tables.  The more effective solution is for the DV Server to monitor all queries and dynamically cache data based on user access, query load, and table (and data) access frequency.

Query optimization (w/multi-platform execution). While all DV products claim some amount of query optimization, it’s important to know the details. There are lots of tricks and techniques; however, look for optimization that understands source data volumes, data distribution, data movement latency, and is able to process data on any source platform.

Support for multiple client Interfaces.  Since most companies have multiple database products, it can be cumbersome to support and maintain multiple client access configurations.  The DV server can act as a single access point for multiple vendor products (a single ODBC interface can replace drivers for each DBMS brand).  Additionally, most DV Server drivers support multiple different access methods (ODBC, JDBC, XML, and web services).
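
To show what that single access point looks like from the client side, here's a short Python sketch using the third-party pyodbc package.  The DSN, credentials, and the crm.customers and sales.orders views are hypothetical; the point is that one ODBC connection reaches data that may live in several different source systems behind the DV server.

```python
import pyodbc  # assumes an ODBC driver/DSN supplied by the DV vendor is installed

# One connection to the DV server instead of one driver per DBMS brand.
conn = pyodbc.connect("DSN=dv_server;UID=report_user;PWD=example")

sql = """
    SELECT c.customer_name, SUM(o.amount) AS total_amount
    FROM crm.customers AS c
    JOIN sales.orders  AS o ON o.customer_id = c.customer_id
    GROUP BY c.customer_name
"""

for customer_name, total_amount in conn.cursor().execute(sql):
    print(customer_name, total_amount)
```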

Attribute level or value specific data security.  This feature supports data security at a much lower granularity than is typically available with most DBMS products.  Data can be protected (or restricted) at individual column values for an entire table or for selective rows.

Metadata tracking and management.  Since Data Virtualization is a query-centric middleware environment, it only makes sense to position this server to retrieve, reconcile, and store metadata content from multiple, disparate data repositories.

Data lineage. This item works in tandem with the metadata capability and augments the information by retaining the source details for all data that is retrieved.  This not only includes source id information for individual records but also the origin, creation date, and native attribute details.

Query tracking for usage audit. Because the DV Server can act as a centralized access point for user tool access, there are several DV products that support the capture and tracking of all submitted queries.  This can be used to track, measure, and analyze end user (or repository) access.

Workflow linkage and processing.  This is the ability to execute predefined logic against specific data that is retrieved. While this concept is similar to a macro or stored procedure, it’s much more sophisticated.  It could include the ability to direct job control or specialized processing against an answer set prior to delivery (e.g. data hygiene, external access control, stewardship approval, etc.)

Packaged Application Templates.  Most packaged applications (CRM, ERP, etc.) contain thousands of tables and columns that can be very difficult to understand and query.  Several DV vendors have developed templates containing predefined DV server views that access the most commonly queried data elements.

Setup and Configuration Wizards. Configuring a DV server to access multiple data sources can be a very time-consuming exercise; the administrator needs to define and configure every source repository and the underlying tables (or files), along with the individual data fields.  To simplify setup, a configuration wizard reviews the dictionary of an available data source and generates the necessary DV Server configuration details.  It further analyzes the table and column names to simplify naming conventions, joins, and data value conversion and standardization details.

Don't be misled into thinking that Data Virtualization is a highly mature product space where all of the products are nearly identical.  They aren't.  Most product vendors spend more time discussing their unique features than offering metrics about their core features.  It's important to remember that every Data Virtualization product requires a server that retrieves and processes data to fulfill query requests.  This technology is not a commodity, which means that details like setup/configuration time, query performance, and advanced features can vary dramatically across products.  Benchmark and test drive the technology before buying.

The Power of Data Virtualization


I was participating in a discussion about Data Virtualization (DV) the other day and was intrigued by the different views that everyone had about a technology that's been around for more than 10 years.  For those of you who don't participate in IT-centric, geekfest discussions on a regular basis, Data Virtualization software is middleware that allows various disparate data sources to look like a single relational database.  Some folks characterize Data Virtualization as a software abstraction layer that removes the storage location and format complexities associated with manipulating data.  The bottom line is that Data Virtualization software can make a BI (or any SQL) tool see data as though it's contained within a single database even though it may be spread across multiple databases, XML files, and even Hadoop systems.

What intrigued me about the conversation is that most of the folks had been introduced to Data Virtualization not as an infrastructure tool that simplifies specific disparate data problems, but as the secret sauce or silver bullet for a specific application.  They had all inherited an application that had been built outside of IT to address a business problem that required data to be integrated from a multitude of sources.  And in each instance, the applications were able to capitalize on Data Virtualization as a more cost-effective solution for integrating detailed data.  Instead of building a new platform to store and process another copy of the data, they used Data Virtualization software to query and integrate data from the individual source systems.  And each "solution" utilized a different combination of functions and capabilities.

As with any technology discussion, there’s always someone that believes that their favorite technology is the best thing since sliced bread – and they want to apply their solution to every problem.  Data Virtualization is an incredibly powerful technology with a broad array of functions that enable multi-source query processing. Given the relative obscurity of this data management technology, I thought I’d review some of the more basic capabilities supported by this technology.

Multi-Source Query Processing.  This is often referred to as Query Federation. The ability to have a single query process data across multiple data stores.

Simplify Data Access and Navigation.  Exposes data as a single (virtual) data source built from numerous component sources. The DV system handles the various network, SQL dialect, and/or data conversion issues.

Integrate Data “On the Fly”.  This is referred to as Data Federation. The DV server retrieves and integrates source data to support each individual query. 

Access to Non-Relational Data. The DV server is able to portray non-relational data (e.g. XML data, flat files, Hadoop, etc.) as structured, relational tables.  

Standardize and Transform Data. Once the data is retrieved from the origin, the DV server will convert the data (if necessary) into a format to support matching and integration.

Integrate Relational and Non-Relational Data. Because DV can make any data source (well, almost any) look like a relational table, this capability is implicit. Keep in mind that the data (or a subset of it) must have some sort of implicit structure.  

Expose a Data Services Interface. The DV server can expose a web service that is attached to a predefined query that it processes on request.

Govern Ad Hoc Queries. The DV Server can monitor query submissions, run time, and even complexity – and terminate or prevent processing under specific rule-based situations.

Improve Data Security.  As a common point of access, the DV Server can support another level of data access security to address the likely inconsistencies that exist across multiple data store environments.

As many folks have learned, Data Virtualization is not a substitute for a data warehouse or a data mart.  In order for a DV Server to process data, the data must be retrieved from the origin; consequently, running a query that joins tables spread across multiple systems containing millions of records isn’t practical.  An Ethernet network is no substitute for the high speed interconnect linking a computer’s processor and memory to online storage. However, when the data is spread across multiple systems and there’s no other query alternative, Data Virtualization is certainly worth investigating.
