Archive | November 2012

Role of an Executive Sponsor

It’s fairly common for companies to assign Executive Sponsors to their large projects. “Large” typically reflects budget size, the inclusion of cross-functional teams, business impact, and complexity. The Executive Sponsor isn’t the person running and directing the project on a day-to-day basis; they provide oversight and direction. They monitor project progress and ensure that tactics are carried out to support the project’s goals and objectives. And they have the credibility (and authority) to ensure that the appropriate level of attention and resources is available to the project throughout its entire life.

While there’s nearly universal agreement on the importance of an Executive Sponsor, there seems to be limited discussion about the specifics of the role. Most remarks dwell on the importance of breaking down barriers, dealing with roadblocks, and reacting effectively to project obstacles. While these details make for good PowerPoint presentations, project success really requires the sponsor to exhibit a combination of skills beyond negotiation and problem resolution. Here’s my take on some of the key responsibilities of an Executive Sponsor.

Inspire the Stakeholder Audience

Most executives are exceptional managers who understand the importance of dates and budgets and are successful at leading their staff members towards a common goal. Because project sponsors don’t typically have direct management authority over the project team, the methods for leadership are different. The sponsor has to communicate, captivate, and engage with the team members throughout all phases of the project. And it’s important to remember that the stakeholders aren’t just the individual developers, but also the users and their management. In a world where individuals have to juggle multiple priorities and projects, one sure-fire way to maintain enthusiasm (and participation) is to maintain a high level of sponsor engagement.

Understand the Project’s Benefits

Because of the compartmentalized structure of most organizations, many executives aren’t aware of the details of their peer organizations. Enterprise-level projects enlist an Executive Sponsor to ensure that the project respects (and delivers) benefits to all stakeholders. It’s fairly common that any significantly sized project will undergo scope change (due to budget challenges, business risks, or execution problems).  Any change will likely affect the project’s deliverables as well as the perceived benefits to the different stakeholders. Detailed knowledge of project benefits is crucial to ensure that any change doesn’t adversely affect the benefits required by the stakeholders.

Know the Project’s Details

Most executives focus on the high-level details of their organization’s projects and delegate the specifics to the individual project manager. When projects cross organizational boundaries, the executive’s tactics have to change because of the organizational breadth of the stakeholder community. Executive-level discussions will likely cover a variety of issues (both high-level and detailed). It’s important for the Executive Sponsor to be able to discuss the brass tacks with other executives; the lack of this knowledge undermines the sponsor’s credibility and the project’s ability to succeed.

Hold All Stakeholders Accountable

While most projects begin with everyone aligned towards a common goal and set of tactics, it’s not uncommon for changes to occur. Most problems occur when one or more stakeholders have to adjust their activities because of an external force (new priorities, resource contention, etc.). What’s critical is that all stakeholders participate in resolving the issue; the project team will either succeed together or fail together. The sponsor won’t solve the problem; they will facilitate the process and hold everyone accountable.

Stay Involved, Long Term

The role of the sponsor isn’t limited to supporting the early stages of a project (funding, development, and deployment); it continues throughout the life of the project.  Because most applications have a lifespan of no less than 7 years, business changes will drive new business requirements that will drive new development.  The sponsor’s role doesn’t diminish with time – it typically expands.

The overall responsibility set of an Executive Sponsor will likely vary across projects. The differences in project scope, company culture, business process, and staff resources across individual projects inevitably affect the role of the Executive Sponsor. What’s important is that the Executive Sponsor provides both strategic and tactical support to ensure a project is successful. An Executive Sponsor is more than the project’s spokesperson; they’re the project CEO that has equity in the project’s outcome and a legitimate responsibility for seeing the project through to success.

Photo “American Alligator Crossing the Road at Canaveral National Seashore” courtesy of Photomatt28 (Matthew Paulson) via Flickr (Creative Commons license).

Project Success = Data Usability

One of the challenges in delivering successful data-centric projects (e.g. analytics, BI, or reporting) is realizing that the definition of project success differs from traditional IT application projects.  Success for a traditional application (or operational) project is often described in terms of transaction volumes, functional capabilities, processing conformance, and response time; data project success is often described in terms of business process analysis, decision enablement, or business situation measurement.  To a business user, the success of a data-centric project is simple: data usability.

It seems that most folks respond to data usability issues by gravitating towards a discussion about data accuracy or data quality; I actually think the more appropriate discussion is data knowledge. I don’t think anyone would argue that to make data-enabled decisions, you need to have knowledge about the underlying data. The challenge is understanding what level of knowledge is necessary. If you ask a BI or Data Warehouse person, their answer almost always includes metadata, data lineage, and a data dictionary. If you ask a data mining person, they often just want specific attributes and their descriptions — they don’t care about anything else. All of these folks have different views of data usability and varying levels of (and needs for) data knowledge.

One way to improve data usability is to target and differentiate the user audience based on their data knowledge needs. There are certainly lots of different approaches to categorizing users; in fact, every analyst firm and vendor has their own model to describe different audience segments. One of the problems with these models is that they tend to focus heavily on the tools or analytical methods (canned reports, drill down, etc.) and ignore the details of data content and complexity. The knowledge required to manipulate a single subject area (revenue or customer or usage) is significantly less than the knowledge required to manipulate data across three subject areas (revenue, customer, and usage). And what makes data knowledge even harder to build is the inevitable plethora of value gaps, inaccuracies, and inconsistencies associated with the data. Data knowledge isn’t just limited to understanding the data; it includes understanding how to work around all of the imperfections.

Here’s a model that categorizes and describes business users based on their views of data usability and their data knowledge needs.

Level 1: “Can you explain these numbers to me?”

This person is the casual data user. They have access to a zillion reports that have been identified by their predecessors and they focus their effort on acting on the numbers they get. They’re not a data analyst – their focus is to understand the meaning of the details so they can do their job. They assume that the data has been checked, rechecked, and vetted by lots of folks in advance of their receiving the content. They believe the numbers and they act on what they see.

Level 2: “Give me the details”

This person has been using canned reports, understands all the basic details, and has graduated to using data to answer new questions that weren’t identified by their predecessors.  They need detailed data and they want to reorganize the details to suit their specific needs (“I don’t want weekly revenue breakdowns – I want to compare weekday revenue to weekend revenue”).  They realize the data is imperfect (and in most instances, they’ll live with it).  They want the detail.

Level 3: “I don’t believe the data — please fix it”

These folks know their area of the business inside and out, and they know the data. They scour and review the details to diagnose the business problems they’re analyzing. And when they find a data mistake or inaccuracy, they aren’t shy about raising their hand. Whether they’re a data analyst using SQL or a statistician with their favorite advanced analytics algorithms, they focus on identifying business anomalies. These folks are the power users who are incredibly valuable and often the most difficult for IT to please.

Level 4: “Give me more data”

This is subject area graduation.  At this point, the user has become self-sufficient with their data and needs more content to address a new or more complex set of business analysis needs. Asking for more data – whether a new source or more detail – indicates that the person has exhausted their options in using the data they have available.  When someone has the capacity to learn a new subject area or take on more detailed content, they’re illustrating a higher level of data knowledge.

One thing to consider about the above model is that a user will have varying data knowledge based on the individual subject area.  A marketing person may be completely self-sufficient on revenue data but be a newbie with usage details.  A customer support person may be an expert on customer data but only have limited knowledge of product data.  You wouldn’t expect many folks (outside of IT) to be experts on all of the existing data subject areas. Their knowledge is going to reflect the breadth of their job responsibilities.
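To make that point a bit more concrete, here’s a minimal sketch in Python of how the levels might be tracked per user and per subject area. The level names, users, and subject areas are my own hypothetical shorthand for the model above, not anything tied to a specific tool.

```python
from enum import IntEnum

class DataKnowledgeLevel(IntEnum):
    """The four levels described above (names are shorthand, not a standard)."""
    EXPLAIN_THE_NUMBERS = 1   # "Can you explain these numbers to me?"
    GIVE_ME_THE_DETAILS = 2   # "Give me the details"
    FIX_THE_DATA = 3          # "I don't believe the data -- please fix it"
    GIVE_ME_MORE_DATA = 4     # "Give me more data"

# Knowledge is tracked per user *and* per subject area, since the same person
# can be an expert in one subject and a newbie in another.
user_knowledge = {
    ("marketing_analyst", "revenue"):  DataKnowledgeLevel.GIVE_ME_MORE_DATA,
    ("marketing_analyst", "usage"):    DataKnowledgeLevel.EXPLAIN_THE_NUMBERS,
    ("support_manager",   "customer"): DataKnowledgeLevel.FIX_THE_DATA,
    ("support_manager",   "product"):  DataKnowledgeLevel.GIVE_ME_THE_DETAILS,
}

def knowledge_level(user, subject_area):
    """Default to Level 1 for any subject area the user hasn't worked with yet."""
    return user_knowledge.get((user, subject_area),
                              DataKnowledgeLevel.EXPLAIN_THE_NUMBERS)

print(knowledge_level("marketing_analyst", "revenue"))  # Level 4 for revenue data
print(knowledge_level("marketing_analyst", "usage"))    # Level 1 for usage data
```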

As someone grows and evolves in business expertise and influence, it’s only natural that their business information needs would grow and evolve too.  In order to address data usability (and project success), maybe it makes sense to reconsider the various user audience categories and how they are defined.  Growing data knowledge isn’t about making everyone data gurus; it’s about enabling staff members to become self-sufficient in their use of corporate data to do their jobs.

Photo “Ladder of Knowledge” courtesy of degreezero2000 via Flickr (Creative Commons license).

The Formula for Analytics Success: Data Knowledge


Companies spend a small fortune continually investing and reinvesting in making their business analysts self-sufficient with the latest and greatest analytical tools. Most companies have multiple project teams focused on delivering tools to simplify and improve business decision making. There are likely several standard tools deployed to support the various data analysis functions required across the enterprise: canned/batch reports, desktop ad hoc data analysis, and advanced analytics. There’s never a shortage of new and improved tools that guarantee simplified data exploration, quick response time, and greater data visualization options. Projects inevitably include the creation of dozens of prebuilt screens along with a training workshop to ensure that the users understand all of the new whiz-bang features of the latest analytic tool incarnation. Unfortunately, the biggest challenge within any project isn’t getting users to master the various analytical functions; it’s ensuring the users understand the underlying data they’re analyzing.

Take a look at the adoption of any new business analysis tool and the most prevalent issue is the users’ knowledge of the underlying data. This issue becomes visible through a number of common problems: the misuse of report data, the misunderstanding of business terminology, and/or the exaggeration of inaccurate data. Once the credibility or usability of the data comes under scrutiny, the project typically goes into “red alert” and requires immediate attention. If ignored, the business tool quickly becomes shelfware because no one is willing to take a chance on making business decisions based on risky information.

All too often the focus of end user training is tool training, not data training. What typically happens is that an analyst is introduced to the company’s standard analytics tool through a “drink from a fire hose” training workshop. All of the examples use generic sales or HR data to illustrate the tool’s strengths in folding, spindling, and manipulating the data. And this is where the problem begins: the vendor’s workshop data is perfect. There’s no missing or inaccurate data, and all of the data is clearly labeled and defined; classes run smoothly, but it just isn’t reality. Somehow the person with no hands-on data experience is supposed to figure out how to use their own (imperfect) data. It’s like giving someone their first ski lesson on a cleanly groomed beginner hill and then taking them up to the top of a black diamond (advanced) run with steep hills and moguls. The person works hard but isn’t equipped to deal with the challenges of the real world. So, they give up on the tool and tell others that the solution isn’t usable.

All of the advanced tools and manipulation capabilities don’t do any good if the users don’t understand the data. There are lots of approaches to educating users on data. Some prefer to take a bottom-up approach (reviewing individual table and column names, meanings, and values) while others want to take a top-down approach (reviewing subject area details, the associated reports, and then getting into the data details). There are certainly benefits of one approach over the other (depending on your audience); however, it’s important not to lose sight of the ultimate goal: giving the users the fundamental data knowledge they need to make decisions. The fundamentals that most users need in order to understand their data include a review of the relevant subject areas, the individual data elements (their names, meanings, and values), the reports built from them, and the known gaps and imperfections in the content.

The above details may seem a bit overwhelming if you consider that most companies have mature reporting environments and multi-terabyte data warehouses.  However, we’re not talking about training someone to be an expert on 1000 data attributes contained within your data warehouse; we’re talking about ensuring someone’s ability to use an initial set of reports or a new tool without requiring 1-on-1 training.  It’s important to realize that the folks with the greatest need for support and data knowledge are the newbies, not the experienced folks.

There are lots of options for imparting data knowledge to business users: a hands-on data workshop, a set of screen videos showing data usage examples, or a simple set of web pages containing definitions, textual descriptions, and screen shots. Don’t get wrapped up in the complexities of creating the perfect solution – keep it simple. I worked with a client that deployed their information using a set of pages constructed with PowerPoint that folks could reference on the company’s intranet. If your users have nothing, don’t worry about the perfect solution – give them something to start with that’s easy to use.
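As a minimal illustration of the “keep it simple” point, a starter data-knowledge page can be generated from a handful of plain-text definitions. This is just a sketch; the attribute names, definitions, and known issues below are hypothetical placeholders.

```python
# A bare-bones data-knowledge page: a few definitions rendered as plain HTML
# that could be posted to a team intranet. All entries here are hypothetical.
data_dictionary = [
    {"attribute": "weekly_revenue",
     "definition": "Billed revenue aggregated by fiscal week.",
     "known_issues": "Partial weeks at quarter boundaries; adjustments lag by 3 days."},
    {"attribute": "customer_segment",
     "definition": "Marketing-assigned segment code (A-D).",
     "known_issues": "Null for accounts created before 2010."},
]

def render_page(entries):
    # Build one table row per attribute: name, definition, known issues.
    rows = "\n".join(
        f"<tr><td>{e['attribute']}</td><td>{e['definition']}</td>"
        f"<td>{e['known_issues']}</td></tr>"
        for e in entries
    )
    return ("<html><body><h1>Data Knowledge: Revenue Subject Area</h1>"
            "<table border='1'>"
            "<tr><th>Attribute</th><th>Definition</th><th>Known issues</th></tr>"
            f"{rows}</table></body></html>")

with open("revenue_data_knowledge.html", "w") as f:
    f.write(render_page(data_dictionary))
```

Even a rough page like this gives a newbie somewhere to look before they ask for 1-on-1 help.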

Remember that the goal is to build users’ data knowledge that is sufficient to get them to adopt and use the company’s analysis tools.  We’re not attempting to convert everyone into data scientists; we just want them to use the tools without requiring 1-on-1 training to explain every report or data element.

Photo courtesy of NASA. NASA Ames Research Center engineer H. Julian “Harvey” Allen illustrating data knowledge (relating to capsule design for the Mercury program).

Improving Data Integration the Old Fashioned Way

IT organizations have spent enormous sums of money over the past 10-15 years attacking the productivity problem. They’ve acquired data integration tools, implemented improved development methodologies, and even reengineered requirements gathering methods to ensure alignment with business priorities. And the result of all of this investment? Today’s data integration developers are easily 10x to 20x more productive than the COBOL programmers of the past. This shouldn’t be a surprise to anyone – writing, compiling, linking, and testing 3rd-generation code is much slower than working with today’s GUI-based, drag-and-drop development tools. The tools work; developers are faster and better.

So, why does it still seem to take an eternity and cost a fortune to acquire and integrate new data into an existing report? The bottleneck has moved upstream: finding and extracting source data is complicated and time consuming. We’ve invested in our Integration Competency Centers to create an assembly line that streamlines the process of transforming and converting data to be loaded into databases or applications. Unfortunately, we’ve not devoted any effort to simplifying access to, or understanding of, the actual raw source data that feeds the assembly line.

Henry Ford didn’t invent the assembly line; he revolutionized it. One of the changes he introduced was simplifying and standardizing parts and the assembly process itself. Prior to Ford’s assembly line, car assembly was a custom effort that required highly trained craftsmen to shape, tool, and fit parts by hand (a very time-consuming process). The parts weren’t always uniform, so the craftsmen had to spend a significant amount of time fitting the parts together.

In most IT environments, source system access and data content vary dramatically across the different application systems. This forces developers to become data craftsmen in order to deal with the idiosyncrasies of the numerous source systems common to most companies. Every system stores data in a custom and unique manner; it takes a lot of time to search and analyze source system data in order to identify the necessary content. (A popular ERP package stores its details in more than 10,000 tables.) So, each new request often requires developers to create “from scratch” code to access and manipulate new data from a source system. If you dig a bit, you’ll probably find that many of your application systems generate dozens or even hundreds (yes, hundreds) of custom extracts to deliver data to support the various production business needs within your company.

While most folks might think that custom extracts are a reasonably decent solution, they’re not.  In fact, they’re a problem that will only get worse with time.  (Remember, every extract requires development time and ongoing support.)  You’ll be better off consolidating all of those extracts into a single set that includes all of the data.  This will reduce processing time, reduce storage, reduce maintenance, and ultimately save a lot of money. You’ll have to spend some time designing and building these new extracts and getting folks to migrate to using them, but the benefits will be significant. (One of my clients was able to defer a platform upgrade due to the CPU and storage reduction caused by the consolidation and removal of all of the custom extracts).

Standardizing source data to reduce the data craftsmen problem isn’t rocket science, but it’s more than simply creating a data dump or generating a backup file. You need to deliver data in a manner that can be quickly and easily consumed by other systems. This means that the content needs to be reformatted from the unique (sometimes indecipherable) format of the host application into a format that everyone else can use. This can be easily addressed by delivering data into database tables or flat files (I know one client that delivers data in tab-delimited spreadsheet format). The data should reflect the values generated by the source system in a format that everyone can understand – the content shouldn’t be modified or cleansed (this is source data, not content ready for business consumption). Delivery should occur on a frequent and regular basis, along with a plan for archiving a decent amount of history.
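To sketch what a standardized extract might look like in practice, here’s a rough example. The table and column names are hypothetical, and SQLite is used only as a stand-in for whatever database the host application actually runs on; the point is simply to publish raw source values on a regular schedule in a tab-delimited format anyone can consume, with no cleansing and no reinterpretation.

```python
import csv
import sqlite3  # stand-in for whatever database the source application uses
from datetime import date

# Hypothetical source query -- publish the raw values exactly as stored.
SOURCE_QUERY = "SELECT order_id, customer_id, order_date, status_code, amount FROM orders"

def publish_extract(conn, out_path):
    """Write the query results to a tab-delimited file with a header row,
    without modifying or cleansing any of the source values."""
    cur = conn.execute(SOURCE_QUERY)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow([col[0] for col in cur.description])  # column names as header
        writer.writerows(cur)                                  # source values, untouched

if __name__ == "__main__":
    conn = sqlite3.connect("source_app.db")
    # A date-stamped file per run supports archiving a decent amount of history.
    publish_extract(conn, f"orders_extract_{date.today():%Y%m%d}.tsv")
```

Downstream teams then reference the published file instead of asking the source team for yet another custom extract.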

This isn’t a new concept; it was a common approach in the days when custom-coded IBM mainframe applications were all the rage. Back then, data sharing was a priority and every application generated standard extracts to reduce I/O and storage costs. There was also an extreme sensitivity to developer time. Requesting a custom extract was frowned upon and rarely approved. Finding and accessing the data was as simple as referencing the extract files that were made available from every application system.

When it comes to improving the delivery speed of new data to business users, maybe we can learn something from Henry Ford and the world of mainframe development.
