
Shadow IT: Déjà Vu All Over Again


I’m a bit surprised by all of the recent discussion and debate about Shadow IT. For those of you not familiar with the term, Shadow IT refers to software development and data processing activities that occur within business unit organizations without the blessing of the Central IT organization. The idea of individual business organizations purchasing technology, hiring staff members, and taking on software development to address specific business priorities isn’t a new concept; it’s been around for 30 years.

When it comes to the introduction of technology to address or improve business processes, communications, or decision making, Central IT has traditionally not been the starting point. It’s almost always been the business organization. Central IT has never been in the position of reengineering business processes or insisting that business users adopt new technologies; that’s always been the role of business management. Central IT is in the business of automating defined business processes and reducing technology costs (through the use of standard tools, economies-of-scale methods, and commodity technologies). It’s not as though Shadow IT came into existence to usurp the authority or responsibilities of the IT organization. Shadow IT came into existence to address new, specialized business needs that the Central IT organization was not responsible for addressing.

Here are a few examples of information technologies that were introduced and managed by Shadow IT organizations to address specialized departmental needs:

  • Word Processing. Possibly the first “end user system” (Wang, IBM DisplayWrite, etc.). This solution was revolutionary in reducing the cost of documentation.
  • The minicomputer. This technology revolution of the ’70s and ’80s delivered packaged, departmental application systems (DEC, Data General, Prime, etc.). The most popular were HR, accounting, and manufacturing applications.
  • The personal computer. Many companies created PC support teams (often within Finance) because PCs required unique skills that didn’t exist within most companies.
  • Email, File Servers, and Ethernet (remember Banyan, Novell, 3Com?). These tools worked outside the mainframe OLTP environment and required specialized skills.
  • Data Marts and Data Warehouses.  Unless you purchased a product from IBM, the early products were often purchased and managed by marketing and finance.
  • Business Intelligence tools.  Many companies still manage analytics and report development outside of Central IT.
  • CRM and ERP systems.  While both of these packages required Central IT hardware platforms, the actual application systems are often supported by separate teams positioned within their respective business areas.

The success of Shadow IT is based on its ability to respond to specialized business needs with innovative solutions. The technologies above were all introduced to address specific departmental needs; they evolved to deliver more generalized capabilities that could be valued by the larger corporate audience. That larger audience required the technology’s ownership and support to migrate from the Shadow IT organization to Central IT. Unfortunately, most companies were ill-prepared to support the transition of technology between the two different technology teams.

Most Central IT teams bristle at the idea of inheriting a Shadow IT project. There are significant costs associated with transitioning a project to a different team and a larger user audience. This is why many Central IT teams push for Shadow IT to adopt their standard tools and methods (or for the outright dissolution of Shadow IT). Unfortunately, applying low-cost, standardized methods to deploy and support a specialized, high-value solution doesn’t work (if it did, it would have been used in the first place). You can’t expect to solve specialized needs with a one-size-fits-all approach.

A Shadow IT team delivers dozens of specialized solutions to its business user audience; the likelihood that any one solution will be deployed to a larger audience is very small. While it’s certainly feasible to modify the charter, responsibilities, and success metrics of a Central IT organization to support both specialized, unique needs and generalized, high-volume ones, I think there’s a better alternative: establish a set of methods and practices to address the infrequent transition of Shadow IT projects to Central IT. Both organizations should be obligated to work with and respond to the needs and responsibilities of the other technology team.

Most companies have multiple organizations with specific roles to address a variety of different activities.  And organizations are expected to cooperate and work together to support the needs of the company.  Why is it unrealistic to have Central IT and Shadow IT organizations with different roles to address the variety of (common and specialized) needs across a company?


My Dog Ate the Requirements, Part 2


There’s nothing more frustrating than not being able to rely upon a business partner. There are lots of business books about information technology that espouse the importance of Business/IT alignment and of establishing business users as IT stakeholders. The whole idea of delivering business value with data and analytics is to provide business users with tools and data that can support business decision making. It’s incredibly hard to deliver business value when half of the partnership isn’t stepping up to its responsibilities.

There’s never a shortage of rationale as to why requirements haven’t been collected or recorded. In order for a relationship to be successful, both parties have to participate and cooperate. Gathering and recording requirements isn’t possible if the technologist doesn’t meet with the users to discuss their needs, pains, and priorities. Conversely, the requirements process won’t succeed if the users won’t participate. My last blog reviewed the excuses technologists offer for the lack of documented requirements; this week’s blog focuses on remarks I’ve heard from business stakeholders.

  • “I’m too busy. I don’t have time to talk to developers.”
  • “I meet with IT every month; they should know my requirements.”
  • “IT isn’t asking me for requirements; they want me to approve SQL.”
  • “We sent an email with a list of questions. What else do they need?”
  • “They have copies of reports we create. That should be enough.”
  • “The IT staff has worked here longer than I have. There’s nothing I can tell them that they don’t already know.”
  • “I’ve discussed my reporting needs in 3 separate meetings; I seem to be educating someone else with each successive discussion.”
  • “I seem to answer a lot of questions. I don’t ever see anyone writing anything down.”
  • “I’ll meet with them again when they deliver the requirements I identified in our last discussion.”
  • “I’m not going to sign off on the requirements because my business priorities might change – and I’ll need to change the requirements.”

Requirements gathering is really the beginning stage of negotiating a contract for the creation and delivery of new software. The contract is closed (or agreed to) when the business stakeholders sign off on the requirements document. While many believe that requirements are an IT-only artifact, they’re really a tool to establish the responsibilities of both parties in the relationship.

A requirements document defines the data, functions, and capabilities that the technologist needs to build to deliver business value.  The requirements document also establishes the “product” that will be deployed and used by the business stakeholders to support their business decision making activities. The requirements process holds both parties accountable: technologists to build and business stakeholders to use. When two organizations can’t work together to develop requirements, it’s often a reflection of a bigger problem.

It’s not fair for business stakeholders to expect development teams to build commercial-grade software if there’s no participation in the requirements process. By the same token, it’s not right for technologists to build software without business stakeholder participation. If one stakeholder doesn’t want to participate in the requirements process, they shouldn’t be allowed to offer an opinion about the resulting deliverable. If multiple stakeholders don’t want to participate in a requirements activity, the development effort should be cancelled. Lack of business stakeholder participation means they have other priorities; the technologists should take the hint and move on to their own other priorities.

Advanced Data Virtualization Capabilities


In one of my previous blogs, I wrote about Data Virtualization technology, one of the more interesting pieces of middleware for simplifying data management. While most of the commercial products in this space share a common set of features and functions, I thought I’d devote this blog to the more advanced capabilities. There are quite a few competing products; the real challenge in differentiating them is understanding those advanced features.

The attraction of data virtualization is that it simplifies data access. Most IT shops have one of everything, and this includes several different brands of commercial DBMSs, a few open source databases, a slew of BI/reporting tools, and the inevitable list of emerging and specialized tools and technologies (Hadoop, Dremel, Cassandra, etc.). Supporting all of the client-to-server-to-repository interfaces (and the associated configurations) is both complex and time consuming. This is why the advanced capabilities of Data Virtualization have become so valuable to the IT world.

The following features aren’t arranged in any particular order; I’ve simply identified the ones that I’ve found to be the most valuable (and interesting). Let me also acknowledge that not every DV product supports all of these features.

Intelligent data caching. Repository-to-DV Server data movement is the biggest obstacle to query response time. Most DV products are able to support static caching to reduce repetitive data movement (data is copied and persisted in the DV Server). Unfortunately, this approach has limited success when there are ad hoc users accessing dozens of sources and thousands of tables. The more effective solution is for the DV Server to monitor all queries and dynamically cache data based on user access, query load, and table (and data) access frequency.
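
To make the dynamic approach concrete, here’s a minimal sketch (in Python, with thresholds and names of my own invention, not any vendor’s) of the caching decision a DV server makes internally:

    import time
    from collections import defaultdict

    class DynamicCache:
        """Illustrative only: cache a table's rows in the DV server once its
        recent access frequency crosses a threshold."""

        def __init__(self, min_hits=5, window_seconds=300):
            self.min_hits = min_hits             # accesses needed before caching
            self.window = window_seconds         # sliding window for "recent"
            self.access_log = defaultdict(list)  # table -> recent access times
            self.cached = {}                     # table -> materialized rows

        def fetch(self, table, query_source):
            now = time.time()
            log = [t for t in self.access_log[table] if now - t <= self.window]
            log.append(now)
            self.access_log[table] = log
            if table in self.cached:
                return self.cached[table]    # served from the DV server's cache
            rows = query_source(table)       # expensive repository round trip
            if len(log) >= self.min_hits:
                self.cached[table] = rows    # hot table: persist a local copy
            return rows

A real product would also invalidate and refresh cached data as sources change; I’ve left that out to keep the idea visible.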

Query optimization (with multi-platform execution). While all DV products claim some amount of query optimization, it’s important to know the details. There are lots of tricks and techniques; look for optimization that understands source data volumes, data distribution, and data movement latency, and that can process data on any source platform.
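
As a toy illustration of one such technique (all statistics and names below are hypothetical), consider deciding where a cross-platform join should execute based on table sizes:

    def plan_join(left, right):
        """Toy heuristic: execute the join on the platform holding the larger
        table and ship only the smaller table across the network."""
        big, small = (left, right) if left["rows"] >= right["rows"] else (right, left)
        return {
            "execute_on": big["platform"],    # push processing to the data
            "ship_table": small["table"],     # move only the smaller side
            "bytes_moved": small["rows"] * small["row_bytes"],
        }

    # A 200M-row fact table stays put; the 50K-row dimension travels to it.
    print(plan_join(
        {"table": "sales_fact", "platform": "teradata", "rows": 200_000_000, "row_bytes": 120},
        {"table": "store_dim", "platform": "sqlserver", "rows": 50_000, "row_bytes": 200},
    ))

Real optimizers weigh far more than row counts (distribution, latency, source load), but the push-processing-to-the-data instinct is the thing to look for.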

Support for multiple client interfaces. Since most companies have multiple database products, it can be cumbersome to support and maintain multiple client access configurations. The DV server can act as a single access point for multiple vendor products (a single ODBC interface can replace separate drivers for each DBMS brand). Additionally, most DV servers support multiple access methods (ODBC, JDBC, XML, and web services).
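
As an illustration, assuming the DV server exposes a standard ODBC data source (the DSN, credentials, and view names below are made up), one connection can join data that physically lives on different platforms:

    import pyodbc  # needs an ODBC driver manager plus the DV vendor's driver

    # One DSN for the DV server replaces separate drivers for each DBMS brand.
    conn = pyodbc.connect("DSN=dv_server;UID=analyst;PWD=secret")
    cursor = conn.cursor()

    # Both names are DV views; the underlying sources may be different platforms.
    cursor.execute("""
        SELECT c.customer_id, SUM(o.amount) AS total
        FROM crm.customers c
        JOIN web.orders o ON o.customer_id = c.customer_id
        GROUP BY c.customer_id
    """)
    for row in cursor.fetchall():
        print(row)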

Attribute-level or value-specific data security. This feature supports data security at a much lower granularity than is typically available with most DBMS products. Data can be protected (or restricted) at the level of individual column values, for an entire table or for selected rows.
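
Conceptually (the roles, columns, and values here are invented for illustration), the DV server applies a filter like this to every answer set before returning it:

    def apply_security(rows, role):
        """Illustration of value-specific security: drop restricted rows and
        mask a sensitive column for unauthorized roles."""
        for row in rows:
            if row["region"] == "EU" and role != "eu_analyst":
                continue                       # row-level restriction by value
            if role != "finance":
                row = {**row, "salary": None}  # column-level masking
            yield row

    rows = [
        {"name": "A. Smith", "region": "US", "salary": 75000},
        {"name": "B. Jones", "region": "EU", "salary": 82000},
    ]
    # A marketing user sees no EU rows and no salary values.
    print(list(apply_security(rows, role="marketing")))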

Metadata tracking and management.  Since Data Virtualization is a query-centric middleware environment, it only makes sense to position this server to retrieve, reconcile, and store metadata content from multiple, disparate data repositories.

Data lineage. This item works in tandem with the metadata capability and augments that information by retaining the source details for all data that is retrieved. This includes not only source id information for individual records but also the origin, creation date, and native attribute details.
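
As a rough picture of what might be retained (the structure and field names are my own, not any product’s), each delivered record could carry a lineage entry like this:

    # Illustrative lineage entry attached to one delivered record; real
    # products define their own structures and capture this automatically.
    lineage = {
        "source_system": "crm_oracle_prod",    # which repository it came from
        "source_table": "customers",
        "source_record_id": 104522,            # source id for the record
        "origin": "nightly_extract",           # how the record arrived
        "created_at": "2013-09-20T02:15:00Z",  # creation date at the source
        "native_attributes": {"CUST_NM": "VARCHAR2(80)"},  # source definition
    }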

Query tracking for usage audit. Because the DV Server can act as a centralized access point for end-user tools, several DV products support the capture and tracking of all submitted queries. This record can be used to track, measure, and analyze end user (or repository) access.
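
The mechanics are easy to picture. Here’s a minimal sketch (the wrapper class and file name are hypothetical) of recording every query before it executes:

    import datetime
    import json

    class AuditingCursor:
        """Wrap a database cursor so every submitted query is recorded
        with the user and a timestamp before it executes."""

        def __init__(self, inner_cursor, user, audit_file="query_audit.jsonl"):
            self.inner = inner_cursor
            self.user = user
            self.audit_file = audit_file

        def execute(self, sql, *params):
            entry = {
                "user": self.user,
                "submitted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "sql": sql,
            }
            with open(self.audit_file, "a") as f:
                f.write(json.dumps(entry) + "\n")  # append-only audit trail
            return self.inner.execute(sql, *params)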

Workflow linkage and processing. This is the ability to execute predefined logic against specific data that is retrieved. While the concept is similar to a macro or stored procedure, it’s much more sophisticated. It can include the ability to direct job control or apply specialized processing to an answer set prior to delivery (e.g. data hygiene, external access control, stewardship approval, etc.).
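
One way to picture it (the steps below are hypothetical examples): the DV server runs an ordered chain of processing steps over the answer set before delivering it.

    def normalize_phones(rows):
        """Hypothetical hygiene step: standardize a column before delivery."""
        return [{**r, "phone": r["phone"].replace("-", "").replace(" ", "")}
                for r in rows]

    def drop_unapproved(rows):
        """Hypothetical stewardship step: withhold rows not yet approved."""
        return [r for r in rows if r.get("approved", False)]

    def deliver(answer_set, workflow=(normalize_phones, drop_unapproved)):
        """Apply the predefined workflow to the answer set prior to delivery."""
        for step in workflow:
            answer_set = step(answer_set)
        return answer_set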

Packaged Application Templates.  Most packaged applications (CRM, ERP, etc.) contain thousands of tables and columns that can be very difficult to understand and query.  Several DV vendors have developed templates containing predefined DV server views that access the most commonly queried data elements.

Setup and Configuration Wizards. Configuring a DV server to access multiple data sources can be a very time consuming exercise; the administrator needs to define and configure every source repository and the underlying tables (or files), along with the individual data fields. To simplify setup, a configuration wizard reviews the dictionary of an available data source and generates the necessary DV Server configuration details. It can further analyze table and column names to simplify naming conventions, joins, and data value conversion and standardization details.
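
To see what the wizard automates, here’s a simplified sketch of the introspection step, assuming a source that exposes the standard information_schema catalog and a qmark-style driver:

    def generate_view_definitions(cursor, schema):
        """Simplified wizard step: read a source's catalog and emit one
        DV view definition per table."""
        cursor.execute("""
            SELECT table_name, column_name
            FROM information_schema.columns
            WHERE table_schema = ?
            ORDER BY table_name, ordinal_position
        """, (schema,))
        tables = {}
        for table, column in cursor.fetchall():
            tables.setdefault(table, []).append(column)
        for table, columns in tables.items():
            yield (f"CREATE VIEW dv_{table} AS "
                   f"SELECT {', '.join(columns)} FROM {schema}.{table}")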

Don’t be misled into thinking that Data Virtualization is a highly mature product space where all of the products are nearly identical. They aren’t. Most product vendors spend more time discussing their unique features than offering metrics about their core features. It’s important to remember that every Data Virtualization product requires a server that retrieves and processes data to fulfill query requests. This technology is not a commodity, which means that details like setup/configuration time, query performance, and advanced features can vary dramatically across products. Benchmark and test drive the technology before buying.

Role of an Executive Sponsor

It’s fairly common for companies to assign Executive Sponsors to their large projects. “Large” typically reflects budget size, the inclusion of cross-functional teams, business impact, and complexity. The Executive Sponsor isn’t the person running and directing the project on a day-to-day basis; they provide oversight and direction. They monitor project progress and ensure that tactics are carried out to support the project’s goals and objectives. They have the credibility (and authority) to ensure that the appropriate level of attention and resources is available to the project throughout its entire life.

While there’s nearly universal agreement on the importance of an Executive Sponsor, there seems to be limited discussion about the specifics of the role. Most remarks dwell on the importance of breaking down barriers, dealing with roadblocks, and effectively reacting to project obstacles. While these details make for good PowerPoint presentations, project success really requires the sponsor to exhibit a combination of skills beyond negotiation and problem resolution. Here’s my take on some of the key responsibilities of an Executive Sponsor.

Inspire the Stakeholder Audience

Most executives are exceptional managers who understand the importance of dates and budgets and are successful at leading their staff members toward a common goal. Because project sponsors don’t typically have direct management authority over the project team, the methods for leadership are different. The sponsor has to communicate, captivate, and engage with the team members throughout all phases of the project. And it’s important to remember that the stakeholders aren’t just the individual developers, but the users and their management. In a world where individuals have to juggle multiple priorities and projects, one sure-fire way to maintain enthusiasm (and participation) is to maintain a high level of sponsor engagement.

Understand the Project’s Benefits

Because of the compartmentalized structure of most organizations, many executives aren’t aware of the details of their peer organizations. Enterprise-level projects enlist an Executive Sponsor to ensure that the project respects (and delivers) benefits to all stakeholders. It’s fairly common for any significantly sized project to undergo scope change (due to budget challenges, business risks, or execution problems). Any change will likely affect the project’s deliverables as well as the perceived benefits to the different stakeholders. Detailed knowledge of the project’s benefits is crucial to ensuring that a change doesn’t adversely affect the benefits each stakeholder requires.

Know the Project’s Details

Most executives focus on the high-level details of their organization’s projects and delegate the specifics to the individual project manager. When projects cross organizational boundaries, the executive’s tactics have to change because of the organizational breadth of the stakeholder community. Executive-level discussions will likely cover a variety of issues (both high-level and detailed). It’s important for the Executive Sponsor to be able to discuss brass tacks with other executives; lacking this knowledge undermines the sponsor’s credibility and the project’s ability to succeed.

Hold All Stakeholders Accountable

While most projects begin with everyone aligned towards a common goal and set of tactics, it’s not uncommon for changes to occur. Most problems occur when one or more stakeholders have to adjust their activities because of an external force (new priorities, resource contention, etc.). What’s critical is that all stakeholders participate in resolving the issue; the project team will either succeed together or fail together. The sponsor won’t solve the problem; they will facilitate the process and hold everyone accountable.

Stay Involved, Long Term

The role of the sponsor isn’t limited to supporting the early stages of a project (funding, development, and deployment); it continues throughout the life of the project.  Because most applications have a lifespan of no less than 7 years, business changes will drive new business requirements that will drive new development.  The sponsor’s role doesn’t diminish with time – it typically expands.

The overall responsibility set of an Executive Sponsor will likely vary across projects. Differences in project scope, company culture, business process, and staff resources inevitably affect the role. What’s important is that the Executive Sponsor provides both strategic and tactical support to ensure a project is successful. An Executive Sponsor is more than the project’s spokesperson; they’re the project’s CEO, with equity in the project’s outcome and a legitimate responsibility for seeing it through to success.

Photo “American Alligator Crossing the Road at Canaveral National Seashore” courtesy of Photomatt28 (Matthew Paulson) via Flickr (Creative Commons license).

Underestimating the Project Managers

By Evan Levy

One of the most misunderstood roles on a BI team is the Project Manager. All too often, the role is defined as an administrative set of activities focused on writing and maintaining the project plan, tracking the budget, and monitoring task completion. Unfortunately, IT management rarely understands the importance of domain knowledge (having BI experience) and leadership skills.

Assigning a BI project manager who has no prior BI experience is an accident waiting to happen. Think about a homeowner who decides to build a new house and retains a construction company whose foreman has never built a house before. You’d want the foreman to have fundamental knowledge of demolition, framing, plumbing, wiring, and so on; without it, he couldn’t know whether the work was being done the right way.

Unfortunately, IT managers think they can position certified project managers on BI teams without any knowledge of BI-specific development processes, business decision making, data content, or technology. We often find ourselves coaching these project managers on the differences in BI development, or introducing concepts like staging areas or federated queries. This is time that could be better spent transferring knowledge and formalizing development processes with a more seasoned project lead.

In order for a project team to be successful, the project manager should have strong leadership skills. The ability to communicate a common goal and maintain focus is both art and science. But BI project managers often behave more like bureaucrats, requesting task completion percentages and reviewing labor hours. They are rarely invested in whether the project is adhering to development standards, whether permanent staff is preparing to take ownership of the code, or whether the developers are collaborating.

An effective BI project manager should be a project leader. He or she should understand that the definition of success is not a completed project plan or budget spreadsheet, but rather that the project delivers usable data and fulfills requirements. The BI project manager should instill the belief that success doesn’t mean task completion, but delivery against business goals.
