I’m a bit surprised by all of the recent discussion and debate about Shadow IT. For those of you not familiar with the term, Shadow IT refers to software development and data processing activities that occur within business unit organizations without the blessing of the Central IT organization. The idea of individual business organizations purchasing technology, hiring staff members, and taking on software development to address specific business priorities isn’t a new concept; it’s been around for 30 years.
When it comes to the introduction of technology to address or improve business processes, communications, or decision making, Central IT has traditionally not been the starting point. It’s almost always been the business organization. Central IT has never been in the position of reengineering business processes or insisting that business users adopt new technologies; that’s always been the role of business management. Central IT is in the business of automating defined business processes and reducing technology costs (through standard tools, economies of scale, and commodity technologies). It’s not as though Shadow IT came into existence to usurp the authority or responsibilities of the IT organization. Shadow IT came into existence to address new, specialized business needs that the Central IT organization was not responsible for addressing.
Here are a few examples of information technologies that were introduced and managed by Shadow IT organizations to address specialized departmental needs.
- Word Processing. Possibly the first “end user system” (Wang, IBM DisplayWrite, etc.). This solution was revolutionary in reducing the cost of documentation.
- The minicomputer. This technology revolution of the ’70s and ’80s delivered packaged, departmental application systems (DEC, Data General, Prime, etc.). The most popular were HR, accounting, and manufacturing applications.
- The personal computer. Many companies created PC support teams (in Finance) because they required unique skills that didn’t exist within most companies.
- Email, File Servers, and Ethernet (remember Banyan, Novell, 3Com). These tools worked outside the mainframe OLTP environment and required specialized skills.
- Data Marts and Data Warehouses. Unless you purchased a product from IBM, the early products were often purchased and managed by marketing and finance.
- Business Intelligence tools. Many companies still manage analytics and report development outside of Central IT.
- CRM and ERP systems. While both of these packages required Central IT hardware platforms, the actual application systems are often supported by separate teams positioned within their respective business areas.
The success of Shadow IT is based on its ability to respond to specialized business needs with innovative solutions. The technologies above were all introduced to address specific departmental needs; they evolved to deliver more generalized capabilities that could be valued by the larger corporate audience. The larger audience required the technology’s ownership and support to migrate from the Shadow IT organization to Central IT. Unfortunately, most companies were ill prepared to support the transition of technology between the two different technology teams.
Most Central IT teams bristle at the idea of inheriting a Shadow IT project. There are significant costs associated with transitioning a project to a different team and a larger user audience. This is why many Central IT teams push for Shadow IT to adopt their standard tools and methods (or for the outright dissolution of Shadow IT). Unfortunately, applying low-cost, standardized methods to deploy and support a specialized, high-value solution doesn’t work (if it did, it would have been used in the first place). You can’t expect to solve specialized needs with a one-size-fits-all approach.
A Shadow IT team delivers dozens of specialized solutions to their business user audience; the likelihood that any solution will be deployed to a larger audience is very small. While it’s certainly feasible to modify the charter, responsibilities, and success metrics of a Centralized IT organization to support both specialized, unique needs and generalized, high-volume needs, I think there’s a better alternative: establish a set of methods and practices to address the infrequent transition of Shadow IT projects to Central IT. Both organizations should be obligated to work with and respond to the needs and responsibilities of the other technology team.
Most companies have multiple organizations with specific roles to address a variety of different activities. And organizations are expected to cooperate and work together to support the needs of the company. Why is it unrealistic to have Central IT and Shadow IT organizations with different roles to address the variety of (common and specialized) needs across a company?
In order to prepare for the cooking gauntlet that often occurs with the end of year holiday season, I decided to purchase a new rotisserie oven. The folks at Acme Rotisserie include a large amount of documentation with their rotisserie. I reviewed the entire pile and was a bit surprised by the warranty registration card. The initial few questions made sense: serial number, place of purchase, date of purchase, my home address. The other questions struck me as a bit too inquisitive: number of household occupants, household income, own/rent my residence, marital status, and education level. Obviously, this card was a Trojan horse of sorts: provide registration details, and all kinds of other personal information along with them. They wanted me to give away my personal information so they could analyze it, sell it, and make money off of it.
Companies collecting and analyzing consumer data isn’t anything new; it’s been going on for decades. In fact, there are laws in place to protect consumers’ data in quite a few industries (healthcare, telecommunications, and financial services). Most of the laws focus on protecting the information that companies collect based on their relationship with you. It’s not just the details that you provide to them directly; it’s the information that they gather about how you behave and what you purchase. Most folks believe behavioral information is more valuable than the personal descriptive information you provide. The reason is simple: you can offer creative (and highly inaccurate) details about your income, your education level, and the car you drive. You can’t really lie about your behavior.
I’m a big fan of sharing my information if it can save me time, save me money, or generate some sort of benefit. I’m willing to share my waist size, shirt size, and color preferences with my personal shopper because I know they’ll contact me when suits or other clothing that I like is available at a good price. I’m fine with a grocer tracking my purchases because they’ll offer me personalized coupons for those products. I’m not okay with the grocer selling that information to my health insurer. Providing my information to a company to enhance our relationship is fine; providing my information to a company so they can share, sell, or otherwise unilaterally benefit from it is not fine. My data is proprietary and my intellectual property.
Clearly companies view consumer data to be a highly valuable asset. Unfortunately, we’ve created a situation where there’s little or no cost to retain, use, or abuse that information. As abuse and problems have occurred within certain industries (financial services, healthcare, and others), we’ve created legislation to force companies to responsibly invest in the management and protection of that information. They have to contact you to let you know they have your information and allow you to update communications and marketing options. It’s too bad that every company with your personal information isn’t required to behave in the same way. If data is so valuable that a company retains it, requiring some level of maintenance (and responsibility) shouldn’t be a big deal.
It’s really too bad that companies with copies of my personal information aren’t required to contact me to update and confirm the accuracy of all of my personal details. That would ensure that all of the specialized big data analytics that are being used to improve my purchase experiences were accurate. If I knew who had my data, I could make sure that my preferences were up to date and that the data was actually accurate.
It’s unfortunate that Acme Rotisserie isn’t required to contact me to confirm that I have 14 children, an advanced degree in swimming pool construction, and a red Ferrari in my garage. It will certainly be interesting to see the personalized offers I receive for the upcoming Christmas shopping season.
I was recently asked about my opinion for the potential of Hadoop replacing a company’s data warehouse (DW). While there’s lots to be excited about when it comes to Hadoop, I’m not currently in the camp of folks that believe it’s practical to use Hadoop to replace a company’s DW. Most corporate DW systems are based on commercial relational database products and can store and manage multiple terabytes of data and support hundreds (if not thousands) of concurrent users. It’s fairly common for these systems to handle complex, mixed workloads –queries processing billions of rows across numerous tables along with simple primary key retrieval requests all while continually loading data. The challenge today is that Hadoop simply isn’t ready for this level of complexity.
All that being said, I do believe there’s a huge opportunity to use Hadoop to replace a significant amount of processing that is currently being handled by most DWs. Oh, and data warehouse users won’t be affected at all.
Let’s review a few fundamental details about the DW. There are two basic data processing activities that occur on a DW: query processing and transformation processing. Query processing is servicing the SQL that’s submitted from all of the tools and applications on the users’ desktops, tablets, and phones. Transformation processing is the workload involved with converting data from their source application formats to the format required by the data warehouse. While the most visible activity to business users is query processing, it is typically the smaller of the two. Extracting and transforming the dozens (or hundreds) of source data files for the DW is a huge processing activity. In fact, most DWs are not sized for query processing; they are sized for the daily transformation processing effort.
It’s important to realize that one of the most critical service level agreements (SLAs) of a DW is data delivery. Business users want their data first thing each morning. That means the DW has to be sized to deliver data reliably each and every business morning. Since most platforms are anticipated to have a 3+ year life expectancy, IT has to size the DW system based on the worst case data volume scenario for that entire period (end of quarter, end of year, holidays, etc.). This means the DW is sized to address a maximum load that may only occur a few times during that entire period.
This is where the opportunity for Hadoop seems pretty obvious. Hadoop is a parallel, scalable framework that handles distributed batch processing and large data volumes. It’s really a set of tools and technologies for developers, not end users. This is probably why so many ETL (extract, transformation, and load) product vendors have ported their products to execute within a Hadoop environment. It only makes sense to migrate processing from a specialized platform to commodity hardware. Why bog down and over invest in your DW platform if you can handle the heavy lifting of transformation processing on a less expensive platform?
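To make the offload idea concrete, here is a minimal Python sketch of the kind of record-at-a-time transformation work that gets moved off the DW. The field layout, formats, and function names are hypothetical; in practice this logic would run inside an ETL tool or as a Hadoop job, but the shape of the work is the same, and it is exactly this independence between records that lets it parallelize so well on commodity hardware.

```python
from datetime import datetime

def transform_record(raw_line):
    """Convert one pipe-delimited source record into load-ready form.

    The layout (id|name|date|amount) is a made-up example of a typical
    source extract; real feeds would each need their own rules.
    """
    cust_id, name, sale_date, amount = raw_line.split("|")
    return {
        "customer_id": int(cust_id),
        "customer_name": name.strip().upper(),  # standardize case/padding
        # Normalize the source's mm/dd/yyyy dates to ISO format
        "sale_date": datetime.strptime(sale_date, "%m/%d/%Y").date().isoformat(),
        "sale_amount": round(float(amount), 2),
    }

def transform_extract(lines):
    # The "map" phase: every record transforms independently, so the
    # work can be spread across as many commodity nodes as needed.
    return [transform_record(line) for line in lines]
```

Because no record depends on any other, a framework like MapReduce can run millions of these conversions in parallel and hand the DW a clean, load-ready file, leaving the warehouse platform free for query processing.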
Introducing a new system to your DW environment will inevitably create new work for your DW architects and developers. However, the benefits are likely to be significant. While some might view such an endeavor as a creative way to justify purchasing new hardware and installing Hadoop, the real reason is to extend the life of the data warehouse (and save your company a bunch of money by deferring a DW upgrade).
There’s nothing more frustrating than not being able to rely upon a business partner. There are lots of business books about information technology that espouse the importance of Business/IT alignment and of establishing business users as IT stakeholders. The whole idea of delivering business value with data and analytics is to provide business users with tools and data that can support business decision making. It’s incredibly hard to deliver business value when half of the partnership isn’t stepping up to its responsibilities.
There’s never a shortage of rationale as to why requirements haven’t been collected or recorded. In order for a relationship to be successful, both parties have to participate and cooperate. Gathering and recording requirements isn’t possible if the technologist doesn’t meet with the users to discuss their needs, pains, and priorities. Conversely, the requirements process won’t succeed if the users won’t participate. My last blog reviewed the excuses that technologists offered for explaining the lack of documented requirements; this week’s blog focuses on remarks I’ve heard from business stakeholders.
- “I’m too busy. I don’t have time to talk to developers”
- “I meet with IT every month, they should know my requirements”
- “IT isn’t asking me for requirements, they want me to approve SQL”
- “We sent an email with a list of questions. What else do they need?”
- “They have copies of reports we create. That should be enough.”
- “The IT staff has worked here longer than I have. There’s nothing I can tell them that they don’t already know”
- “I’ve discussed my reporting needs in 3 separate meetings; I seem to be educating someone else with each successive discussion”
- “I seem to answer a lot of questions. I don’t ever see anyone writing anything down”
- “I’ll meet with them again when they deliver the requirements I identified in our last discussion.”
- “I’m not going to sign off on the requirements because my business priorities might change – and I’ll need to change the requirements.”
Requirements gathering is really a beginning stage for negotiating a contract for the creation and delivery of new software. The contract is closed (or agreed to) when the business stakeholders agree to (or sign-off on) the requirements document. While many believe that requirements are an IT-only artifact, they’re really a tool to establish responsibilities of both parties in the relationship.
A requirements document defines the data, functions, and capabilities that the technologist needs to build to deliver business value. The requirements document also establishes the “product” that will be deployed and used by the business stakeholders to support their business decision making activities. The requirements process holds both parties accountable: technologists to build and business stakeholders to use. When two organizations can’t work together to develop requirements, it’s often a reflection of a bigger problem.
It’s not fair for business stakeholders to expect development teams to build commercial grade software if there’s no participation in the requirements process. By the same token, it’s not right for technologists to build software without business stakeholder participation. If one stakeholder doesn’t want to participate in the requirements process, they shouldn’t be allowed to offer an opinion about the resulting deliverable. If multiple stakeholders don’t want to participate in a requirements activity, the development process should be cancelled. Lack of business stakeholder participation means they have other priorities; the technologists should take a hint and work on their other priorities.
I received a funny email the other day about excuses that school children use to explain why they haven’t done their homework. The examples were pretty creative: “my mother took it to be framed”, “I got soap in my eyes and was blinded all night”, and (an oldie and a goody) “my dog ate my homework”. It’s a shame that such a creative approach yielded such a high rate of failure. Most of us learn at an early age that you can’t talk your way out of failure; success requires that you do the work. You’d also think that as people got older and more evolved, they’d realize that there are very few shortcuts in life.
I’m frequently asked to conduct best practice reviews of business intelligence and data warehouse (BI/DW) projects. These activities usually come about because either users or IT management is concerned with development productivity or delivery quality. The review activity is pretty straightforward: interviews are scheduled and artifacts are analyzed to review the various phases, from requirements through construction to deployment. It’s always interesting to look at how different organizations handle architecture, code design, development, and testing. One of the keys to conducting a review effort is to focus on the actual results (or artifacts) that are generated during each stage. It’s foolish to discuss someone’s development method or style prior to reviewing the completeness of the artifacts. It’s not necessary to challenge someone’s approach if their artifacts reflect the details required by the other phases.
And one of the most common problems that I’ve seen with BI/DW development is the lack of documented requirements. Zip, zero, zilch, nothing. While discussions about requirements gathering, interview styles, and even document details occur occasionally, it’s the lack of any documented requirements that’s the norm. I can’t imagine how any company allows development to begin without ensuring that requirements are documented and approved by the stakeholders. Believe it or not, it happens a lot.
So, as a tribute to the creative school children of yesterday and today, I thought I would devote this blog to some of the most creative excuses I’ve heard from development teams to justify their beginning work without having requirements documentation.
- “The project’s schedule was published. We have to deliver something with or without requirements”
- “We use the agile methodology; it doesn’t require written requirements”
- “The users don’t know what they want.”
- “The users are always too busy to meet with us”
- “My bonus is based on the number of new reports I create. We don’t measure our code against requirements”
- “We know what the users want, we just haven’t written it down”
- “We’ll document the requirements once our code is complete and testing finished”
- “We can spend our time writing requirements, or we can spend our time coding”
- “It’s not our responsibility to document requirements; the users need to handle that”
- “I’ve been told not to communicate with the business users”
Many of the above items clearly reflect a broken set of management or communication methods. Expecting a development team to adhere to a project schedule when they don’t have requirements is ridiculous. Forcing a team to commit to deliverables without requirements challenges conventional development methods and financial common sense. It also reflects leadership that focuses on schedules and utilization, not business value.
A development team that is asked to build software without a set of requirements is being set up to fail. I’m always astonished that anyone would think they can argue and justify that the lack of documented requirements is acceptable. I guess there are still some folks that believe they can talk their way out of failure.
I read an interesting tidbit about data the other day: the United States Postal Service processed more than 47 million changes of address in the last year. That’s nearly 1 in 6 people. In the world of data, that factoid is a simple example of the challenge of addressing stale data and data quality. The idea of stale data is that as data ages, its accuracy and associated business rules can change.
There are lots of examples of how data in your data warehouse can age and degrade in accuracy and quality: people move, area codes change, postal/zip codes change, product descriptions change, and even product SKUs can change. Data isn’t clean and accurate forever; it requires constant review and maintenance. This shouldn’t be much of a surprise for folks that view data as a corporate asset; any asset requires ongoing maintenance in order to retain and ensure its value. The challenge with maintaining any asset is establishing a reasonable maintenance plan.
Unfortunately, while IT teams are exceptionally strong in planning and carrying out application maintenance, it’s quite rare that data maintenance gets any attention. In the data warehousing world, data maintenance is typically handled in a reactive, project-centric manner. Nearly every data warehouse (or reporting) team has to deal with data maintenance issues whenever a company changes major business processes or modifies customer or product groupings (e.g. new sales territories, new product categories, etc.). This happens so often that most data warehouse folks have even given it a name: Recasting History. Regardless of what you call it, it’s a common occurrence and there are steps that can be taken to simplify the ongoing effort of data maintenance.
- Establish a regularly scheduled data maintenance window. Just like the application maintenance world, identify a window of time when data maintenance can be applied without impacting application processing or end user access
- Collect and publish data quality details. Profile and track the content of the major subject area tables within your data warehouse environment. Any significant shift in domain values, relationship details, or data demographics can be discovered prior to a user calling to report an undetected data problem
- Keep the original data. Most data quality processing overwrites original content with new details. Instead, keep the cleansed data and place the original values at the end of your table records. While this may require a bit more storage, it will dramatically simplify maintenance when rule changes occur in the future
- Add source system identification and creation date/time details to every record. While this may seem tedious and unnecessary, these two fields can dramatically simplify maintenance and trouble shooting in the future
- Schedule a regular data change control meeting. This too is similar in concept to the change control meeting associated with IT operations teams. This is a forum for discussing data content issues and changes
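The second item above (collecting and publishing data quality details) can be sketched in a few lines. The following is a minimal, hypothetical illustration of column profiling and domain-shift detection, not any particular product’s API; the 10% threshold is an arbitrary example and would need tuning per table.

```python
from collections import Counter

def profile_column(values):
    """Capture simple data demographics: each domain value's share
    of the column. Run this against major subject area tables on a
    regular schedule to build a profiling baseline."""
    counts = Counter(values)
    total = len(values)
    return {value: n / total for value, n in counts.items()}

def domain_shift(baseline, current, threshold=0.10):
    """Flag domain values whose share moved more than `threshold`
    since the last profile: the kind of shift worth raising in a data
    change control meeting before a user calls to report a problem."""
    keys = set(baseline) | set(current)
    return {k for k in keys
            if abs(baseline.get(k, 0.0) - current.get(k, 0.0)) > threshold}
```

Publishing these profiles alongside each load turns data quality from an anecdotal complaint into a measurable, trackable detail.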
Unfortunately, I often find that data maintenance is completely ignored. The problem is that fixing broken or inaccurate data isn’t sexy; developing a data maintenance plan isn’t always fun. Most data warehouse development teams are buried with building new reports, loading new data, or supporting the ongoing ETL jobs; they haven’t given any attention to the quality or accuracy of the actual content they’re moving and reporting. They simply don’t have the resources or time to address data maintenance as a proactive activity.
Business users clamor for new data and new reports; new funding is always tied to new business capabilities. Support costs are budgeted, but they’re focused on software and hardware maintenance activities. No one ever considers data maintenance; it’s simply ignored and forgotten.
It’s interesting that we view data as a corporate asset (a strategic corporate asset, no less) while there’s universal agreement that hardware and software are simply tools to support enablement. And where are we investing in maintenance? In the commodity tools, not the strategic corporate asset.
Photo courtesy of Designzillas via Flickr (Creative Commons license).
In one of my previous blogs, I wrote about Data Virtualization technology, one of the more interesting pieces of middleware that can simplify data management. While most of the commercial products in this space share a common set of features and functions, I thought I’d devote this blog to discussing the more advanced features. There are quite a few competing products; the real challenge in differentiating them is understanding their more advanced features.
The attraction of data virtualization is that it simplifies data access. Most IT shops have one of everything, and this includes several different brands of commercial DBMSs, a few open source databases, a slew of BI/reporting tools, and the inevitable list of emerging and specialized tools and technologies (Hadoop, Dremel, Cassandra, etc.). Supporting all of the client-to-server-to-repository interfaces (and the associated configurations) is both complex and time consuming. This is why the advanced capabilities of Data Virtualization have become so valuable to the IT world.
The following details aren’t arranged in any particular order; I’ve identified the ones that I’ve found to be the most valuable (and interesting). Let me also acknowledge that not every DV product supports all of these features.
Intelligent data caching. Repository-to-DV Server data movement is the biggest obstacle in query response time. Most DV products are able to support static caching to reduce repetitive data movement (data is copied and persisted in the DV Server). Unfortunately, this approach has limited success when there are ad hoc users accessing dozens of sources and thousands of tables. The more effective solution is for the DV Server to monitor all queries and dynamically cache data based on user access, query load, and table (and data) access frequency.
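A toy version of the caching idea might look like the following. This is a hedged sketch (a simple least-recently-used cache keyed on query text), not how any particular DV product implements it; production servers also weigh query load, data volume, and table access frequency when deciding what to keep.

```python
from collections import OrderedDict

class QueryCache:
    """Minimal dynamic result cache of the kind a DV server might keep.

    Result sets are cached as queries arrive, and the least-recently-used
    entry is evicted when capacity is exceeded, so frequently accessed
    data stays on the DV server instead of moving across the network."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self._store = OrderedDict()  # query text -> cached result set

    def get(self, query, fetch_from_source):
        if query in self._store:
            self._store.move_to_end(query)   # mark as recently used
            return self._store[query]        # served without a source trip
        result = fetch_from_source(query)    # the expensive repository call
        self._store[query] = result
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the coldest entry
        return result
```

The payoff is that repository-to-server data movement, the biggest obstacle in query response time, only happens on a cache miss.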
Query optimization (w/multi-platform execution). While all DV products claim some amount of query optimization, it’s important to know the details. There are lots of tricks and techniques; however, look for optimization that understands source data volumes, data distribution, data movement latency, and is able to process data on any source platform.
Support for multiple client Interfaces. Since most companies have multiple database products, it can be cumbersome to support and maintain multiple client access configurations. The DV server can act as a single access point for multiple vendor products (a single ODBC interface can replace drivers for each DBMS brand). Additionally, most DV Server drivers support multiple different access methods (ODBC, JDBC, XML, and web services).
Attribute level or value specific data security. This feature supports data security at a much lower granularity than is typically available with most DBMS products. Data can be protected (or restricted) at the level of individual column values, for an entire table or for selected rows.
Metadata tracking and management. Since Data Virtualization is a query-centric middleware environment, it only makes sense to position this server to retrieve, reconcile, and store metadata content from multiple, disparate data repositories.
Data lineage. This item works in tandem with the metadata capability and augments the information by retaining the source details for all data that is retrieved. This not only includes source id information for individual records but also the origin, creation date, and native attribute details.
Query tracking for usage audit. Because the DV Server can act as a centralized access point for user tool access, there are several DV products that support the capture and tracking of all submitted queries. This can be used to track, measure, and analyze end user (or repository) access.
Workflow linkage and processing. This is the ability to execute predefined logic against specific data that is retrieved. While this concept is similar to a macro or stored procedure, it’s much more sophisticated. It could include the ability to direct job control or specialized processing against an answer set prior to delivery (e.g. data hygiene, external access control, stewardship approval, etc.)
Packaged Application Templates. Most packaged applications (CRM, ERP, etc.) contain thousands of tables and columns that can be very difficult to understand and query. Several DV vendors have developed templates containing predefined DV server views that access the most commonly queried data elements.
Setup and Configuration Wizards. Configuring a DV server to access the multiple data sources can be a very time consuming exercise; the administrator needs to define and configure every source repository, the underlying tables (or files), along with the individual data fields. To simplify setup, a configuration wizard reviews the dictionary of an available data source and generates the necessary DV Server configuration details. It further analyzes the table and column names to simplify naming conventions, joins, and data value conversion and standardization details.
Don’t be misled into thinking that Data Virtualization is a highly mature product space where all of the products are nearly identical. They aren’t. Most product vendors spend more time discussing their unique features instead of offering metrics about their core features. It’s important to remember that every Data Virtualization product requires a server that retrieves and processes data to fulfill query requests. This technology is not a commodity, which means that details like setup/configuration time, query performance, and advanced features can vary dramatically across products. Benchmark and test drive the technology before buying.
I was participating in a discussion about Data Virtualization (DV) the other day and was intrigued with the different views that everyone had about a technology that’s been around for more than 10 years. For those of you that don’t participate in IT-centric, geekfest discussions on a regular basis, Data Virtualization software is middleware that allows various disparate data sources to look like a single relational database. Some folks characterize Data Virtualization as a software abstraction layer that removes the storage location and format complexities associated with manipulating data. The bottom line is that Data Virtualization software can make a BI (or any SQL) tool see data as though it’s contained within a single database even though it may be spread across multiple databases, XML files, and even Hadoop systems.
What intrigued me about the conversation is that most of the folks had been introduced to Data Virtualization not as an infrastructure tool that simplifies specific disparate data problems, but as the secret sauce or silver bullet for a specific application. They had all inherited an application that had been built outside of IT to address a business problem that required data to be integrated from a multitude of sources. And in each instance, the applications were able to capitalize on Data Virtualization as a more cost effective solution for integrating detailed data. Instead of building a new platform to store and process another copy of the data, they used Data Virtualization software to query and integrate data from the individual sources systems. And each “solution” utilized a different combination of functions and capabilities.
As with any technology discussion, there’s always someone that believes that their favorite technology is the best thing since sliced bread, and they want to apply their solution to every problem. Data Virtualization is an incredibly powerful technology with a broad array of functions that enable multi-source query processing. Given the relative obscurity of this data management technology, I thought I’d review some of the more basic capabilities supported by this technology.
- Multi-Source Query Processing. Often referred to as query federation: a single query processes data across multiple data stores.
- Simplify Data Access and Navigation. Exposes numerous component sources as a single (virtual) data source. The DV system handles the various network, SQL dialect, and data conversion issues.
- Integrate Data “On the Fly”. Often referred to as data federation: the DV server retrieves and integrates source data to support each individual query.
- Access to Non-Relational Data. The DV server can portray non-relational data (e.g. XML data, flat files, Hadoop, etc.) as structured, relational tables.
- Standardize and Transform Data. Once the data is retrieved from its origin, the DV server converts it (if necessary) into a format that supports matching and integration.
- Integrate Relational and Non-Relational Data. Because DV can make (almost) any data source look like a relational table, this capability is implicit. Keep in mind that the data (or a subset of it) must have some sort of inherent structure.
- Expose a Data Services Interface. The DV server can expose a web service bound to a predefined query that it processes on demand.
- Govern Ad Hoc Queries. The DV server can monitor query submissions, run time, and even complexity – and terminate or prevent processing under specific rule-based conditions.
- Improve Data Security. As a common point of access, the DV server can add another level of data access security to address the inconsistencies that often exist across multiple data store environments.
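To make the data federation idea concrete, here’s a minimal sketch in Python. This is only an illustration of the concept, not any DV vendor’s product; the table, field, and sample values are invented. It retrieves rows from a relational store and from a JSON source, then integrates them on the fly, with no persistent combined copy of the data:

```python
import json
import sqlite3
import pandas as pd

# --- Two disparate "sources" (hypothetical sample data) ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.commit()

orders_json = ('[{"customer_id": 1, "amount": 250.0},'
               ' {"customer_id": 1, "amount": 75.5},'
               ' {"customer_id": 2, "amount": 500.0}]')

# --- "Federated" query: retrieve from each origin, integrate on the fly ---
customers = pd.read_sql("SELECT id, name FROM customers", conn)  # relational source
orders = pd.DataFrame(json.loads(orders_json))                   # non-relational source

# Join and aggregate as though everything lived in one database
result = (customers.merge(orders, left_on="id", right_on="customer_id")
                   .groupby("name", as_index=False)["amount"].sum())
print(result)
```

A real DV server does this behind a standard SQL interface and decides where each piece of work should run; the point here is only that the join happens at query time, against the sources themselves, rather than against a pre-built copy.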
As many folks have learned, Data Virtualization is not a substitute for a data warehouse or a data mart. In order for a DV Server to process data, the data must be retrieved from the origin; consequently, running a query that joins tables spread across multiple systems containing millions of records isn’t practical. An Ethernet network is no substitute for the high-speed interconnect linking a computer’s processor and memory to online storage. However, when the data is spread across multiple systems and there’s no other query alternative, Data Virtualization is certainly worth investigating.
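This retrieval cost is also why DV servers work hard to “push down” filtering and aggregation to each source, so that only the needed rows cross the network. A toy illustration of the difference, using an invented sales table (row counts and values are made up for the example):

```python
import sqlite3

# A hypothetical remote source holding 3,000 sales rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EAST", 100.0), ("WEST", 200.0), ("EAST", 50.0)] * 1000)
conn.commit()

# Naive federation: pull every row across the "network", then filter locally.
all_rows = conn.execute("SELECT region, amount FROM sales").fetchall()
east_total_naive = sum(amt for region, amt in all_rows if region == "EAST")

# Pushdown: let the source do the filtering and aggregation; ship one row back.
east_total_pushed = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'EAST'").fetchone()[0]

print(len(all_rows), east_total_naive, east_total_pushed)
```

Both approaches compute the same total, but the naive version moved 3,000 rows to get it while the pushed-down version moved one. Scale that to millions of records and the warehouse-versus-DV trade-off above becomes obvious.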
I presented a webinar a few weeks back that challenged the popular opinion that the only way to be successful with data science was to hire an individual that has a Swiss Army knife of data skills and business acumen. (The archived webinar is at http://goo.gl/Ka1H2I.)
While I can’t argue with the value of such abilities, my belief is that these types of individuals are very rare, and the benefits of data science are something that every company can realize. Consequently, my belief is that you can approach data science successfully by building a team of focused staff members, provided they cover five role areas: Data Services, Data Engineer, Data Manager, Production Development, and the Data Scientist.
I received quite a few questions during and after the August 12th webinar, so I thought I would devote this week’s blog to those questions (and answers). As is always the case with a blog, feel free to comment, respond, or disagree – I’ll gladly post the feedback below.
Q: In terms of benefits and costs, do you have any words of wisdom on building a business case that can be taken to business leadership for funding?
A: Business case constructs vary by company. What I encourage folks to focus on is the opportunity value in supporting a new initiative. Justifying an initial data science initiative shouldn’t be difficult if your company already supports individuals analyzing data on their desktops. We often find that collecting the existing investment numbers and the results of your advanced analytics team (SAS, R, SPSS, etc.) justifies delving into the world of Data Science.
Q: One problem is that many business leaders do not have a concept of what goes into a scientific discovery process. They are not schooled as scientists.
A: You’re absolutely correct. Most managers are focused on establishing business process, measuring progress, and delivering results. Discovery and exploration aren’t always predictable processes. We often find that initial Data Science initiatives are more likely to be successful if the environment has already adopted the value of reporting and advanced analytics (numerical analysis, data mining, prediction, etc.). If your organization hasn’t fully adopted business intelligence and desktop analysis, you may not be ready for Data Science. If your organization already understands the value of detailed data and analysis, you might want to begin with a more focused analytic effort (e.g. identifying trend indicators, predictive details, or other modeling activities). We’ve seen data science deliver significant business value, but it also requires a manager that understands the complexities and issues of data exploration and discovery.
Q: One of the challenges that we’ve seen in my company is the desire to force fit Data Science into a traditional IT waterfall development method instead of realizing the benefits of taking a more iterative or agile approach. Is there danger in this approach?
A: We find that when organizations already have an existing (and robust) business intelligence and analytics environment, there’s a tendency to follow the tried and true practices of defined requirements, documented project plans, managed development, and scheduled delivery. One thing to keep in mind is that the whole premise of Data Science is analyzing data to uncover new patterns or knowledge. When you first undertake a Data Science initiative, it’s about exploration and discovery, not structured deliverables. It’s reasonable to spin up a project team (preferably using an iterative or agile methodology) once the discovery has been identified and there’s tangible business value in building and deploying a solution around it. However, it’s important to allow the discovery to happen first.
You might consider reading an article from DJ Patil (“Building Data Science Teams”) that discusses the importance of the Production Development role I mentioned. This is the role that takes on the creation of a production deliverable from the raw artifacts and discoveries made by the Data Science team.
Q: It seems like your Data Engineer has a role and responsibility set similar to that of a Data Warehouse architect or ETL developer.
A: The Data Engineers are a hybrid of sorts. They handle all of the data transformation and integration activities, and they are also deeply knowledgeable about the underlying data sources and their content. We often find that the Data Warehouse Architect and ETL Developer are very knowledgeable about the data structures of source and target systems, but they aren’t typically knowledgeable about social media content, external sources, unstructured data, and the lower-level details of the specific data attributes. Obviously, these skills vary from organization to organization. If the individuals in your organization are intimate with this level of knowledge, they may be able to cover the activities associated with a Data Engineer.
Q: What is the difference between the Data Engineers and Data Management team members?
A: Data Engineers focus on retrieving and manipulating data from the various data stores (external and internal). They deal with data transformation, correction, and integration. The Data Management folks support the Data Engineers (thus the skill overlap) but focus more on managing and tracking the actual data assets that are going to be used by data scientists and other analysts within the company (understanding the content, the values, the formats, and the idiosyncrasies).
Q: Isn’t there a risk in building a team of folks with specialized skills (instead of having individuals with a broad set of knowledge)? With specialists, don’t we risk freezing the current state of the art, making the organization inflexible to change? Doesn’t it also reduce everyone’s overall understanding of the goal (e.g. the technicians focus on their tools’ functions, not the actual results they’re expected to deliver)?
A: While I see your perspective, I’d suggest a slightly different view. The premise of defining the various roles is to identify the work activities (and skills) necessary to complete a body of work. Each role should still evolve with skill growth — to ensure individuals can handle more and more complex activities. There will continue to be enormous growth and evolution in the world of Data Science in the variety of external data sources, number of data interfaces, and the variety of data integration tools. Establishing different roles ensures there’s an awareness of the breadth of skills required to complete the body of work. It’s entirely reasonable for an individual to cover multiple roles; however, as the workload increases, it’s very likely that specialization will be necessary to support the added work effort. Henry Ford used the assembly line to revolutionize manufacturing. He was able to utilize less skilled workers to handle the less sophisticated tasks so he could ensure his craftsmen continued to focus on more and more specialized (and complex) activities. Data integration and management activities support (and enable) Data Science. Specialization should be focused on the less complex (and more easily staffed) roles that will free up the Data Scientist’s time to allow them to focus on their core strengths.
Q: Is this intended to be an enterprise-wide team?
A: We’ve seen Data Science teams positioned as a departmental resource (e.g. dedicated to supporting marketing analytics); we’ve also seen teams set up as an enterprise resource. The decision is typically driven by the culture and needs of your company.
Q: Where is the business orientation on the data team? Do you need someone that knows what questions to ask and can then take all of the data and distill it down to insights that a CEO can act on?
A: The “business orientation” usually resides with the Data Scientist role. The Data Science team isn’t typically set up to respond to business user requests (like a traditional BI team); they are usually driven by the Data Scientist that understands and is tasked with addressing the priority needs of the company. The Data Scientist doesn’t work in a vacuum; they have to interact with key business stakeholders on a regular basis. However, Data Science shouldn’t be structured like a traditional applications development team either. The team is focused on discovery and exploration – not core IT development. Take a look at one of the more popular articles on the topic, “Data Scientist: the sexiest job of the 21st century” by Tom Davenport and DJ Patil http://goo.gl/CmCtv9
I’ve been intrigued with all of the attention that the world of Data Science has received. It seems that every popular business magazine has published several articles and it’s become a mainstream topic at most industry conferences. One of the things that struck me as odd is that there’s a group of folks that actually believe that all of the activities necessary to deliver new business discoveries with data science can be reasonably addressed by finding individuals that have a cornucopia of technical and business skills. One popular belief is that a Data Scientist should be able to address all of the business and technical activities necessary to identify, qualify, prove, and explain a business idea with detailed data.
If you can find individuals that comprehend the peculiarities of source data extraction, have mastered data integration techniques, understand parallel algorithms to process tens of billions of records, have worked with specialized data preparation tools, and can debate your company’s business strategy and priorities – Cool! Hire these folks and chain their leg to the desk as soon as possible.
If you can’t, you might consider building a team that can cover the various roles that are necessary to support a Data Science initiative. There’s a lot more to Data Science than simply processing a pile of data with the latest open source framework. The roles that you should consider include:
- Data Services. Manages the various data repositories that feed data to the analytics effort. This includes understanding the schemas, tracking the data content, and making sure the platforms are maintained. Companies with existing data warehouses, data marts, or reporting systems typically have a group of folks focused on these activities (DBAs, administrators, etc.).
- Data Engineer. Responsible for developing and implementing tools to gather, move, process, and manage data. In most analytics environments, these activities are handled by the data integration team. In the world of Big Data or Data Science, this isn’t just ETL development for batch files; it also includes processing data streams and handling the cleansing and standardization of numerous structured and unstructured data sources.
- Data Manager. Handles the traditional data management or source data stewardship role; the focus is supporting development access and manipulation of data content. This includes tracking the available data sources (internal and external), understanding the location and underlying details of specific attributes, and supporting developers’ code construction efforts.
- Production Development. Responsible for packaging the Data Scientist’s discoveries into a production-ready deliverable. This may include one or many components: new data attributes, new algorithms, a new data processing method, or an entirely new end-user tool. The goal is to ensure that the discoveries deliver business value.
- Data Scientist. The team leader and the individual that excels at analyzing data to help a business gain a competitive edge. They are adept at technical activities and equally qualified to lead a business discussion as to the benefits of a new business strategy or approach. They can tackle all aspects of a problem and often lead the interdisciplinary team to construct an analytics solution.
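To ground the Data Engineer’s cleansing and standardization work in something tangible, here’s a minimal Python sketch. The feed layouts and field names are invented for illustration; the point is simply that records arriving from differently formatted sources get normalized into one consistent schema before anyone analyzes them:

```python
import csv
import io
import json

# Hypothetical feeds: a CSV extract and a JSON event stream with different field names.
csv_feed = "cust_name,phone\n Acme Corp ,(555) 123-4567\n"
json_feed = '[{"customerName": "GLOBEX", "phoneNumber": "555.987.6543"}]'

def normalize_phone(raw):
    """Strip punctuation so phone numbers compare consistently across sources."""
    return "".join(ch for ch in raw if ch.isdigit())

def standardize(name, phone):
    """Map a raw (name, phone) pair into the one schema every source shares."""
    return {"name": name.strip().title(), "phone": normalize_phone(phone)}

records = []
for row in csv.DictReader(io.StringIO(csv_feed)):
    records.append(standardize(row["cust_name"], row["phone"]))
for event in json.loads(json_feed):
    records.append(standardize(event["customerName"], event["phoneNumber"]))

print(records)
```

Trivial as it looks, this is exactly the kind of unglamorous work that, done by the Data Engineer, keeps the Data Scientist from burning discovery time on formatting mismatches.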
There’s no shortage of success stories about the amazing data discoveries uncovered by Data Scientists. In many of those companies, the Data Scientist didn’t have an incumbent data warehousing or analytics environment; they couldn’t pick up the phone to call a data architect, there wasn’t any metadata documentation, and their company didn’t have a standard set of data management tools. They were on their own. So, the Data Scientist became “chief cook and bottle washer” for everything that is big data and analytics.
Most companies today have institutionalized data analysis; there are multiple data warehouses, lots of dashboards, and even a query support desk. And while there’s a big difference between desktop reporting and processing social media feedback, much of the “behind the scenes” data management and data integration work is the same. If your company already has an incumbent data and analytics environment, it makes sense to leverage existing methods, practices, and staff skills. Let the Data Scientists focus on identifying the next big idea and the heavy analytics; let the rest of the team deal with all of the other work.