Author: Eugene Khazin, Principal and Co-founder, Prime TSR
What’s interesting about analytics projects, from my experience, is that they all start out so promising. Someone presents a mock dashboard of what data they can visualize once the project is complete, and immediately everyone is sold on it.
The dashboard looks something like this:
Look how beautiful that is. I mean, who doesn’t want to use data to make better decisions, gain operational efficiencies, and get an edge on the competition?
The reality, however, is starkly different. A majority of enterprise analytics projects fail. Not just fail, but fail before they even start.
Why does this happen? Intrigued, I wanted to find the answer, so I got our team together to conduct a study, interviewing 20 senior IT executives in the Chicago area about their biggest challenges with analytics projects.
What we discovered was surprising, to say the least. It had nothing to do with the visualization, the data science team, or the analytics products. The #1 issue that caused many of these analytics projects to fail?
The inability to get the right data to be analyzed.
Data science is the easy part. Getting the right data, and getting the data ready for analyses, is much more difficult. ~McKinsey
Companies simply struggled to get the right data. We discovered three major pain points:
1. Nobody could agree on which system was the “system of record” or even if the data could be trusted.
One IT executive's response stood out to me:
“What we didn’t anticipate was the huge political battle internally on which data elements and which systems could be trusted. It fueled the negativity and demise of our analytics project.”
The good news is that there is a much better solution to this typical enterprise problem. I’ll get to that shortly.
2. Consuming data from multiple sources was almost an impossible task.
You know that nice-looking dashboard? Well, it needs data. A lot of it. And 99% of the time the data comes from multiple systems and sources. And, as you can guess, getting access to that data was a huge challenge for the following reasons:
- It's scattered across the organization. Some departments would rather hoard their data than share it with other teams.
- The data is not clean, not organized, and not continuously available.
3. The project was doomed from the beginning.
The scope of the project was too big. Twelve months was simply not long enough to get the insights needed to justify the cost. A classic case of over-promising and under-delivering.
They all loved the dashboard, but nobody knew how to get the data. It was an afterthought.
One response laid out exactly how the project started and how it failed:
- We felt that we needed to do analytics and AI because we could use this information to streamline operations.
- We hired an expensive team of data scientists and invested in the infrastructure to do the analysis since we had all the data.
- The data science team began to work on the project, and the ramp-up time was longer than we expected.
- We cut the project and data science team because we couldn’t show results in the 12 months the project existed.
- The reason? The data wasn’t in one location and we struggled to get the data science team the information they needed.
The hardest part about hearing this is knowing that the executive's problem still exists.
WHAT SHOULD HAVE HAPPENED VS. WHAT ACTUALLY HAPPENS.
Back to the original point. If data scientists don’t have the right data that’s easily accessible, then an analytics project will most likely fail, and honestly, data scientists become a huge overhead.
Here are my three guiding principles for making your next analytics project a success:
1. Make it easy for business and IT functions to access data.
Jeff Bezos famously issued a mandate in 2002 that forced every team to expose its data and functionality through service interfaces, and to design each interface so it could be consumed externally.
Amazon, an online bookstore company at the time, essentially kickstarted the entire cloud movement by allowing third-party developers to use Amazon's internal services (later productized as AWS: S3, EC2, etc.) for their own businesses.
In every executive meeting I have, one of the first points I stress is changing how the company views data: either you make your data easily accessible to both IT and the business, or you will have a hard time succeeding.
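The "service interface" idea can be sketched in a few lines. This is a hypothetical example, not Amazon's actual architecture: assume a team owns some customer records and exposes them as a read-only JSON endpoint using only Python's standard library, so any other team (or tool) can consume the data over HTTP instead of asking for a database export.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical records a team might "own" -- names and regions are invented.
CUSTOMERS = [
    {"id": 1, "name": "Acme Corp", "region": "Midwest"},
    {"id": 2, "name": "Globex", "region": "Northeast"},
]

class CustomerHandler(BaseHTTPRequestHandler):
    """Exposes the team's data through a simple read-only JSON endpoint."""

    def do_GET(self):
        if self.path == "/customers":
            body = json.dumps(CUSTOMERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass

if __name__ == "__main__":
    # Bind an ephemeral port and serve in the background.
    server = HTTPServer(("127.0.0.1", 0), CustomerHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/customers"
    with urllib.request.urlopen(url) as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

The point of the mandate is not the code, it's the contract: once data is only reachable through an interface, no consumer depends on a team's internal tables, and the same interface can later be opened to partners or the public.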
2. Make the data usable.
The teams of the executives we interviewed struggled to create any value from the data simply because they couldn't get the right systems' data into one central location.
The reason data lakes and data warehouses have grown in importance in many organizations is that they solve a massive problem enterprises face every day: centralizing data is hard because of its sheer volume, its variety (structured and unstructured), and the time it takes teams to ingest, cleanse, and transform data from every single source.
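The ingest-cleanse-transform loop can be illustrated with a toy pipeline. All of the source systems, field names, and values below are invented for illustration: two systems describe the same customers with different schemas and dirty values, and the sketch normalizes both into one shared table.

```python
from datetime import date

# Hypothetical raw records from two source systems -- schemas and
# values are invented; real sources would be databases or files.
CRM_ROWS = [
    {"customer": " Acme Corp ", "signup": "2023-01-15", "revenue": "1200.50"},
    {"customer": "Globex", "signup": "2023-03-02", "revenue": ""},
]
BILLING_ROWS = [
    {"cust_name": "ACME CORP", "invoiced": 900.0},
]

def clean_crm(row):
    """Normalize one CRM record: trim names, parse dates, default revenue."""
    return {
        "customer": row["customer"].strip().upper(),
        "signup": date.fromisoformat(row["signup"]),
        "revenue": float(row["revenue"]) if row["revenue"] else 0.0,
    }

def clean_billing(row):
    """Map the billing system's field names onto the shared schema."""
    return {"customer": row["cust_name"].strip().upper(),
            "invoiced": row["invoiced"]}

def centralize(crm_rows, billing_rows):
    """Merge both sources into one central table keyed by customer name."""
    central = {}
    for row in map(clean_crm, crm_rows):
        central.setdefault(row["customer"], {}).update(row)
    for row in map(clean_billing, billing_rows):
        central.setdefault(row["customer"], {}).update(row)
    return central
```

Multiply this by dozens of sources, each with its own formats, keys, and refresh schedules, and you can see why teams underestimate how long centralization takes.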
3. Get traction by focusing on a single use case.
If an organization is new to data engineering and analytics, waterfall-style project methodologies are your worst enemy. Value should be proven in three months, not 18.
My recommendation is to focus on tangible business use cases for reporting and predictive analytics. Examples:
- Build a report that tells me which doctors and which regions prescribe the most opioids.
- Tell me which locations we should open retail branches in. (Based on market research data, financials, geographic data, etc.)
- Predict which patients will be readmitted to the hospital within 15 days.
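The first use case above is small enough to sketch end to end. The prescription records below are entirely hypothetical: given rows of doctor, region, and prescription counts, the report is just a ranking and a per-region rollup.

```python
from collections import defaultdict

# Hypothetical prescription records -- doctors, regions, and counts
# are invented for illustration only.
PRESCRIPTIONS = [
    {"doctor": "Dr. Lee", "region": "Midwest", "opioid_scripts": 42},
    {"doctor": "Dr. Patel", "region": "Midwest", "opioid_scripts": 17},
    {"doctor": "Dr. Gomez", "region": "South", "opioid_scripts": 63},
]

def top_prescribers(rows):
    """Rank doctors by opioid prescriptions, highest first."""
    return sorted(rows, key=lambda r: r["opioid_scripts"], reverse=True)

def scripts_by_region(rows):
    """Total opioid prescriptions per region."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["region"]] += r["opioid_scripts"]
    return dict(totals)
```

Notice how little of the work is analytics: once the prescription data is centralized and clean, the "report" is a sort and a sum. That is exactly why a single narrow use case can deliver value in months rather than years.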
Build your projects/proof-of-concepts around individual business use cases. You should have a clear idea of simple use cases before you start trying to transform the entire organization.
Start small and get quick wins.