BARC analysts Christian Fuchs and Nina Lorenz offer up some advice on how to use BARC Score in your financial performance management and integrated planning & analytics software selection projects.
A preview of BARC Score Financial Performance Management 2021, BARC’s overview of the market for Financial Performance Management (FPM) software tools.
A preview of BARC Score Integrated Planning & Analytics 2021, BARC’s overview of the market for Integrated Planning & Analytics (IP&A) software tools.
The assessment and classification of business software and its suppliers, for example in the software selection process, can often be a complicated task for companies. BARC Scores are designed to demystify this process.
An overview of data catalogs by BARC Analyst Timm Grosser, including tips on how to select the right data cataloging solution for your organization.
Data is essential for companies to keep up with the digital age. Everyone knows that by now. But it’s not so easy to extract the desired value from data and shine with innovative, data-driven business applications. Instead, we often see data chaos that has been growing for years in the form of fragmented data landscapes and distributed expert knowledge.
A hotly discussed technological approach to make knowledge of distributed data available is the data catalog, the “Yellow Pages” for business-relevant data. It stores information about data in the form of metadata and structures, and makes it searchable.
A data catalog tool achieves its usefulness primarily through three essential points:
- covering information needs quickly and easily
- capturing and curating metadata (knowledge) as efficiently (automated) as possible
- providing a platform for the exchange of knowledge for “all”
In addition, functions for data governance and/or data access are valuable.
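To make the first point more concrete, here is a minimal sketch of what "covering information needs quickly and easily" amounts to in practice: datasets described by metadata, searchable by name, description or tags. All names and fields here are hypothetical, not taken from any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One dataset described by its metadata (hypothetical structure)."""
    name: str
    description: str
    owner: str
    tags: list = field(default_factory=list)

def search(catalog, term):
    """Return entries whose name, description or tags contain the term."""
    term = term.lower()
    return [e for e in catalog
            if term in e.name.lower()
            or term in e.description.lower()
            or any(term in t.lower() for t in e.tags)]

catalog = [
    CatalogEntry("sales_2023", "Monthly revenue per region", "finance", ["revenue", "sales"]),
    CatalogEntry("churn_model_input", "Customer features for churn scoring", "analytics", ["customers", "churn"]),
]

print([e.name for e in search(catalog, "revenue")])  # → ['sales_2023']
```

Real catalogs add ranking, lineage and access control on top, but the core value proposition is exactly this: metadata made searchable.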
Finding the right tool can be more complicated than you might expect. The market for data catalogs is anything but transparent. As with other trending areas, the range of products is exploding, and we are now aware of more than 90 solutions with data cataloging functions operating worldwide. But not all data catalogs are the same. These offerings vary in focus, content, features and supported use cases. The following table provides an overview of the basic tool types for data cataloging. Basically, there are options for specific use cases (as part of a BI or analytics user tool, as part of an environment) and offerings that provide a comprehensive, independent solution (specialists, as part of a data governance (DG)/data management (DM) platform).
Pay particular attention to interfaces and transparent, open metadata models for metadata exchange with other catalogs and systems when selecting a data catalog. This offers you a number of advantages:
– You avoid vendor lock-in and can use the tool’s capabilities in a targeted manner
– You can more easily transfer catalogs from different areas or environments to a parent catalog
– It allows easier migration or integration with more powerful tools or tools with a different focus
| Catalog scenario | Characteristics | Tool examples |
|---|---|---|
| …homemade | Rudimentary catalog functions | Excel, Confluence, Wiki, … |
| …as part of a BI/analytics tool | Catalog functions related to the data/artifacts in the environment | Alteryx, Qlik, Tableau, … |
| …as part of an environment | Catalog functions related to technical data/artifacts in the environment | Amazon, Cloudera, Google, … |
| …as specialist | Comprehensive catalog functions related to data and partly artifacts from different tools/environments; added functionality such as data governance | Alation, Waterline, Zeenea, … |
| …as part of a data governance/DM platform | Comprehensive catalog functions related to data and partly artifacts from different tools/environments; additional functionality from the portfolio (e.g., workflows, data quality) | Collibra, Infogix, Informatica, SAP, … |

Table 1: Data cataloging tool types
When selecting a data catalog, its functions should be carefully checked. A checklist should normally include:
– Adapters and functions for metadata integration and exchange
– Supported content (e.g., supported metadata types, openness and extensibility of the metadata model)
– Functions and machine support for the maintenance (curation) of metadata
– Functions and machine support for catalog use and search/navigation/analysis of metadata
– Ease of use
– Support for collaboration
– Further data management functions (e.g., for data governance, data preparation, data quality and data protection)
We are also happy to support you directly – with our best practice experience, established process models and numerous templates – through the entire selection process from requirements gathering to the creation of a shortlist, proof of concept support and deciding which tool to use. This gives you greater decision security, saves you time and resources and provides you with a partner who can help to create a data cataloging roadmap which is both transparent and acceptable to management and relevant stakeholders.
Here is a common scenario, seen in organizations all over the planet: self-service BI, which is meant to be a good idea.
For quite a while now, self-service BI has been all the rage. Understandably, business users are often discouraged, disappointed or outright annoyed by the lack of BI support they get from their local IT departments, so they decide to take matters into their own hands. In most cases, Microsoft Excel comes to the rescue, but when that approach doesn’t scale or is otherwise no longer manageable, those functional departments look for a more pragmatic solution. They call it self-service BI, but it is really more self-help. This is where companies such as Qlik or Tableau have seen huge success, generating demand and adoption through grassroots movements that often happened behind IT’s back. After a while, those DIY implementations become a little more complex, and the newly formed shadow IT groups need help.
What usually happens next?
After IT’s aggravation has subsided and the rogue projects have been reined in, Big IT decides to give the business what they say they want and sets out to enable their users through controlled self-service BI. Or so they think. Because it is not about the tool. Now the functional departments are creating reports and dashboards and analyzing data to their hearts’ content, all with IT’s blessing, yet they still don’t seem to be happy. Why? Because they often rely on self-serviced definitions (for lack of a better description). Only in organizations that base their BI infrastructure on a commonly agreed data model and semantic layer can the confusion over what “revenue” means, how many customers have churned, or how profitable a certain product is be kept to a minimum. Many others look at their beautiful self-serviced dashboards, nicely rendered on the latest mobile gadget, and have the same funny feeling as before that the figures they are looking at do not represent the truth. So, be careful what you wish for when you say you want “self-service”, as there are still a few traps along the way. Some infrastructure guardrails are necessary.
I get it. Responding to an organisation’s Request For Proposal (RFP) is not everyone’s favorite way to spend time. After all, first you need to read and understand the requirements described in the RFP, then figure out whether your organization has something to offer in this field and whether the necessary personnel are available, come up with competitive pricing, and, when that’s all done, write it all up in a comprehensive proposal that stands out from the rest. And all of that knowing full well that it may be a total waste of time if another provider is selected. It’s tough to be a vendor sometimes.
I am currently involved in an RFP for what can be considered an end-to-end BI implementation, from data integration and data quality to the data warehouse, its data model, and finally various BI front-ends. Almost a green field approach, which is kinda rare these days.
So after carefully developing the requirements list and turning it into a concise RFP, it’s now time to review the responses from about 10 solution providers, some of which are global organisations, others smaller, locally operating vendors. And it became very clear that the size of the organization is no indication of the quality of the response. Similarly, a vendor’s grand reputation can stand in stark contrast to a very sloppy proposal from the same company. Sometimes it seems as if the vendor isn’t even interested in the engagement. So why respond at all then?
However, what gets me really mad is when a vendor sends a 50-page proposal of which 40 pages are pure propaganda, five pages are boilerplate, and only five pages are interesting content about the topic at hand. If I want to read a vendor’s marketing blurb, I’ll go and visit their website. But when I want to learn about a vendor’s approach, capabilities, pricing, skills, etc., they had better spare me the slogans. After all, we didn’t just pull their name out of thin air, but researched them thoroughly before inviting them to the RFP in the first place. So there is little need for a standard marketing pitch.
I recently received an invitation to a webinar named “Your checklist for Cloud BI success!” from Yellowfin, a BI vendor from down under, headquartered in Melbourne. In the announcement, I read
By the end of 2014, Gartner expects almost 50% of organizations to deliver their mission-critical BI via the cloud.
Say what? The end of 2014 is not even a whole month away, and by that time, half of all organizations are using BI in the cloud, even for what’s considered mission-critical decisions? No way, José. I mean, this incredible adoption of BI in the cloud must either have happened in a parallel universe, or something is totally off with that statement. I know my former colleagues at Gartner well, and I doubt that anyone dealing with BI trends would come up with such a prediction.
I asked the folks at Yellowfin for some clarification as to where they found that statement, and received the following references, two media outlets and one blog:
- ComputerWorld: Cloud BI: Going where the data lives
- Brittenford Systems: Cloud Business Intelligence for the Enterprise: Follow the Data
- Midsize Insider: Cloud Computing and Business Intelligence Today
After scanning those texts, I came across related sections discussing BI in the cloud. In the ComputerWorld article, I read:
Researchers at Gartner say that 2014 may be the tipping point for cloud BI. In each of the last four years, around 30% of respondents to a Gartner survey said they’d run their mission-critical BI in the cloud. This year, however, nearly half — 45% — said they would adopt cloud BI.
The Brittenford blog says the same thing, word for word. Looks like the “blogger” just pulled that section from the CW article.
Researchers at Gartner say that 2014 may be the tipping point for cloud BI. In each of the last four years, around 30% of respondents to a Gartner survey said they’d run their mission-critical BI in the cloud. This year, however, nearly half (45 percent) said they would adopt cloud BI.
Marissa Tejada from the Midsize Insider also references the CW article, the message is the same.
In a recent ComputerWorld article, Gartner analysts revealed their insights on recent industry trends around the cloud and BI. Within the last four years, about 30 percent of firms preferred to run their mission-critical BI through cloud computing. That percentage is set to increase to 45 percent this year.
Wait a minute, those 30% or 45% from the first two references said they would adopt cloud BI. Which means they did not, or not yet. In the Midsize Insider article, those same percentages preferred cloud BI, which sounds as if they are already underway, which I really doubt.
Bottom line: Lots of companies consider cloud BI as an option. That’s it, no surprise there. Now, I personally could consider a lot of things, for example, base jumping from a skyscraper or putting a Hello Kitty tattoo on my forehead. Doesn’t mean I’ll do either. Same thing will be true for many of those organizations that said they’d consider cloud BI. They’ll never do it.
The much more interesting thing to know is obviously how many organizations have a significant (!) number of their user population consuming BI via the cloud today. Still relatively few. According to the 2014 BARC BI Survey, which includes a breakdown of cloud BI adoption by vendor from over 2,000 end-user respondents, only 10% of organizations use cloud BI at all. The percentage of total users is obviously even smaller than that. If only the BI vendors with a large customer base would tell us what percentage of their BI revenue actually comes from cloud BI subscriptions, we would have a good indication of adoption. I’m fairly sure that it is still in the low single-digit percentages. So far, no vendor has objected.
Still, as for the Yellowfin statement, that started this whole investigation:
By the end of 2014, Gartner expects almost 50% of organizations to deliver their mission-critical BI via the cloud.
Maybe it’s just a simple misunderstanding, maybe it’s wishful thinking. In any case, it clearly distorts the facts, which isn’t helping anyone.
Coincidentally, Yellowfin, a provider of cloud BI solutions, is leading the cloud BI adoption statistics by quite a margin. Get the whole BI Survey here.
Welcome to my blog. After a blogging hiatus of about two years, where I just didn’t feel like it, I am back online and ready to pick up some of the topics where I left off. And I will certainly be willing to take up the gauntlet if need be. There are quite a few controversial subjects in the land of BI, big data, and overall data management, and I will start sharing my views going forward.
One of my favorite discussion topics over the last 10 years is “BI strategy” or rather the lack thereof. I may sound like a broken record, but at closer inspection of organizations’ attempts at business intelligence, it becomes glaringly obvious that most implementations are not based on a real strategy, but on faith and hope.
So why is it that hardly any organization has a true BI strategy and so many BI implementations fail to deliver? Here are a few attempts at an explanation:
- Lack of executive commitment
- Shortsighted focus on just quick wins
- Lack of communication
- Internal politics
- Not invented here syndrome
- Extreme time pressure
I could go on. It is interesting, though, that hardly any of these stumbling blocks are of a technological nature. Pretty much all problems that relate to the lack of a BI strategy are homemade. At the same time, when I review a document that is called “BI strategy”, it almost exclusively focuses on the technology bits, as if that is what it is all about. Of course, BI would never work without technology, but the more important and much harder topics to think about are people-related: requirements, steering, stewardship, or program management.
As a reference, here are some of the suggested chapter headings of a potential strategy document. Feel free to contact me to discuss.