Data fabric: A dream too soon?
Julian Thomas, Principal Consultant at PBT Group
In my previous blog I discussed the differences between data fabric and data mesh while also touching on the complexities of adopting the former into business practices. The reality is that most organisations are waiting for a globally recognised methodology to be available before considering transitioning to the data fabric.
However, the size and agile nature of startups and SMEs make them the more likely candidates to begin experimenting with this approach. While this might help drive momentum for broader adoption, there is still a long journey ahead before the data fabric becomes mainstream.
One of the reasons for the slow uptake is the diverse set of components that comprise the data fabric. These include virtualisation, artificial intelligence to automate processes, the dynamic loading and processing of data, and predictive analytics, to name a few. Each of these requires a different set of technologies and capabilities that can come at significant expense.
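To illustrate what "dynamic loading and processing of data" means in practice, consider a pipeline driven by metadata rather than hand-coded per source. The sketch below is a minimal, hypothetical example; the catalogue entries, table names, and column names are invented for illustration and do not refer to any specific product:

```python
# Minimal sketch of metadata-driven (dynamic) data loading, a core
# data fabric idea: pipelines are generated from a metadata catalogue
# rather than hand-coded per source. All names here are hypothetical.
import csv
import io

# Metadata catalogue: each entry describes a source and how to process it.
CATALOGUE = [
    {"name": "sales", "key": "order_id", "numeric": ["amount"]},
    {"name": "customers", "key": "customer_id", "numeric": []},
]

def load_source(raw_csv, meta):
    """Parse a CSV source and cast columns according to its metadata."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        for col in meta["numeric"]:
            row[col] = float(row[col])
        rows.append(row)
    # Index rows by the declared key so downstream steps can join them.
    return {row[meta["key"]]: row for row in rows}

def run_pipeline(raw_data):
    """Dynamically process every catalogued source present in the input."""
    return {m["name"]: load_source(raw_data[m["name"]], m)
            for m in CATALOGUE if m["name"] in raw_data}
```

Adding a new source then becomes a catalogue entry rather than a new pipeline, which is the kind of automation the data fabric promises at scale.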
Just consider the traditional information management environment and its associated costs. Not many organisations have been able to implement information management holistically, and if that is the case, it is unlikely that the data fabric will gain significant momentum in the near future.
Is there an immediate need for the data fabric? Maybe in 10 years’ time it will be easier to integrate everything more effectively. Of course, technologists are saying that it needs to be done now; however, businesses are not yet looking for a completely automated, AI-driven pipeline.
Further complicating things is just how few large organisations are performing data science effectively. Capabilities such as risk scoring, campaign management, and fraud detection are still not pervasive at many companies. Businesses have barely scratched the surface of getting the basics of data analysis right, let alone exploring the opportunities that AI and robotic process automation can provide.
The data fabric is a very tough sell from both cost and skills perspectives. This does not mean companies should ignore aspects of the data fabric. There is an opportunity to evolve organically towards a data fabric state, as opposed to a big bang implementation.
To accomplish this, companies must take elements out of the ‘dream state’ (the perfect environment) and look at how best to implement them realistically now. Decision-makers must examine the company’s existing processes and identify how to optimise the availability of their data.
Most corporates already have the technology in place to expose and share data efficiently. Unfortunately, modern data warehouses often run on expensive software and servers while still being constrained by the vision of the people operating them, which limits the potential for improving processes.
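Exposing and sharing data need not wait for a full data fabric: virtualisation can start as a thin query layer over existing stores, without physically copying the data. The following is a minimal sketch of that idea; the class, source names, and records are hypothetical, not a reference to any vendor's API:

```python
# Minimal sketch of data virtualisation: one thin interface over
# several existing stores, pulling rows on demand instead of copying
# them into a new platform. All names here are hypothetical.
class VirtualCatalogue:
    def __init__(self):
        self._sources = {}

    def register(self, name, fetch):
        """Register a source under a name with a zero-argument fetch callable."""
        self._sources[name] = fetch

    def query(self, name, predicate=lambda row: True):
        """Fetch rows from the named source at query time, filtered in place."""
        return [row for row in self._sources[name]() if predicate(row)]

# Two existing 'systems' exposed through the same interface.
catalogue = VirtualCatalogue()
catalogue.register("warehouse", lambda: [{"id": 1, "region": "EMEA"},
                                         {"id": 2, "region": "APAC"}])
catalogue.register("crm", lambda: [{"id": 9, "active": True}])

emea_orders = catalogue.query("warehouse", lambda r: r["region"] == "EMEA")
```

The design choice worth noting is that each source stays where it is; the catalogue only knows how to reach it, which is how a company can start improving data availability with the technology it already owns.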
Using the principle of improving the quality of the existing data ecosystem, a company can speed up the flow of information. This could very well be the most basic aspect needed to make the data fabric a reality in the years to come.