The business of ensuring that corporate data assets are reliable and reusable – Enterprise Information Management (EIM) – has never mattered more. We are leveraging more data, more often, to measure and optimise more business processes. And “garbage in, garbage out” is as true today as it was when the phrase was first coined.
Unfortunately, in most organisations, traditional EIM methods and processes are either at breaking point – or, in fact, already broken.
Traditional approaches to EIM leverage a set of well-known, well-understood and interconnected methods: Meta-Data Management; Data Quality Management; Data Integration; Master Data Management; and Data Access and Security.
These are woven together through constructs like Data and Architecture Governance Boards, which define policies, principles, rules and standards that enable the design of end-to-end solutions with EIM capabilities built-in, not bolted-on after the fact.
But here’s a dirty little secret. Very few companies have either the discipline or the resources to rigorously apply all of these methods and processes to the data that are already captured in their existing Data Warehouses.
If you assume that your bank is protecting the copy of your Personally Identifiable Information (PII) that it stores in its Data Warehouse with strong encryption, for example, you are likely being generous.
Pushing the Limits
The situation gets worse when you consider that organisations today must deal not only with data from within, but also with data that originate outside the corporation – social media data, for example. These data typically need to be interpreted in multiple different ways, depending on the context of the analysis.
That’s why more and more “Logical Data Warehouses” are being deployed to extend and enhance the capability of existing decision support systems, so that they can capture and analyse much more data.
But we have to acknowledge that something is going to have to give. Not because EIM doesn’t matter in the brave new world of the Logical Data Warehouse, but rather because the processes and organisational models that aren’t quite good enough today just won’t scale to “100x” volumes and complexities, which is where the “Sentient Enterprise” is going to be tomorrow.
Evolution is Inevitable
All of which means that Information Management and associated models of governance are going to have to evolve – indeed, are already evolving at leading organisations that are adapting to big data.
If we want to know what the future of EIM looks like, we need only look to Wikipedia and Facebook. Because what Wikipedia and Facebook teach us is that social models of content curation and collaboration do scale.
The Future of EIM
Now before the veteran EIM practitioners throw up their hands in horror, I am not suggesting the wholesale, laissez-faire abandonment of data access rules and policies. Or that organisations give up on integrating data that are frequently re-used, shared and compared across different departments. Or that those same organisations stop worrying about the accuracy of the financial metrics that they report to Wall Street.
What I am saying is that organisations will increasingly need to crowd-source a lot of their meta-data. They’ll need to know when to live with “good enough” quality for some, less critical data. They will need to galvanise the entire organisation – and indeed partners and suppliers outside it – around the task of figuring out when and how different data can be leveraged for different purposes. And to find ways of making that knowledge not merely available, but easily accessible.
In other words, they will need to build a Corporate Data Catalogue that looks and feels a lot like Wikipedia, but which borrows the “like” and “share” concepts from Facebook.
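To make the idea concrete, here is a minimal sketch of what such a catalogue entry might look like. All names here (`CatalogEntry`, `DataCatalogue`) are illustrative assumptions, not a real product or API: the point is simply that crowd-sourced tags and annotations play the Wikipedia role, while a “like” count plays the Facebook role of surfacing the most-trusted datasets first.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only – class and method names are invented
# for illustration, not taken from any real data-catalogue product.

@dataclass
class CatalogEntry:
    name: str
    description: str
    steward: str
    tags: set = field(default_factory=set)            # crowd-sourced metadata
    likes: int = 0                                    # Facebook-style endorsement
    annotations: list = field(default_factory=list)   # Wikipedia-style notes

    def annotate(self, author: str, note: str) -> None:
        """Anyone in the organisation can add context, wiki-style."""
        self.annotations.append((author, note))

    def like(self) -> None:
        """Lightweight signal that a consumer found this dataset useful."""
        self.likes += 1


class DataCatalogue:
    def __init__(self):
        self._entries = {}

    def register(self, entry: CatalogEntry) -> None:
        self._entries[entry.name] = entry

    def search(self, tag: str) -> list:
        """Return matching datasets, most-endorsed first."""
        hits = [e for e in self._entries.values() if tag in e.tags]
        return sorted(hits, key=lambda e: e.likes, reverse=True)


# Example usage: two datasets, crowd-tagged and endorsed.
catalogue = DataCatalogue()
sales = CatalogEntry("sales_daily", "Daily sales facts", steward="finance")
sales.tags.update({"sales", "finance"})
social = CatalogEntry("twitter_mentions", "Raw brand mentions", steward="marketing")
social.tags.add("sales")

catalogue.register(sales)
catalogue.register(social)
sales.like()
sales.like()
sales.annotate("analyst_jane", "Reconciled against the GL each month.")

print([e.name for e in catalogue.search("sales")])
```

The design choice to note is that quality signals here are social, not mandated: a dataset earns trust through endorsements and annotations rather than through a central governance board certifying every asset up front.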
This post first appeared on Forbes TeradataVoice on 11/03/2016.