The Best Way to Monetise your Data – Just Use it!

Monday March 10th, 2014

Monetisation of data seems to be a hot topic at the moment. The suggestion is that it may be easier to make money by monetising our data for others to use than by doing things with our data in our own businesses.

So what exactly is monetising data? There seem to be two schools of thought on this, or two ways of looking at it. One is something along the lines of “selling your data”, or, to put it more diplomatically, allowing external organisations to gain utility from the data and insights you have built. This may take the form of selling data enrichment or enhancement services (“polishing their data”), providing anonymous, targeted access to your customers through your own channels (“customer access”), or giving others access to sanitised, aggregated insights into your market in some way (“profile miners”). The labels I have used are completely arbitrary, my own invention. The other school of thought I will touch on further down.

For me there is a flaw in this approach: achieving any of these three results requires significant work and expertise from your own organisation. Let’s explore that a bit further.

“Data polishers”

This service is normally performed by dedicated data enrichment bureaus, and the objective is to augment data. Your enrichment customer provides you with a list of prospects or individuals, and you ‘sell’ certain variables that are filled in or appended to that list. This is not a trivial task: you have to manage diverse datasets very well, match records across datasets based on incomplete information, and avoid crossing the line into selling privacy-sensitive information. Data you have collected through your own investigation and analysis is normally fine, but data provided to you by customers is far more sensitive and must comply with your data privacy policy. Organisations that have achieved pristine data quality, and the procedures that make this possible, may be able to deliver such a service cost-effectively.
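To make the matching problem concrete, here is a minimal sketch of enrichment against incomplete keys. All names, fields and the similarity threshold are hypothetical, and a real bureau would use far more sophisticated blocking and matching than this:

```python
# Illustrative sketch only: match incoming prospect records against an
# internal dataset on incomplete information, then append ("sell") a
# variable to each matched record. Fields and threshold are hypothetical.
from difflib import SequenceMatcher

internal = [
    {"name": "Jane Citizen", "postcode": "2000", "segment": "high-value"},
    {"name": "John Smith",   "postcode": "3000", "segment": "budget"},
]

def normalise(s):
    # Collapse case and whitespace so trivial differences don't block a match.
    return " ".join(s.lower().split())

def best_match(record, candidates, threshold=0.85):
    """Return the closest internal record, or None if no candidate in the
    same postcode clears the name-similarity threshold."""
    best, best_score = None, 0.0
    for c in candidates:
        if c["postcode"] != record["postcode"]:
            continue  # cheap blocking key to limit comparisons
        score = SequenceMatcher(None, normalise(record["name"]),
                                normalise(c["name"])).ratio()
        if score > best_score:
            best, best_score = c, score
    return best if best_score >= threshold else None

def enrich(prospects, internal):
    """Append the sold variable (here: segment) to each matched prospect."""
    return [{**p, "segment": (m["segment"] if (m := best_match(p, internal)) else None)}
            for p in prospects]
```

Even this toy version shows why the task is non-trivial: every design choice (the blocking key, the similarity measure, the threshold) trades match coverage against the risk of attaching data to the wrong individual.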

“Customer accessors”

This is a favourite amongst telecommunication operators and shopping centre operators. Here you open your distribution channels (mobile phones, kiosks etc.) so that others can reach well-profiled but anonymous customers of yours. The classic, often-worn example is the offer of a coffee to a potential customer who comes within a certain distance of a store within a specific timeframe. The location is provided by a service from the telco network or a software application on the handset. The organisation making the offer ‘buys’ access to a profile, at a location, through a channel and at a time, and anyone matching this combination is presented the offer. This requires that your underlying infrastructure can execute complex multi-offer campaigns in near real-time utilising customer-supplied datasets. Alternatively, you have built a very secure, web-based portal and API providing external parties with access to this same service, and can provide SLAs to match.
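The matching step at the heart of this can be sketched very simply. This is a hypothetical illustration, not any operator's actual system: the event carries only an anonymous profile and a location, and each bought offer is a profile/location/time combination:

```python
# Hypothetical sketch of offer matching: an external party "buys" a
# combination of profile segment, location cell and time window; any
# anonymous customer event matching that combination triggers the offer.
# A production system would evaluate this in near real-time over
# streaming network events, not an in-memory list.
from datetime import datetime

offers = [
    {"offer": "free coffee", "segment": "commuter", "cell": "CBD-12",
     "start": datetime(2014, 3, 10, 7, 0), "end": datetime(2014, 3, 10, 9, 0)},
]

def offers_for(event, offers):
    """Return every offer whose profile, location and time criteria
    the (anonymous) event matches."""
    return [o["offer"] for o in offers
            if event["segment"] == o["segment"]
            and event["cell"] == o["cell"]
            and o["start"] <= event["time"] <= o["end"]]
```

The hard part in practice is not this comparison but doing it at network scale, within latency budgets, and without ever exposing the customer's identity to the offer buyer.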

“Profile miners”

In this scenario, data we have collected about customer behaviour is aggregated and profiled to create a generalised insight into the customers we have. This is used as input to segmentation, to help distinguish customer groups from one another and thereby improve response rates to offers and approaches. By buying this aggregated insight, smaller organisations can gain more information about the broader market than they could with their own limited resources. To achieve this outcome you have to be proficient at managing sophisticated modelling procedures with a relatively high update frequency. You must also ensure that the profiles you create cannot be reverse-engineered, so that privacy is not compromised.
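One common safeguard against reverse engineering, shown here purely as an illustration rather than as any vendor's method, is to suppress aggregates computed over groups too small to publish safely (a crude k-anonymity-style threshold):

```python
# Illustrative sketch: roll individual behaviour up into segment-level
# profiles, suppressing any group smaller than a minimum size so that
# published figures cannot be traced back to individuals. The field
# names and the threshold of 5 are arbitrary choices for the example.
from collections import defaultdict

def aggregate_profiles(records, min_group_size=5):
    """Average spend per segment, dropping segments too small to publish."""
    groups = defaultdict(list)
    for r in records:
        groups[r["segment"]].append(r["spend"])
    return {seg: round(sum(v) / len(v), 2)
            for seg, v in groups.items()
            if len(v) >= min_group_size}
```

Suppression thresholds alone are not a complete privacy defence (overlapping releases can still leak information), which is why the modelling and release procedures need as much care as the aggregation itself.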

The conundrum, or flaw, to me is that each of these scenarios takes a lot of work and expertise. Either you have to be able to manage large and diverse datasets containing often very low-quality data (the “data polishers”); the infrastructure and skills to do this are quite specific, valuable in their own right, and in limited supply.

Or, you have to be able to securely execute complex offers over a distribution network not originally built for that purpose, while structuring it as a value-added service to your own customers (the “customer accessors”) so that they are not annoyed by the activity. Or, you could choose to perform the sophisticated activities of the “profile miners”: gaining deep insight into the customer base, understanding the minutiae of customer behaviour and response patterns, and providing this in a sanitised manner on a high-frequency basis.

You then have to take these complex tasks, operationalise them, price them, package and support them, and sell them as an offering, thereby creating a completely new business.

This brings me to the other school of thought on data monetisation. You could choose to do all of these things for your own business, with less pressure and probably a larger payoff. I have seen many businesses struggle to get their own data under control, and the best way for most businesses to monetise their data is to do it for themselves: use data to improve your own business processes. Your own business activities will benefit from this focus, and the risk is that if you try to do it for another revenue stream, you forget which business actually makes the money. Taking your eye off the ball to build services for external parties takes a lot of work, deep expertise and management attention. These could all be better leveraged inside your own business.

Therefore the best way to monetise data is to use it productively and efficiently in your own business. A key to achieving this is a scalable, manageable platform and architecture that supports the major activities that yield value from data, namely:

  • Enterprise data access – the single view of the enterprise that supports daily decision making
  • Data discovery – the ability to find new sources of value in data
  • Data storage and pre-processing – leverage NoSQL and Hadoop platforms to cater for large volumes of data that you don’t yet completely understand but that can seamlessly move into the discovery or enterprise environments
  • Analytics – supporting technologies and techniques that make forward looking analysis of data possible using all the above sources of data without having to move data around.

Craig Rodger is a senior pre-sales consultant with Teradata ANZ focusing on advanced analytics. He has spent 20 years in the IT industry working on how to get value out of systems rather than on getting things into them. Having been a member of a number of executive management teams in software, technology and consulting companies, and having helped build a number of technology business ventures, he joined an advanced analytics vendor. Connect with Craig Rodger via LinkedIn.
