Monthly Archives: February 2017

Customer Analytics – Then and Now

Tuesday February 28th, 2017

I am sure you have heard it said before that fashion and trends repeat every twenty years. As it happens, in 1997 I joined the Marketing Workbench team at Harrah’s Entertainment. Marketing Workbench was designed to be marketing’s analytic environment to drive customer segmentation through predictive analysis. It was this career move that started my focus on customer analytics.

I decided to have a look at what has changed (or not) in customer analytics from then to now. I actually left Harrah’s in 2003, so my “then” includes six years. I decided to limit my trip down memory lane to three key areas:
– Data Platform,
– Analytics, and
– Customer Interactions.

Data Platform – The underlying database technology utilised (no surprise here) was Teradata. Since then many features and functions have been added, and the amount of data supported has increased exponentially. To me, the biggest change in the data platform for analytics available today is the introduction of Hadoop (HDFS) and the recognition that there is now an analytic ecosystem rather than a single data platform.

Another current capability that was not present back then is cloud-based technology. According to Teradata’s recent Data Warehouse Survey, more than 90 percent of our customers aim to have a hybrid cloud environment by 2020. Hybrid Cloud is a computing environment that uses a mix of on-premises, managed cloud and public cloud, orchestrated to work together. Hybrid Cloud is all about flexibility and deployment options. Read more about Hybrid Cloud in the CITO Research article ‘Analytics without Borders‘.

Analytics – Analytics were a core strength of Harrah’s, and many predictive models (e.g., customer lifetime value, share of wallet) were developed and utilised to improve customer segmentation. Predictive models are still very much with us today, and they benefit from being able to run against more data points than ever before. As illustrated in the diagram below, analytics has evolved from Descriptive (e.g., executive dashboard) to Predictive (e.g., propensity score) to Prescriptive (e.g., automated decisions):

[Diagram: the evolution of analytics from Descriptive to Predictive to Prescriptive]
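To make the Predictive-to-Prescriptive step concrete, here is a minimal sketch, assuming scikit-learn and entirely made-up customer features: a propensity model produces the predictive score, and a simple threshold rule turns that score into an automated, prescriptive decision.

```python
# Sketch only: a predictive score feeding a prescriptive decision rule.
# Feature names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy customer features, e.g. [visits_last_90d, avg_spend, days_since_visit]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(size=1000) > 0).astype(int)

# Predictive: estimate each customer's propensity to respond to an offer.
model = LogisticRegression().fit(X, y)
propensity = model.predict_proba(X)[:, 1]

# Prescriptive: an automated decision on top of the score. In practice the
# thresholds would come from campaign economics, not be hard-coded.
action = np.where(propensity > 0.7, "send_offer",
                  np.where(propensity > 0.4, "nurture", "no_action"))
print(list(zip(propensity[:5].round(2), action[:5])))
```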
Another new capability today is the advancement of visualisation techniques. This includes not only using visualisations (e.g., geospatial representations) as the actual user interface to the data, but also new types of visualisations. For customer analytics we rely heavily on Flow visualisation for Customer Path analysis; Hierarchy visualisation for Relationship analysis; and Affinity visualisation for Product Affinity analysis.

Above: Flow visualisation example showing the paths to in-store purchases.
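The path-building step behind a flow visualisation like the one above can be sketched in a few lines. This is a hypothetical example, assuming a pandas event log with customer, timestamp and channel columns: order each customer’s touchpoints, join them into a path string, and count how often each path occurs.

```python
# Sketch: derive customer paths from an event log (columns are assumptions).
import pandas as pd

events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "ts": pd.to_datetime(["2017-02-01", "2017-02-03", "2017-02-05",
                          "2017-02-02", "2017-02-04",
                          "2017-02-01", "2017-02-02", "2017-02-06"]),
    "channel": ["email", "web", "store_purchase",
                "web", "store_purchase",
                "email", "call_centre", "store_purchase"],
})

# Order each customer's touchpoints and join them into a path string.
paths = (events.sort_values(["customer_id", "ts"])
               .groupby("customer_id")["channel"]
               .agg(" -> ".join))

# Path frequencies are what a flow (Sankey-style) visualisation draws.
print(paths.value_counts())
```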

Customer Interactions – At Harrah’s, Marketing Workbench evolved from daily batch loads to message-based loads to, ultimately, publishing customer insights onto the message hub. In many ways Harrah’s was on the leading edge of using the results of analytics in customer interactions; for example, using a customer’s predicted lifetime-value score to determine the rate at which a hotel room should be offered.

The major difference today is the ubiquitous use of mobile devices. Customers are now in charge of when and where they initiate interactions, and it is up to each company to ensure that insights are shared across all channels. Better still is having visibility of all customer transactions and interactions, along with the ability to orchestrate a response so that the right message is given (or action is taken), one that considers both the customer’s need and the priorities of the company.

So what hasn’t changed? Getting the chance to work with smart people and working together as a team to use analytics to improve customer experiences.

If someone is just starting their customer analytics career journey, I wonder what they will find in another twenty years. Perhaps customer sentiment will not be based solely on the words a customer uses, but will also include analysis of their facial expressions. Perhaps robots will become more prevalent in customer service.

Whatever the future holds there will be a role for customer analytics. From then to now as well as from today into the future, customer analytics will continue to provide valuable insights – for the companies that use them.

Turbo Charge Enterprise Analytics with Big Data

Wednesday February 22nd, 2017

We have been showing off the amazing artworks drawn from the numerous big data insight engagements we’ve had with Teradata, Aster and Hadoop clients. Most of these were new insights, answering business questions never before considered.

While these engagements have demonstrated the power of insights from the new analytics enabled by big data, that power has had limited penetration into the wider enterprise analytics community. I have observed significant investment in big data training and in hiring fresh data science talent, but the new analytics remains a boutique capability, not yet leveraged across the enterprise.

Perhaps we need to take a different tack. Instead of changing the analytical culture to embrace big data, why not embed big data into existing analytical processes? Change the culture from within.

How exactly do you do that? Focus big data analytics on adding new data points, then make those data points available through the enterprise’s current data deployment and access processes. Feed the existing machinery with the new data points to turbo-charge the existing analytical insight and execution processes.

A good starting area is the organisation’s customer events library. This library is a database of customer behaviour-change indicators that trigger action and provide context for marketing interventions. For banks, this would be significant deposit events (e.g. a deposit three standard deviations above the average of the last five months); for telcos, significant dropped-call events. Most organisations have a version of this in place, with dozens of these pre-defined intervention points alongside customer demographics. These data points support over 80% of the actionable analytics currently performed to drive product development and customer marketing interventions.
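As a hedged illustration of how one such indicator might be computed (the three-standard-deviation rule comes from the example above; the table layout and column names are my assumptions):

```python
# Sketch: flag "significant deposit" events using the three-standard-
# deviation rule mentioned above. Column names are assumptions.
import pandas as pd

def significant_deposits(monthly: pd.DataFrame) -> pd.DataFrame:
    """monthly: one row per customer per month with columns
    customer_id, month, amount (total deposits for that month)."""
    flagged = []
    for cid, g in monthly.sort_values("month").groupby("customer_id"):
        # Baseline: mean and std of the previous five months' deposits.
        base = g["amount"].rolling(5).agg(["mean", "std"]).shift(1)
        hits = g[g["amount"] > base["mean"] + 3 * base["std"]]
        flagged.append(hits.assign(event="significant_deposit"))
    return pd.concat(flagged, ignore_index=True)
```

Events like these can then be published through the same deployment and access processes as every other data point in the library.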

What new data points can be added? Life events are a good example: they provide context for the customer’s product behaviour yet remain a significant blind spot for most organisations, e.g. closing a home loan due to divorce, or refinancing to a bigger house because of a new baby. The Australian Institute of Family Studies has identified a number of these life events.

Big data analytics applied to combined traditional, digital and social data sources can produce customer profile scores that become data points for the analytical community to consume. The scores can be recalculated periodically, and the changes become events themselves. With these initiatives, you have embedded big data into your existing enterprise analytical processes and moved closer to the deeper understanding needed for pro-active customer experience management.

We have had success with our clients in building some of these data points. Are you interested?

Spotting the pretenders in Data Science

Wednesday February 15th, 2017

The term “Data Scientist” is often over-used or even abused in our industry. Just the other morning I was watching TV and a news piece came on about the hottest careers in 2017, and data science was top of the list. This is good news for those who have been dealing with data for many years in one shape or another, because their skills will be in demand. The bad news is that the industry gets flooded with fakes looking to get in on the action. It really is the wild west.

The problem with the industry is that there is no official certification program along the lines of, say, the Microsoft or Cisco certification programs. It is therefore often difficult for an employer to verify that candidates are as good as they say they are. Some might have a background in data and may be able to punch out some lines of SQL, but that alone doesn’t make a data scientist.

You can rely on the old method of contacting references, but we all know that can be fraught with danger: you’ll often reach the prospective employee’s best friend, or someone who has been coached on what to say when called.
And most of the time, the prospective employee will be unable to show you the projects they have previously worked on, because the work may be commercially sensitive or just plain difficult to demonstrate in an interview.

What makes the hiring process so much more difficult is that you are often under pressure to hire: data-based projects are considered a priority within your organisation and are being carefully watched by management, so you must hire quickly and hire quality to deliver. The pressure is on you to get it right from the start.

So what more can one do to weed out the fake data scientists?

I’ve listed some interview questions below that will reveal how good a data scientist they really are:

Q: If you had a choice of a Machine Learning algorithm, which one would you choose and why?
This is a trick question. Everyone should have a “go-to” algorithm; that’s the easy part of the question. The devil lies in the second part, the “why”. A good Data Scientist should be able to explain why they prefer the algorithm they mentioned and give an explicit answer as to its applicability or flexibility. If they go a step further and compare and contrast their favourite algorithm with an alternate approach, that demonstrates an intricate knowledge of the algorithm.
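For instance, a candidate whose go-to is logistic regression might justify the “why” with interpretability, and be able to show it concretely (a toy sketch using scikit-learn; the data and feature names are made up):

```python
# Sketch: one concrete "why" -- logistic regression's coefficients are
# directly interpretable, a common reason to prefer it as a go-to.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=4, random_state=1)
model = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds per unit of the feature,
# which makes the model straightforward to explain to the business.
for i, coef in enumerate(model.coef_[0]):
    print(f"feature_{i}: log-odds weight {coef:+.2f}")
```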

Q: You’ve just made changes to an algorithm. How can you prove those changes make an improvement?
Once again you’re not seeking the obvious answer; rather, you’re testing the data scientist’s ability to demonstrate reason (a sketch of what this looks like in practice follows the list below). In a research degree you have to demonstrate components of your research such as:
• The results are repeatable
• The before-and-after tests are performed within a controlled environment, using the same data and the same hardware on both occasions.
• The test data is of sufficient quantity and quality to test your algorithm accurately. For example, don’t test it on a small dataset and then roll it into production against a huge dataset with many more variables.
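Here is a hedged sketch of what “controlled and repeatable” can look like in practice: fix the random seeds, hold the data and the evaluation folds constant, and vary only the change under test (the models and data are illustrative):

```python
# Sketch: a controlled before/after comparison. Same data, same CV folds,
# same seeds on both runs; only the model change varies.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Fixed dataset and fixed cross-validation folds => repeatable results.
X, y = make_classification(n_samples=5000, n_features=30, random_state=7)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)

before = GradientBoostingClassifier(max_depth=2, random_state=7)  # baseline
after = GradientBoostingClassifier(max_depth=4, random_state=7)   # the change

for label, model in [("before", before), ("after", after)]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{label}: mean AUC {scores.mean():.4f}, std {scores.std():.4f}")
```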

The key with this answer is that you are seeking to see how scientific the applicant is. Such a question could also give you an insight into their background: do they come from an academic one?

Q: Give an example approach for root cause analysis.
Wikipedia states that root cause analysis is “a method of problem solving used for identifying the root causes of faults or problems. A factor is considered a root cause if removal thereof from the problem-fault-sequence prevents the final undesirable event from recurring; whereas a causal factor is one that affects an event’s outcome, but is not a root cause”
This question seeks to understand whether the applicant has performed these types of investigations in the past to troubleshoot an issue in their code. Once again, we’re not looking for an explanation of what root cause analysis is, but rather how they have used it to solve something they were working on.


Q: Give examples of when you would use Spark versus MapReduce.
There are many answers to this question; for example, in-memory processing using RDDs on datasets that fit in memory is faster than MapReduce, which has a higher I/O overhead. You’re also looking for flexibility in a data scientist. There are many approaches a Data Scientist can take that lead to the same outcome: MapReduce may get to the same answer as Spark, albeit a bit slower. Knowing when to use which approach, and why, is a valuable skill for a data scientist to have.
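To illustrate the in-memory point, a minimal PySpark sketch (the iterative workload is made up; the key line is the cache() call, which keeps the RDD in memory across passes, where a chain of MapReduce jobs would write intermediate results to disk):

```python
# Sketch: why iterative work favours Spark. The RDD is materialised in
# memory once, then re-used; MapReduce would hit disk between passes.
from pyspark import SparkContext

sc = SparkContext("local[*]", "iterative-demo")
data = sc.parallelize(range(1_000_000)).cache()  # kept in memory

total = 0
for i in range(10):  # made-up iterative workload over the same RDD
    total += data.map(lambda x: x * i).sum()
print(total)
sc.stop()
```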

Q: Explain the central limit theorem.
Many data scientists come from a statistics background, and this question tests a basic piece of statistical knowledge that any statistician applying for a Data Scientist role should know. There’s a whole blog that compares and contrasts the role of a Data Scientist and a Statistician; in any case, you may be seeking to build a data science team with a wide range of skills, including statistics.

By the way, the CLT is a fundamental theorem of probability: as the sample size grows, the distribution of the means of repeated samples approaches a normal distribution, regardless of the shape of the underlying distribution. There are many other explanations of the CLT available online.
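A quick simulation makes this concrete (NumPy, with a deliberately skewed source distribution): the exponential distribution is far from normal, yet the means of repeated samples drawn from it cluster into a bell shape around the true mean.

```python
# Sketch: the CLT in action. Sample repeatedly from a skewed (exponential)
# distribution; the distribution of the sample means is ~normal.
import numpy as np

rng = np.random.default_rng(0)
n = 50  # observations per sample

# 10,000 samples of size n from Exponential(scale=1.0), true mean = 1.0
sample_means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)

print(f"mean of sample means: {sample_means.mean():.3f}  (true mean 1.0)")
print(f"std of sample means:  {sample_means.std():.3f}  "
      f"(theory: 1/sqrt({n}) = {1 / n ** 0.5:.3f})")
```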

Q: What are your favourite Data Science websites?
This is attempting to find out how passionate they are about data science. A good Data Scientist would obviously bring up the usuals, such as KDnuggets or Data Science Central. You want to hire Data Scientists who not only use these sites, but keep their finger on the pulse of what’s happening and engage online with other like-minded individuals. And you never know: your next hire may come from one of these sites.

At the end of the day, you are not only assessing their knowledge but what skills and knowledge they would bring to your data science team.

In a previous blog, ‘Seven traits on successful Data Science teams‘, I discussed forming a team with varied skills. You don’t want a Data Science team of clones. Each member of your team should bring specific sets of skills to the table that complement the team. Get your interview questions formed well before the interview and you’re well on the way to building that special team.

The IoT: Redefining industry, again.

Wednesday February 8th, 2017

The Internet of Things (IoT) is much more than fitness tracking, AI toothbrushes and targeted discounts when you walk by a coffee shop. Put simply: the opportunities arising from ubiquitous connectivity and access to data will bring about the end of many of today’s industries as we know them.

For some, this new IoT-enabled world might mean a slow and painful end as they fight to remain relevant. But many others will grasp the opportunity. They’ll innovate, reinvent, redefine. And succeed. This is not speculation. It’s happening today. Let’s consider some examples from travel & transportation, as we watch the development of the nascent Mobility Industry.

Ford’s 2017 Super Bowl advert presents them not as a car company, but as a mobility business. It’s no longer about engine performance to get your heart racing, or TV screens in the headrests to keep the kids quiet. Now, Ford can get you where you need to be, when you need to be there and in the way that suits you best. It’s not about features and functions. It’s about removing obstacles. And serving your needs.

While the Ford example is mainly about business-to-customer, the story is little different in the business-to-business world. Today, Siemens Mobility don’t present themselves as a personal mobility provider in quite the same way as Ford. But they do present themselves as the intelligent infrastructure provider that can guarantee trains run on time; city traffic flows efficiently; and metro services become more flexible to meet changing passenger needs. Siemens aren’t selling rolling stock any more. They’re selling mobility. Hmm…not so different to Ford after all.

It is true that some will see this redefinition of today’s industries as a bad thing. Especially those that can’t adapt quickly enough to ride the wave. But remember, disruption in industry is nothing new. In manufacturing, The Industrial Revolution may have started it all (at least from a European perspective) but today, we’re already on the way to Industry 4.0. Change is inevitable. And it is constant.

At the top of this blog, I cited the IoT as the key enabler for this massive disruption in how 20th Century industries will serve their customers in the 21st. And that’s true. Things that can connect to each other and exchange information are critical to all the services I’ve discussed so far. Without all those remote devices being able to communicate with each other and with the mothership, everything kinda falls apart. But it’s also clear that just communicating is not enough.


How can Ford create a seamless travel experience for you, from home to office to social event to leisure weekend to…wherever? Just having a fleet of connected cars, buses, bicycles and…eh… gyrocopters isn’t going to cut it. Though the gyrocopters might be quite cool. I might bring that up with them. Anyway…Ford also needs to know about you and your needs; your preferences; your habits; your budget and so much more. They need to know where their fleets are; their schedule of availability; their state of repair and time to next maintenance. And that’s just on the operations side of the business.

Thinking more strategically, Ford, Siemens and others also need to learn from the equipment that’s out there so that they can remain competitive, relevant and profitable next year and next decade too. They must understand which design features have been successful and improve upon them in the next product release. They need to learn what mix of their services works in particular market conditions and predict what those conditions might be in the future. And most importantly, they need to be prepared for further disruption in what is very obviously an immature industry.

Delivering on all these promises needs more than the Internet of Things. It needs the Analytics of Things. Data and analytics are fundamental to everything that a modern mobility provider must achieve. From knowing where a customer is likely to be, who they are with and what services they are likely to call upon, to predicting the remaining useful lifespan of a braking system and scheduling repairs at a time that best suits all parties, providers must be able to analyse all their data, at a speed and cost that meets a very broad range of requirements.

Sometimes, this will mean making real-time, data-driven operational decisions. But not always. At other times, it will mean storing, then analysing massive data sets over significant time periods to identify strategic indicators that lead to policy changes. It will also mean exchanging data – and insights – efficiently with other parties. That might be to influence a connected supply chain, based on new analysis of equipment reliability. Or it might be to provide analysis to Governments and Local Authorities with interests as diverse as city congestion and, say… national security.

It’s clear: getting the data and analytics right is at least as important to the future success of companies like Ford and Siemens as any other aspect of their business. It’s not just about new gadgets. It’s not just about marketing. It’s not even just about fundamentally redefining operating models. And it’s certainly not just about the Internet of Things.

21st Century change is data driven. Embrace it, or fail.